Another cloud project I decided to embark on was hosting a WordPress website entirely on an EC2 instance, to gain hands-on experience with deploying resources in the cloud. I used the same domain as for my static website project, but this time the aim was to connect it to an EC2 instance running WordPress and build a website on it, while remaining completely within the EC2 free tier. Here’s a quick summary of how I did it.
Creating and preparing the EC2 Instance
EC2 Instance specs: a t2.micro instance from an Ubuntu AMI, configured with a 10 GiB gp2 root volume.
Key pair: I used the .pem format so that I could SSH into the instance from a Mac to install the required software and runtimes.
Security Groups Configuration: to allow for SSH as well as HTTP/HTTPS connections, I configured three inbound rules to allow traffic from any IP on the relevant ports (22, 80 and 443).
Elastic IP: This isn’t strictly necessary, but I decided to use an Elastic IP because I wanted the public IP of my instance to remain the same if the instance was ever stopped and started (= less maintenance in Route 53). An important thing to note here is that an Elastic IP is free as long as it’s associated with a running instance, which means that aside from the domain cost, this whole project remained in the free tier.
Route 53: I created an A record pointing my domain to the Elastic IP of the instance. In AWS, you cannot use an EC2 instance as the target for an alias record, as you can with services such as S3 or API Gateway (which have fixed, AWS-managed endpoints). You can, however, point a standard A record at an IPv4 address, and the Elastic IP replicates the stable-endpoint nature of those other services: if I ever had to stop and start the instance, I wouldn’t have to change the record in Route 53, meaning less future maintenance.
Connecting to the instance and installing resources
The first step was to SSH into the instance. This is the command I used, replacing “yourkeypair.pem” and “yourip” with my key pair file and the Elastic IP of my instance:
ssh -i yourkeypair.pem ubuntu@yourip
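One detail worth noting before connecting (not shown above): SSH refuses to use a private key whose file permissions are too open. A minimal sketch of locking the key down first, using the same placeholder filename:

```shell
# SSH rejects keys readable by other users ("UNPROTECTED PRIVATE KEY FILE"),
# so restrict the downloaded .pem to owner read-only before connecting.
# "yourkeypair.pem" is a placeholder, not a real key here.
touch yourkeypair.pem            # stand-in for the downloaded key file
chmod 400 yourkeypair.pem        # owner read-only
stat -c '%a' yourkeypair.pem     # prints 400 (Linux; on macOS use: stat -f '%Lp')
```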
Once connected, I installed the NGINX web server, the MariaDB database server, and the PHP runtimes required to run WordPress. Combined with the Ubuntu OS (a Linux distribution), this makes up a very well-known software pattern called the LEMP Stack.
L – Linux OS
E – NGINX (pronounced “engine x”)
M – MySQL database (in this case, MariaDB)
P – PHP
It’s an open-source stack widely used for web applications, and it’s perfect for WordPress. I updated the Ubuntu server packages and then installed the full stack; here are the commands I used:
sudo apt update
sudo apt upgrade
sudo apt install nginx mariadb-server php-fpm php-mysql
Setting up WordPress and the database
Installing WordPress: I now had the basic components of the stack installed, the next step was to install WordPress. Here’s the code I used:
cd /var/www
sudo wget https://wordpress.org/latest.tar.gz
sudo tar -xzvf latest.tar.gz
sudo rm latest.tar.gz
sudo chown -R www-data:www-data wordpress
sudo find wordpress/ -type d -exec chmod 755 {} \;
sudo find wordpress/ -type f -exec chmod 644 {} \;
Important to note:
- These commands download and extract the “latest.tar.gz” archive (the latest version of WordPress) into the /var/www/ directory, then change the ownership and permissions of the files and directories within the newly created /var/www/wordpress/ folder
- WordPress recommends running the last two lines in order to set the correct permissions for files and directories. Without going into specifics, they search for directories and files within the WordPress directory and assign the appropriate read/write/execute permissions for the owner, as well as for other users trying to access them
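To make the permission scheme concrete, here’s a small sketch you can run on a scratch directory, no sudo needed (the file and folder names are illustrative stand-ins for a real WordPress tree):

```shell
# Recreate the recommended scheme on a throwaway copy:
# 755 (rwxr-xr-x) on directories, 644 (rw-r--r--) on files.
mkdir -p wordpress/wp-content
touch wordpress/index.php
find wordpress/ -type d -exec chmod 755 {} \;
find wordpress/ -type f -exec chmod 644 {} \;
stat -c '%a %n' wordpress wordpress/index.php
# prints: 755 wordpress
#         644 wordpress/index.php
```

Directories need the execute bit so they can be entered and listed; plain files don’t, which is why the two `find` invocations differ.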
Configuring the database: I then configured the database to prepare it for use with WordPress. I used the following code:
# Securing the MariaDB database
sudo mysql_secure_installation
# Accessing the MariaDB console
sudo mysql -u root -p
# Creating a database for WordPress
CREATE DATABASE example_db DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
CREATE USER 'example_user'@'localhost' IDENTIFIED BY 'example_pw';
GRANT ALL PRIVILEGES ON example_db.* TO 'example_user'@'localhost';
FLUSH PRIVILEGES;
EXIT;
Important to Note:
- “example_db” is the name of your database, “example_user” is the name of your user, and “example_pw” is the password for the user you’re creating
Configuring the NGINX Web Server to work with WordPress and our domain
The next step was to configure the web server to direct requests for my domain to the WordPress site and serve the correct content using the PHP runtime. To do this, I needed to create a wordpress.conf file containing the right configuration. I used this code…
cd /etc/nginx/sites-available/
sudo vim wordpress.conf
…which changes into the directory where wordpress.conf lives and opens the file in the Vim editor (creating it if it doesn’t exist yet). Here’s the configuration I inserted into the file:
upstream php-handler {
    server unix:/var/run/php/php7.4-fpm.sock;
}

server {
    listen 80;
    server_name netwits.io www.netwits.io;
    root /var/www/wordpress;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass php-handler;
    }
}
Important to Note:
- In a sentence: this configuration tells the web server to serve requests for the domains “netwits.io” and “www.netwits.io” from the WordPress installation at /var/www/wordpress, with PHP requests forwarded to a separate process, PHP-FPM, through a Unix socket.
- When using this code, you’d need to replace the domains specified in the server block with your own, and ensure that the php-fpm socket path specified in the upstream block matches your own installation (particularly the PHP version number).
- The wordpress.conf file was located in the sites-available folder, which holds all the potential sites that NGINX can serve. However, to actually enable the site, the file also needs to exist in the sites-enabled folder. This could be done by copying the file, but instead I created a symbolic link between the folders, which saves disk space and ensures that any updates to the file are reflected in both places. Once I had created the link and tested the configuration syntax, I restarted the NGINX server to apply the updates.
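The enable step can be sketched like this on a scratch copy of the /etc/nginx layout (the real commands on the instance need sudo, and are shown as comments; the scratch directory names are illustrative):

```shell
# Recreate the sites-available / sites-enabled layout locally
# and enable the site with a symbolic link instead of a copy.
mkdir -p nginx/sites-available nginx/sites-enabled
touch nginx/sites-available/wordpress.conf
ln -s ../sites-available/wordpress.conf nginx/sites-enabled/wordpress.conf
readlink nginx/sites-enabled/wordpress.conf
# prints: ../sites-available/wordpress.conf

# On the real instance, the equivalent steps would be:
#   sudo ln -s /etc/nginx/sites-available/wordpress.conf /etc/nginx/sites-enabled/
#   sudo nginx -t                      # test the configuration syntax
#   sudo systemctl restart nginx       # apply the changes
```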
Configuring WordPress and adding the SSL certificate
After all of the above steps, I was able to open WordPress through my custom domain. I was greeted with the WordPress setup screen, where I entered the relevant database details so WordPress could connect:
Once submitted, and after some additional PHP configuration from within the WP console, I had a fully working WordPress website hosted on EC2. However, the website could only be accessed over HTTP and not HTTPS, because I hadn’t yet installed an SSL certificate. Because my domain pointed directly at the EC2 instance rather than at an integrated service such as a load balancer or CloudFront distribution, I wasn’t able to use AWS Certificate Manager (ACM), a service that provisions and manages SSL certificates and automates their renewal. ACM certificates can only be deployed to those integrated services: ACM never hands over a certificate’s private key, and NGINX on my instance would need that private key to serve HTTPS. For this cloud project, I instead got a free SSL certificate from Let’s Encrypt and configured the web server to use it, with the following commands:
sudo apt install snapd
sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot --nginx
Lessons Learned
After completing the above steps, I had a fully working WordPress website on EC2, secured with an SSL certificate and with no costs associated with the EC2 instance. Here are some of the key skills and lessons I learnt from this project:
- AWS services: I got hands-on experience with various services such as Route 53, EC2, and ACM, along with their limitations and the underlying reasons behind them. This project also inspired me to start thinking about the implications of potential alternatives, such as hosting the website on an EC2 instance fronted by an Application Load Balancer and using ACM; or what would change if the website needed to scale beyond a single instance, and what application logic would be needed if an Auto Scaling Group were used.
- Experience with Linux and the command line: accessing an instance from the command line is very different from how most people usually interact with a computer, and going through the process of logging into the instance via SSH and installing the relevant software gave me much-needed hands-on experience with the Linux shell and an understanding of the relevant commands and their functions. This experience will be invaluable as a Solutions Architect, as I’ll be able to interact more comfortably with the work of DevOps Engineers to help with code reviews, troubleshooting, and so on.
- The architecture behind the LEMP stack: seeing how the different services within the stack interact with each other was very insightful, and because different software stacks are often quite similar in how their components interact, it’s empowering to understand the real-world application of such software compared to just learning the theory.