Welcome to Cykod. We are a fully-integrated, self-funded web-development startup located in Boston, MA.

Blogging in the Cloud: Scaling the hell out of a Wordpress 3 Install

Got a high traffic Wordpress install you want to scale to handle a large number of hits or add in high-availability? Well, Wordpress 3, W3 Total Cache and Amazon's cloud offering make it (relatively) easy to get a high-availability Wordpress cloud setup that can serve just about any traffic load you throw at it.

Building our inbound marketing platform on EC2 made me realize how darn easy it is to create a high-availability site these days (meaning any single component failing doesn't bring the site down). Even just a few years ago it took a good bit of IT knowledge to get a load balancer and replicated database servers with heartbeat and auto fail-over set up. Now it's an hour or so of configuration, most of which can be done from the AWS dashboard.

While we primarily work in our own open-source Rails CMS Webiva (which, cough...has multi-server scaling built in...cough), the popularity and simplicity of Wordpress means we still recommend it and do installs on occasion. In this case we needed to scale a site for a new campaign that was going out to an email list of a few million. Given the amount of resources sunk into the campaign and the fact that the organization didn't have a good estimate of the traffic or virality of the message, they wanted to be better safe than sorry. So we scaled the hell out of a Wordpress blog for a couple of days - throwing a gaggle of servers, caching and a CDN at it to make sure it could handle whatever traffic the Internet gods decided to provide.

This is a rather lengthy tutorial, but if you're interested in this sort of thing I'd recommend following along with a dummy setup to get a sense of the issues involved. We're going to go through this with a fake domain name and fake blog; you can do the same or use a real domain name. Provided you don't take too long to complete the steps, you'll only blow a couple bucks max.

Prerequisites: An AWS account, access to an SSH client, some familiarity with Linux, and the ability to modify your /etc/hosts file (or the Windows equivalent) for testing purposes.

Here's the pieces we're going to use:

  1. Multi-AZ RDS deployment - this gives us a high-availability Mysql server with auto-failover
  2. An "admin" EC2 server - this is where admins will log in to modify the site
  3. 0-X Web app servers - this is where the unwashed masses will view our site
  4. 1 Amazon load balancer - this will keep our site running happily even under heavy traffic load
  5. The W3 Total Cache plugin for Wordpress to copy all our files and assets over to S3, and then onto the Cloudfront CDN

Your monthly costs for this sort of setup will depend on the number and size of instances, but for a High Availability setup you're looking at a minimum of the equivalent of 4 small instances plus a load balancer, so expect to drop at least $270/mo in EC2 costs, in addition to bandwidth costs. It's not cheap, but it's still less than a few dedicated boxes most anywhere else.
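
(Rough back-of-the-envelope math, at the on-demand prices in effect as I write this - treat the numbers as approximate: four small-instance equivalents at about $0.085/hr over a ~730 hour month is roughly $250, and the load balancer at about $0.025/hr adds another ~$18, which is where the $270/mo floor comes from - before bandwidth, EBS storage and the fact that RDS instances actually cost a bit more than plain EC2 smalls.)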

 If you want to go cheaper, you can skip the RDS server and just host the Mysql database on one of the EC2 instances - but you'll lose the high availability of a Multi-AZ RDS server.

Note: Amazon just released micro instance sizes - these are significantly cheaper than small instances - 1/8th the cost - so these might be a good choice for a less expensive cloud. I haven't had a chance to play around with them so I don't know what sort of performance they get. That instance type isn't available for RDS, so you could either roll your own micro Mysql server or skip the Multi-AZ deployment if you wanted to go cheaper, but you would be sacrificing High Availability (one machine goes down and you're bonked).

Step 0: Get your Ducks in a row

Get yourself an Amazon AWS account - you can sign up at aws.amazon.com - enable access on your account for S3, CloudFront, RDS and EC2, and put in a credit card so you can get started. You'll need to activate each service individually. Not doing so ahead of time will probably cause some confusion later in the tutorial.

Step 1: Launch a Mysql RDS Database

Alright, on to the real stuff - the first step is to log in to the AWS EC2 console from the aws.amazon.com page.

We're going to launch the RDS Database first as it takes a while to start-up.

Click on the 'Amazon RDS' tab at the top of the EC2 Console and click 'Launch DB Instance'

Configure the database along the lines of what's shown in the screenshot (you can leave Multi-AZ deployment off for testing, but you'll want to use it if you're doing this for real - it provides auto-failover if your DB machine goes down so your site doesn't miss a beat), using whatever you like for the instance identifier, master user name and password. Write down the user name and password somewhere. Click 'Continue'

AWS Launch DB Instance Wizard

Enter a database name - we're using 'wordpress' but you can set it to whatever you like - and click 'Continue'

AWS Launch DB Instance Wizard step 2 - wordpress database

You can leave the management options as they are - just click 'Continue' on the next page.

Everything should now be all set, click 'Launch DB Instance' to launch the server.

AWS Launch DB Instance Wizard: step three review

Step 2: Set up the DB Security Group

Next we need to set up the security group. Click on 'DB Security Groups' and check the `default` group. We're going to add authorization for our EC2 security group and for our current machine's IP address (for testing purposes)

First add the 'EC2 Security Group' our web servers will use - we create a group called 'test' in Step 3, so you may need to come back to this part afterwards. You'll need the security group name and your AWS account id, which you can find in small type at the top right of any account page after you log in to aws.amazon.com. Next select 'CIDR/IP', cut and paste the address of your current machine (which is nicely shown below the text box), and click add.

AWS Setup DB security group

Step 3: Launch an EC2 Server

Click on the 'Amazon EC2' tab

We're going to start up an Ubuntu Server AMI. For available Ubuntu Server AMIs, the Alestic AMIs are the way to go. Check out the Alestic site for available AMIs - you can click on the availability zone you want to pull from and pick either a 32-bit or a 64-bit EBS boot Ubuntu server AMI. If you are only going to run small or medium instances you can go 32-bit, but otherwise you'll need a 64-bit image. To try this out, it might be worth testing with a couple of small (32-bit) instances so you don't blow too much cash trying stuff out.

We want an EBS instance for a number of reasons: first, it's easier to bundle from; second, we can bring instances up and down as necessary; and third, in case of a machine crash we can restart the instance with the same setup. You can also keep a couple of "hot spare" EC2 instances in the stopped state that you'll only pay storage costs for, and bring them up when needed (either manually or automatically).
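
As a quick aside - if I'm remembering the ec2-api-tools syntax right, stopping and starting a hot spare is a one-liner from the command line (the instance ID here is made up; substitute your own):

ec2-stop-instances i-0123abcd    # park the spare - you only pay for its EBS storage
ec2-start-instances i-0123abcd   # bring it back up when you need the capacity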

I'm going 32-bit (for testing purposes - for our actual deployment we go large instances and 64-bit) for the us-east-1 AZ - so I'm going to launch AMI: ami-1234de7b

From the EC2 Dashboard - click on 'Instances' and then 'Launch Instance'. Click on 'Community AMIs' and then paste in that AMI id 'ami-1234de7b' into the search field and click 'Select' once you find the AMI.

AWS Request instances Wizard

Launch a single small instance for the time being in 'us-east-1d' and click continue.

AWS Request Instances Wizard: choose type

Use the default Kernel ID and RAM Disk ID - you can just click continue on the next page.

Create a new key pair for testing purposes - let's call it test. Enter that name and click 'Create & Download your Key Pair'

This will download a 'test.pem' file that we will use to log in to the machine. Keep this file handy (and safe) as it lets you log in to your EC2 machines. NOTE: If you lose your .pem file you are in a serious bind, as Amazon does not, I repeat not, store them. Make a copy and put it in a safe place.

Let's also create a new security group called 'test' that allows HTTP, HTTPS and SSH (click select at the bottom for each of those to add them).

The dialog looked a little strange for me, but you should be able to configure it correctly.

If everything looks good - click 'Launch'

Once the machine is in the running state we're going to add an 'Elastic IP' to it so that we'll have an IP address we control and can move among machines as necessary. To do this, from the EC2 Console, click on 'Elastic IPs' on the left panel. Next click 'Allocate New Address' and confirm that we want a new address. Now check the box of the new IP address that shows up and click the associate button. Select the instance we just created and click associate. This will change the public DNS of the running instance, but it takes a couple of minutes to update.
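
If you'd rather script that than click through the console, the equivalent with the ec2-api-tools is roughly the following (the instance ID and address here are made up - use whatever ec2-allocate-address hands back to you):

ec2-allocate-address
# ADDRESS  184.73.200.10
ec2-associate-address -i i-0123abcd 184.73.200.10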

Step 4: Set up our EC2 Instance

If you launched a small instance it should be available relatively quickly. As soon as it's in the running state, let's log into the machine with ssh. Use the "Public DNS" name of the machine - displayed as a column in the console - and your 'test.pem' file to log in to the machine as follows (replacing 'ec2-184-72-146-25.compute-1.amazonaws.com' with your machine's public DNS):

ssh -i test.pem ubuntu@ec2-184-72-146-25.compute-1.amazonaws.com

(If you get an error about your key being too open - run `chmod 700 test.pem` to fix the perms and login again)

Once you're in - let's install a few packages we'll need for Wordpress to run correctly and for the caching we're going to be doing to work:

sudo apt-get install libapache2-mod-php5 php5-mysql postfix mysql-client unzip php5-memcache php5-curl memcached php5-gd
sudo a2enmod rewrite headers expires

When the postfix options come up, just press return twice.

(This will pull in apache2, php5 and postfix, some modules and enable a few apache modules)

Just to make sure everything is configured correctly, let's try to log in to our RDS server with the mysql command line tool. Find your endpoint name by clicking on the 'Amazon RDS' tab and then on the 'DB Instances' item on the side. Once your RDS instance is in the available state, you'll see an 'Endpoint' column with a hostname to use for the db.

mysql -u [Master User name] --password=[Master Password] -h [Endpoint hostname]

In my case this would be the following (I'm using 'wordpress' as the db name, username and password - please, for the sake of everything that is good and just in this world, don't use those credentials on a real site):

mysql -u wordpress --password=wordpress -h wordpresstest.cytkniw1nwrn.us-east-1.rds.amazonaws.com wordpress

If everything is hunky-dory you should be presented with the mysql prompt. If not, check the security group stuff above.

Step 5: Set up a new user

We're going to install Wordpress on our EBS volume, which has about 14 GB of free space. EC2 EBS instances have two types of storage - EBS and ephemeral storage. The EBS storage sticks around, while the ephemeral, like its name says, doesn't. If you have a lot of image or media files, you may need to use the ephemeral storage for your media, but be aware this doesn't carry over into your snapshot.

Let's create a new user called wp, whose home directory is where we're going to do our Wordpress install, and log in as that user:

sudo groupadd wp
sudo useradd -d /home/wp -s /bin/bash -m -g wp wp
sudo cp -r ~ubuntu/.ssh/ ~wp/
sudo chown -R wp:wp /home/wp/.ssh/
sudo su - wp

(The cp and chown lines let us log in to the wp user with the same test.pem file.)
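
To double-check that worked, from your own machine you should now be able to SSH straight in as the wp user with the same key (again, swap in your machine's public DNS name):

ssh -i test.pem wp@ec2-184-72-146-25.compute-1.amazonaws.com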

Step 6: Doing a fresh Wordpress install

Now let's download the latest version of wordpress, unzip it and make a 'logs' directory:

wget http://wordpress.org/latest.zip
unzip latest.zip
chmod a+w wordpress/
# this makes main wordpress dir writeable for config - a little bit of a security risk, but easier to config temporarily
chmod a+w wordpress/wp-content/
# let us upload files
rm latest.zip
mkdir logs
exit
# we should be the `ubuntu` user again

Now we need to get apache configured for Wordpress. Open up /etc/apache2/sites-available/default with your favorite editor:

sudo vi /etc/apache2/sites-available/default

and replace what's there with the following:

<VirtualHost *:80>
  ServerAdmin webmaster@localhost
  php_admin_value open_basedir "/home/wp/wordpress:/tmp"
  DocumentRoot /home/wp/wordpress/
  ErrorLog /home/wp/logs/error.log
  LogLevel warn
  # Don't let anything in wp-content/uploads be executed as php

  <Directory "/home/wp/wordpress/wp-content/uploads">
      Order allow,deny
      Allow from all
      <IfModule mod_php5.c>
          php_admin_flag engine off
      </IfModule>
      AddType text/plain .html .htm .shtml .php .php3 .phtml .phtm .pl
  </Directory>
  CustomLog /home/wp/logs/access.log combined
</VirtualHost>

Now let's restart apache2:

sudo apache2ctl restart
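
If apache complains, a quick sanity check of the config syntax and of the modules we enabled earlier usually points to the problem:

sudo apache2ctl configtest
sudo apache2ctl -M | grep -E 'rewrite|headers|expires'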

Now open up a browser and put in the public dns name you SSH'd into earlier and let's see what happens:

 

If everything is set up correctly you should be presented with the option to create a wp-config file. Click 'Create a configuration file' and then "Let's go", and fill in the info based on your database name, master user name and password, using your RDS 'Endpoint' as your database host.

Click 'Submit', and if everything is correct you should be ready to go.

If it all went OK, you should be presented with a success message and the option to 'Run the install', which will take you through the one step necessary to set up your Wordpress blog.
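
For reference, the database settings the installer writes into wp-config.php end up looking something like this (these are my throwaway test credentials and endpoint - yours will differ):

define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'wordpress');
define('DB_HOST', 'wordpresstest.cytkniw1nwrn.us-east-1.rds.amazonaws.com');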

Note: we have a little bit of a security risk in ~wp/wordpress/ but we're going to leave that open until we set up our cache.

Congratulations! You now have a functional cloud-hosted Wordpress install. We've only got a couple of steps left to make this scalable - we need to set up S3/Cloudfront support, set up a load balancer and create a couple of clones of our server. Jump down to step 7.

Step 7: Setting up S3 support.

We're going to use the ridiculously powerful 'w3 total cache' plugin to serve files off of cloudfront. Make sure you're still logged into the server via SSH as the `wp` user, and run

sudo su - wp # if necessary
cd ~
wget http://downloads.wordpress.org/plugin/w3-total-cache.0.9.1.2.zip
unzip w3-total-cache.0.9.1.2.zip
mv w3-total-cache wordpress/wp-content/plugins/

Now go into your Wordpress admin backend and click on 'Plugins' and activate the 'W3 Total Cache' plugin. Then click on 'Settings'.

We're going to cache everything all-to-hell. Let's start with "General Settings",

Page Cache -> Enable and set to Memcached
Minify -> Enable and set to Memcached
Database Cache -> Enable and set to Memcached
Object Cache -> Enable and set to Memcached
CDN -> Enable and set to Cloudfront
Browser Cache -> Enable

Click 'Save Changes'

Quick note: the reason we use Memcached everywhere is that we need to support multiple servers. If you are on a single server and your configuration allows it, turning on "disk enhanced" page caching will actually result in better performance, as it writes static files to disk where it can and then serves them with apache, obviating the need to hit PHP at all.

We now need to configure a few more settings - let's start with the CDN. Go to aws.amazon.com, click on 'Account' in the top right and then 'Security Credentials', get your 'Access Key ID' and 'Secret Access Key', and fill them into your Wordpress config. Now pick a bucket name - bucket names are shared across all of S3, so pick something unique that matches your domain name - click 'Create bucket' and then 'Save Changes'.

Now follow the prompts at the top of the page to upload your wp-includes, theme files, minify files and custom files.

You can now hit the "Deploy" button to deploy your changes and check your site from the front end (using the public DNS name) to make sure everything is working as expected - you should be serving all your assets off of the amazon CDN.
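
Once W3 Total Cache has written out its configuration and any .htaccess rules it wants, you can also close up the world-writable hole we left on the Wordpress root back in step 6 (wp-content still needs to stay writable for uploads and the cache config). Run this as the ubuntu user, since wp isn't a sudoer:

sudo chmod 755 /home/wp/wordpress/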

Step 8: Server name hack

We're now going to put in a little server name hack - the goal of which is to allow our wordpress blog to be served off of multiple hostnames - one for the front end and a different one for the backend.

We need to do this so that the admin panel will only use one server to upload files from the backend: while the front end will be served off of the load balancer, the back end will be served off of one specific server. The reason for this is that we want media uploads to stay on a single server. We're going to put in a little apache2 mod_rewrite hack as well as use a simple plugin to make this happen.

Note: this is the only part of the setup I really don't like - if someone else has a more elegant solution please point me to it! Some other options include turning on stickiness in the load balancer and running a background cron to automatically copy files between all your servers. This, however, makes it more difficult to easily bring servers up and down.

Fetch the copy of the plugin I made (described here) and install it:

cd ~wp
wget http://static.cykod.com/wp/multiple_server_names.php
mv multiple_server_names.php wordpress/wp-content/plugins/

Now activate the plugin from the Wordpress plugins page.

We now need to decide on the domain name that this site is going to show up on. Because of the way that Amazon's load balancer works, we can't just point myblogname.com at an IP address and be done - we're going to have to be a little smarter than that. Here's what we're going to do (a sketch of the eventual DNS zone is below the list):

myblogname.com - point this to a couple of our Elastic IPs on our EC2 instances; we'll redirect to www.myblogname.com
www.myblogname.com - we'll point to this with a CNAME to the domain name of our load balancer
admin.myblogname.com - we'll point this with an 'A' record to the IP address of our first admin instance
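
When you do set this up with real DNS, the zone ends up looking roughly like the following (the IPs and load balancer hostname here are made up - use your own Elastic IPs and ELB name):

myblogname.com.        IN A      184.73.200.10
myblogname.com.        IN A      184.73.200.11
www.myblogname.com.    IN CNAME  word-press-test-574557464.us-east-1.elb.amazonaws.com.
admin.myblogname.com.  IN A      184.73.200.10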

For now, just open up your /etc/hosts file and add in the following entries (replace myblogname.com with whatever you'd like to use as a name - since we're editing /etc/hosts you can use any name, including one you don't actually own), and replace 204.236.234.251 with whatever your server's IP address is:

204.236.234.251 myblogname.com
204.236.234.251 www.myblogname.com
204.236.234.251 admin.myblogname.com

Now edit the `/etc/apache2/sites-enabled/default` file on the ec2 instance and add in the following lines (replacing myblogname.com with whatever name you're using):

RewriteEngine on

RewriteCond %{HTTP_HOST} !^admin.myblogname.com$
RewriteRule ^/wp-admin/(.*)$ http://admin.myblogname.com/wp-admin/$1 [QSA,L,R=301]

RewriteCond %{HTTP_HOST} !^admin.myblogname.com$
RewriteRule ^/wp-login.php$ http://admin.myblogname.com/wp-login.php [QSA,L,R=301]

RewriteCond %{HTTP_HOST} ^myblogname.com$
RewriteRule ^(.*)$ http://www.myblogname.com/$1 [QSA,L,R=301]

RewriteCond %{HTTP_HOST} ^admin.myblogname.com$
RewriteCond %{REQUEST_URI} !^/(wp-content|wp-includes|wp-admin|wp-login.php)
RewriteRule ^(.*)$ http://www.myblogname.com/$1 [QSA,L,R=301]

This will automatically kick us over to admin.myblogname.com when we go to login or hit a page in the admin section.
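
Before touching real DNS you can sanity-check the redirects with curl from your own machine (relying on the /etc/hosts entries above):

curl -I http://www.myblogname.com/wp-admin/
# expect: HTTP/1.1 301 Moved Permanently
# Location: http://admin.myblogname.com/wp-admin/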

Step 9: Memcached settings

There's one slight kerfuffle in the way memcached is configured - it's currently set up to use the server at 'localhost'. While this is fine for now when we only have one server, it's not going to work as we scale out to multiple servers. What we need to do is put one or more actual IP addresses or domain names into the memcached server location so that any servers we add will talk to the same set of memcached servers - otherwise cache expiration won't work correctly.

First we need to make memcached listen on all network interfaces rather than just localhost, so the other web servers can reach it (you'll also want to make sure your EC2 security group lets the instances talk to each other on port 11211 - authorizing the 'test' group to itself does the trick). From the SSH shell, let's open up the memcached configuration on the EC2 server:

sudo vi /etc/memcached.conf

And comment out the line that says '-l 127.0.0.1' (line 35 in my case) by putting a # in front of it.

Now restart memcached:

sudo /etc/init.d/memcached restart
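
You can confirm it's now listening on all interfaces rather than just loopback:

sudo netstat -tlnp | grep memcached
# should show memcached bound to 0.0.0.0:11211 rather than 127.0.0.1:11211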

Since we have Memcached set up as the cache for a whole bunch of pieces you'll need to modify this setting in a bunch of places.

We're going to set the memcached server name to the "Private DNS" of the current server. You can find this from the EC2 console by checking the running server and looking in the bottom panel for the field labeled 'Private DNS' - in my case it's set to 'ip-10-205-2-23.ec2.internal' so the value we're going to put in is:

ip-10-205-2-23.ec2.internal:11211

Which is the private DNS name plus the port that memcached runs on by default.

Enter this value in the appropriate fields on each of the following subpages under "Performance" and click 'Save Changes' on each page:

Page Cache
Minify
Database Cache
Object Cache

When we scale extra servers all those servers will hit that one memcache server for their cache. If you have the traffic to make it necessary, you can add in multiple memcache servers to distribute requests - the system will automatically pick one of the servers based on a hash of the key value.
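
If I'm remembering the W3 Total Cache fields right, multiple memcached servers are just comma-separated host:port entries, something along these lines (the second hostname here is made up):

ip-10-205-2-23.ec2.internal:11211,ip-10-111-4-56.ec2.internal:11211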

Step 10: Minor admin book-keeping - Logrotate

Because we're using custom log files, let's add an entry to logrotate so that our log files won't eat up our entire server's EBS volume.

Open up a new file in /etc/logrotate.d called wordpress as root:

sudo vi /etc/logrotate.d/wordpress

And paste in the following text:

/home/wp/logs/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
copytruncate
create 0666 wp wp
}
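
You can dry-run the new entry to make sure logrotate parses it, without actually rotating anything:

sudo logrotate -d /etc/logrotate.d/wordpress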

Now we'll only keep 14 days of logs and won't overrun our EBS storage with log files. Of course each server will be keeping its own set of log files. If that's a problem you can use something like syslog-ng to pull it all together, but since many people rely on client-side analytics like Google Analytics these days it might not be an issue for you (it's not for us)

Now while you're in there, truncate the log files in your /home/wp/logs directory so you don't get any stale data in your EBS image.
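
Something along these lines does it (adjust the file names if you've got more logs in there):

sudo truncate -s 0 /home/wp/logs/access.log /home/wp/logs/error.log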

Step 11: Creating an EBS image of our server.

We're now ready to create a snapshot of our server and turn on multiple servers. If you've been following along just to play around with this, I've got a little bit of bad news: unless you actually modify the DNS of www.myblogname.com or whatever domain name you are using, you're not going to get to see the full Wordpress cloud in action. The reason for this is that you'll need a proper CNAME record pointing from your blog domain to the load balancer. You can fake it though if you like - just point the name of your blog to the name of the load balancer and update your Wordpress site name.

Ok, let's set up an EBS image to let us run multiple copies of the server so we can easily scale our site. From the AWS EC2 console, click the checkbox on your EC2 server and select 'Create Image (EBS Image)' from the 'Instance Actions' dropdown, then follow the prompts to create a snapshot.

Your server will be temporarily unavailable while the snapshot is being taken. Click on the 'AMIs' link in the console and wait until the AMI is finished bundling (be ready to wait for a bit, as this can take a while).

Step 12: Creating our multi-server cloud

Once the snapshot is done, we can launch new instances of our server with the click of a button. Click on "Launch Instance", then the "My AMIs" tab, and you should see your AMI bundled up nicely. Click select, then select "2" for the number of instances, press continue, and follow the prompts, making sure to use the same key pair and security group as when we launched the first time. Finally click "Launch" to launch the new servers.

If this were a production cloud, you would want to make sure you balance the servers out between multiple Availability Zones (like us-east-1a and us-east-1b) so that a problem or internet connectivity issue in a single zone doesn't take down your site.
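
If you're scripting the launches instead of clicking through the console, spreading one clone across each of two zones with the ec2-api-tools looks roughly like this (the AMI ID is a placeholder for your bundled image, and I haven't double-checked every flag, so treat it as a sketch):

ec2-run-instances ami-xxxxxxxx -n 1 -t m1.small -k test -g test -z us-east-1a
ec2-run-instances ami-xxxxxxxx -n 1 -t m1.small -k test -g test -z us-east-1b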

We're going to associate the new servers with their own Elastic IPs as well - so follow the same steps as outlined in step 3. The reason we want to use Elastic IPs is that when we set up round robin DNS we want to make sure that we don't lose an IP address that we have associated with our site when a machine goes down.

While we're waiting for those servers to start up, let's create a load balancer - click on "Load Balancers" on the left column of the EC2 console and then click the big 'Create Load Balancer' button.

Give the balancer a name (doesn't really matter what it is) and click "Continue"

Set the ping path to '/readme.html' (or any other static html file), leave the rest of the options as they are, and click "Continue" again.

Check the boxes next to your 3 servers and click "Continue" one more time.

Finally click "Create" to finalize the creation of the load balancer.

Now let's see that load balancer in action - get the DNS name of the load balancer from the EC2 console and go to your general settings in your wordpress site. Modify both the WordPress address and the Site address to be the domain name of your load balancer, in my case this was:

http://word-press-test-574557464.us-east-1.elb.amazonaws.com

You should be able to view the site and have it distribute your requests across all your web servers. You can log in and check the logs/ directory on each server to see that your requests are getting divided among them. The load balancer is also smart enough to make clients sticky if you like, so that all requests from an individual user will stay on an individual server. You should turn this on once you've seen that requests are being distributed correctly; you can do so from the AWS Console - just edit the load balancer and make it "sticky". (This will keep per-user session data stored correctly - I don't believe W3 Total Cache changes the session store to memcached, but this isn't something we tested in our case.)

Once that's working, and if you're doing this for real, set the DNS of www.myblogname.com to point at your load balancer and myblogname.com to round robin between two of your Elastic IPs, and you'll be off and running with a high-availability blog setup.

Conclusion:

First the bad - this setup has a few strikes against it:

1. Some Wordpress plugins don't play so nice with Total Cache, and installing them on multiple servers can be a pain. Ideally you should be able to install the plugin package on all your servers, run the install and be good to go; unfortunately some plugins (I'm looking at you "CForms") write to the file system, which makes this a little difficult and requires manual work-arounds. In addition, since Wordpress stores a lot of mutable files (like themes) in the file system, you'll end up copying files over to all the servers a fair amount whenever there are changes to the site. (We added an rsync script to handle this - a rough sketch is below the list.)

2. The admin server hack - forcing uploads to a single admin server is not ideal. It would be nice if files could exist authoritatively on S3 to avoid this and so that in the event of the admin server going down, all the files wouldn't have to be manually pulled from the bucket.

3. Updates are painful in general. Since taking a snapshot to create an AMI takes down the server you are snapshotting, you need to do the following steps to refresh your AMI: create a new server from your old AMI, push all the updates and all the files to the new server manually, and take a snapshot of that server.
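
For what it's worth, the rsync mentioned in strike one was nothing fancy - run from the admin server against each web server, roughly like this (the hostname is made up, and it assumes the wp user's key is in place):

rsync -avz -e "ssh -i ~/test.pem" /home/wp/wordpress/wp-content/ wp@web1.myblogname.com:/home/wp/wordpress/wp-content/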

For a site that's going to spend a long time on the cloud and isn't just a temporary deployment (like ours was), you'll need some additional tools to keep everything up to date without a lot of manual hassle.

The Good:

In the end the campaign went off without a hitch and the setup handled the pressure no problem - once the initial spike was over we dropped down to two servers for the duration of the deployment. Wordpress properly configured with Total Cache is a beast, able to handle a ton of traffic even without scaling beyond a single server, but being able to easily add additional servers with a single click (or with auto-scaling) is a big plus. You'll be able to scale just about as far as you could ever need to with this sort of setup simply by upgrading the size of your EC2 and RDS instances and adding more memcached servers. At some point you could reach a theoretical limit based on your RDS database, but without a huge amount of database writes this would be a pretty darn big number.

What's next: This is actually just the first of a series of posts on deploying to the cloud. The next post will be an (even more lengthy) tutorial on how to create a repeatable cloud infrastructure with Ruby and Chef that solves the three major problems outlined above. Subscribe to our RSS feed or follow us on Twitter @cykod if you're interested.

Posted Thursday, Sep 16 2010 01:05 PM by Pascal Rettig

Comments

Posted by David at 03:56AM on September 24 2010

This is a very helpful post. I’m developing a WP-based site and have the staging environment on EC2/S3/RDS. For production, I’m going to want to balance the frontend across multiple servers.

One question comes to mind… How did you deal with multiple wp-crons running? Or did it matter for your app? In my case, wp-cron does a bunch of CPU-heavy work, so I want to segregate it on a distinct server. My thought was to disable wp-cron in the WP config file for all servers but the cron runner. I was wondering if you found a more elegant solution.

Thanks for the great post!

Posted by Pascal Rettig at 12:54PM on September 24 2010

Hi David,

We didn’t use wp-cron, but if you need it I would just recommend running a normal ‘cron’ via crontab on one of the servers that executes the wp-cron code via PHP CLI – that also means if you’re doing heavy lifting you don’t need to worry timeouts,etc.

Posted by David at 07:58PM on September 24 2010

Pascal,

Yes, that makes complete sense. I was over-thinking it!

Thanks,

David.

Posted by hoberion at 04:04AM on April 05 2011

“For available Ubuntu Server AMI’s, the Alestic AMI’s are the way to go”

Why not the official ubuntu server AMI’s from canonical?

Posted by hoberion at 04:06AM on April 05 2011

ah, never mind, I see.. the EBS boot

btw thx for the writeup! (saves me alot of time!)

Posted by Hung Bui at 08:56AM on April 05 2011

Amazing!!

Posted by Joshua Rusch at 02:54PM on June 26 2011

Without getting into the exact details and syntax…you can use mod rewrite and mod proxy to keep your admin site on a single server without having to mess around with URLs…

Something like

RewriteCond %{SERVER_ADDR} !youradminip
RewriteCond %{REQUEST_URI} ^/(wp-admin|wp-login.php)
RewriteRule ^(.*)$ http://youradminip/$1 [P,L]

It’s also important to set
ProxyPreserveHost On

that way when you proxy to the IP address of the admin server, it sends the www.myblog.com host request header along with it.

Just a thought. I don’t actually have a setup like this.

The syntax/setup might not be 100%, but you seem like you could figure out any other details if you want to try it :)

Posted by Flo at 02:13PM on December 06 2011

I actually have the whole wp-content directory mounted to S3 using s3fs.

http://code.google.com/p/s3fs/wiki/FuseOverAmazon

I would assume this would eliminate the domain issues you’re describing, or perhaps even the need for an admin server…

Posted by Simon Chen at 02:17PM on January 18 2012

I try the idea that the whole wp-content directory mounted to S3 using s3fs, it seems very slow, don’t know if any other person do this?

Posted by dom at 06:45PM on May 10 2012

Has this how to improved or evolved over time? Have the 3 strikes been solved in any way?

Posted by Ben at 08:25AM on June 12 2012

Somebody got the issues here solved? I here s3 is not a good oppurtunity even when used as s3fs, file consistency wont be guaranteed. SB tried glusterfs?

Posted by Rob at 03:13PM on July 12 2012

Great post, but using WP to serve the end-user is a resource hog. Have you considered adding Varnish? You can easily server over 10M visitors/day with a single ec2 micro-instance.

The best of Varnish is actually scaling it further; instead of rolling out complex replicas of WP, you can just another high performance Varnish cache server, point it to proxy the original WP site, and done …

Get started here: http://www.robgonda.com/2012/05/23/new-blog-new-theme-better-performance/

Posted by Romi at 02:20PM on October 03 2012

Hi,

Do you have some details on how to setup the CDN? That part of your documentation above is a bit vague and I am not conceptualizing how everything tights together. I assume you create the Access key and secret key, then create a S3 bucket but you also need to then create a AWS CDN I guess as Download Distribution and if so what are the values you used?

Any help is much appreciated.

Posted by Pascal Rettig at 12:49PM on October 04 2012

@Romi – For Cloudfront as CDN, just create a new cloudfront download distribution from the console, point it at your S3 bucket use that distribution URL.

Posted by Romi at 06:32PM on October 16 2012

Hi,

The wp-admin URL redirection doesn’t work for some reason. Eventhough I installed the plugin to support multiple host names.

Assuming my blog listens to http://www.domain.com and I want to access the admin as follow:

http://admin.domain.com/wp-admin I get to the page but when try to log in it keeps reseting the logging page and doesn’t log me in. If I go to http://www.domain.com/wp-admin everything works just fine. Any ideas what the problem may be?

Romi
