The Hosting Debate
Matt has been blogging recently about the options faced by people maintaining several reasonably sized web sites. I figure I’ll add my 2 cents on the issue, if only to follow up on my earlier post about the joys of dedicated server-dom.
For many, there’s little ‘choice’ in moving to a dedicated platform; their hand is forced by load and demand on their sites. That accounts for a fair proportion of the people using dedicated servers, but there’s also a contingent who switch for the other advantages a dedicated environment can bring. Personally, I moved my sites to remove points of failure and to maximise the control I have over my environment. I can say from experience that there’s not much more frustrating than having your sites down with nothing you can do about it, especially when they’re critical to your business.
The jump to a dedicated server is a key decision, and it can be jarring. For a start, prices begin at around $80-100 a month, a significant leap up from the average shared/virtual hosting account. It often entails a significant rethink of how the target sites are developed, deployed and configured, and it absorbs time researching the best deals, hosts and reputations in the industry. I can’t say there’s any specific way to make this easier, but going into the process with your needs precisely established helps a lot. If you know you need to serve X concurrent requests and push Y GB of traffic a day, the search becomes a much simpler exercise in matching those requirements.
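Turning a daily traffic figure into a bandwidth requirement is simple arithmetic, and worth doing before shopping around. A minimal sketch (the 3x peak factor is an assumption, not a measured value; plug in your own numbers):

```python
# Rough capacity planning: convert a daily traffic figure into the
# sustained and peak bandwidth a host would need to provide.
# The peak_factor of 3.0 is an assumed ratio of peak to average load.

def required_bandwidth_mbit(gb_per_day: float, peak_factor: float = 3.0):
    """Return (average, estimated peak) bandwidth in Mbit/s."""
    bits_per_day = gb_per_day * 1024 ** 3 * 8
    avg_mbit = bits_per_day / 86_400 / 1_000_000   # seconds per day
    return avg_mbit, avg_mbit * peak_factor

avg, peak = required_bandwidth_mbit(5.0)   # e.g. 5 GB/day
print(f"average {avg:.2f} Mbit/s, peak ~{peak:.2f} Mbit/s")
```

Even 5 GB a day averages out to only about half a megabit sustained, which is why the peak factor matters more than the daily total when sizing a connection.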
To respond directly to Matt’s first post, I’d say that when choosing a dedicated host, making sure they’re as big and as well recommended as possible is a good start. I spent about two months looking around the market and found that a lot of smaller providers were reselling capacity in datacenters owned by the bigger players, or, if you were lucky, colocating hardware there instead of reselling the stock dedicated fare. EV1, ServerMatrix and ServerBeach came up frequently as the biggest players in the dedicated server market, and each operates from its own datacenters (ServerBeach is Rackspace’s budget line, and ServerMatrix likewise for ThePlanet). Colocation is an interesting option, but one that leaves you exposed to complex issues should the hardware fail for any reason; paying support technicians to reboot machines and deal with problems can get uneconomical. Letting the provider take care of the hardware just makes a lot more sense to me when weighing the costs and benefits.
As an aside, I’ve found in my own experience that memory really matters on Apache/PHP/MySQL configurations. After performance optimisation, my server’s mysqld processes take around 33MB of memory each, and Apache somewhere around 19-20MB per worker. Even with just my 10-15 sites on the server, it happily uses 900MB of RAM keeping these processes around to serve requests as quickly as possible. Being able to keep a stack of worker processes warm has been where the performance increase is most notable, and lots of RAM is a great thing…
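The budgeting behind those numbers can be sketched in a few lines: given the per-process footprints above, how many Apache workers fit in the remaining RAM? (The total RAM, reserved headroom, and mysqld process count here are illustrative assumptions, not figures from my box.)

```python
# Back-of-the-envelope RAM budgeting for an Apache prefork setup,
# using the per-process footprints mentioned above. The totals and
# process counts are illustrative assumptions.

TOTAL_RAM_MB = 1024   # assumed: box with 1GB of RAM
RESERVED_MB  = 100    # assumed headroom: kernel, sshd, misc daemons
MYSQLD_MB    = 33     # observed per-mysqld-process footprint
MYSQLD_PROCS = 4      # illustrative
APACHE_MB    = 20     # observed per-worker footprint (upper bound)

available = TOTAL_RAM_MB - RESERVED_MB - MYSQLD_MB * MYSQLD_PROCS
max_workers = available // APACHE_MB   # a sane ceiling for MaxClients
print(f"{available}MB free for Apache -> up to {max_workers} workers")
```

Capping Apache’s worker count at something like this figure is what keeps the box out of swap when traffic spikes; once workers start paging, response times fall off a cliff.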
The host-at-home option is ever popular, and something I’ve been doing for quite a while with a staging server for new projects. It holds a lot of the general stuff I want to show other people but can’t really be bothered to upload to the dedicated server. Unfortunately my connection is rather standard UK fare, and with a paltry 256kbit upstream the server can be pretty slow for anyone on broadband. (My ISP isn’t helping matters much by upping my connection to 10240/385 from 2048/256 next month… I wonder if I’ll even be able to send enough TCP ACKs with that upload to saturate 10Mbit down.) It’s an option I wouldn’t seriously consider for anything more important than personal file space. Constant availability of sites on the web is important, and compromising it just isn’t viable; people get sick of websites that are variably up or down very quickly.
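The ACK worry turns out to be unfounded, and the arithmetic is quick to check. A rough sketch, assuming 1460-byte MSS, 40-byte IP/TCP headers, delayed ACKs (one ACK per two segments), and ~40-byte ACK packets; real-world overheads vary:

```python
# Rough check: how much upstream bandwidth does a saturated
# downstream link consume in pure TCP ACK traffic?
# Assumptions: 1460-byte MSS, 40-byte IP/TCP headers, delayed ACKs
# (one ACK per two full segments), ~40-byte ACK packets.

def ack_upstream_kbit(down_kbit: float, mss: int = 1460,
                      ack_bytes: int = 40, segs_per_ack: int = 2) -> float:
    seg_bytes = mss + 40                            # payload + headers
    segments_per_sec = down_kbit * 1000 / 8 / seg_bytes
    acks_per_sec = segments_per_sec / segs_per_ack
    return acks_per_sec * ack_bytes * 8 / 1000      # back to kbit/s

print(f"{ack_upstream_kbit(10240):.0f} kbit/s of ACKs to fill 10Mbit down")
```

Under these assumptions it comes out to roughly 135-140 kbit/s of ACK traffic, so a 385kbit upstream should saturate the 10Mbit downstream comfortably, as long as nothing else is hogging the upload.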
As for my own dedicated server, I couldn’t be happier with it. It has run like clockwork (fingers crossed) since I finished configuring it, and I’ve just tweaked around the edges since to get it exactly to my liking. It’s tuned to use a lot of resources to make things happen quickly, but it also seems to scale well to higher loads using the latent capacity configured, so I’m pleased. I run a very lean configuration; mail/DNS/etc. are all handled by other services and organisations, so the box is left purely to run the Apache/PHP/MySQL stack. I’d like to think it presents a pleasingly small attack surface (pretty much nothing except apache and sshd listening) and is generally more efficient for it.