Supermicro X9SCL/X9SCM: update the BIOS for IPMI

As I mentioned in my previous post, I had little trouble getting my server to work in the Serverius datacenter, with the exception of IPMI. This feature is built into the motherboard I used: a Supermicro X9SCL-F (the “F” suffix apparently indicates IPMI support).

IPMI provides a number of features for which you’d normally need extra hardware. By simply using your browser (or the Supermicro IPMIView software) you can view sensor readings (temperatures, fan speeds, etc.), reboot the server if the OS has crashed, and even use console redirection, which means you don’t need a separate KVM-over-IP device to access the console or change BIOS settings remotely.

Furthermore, this particular Supermicro board doesn’t need a dedicated network port for IPMI, so you can share IPMI with one of the LAN ports. To enable IPMI you only need to specify an IP in the BIOS and the NIC will happily route traffic to the right device (the server itself or IPMI). It seemed to work great at home.

But at the datacenter I noticed one particular problem: there was no way to specify the default gateway for the IPMI device, except from the OS itself (which would then be reset again upon reboot). This made the IPMI feature impossible to use (except maybe from adjacent servers on the same subnet).

The solution turned out to be a BIOS upgrade (not an IPMI firmware upgrade) for the motherboard. My board had v1.0b and a quick Google search showed that the latest version (v1.0c) fixed the gateway issue.

So now all I needed to do was upgrade the BIOS, except that the only method available for that was using DOS. Needless to say it took me some trouble finding a way to get DOS to boot from a USB flash drive (fortunately this method worked) and then to upgrade the BIOS with the server still in the rack.

Anyway, the lesson learned: if you’re building a server with a similar motherboard don’t forget to update the BIOS before you go to the datacenter.

The only problem that now remains is that when I do a reboot the IPMI connection “goes away” for 30–60 seconds, making it impossible to get into the BIOS (by the time IPMI works again, the server has already fully booted). This appears to be a known problem with Cisco switches (which is what my server is hooked up to). I’m not sure yet if I’ll ask the datacenter to enable “portfast” on the port, or wait until I get a private (half) rack in the future so that I also own and operate the switch myself.

Finally entered the world of Colocation

I’m not entirely sure how long ago I started using dedicated servers for my sites (only one for the first few years though), but it must be around 10 years or more. My account on WHT.com is over 10 years old, and the oldest posts there suggest I had a Cobalt RaQ at the time, while I’m pretty sure that wasn’t the first dedicated server I ever used.

The advantage and disadvantage of dedicated servers is that you’re just renting the hardware. On the one hand this means your host is responsible for any hardware trouble, which can be quite convenient at times. On the other hand, if you need really powerful hardware (high-end CPU, lots of RAM, fast RAID) you’re paying a hefty sum each month for that hardware.

The advantage of colocation is that you can bring your own hardware and the price will typically remain the same (though you might pay more for power usage and rack space for really big and powerful servers). The downside of course is that if anything goes wrong (beyond the scope of “remote hands” from the datacenter at least) you’re responsible for it, which might mean a trip to the datacenter (which will take time, and it may be hard getting replacement parts in the middle of the night or in weekends).

For my first colocated server I’ve settled on a Supermicro 510T-200B case with a Supermicro X9SCL-F motherboard as the base. For the CPU I picked the Intel G620T, a Sandy Bridge dual-core CPU with roughly the same performance as a Core 2 Duo E8400, but with only a 35W TDP. For RAM I just picked cheap 8 GB Kingston ValueRAM, and the drives are a cheap OCZ Vertex 2 60 GB SSD for the OS and a simple 2.5″ 320 GB SATA drive for backups.

The result is a quite small and low-power server which should be fast enough for most purposes. That said, the case was a bit tight, so next time I’ll probably spend a little more and get a Supermicro 813MT-350CB instead, which also has space for four normal 3.5″ drives and uses rails instead of cage nuts.

The server is colocated (as of Wednesday, August 24) at Serverius, who had a very good deal on WHT: 1U, 25 TB and 1 ampere of power (my server will use only about 25% of that) for 39 euro per month (excl. VAT). They are clearly overselling bandwidth with this deal, but I don’t mind as I don’t really need that much anyway.

Now for the embarrassing part: when I showed up, it turned out I had forgotten to bring just about everything except the server itself. Serverius normally requires customers to bring their own power cable, CAT cable and cage nuts (for which you also need a screwdriver), and I didn’t bring any of that. Well, I did bring a power cable, but the wrong one: they required the extension-cable kind, not a power cable with a normal plug.

But fortunately they were courteous enough to lend me all of that. And while they seem boringly simple, cage nuts are actually quite expensive, which apparently has to do with the specifications they have to meet. Anyway, I just have to remember to bring my own (to replace what I borrowed) next time 🙂

Although the spot they allocated me in the rack was quite hard to get to (near the top, just below their speedtest server), I was able to install the server without too much trouble. The real problem was getting IPMI to work. But for more on that, see my next post.

For now I’ll see how things go for a few weeks, and if everything goes as well as I hope, I’ll probably start colocating most of my servers, possibly even in a private half rack. The only ones I might keep renting are those that need loads of bandwidth, as it is still simply cheaper to rent those from providers who oversell their bandwidth (or who can otherwise get better bandwidth deals than would be possible with colocation).

Multiple DNS servers with PowerDNS and MySQL replication

With DNS it is essential to have at least two, and preferably more, DNS servers for your domains in geographically separated locations. Putting all your DNS servers on the same machine is asking for trouble: even if that server goes down for only a little while (like a reboot), some visitors may be unable to reach your sites for much longer due to negative DNS caching (where a visitor’s ISP’s resolving DNS server will “remember” that your site “does not exist” for a while).

There are of course several commercial DNS hosting providers that can solve this problem for you, but most of them charge by how much DNS traffic your domains generate. For certain types of very popular sites (like image and file hosting sites) that may be costly. Or perhaps you simply want to maintain your own DNS servers.

The solution is to have several DNS servers powered by PowerDNS using MySQL as a backend, synchronizing them not through any DNS-specific mechanism but simply through MySQL replication. As the main DNS server you could use your own server, and as secondary servers you can use other servers or cheap VPSes.

Because PowerDNS supports caching, using MySQL as a backend is not going to be a performance issue unless you provide DNS for a really large number of domains (and in that case, just get beefier hardware or more servers). For information on how to set up PowerDNS with MySQL, see the official documentation.
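As a reference point, the relevant part of pdns.conf for the MySQL (gmysql) backend looks roughly like this; the host, credentials and database name below are placeholders for your own setup:

```ini
# Load the generic MySQL backend
launch=gmysql

# Connection details (placeholders -- use your own values)
gmysql-host=127.0.0.1
gmysql-user=pdns
gmysql-password=secret
gmysql-dbname=pdns
```

The same configuration is used on the master and on each replicated slave, since every PowerDNS instance simply reads from its local copy of the database.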

To set up MySQL replication I recommend this guide, although the part on the first page about getting a snapshot of the master server (using the lock/unlock commands) is a bit obsolete if you used InnoDB to create the tables for PowerDNS. With InnoDB you can get a snapshot without any interruption with a single command:

mysqldump --opt --single-transaction --flush-logs --master-data=1 pdns > pdns.sql

The “--master-data=1” bit even includes the right CHANGE MASTER TO statement in the dumped SQL file, so you don’t need to manually specify the master position; you only need to load the SQL dump and restart the slave. Be aware though that MySQL replication can sometimes break (for example if one of the servers was uncleanly rebooted), so occasionally check whether replication is still working, either from phpMyAdmin or with the SHOW SLAVE STATUS command. For DNS troubleshooting I highly recommend intoDNS.com.
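If you’d rather automate that check than eyeball phpMyAdmin, a small script can parse the output of `SHOW SLAVE STATUS\G` (as printed by the mysql command-line client) and verify that both replication threads are running. This is just a sketch; the sample output below is abbreviated and hypothetical:

```python
def slave_status_ok(raw: str) -> bool:
    """Parse `SHOW SLAVE STATUS\\G` output from the mysql CLI and
    report whether replication looks healthy (both threads running)."""
    fields = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return (fields.get("Slave_IO_Running") == "Yes"
            and fields.get("Slave_SQL_Running") == "Yes")

# Abbreviated, hypothetical sample of what the mysql CLI prints:
sample = """
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
"""

print(slave_status_ok(sample))  # True when both threads run
```

You could feed it the real output via `mysql -e 'SHOW SLave STATUS\G'` from cron and have it e-mail you when replication stops; checking Seconds_Behind_Master as well would catch a slave that is running but lagging.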

I apologize that this post is not a straightforward how-to, but hopefully it will point you in the right direction.

Free image hosting: lessons learned (Part 1)

This is probably going to be a series of posts, hence the “Part 1” in the title. I have now run the same free image hosting site (ImageHost.org) twice: the first time was in the fall/winter of 2004 (only a few months indeed) and the second time lasted a lot longer: from fall 2007 to early 2011. During those times I encountered numerous problems, which I’d like to share.

First, as with any venture, think before you begin. Do at least some planning about what level of service you want to provide and how you might go about solving certain problems. For example, while there are undoubtedly some standard scripts you could use, you probably want to write your own scripts in order to create a unique experience and to be able to fix problems as they arise, rather than having to ask/pay someone else to fix them for you.

One such technical problem is scaling. While you can probably start your image hosting service on a VPS or entry-level server with limited resources, eventually you will reach a point where the daily amount of new uploads and image requests may be too much for a single server to handle. So plan how you are going to scale beyond a single server from the start.

Also think about how you are going to pay for hosting. Free image hosts consume a lot of resources, both in terms of hard drive space and bandwidth. To prevent your server’s HDDs from melting you might also want to invest in loads of RAM (for file caching) as well. Those kinds of servers don’t come cheaply. I can’t say I’m an expert in this area, as lack of profitability was one of the reasons I closed the site earlier this year.

But I can give some advice: try to find some way to discourage hotlinking to images (rather than viewing them on your site, where they’ll be accompanied by ads), and if necessary don’t hesitate to ban users or (referring) sites that do excessive hotlinking if they don’t contribute to your service in other ways. While hotlinking is a key feature of a free image host, it is also purely a money drain, as no ads are displayed when a user hotlinks to an image. The square box type ads work best, by the way, so find a way to put them beside an image (above or below it they probably won’t get noticed).

Also expect your service to be abused. Not always on a daily basis, but some abuse can be severe. For example, what if a spammer uses an automated script to upload thousands of images and uses them in a spam campaign? Expect to receive some nasty e-mails, and if you don’t respond soon enough your servers might even be turned off (even if they were not the source of the actual spam e-mails). Furthermore, no matter how well you try to educate users about your service’s rules, expect there to always be those who upload adult content (including the disturbing/illegal kind), gore/extreme content or copyrighted material. If you never want to be exposed to images like that, you’d better quit.

That’s it for this edition; expect future posts to go a bit more in depth regarding certain (technical) issues.


How to calculate kWh (kilowatt-hour)

A colocation provider I am interested in quotes €0.15 per “kWh” for power usage. They also mentioned “1 ampere equals about 166 kWh”. I wondered how they calculated these numbers and how I can calculate power usage in kWh myself.

The first thing you might want (or need) to calculate is how many Watts (W) or Amperes (A) a device (like a server) uses. Power usage can greatly depend on the components used and the load on the server (idle or full). The simple formulas are as follows:

Watt = Volt * Ampere
Ampere = Watt / Volt

So if you have a device that uses 1 Ampere at 230 Volt (the default voltage in the Netherlands), it’ll use 230 Watt. Over an hour it will use 230 Watt-hours (Wh).

The k in kWh stands for “kilo”, or 1000, so 230 Wh equals 0.23 kWh. And the 166 kWh (see the quote in the first paragraph) is actually 166,000 Wh. But where did they get that figure? The answer is that they calculate power usage per month. There are 720 hours in a 30-day month (24 hours * 30 days). So their calculation was (rounded up):

1 Ampere * 230 Volt * 24 hours * 30 days = 165,600 Wh ≈ 166 kWh per month

Anyway, to calculate the monthly kilowatt-hour (kWh) usage of a device (assuming you already know how many Watts it uses; this can easily be found out using a Wattmeter), the formula is:

kWh per month = (Watt * 24 hours * 30 days) / 1000
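The formulas above are easy to put into a few lines of code, which also verifies the provider’s ≈166 kWh figure:

```python
def watts(volts: float, amps: float) -> float:
    # Watt = Volt * Ampere
    return volts * amps

def kwh_per_month(watt: float, hours_per_day: int = 24, days: int = 30) -> float:
    # kWh per month = (Watt * 24 hours * 30 days) / 1000
    return watt * hours_per_day * days / 1000

w = watts(230, 1)          # 1 Ampere at 230 Volt -> 230 Watt
print(w)                   # 230
print(kwh_per_month(w))    # 165.6, which rounds up to the quoted 166 kWh
```

At €0.15 per kWh that works out to roughly 165.6 * 0.15 ≈ €24.84 per month for a full ampere, which is useful to keep in mind when comparing colocation offers.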