Posts tagged IPMI
As I mentioned in my previous post, I had little trouble getting my server to work in the Serverius datacenter, with the exception of IPMI. This feature is built into the motherboard I used: a Supermicro X9SCL-F (the “F” apparently indicates IPMI support).
IPMI provides a number of things for which you’d normally need extra hardware. By simply using your browser (or the Supermicro IPMIView software) you can view sensor readings (temperatures, fan speeds, etc.), reboot the server if the OS has crashed, and it even does console redirection, which means you don’t need a separate KVM-over-IP device to access the console or change BIOS settings remotely.
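Since the BMC speaks standard IPMI, the same things should also be scriptable with the common ipmitool utility over the network (a sketch; the host address, user name and password below are placeholders, not my actual setup):

```shell
# Read all sensors (temperatures, fan speeds, voltages):
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P secret sensor

# Power-cycle a hung machine without a trip to the datacenter:
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P secret chassis power cycle

# Serial-over-LAN console redirection (if SOL is enabled in the BIOS):
ipmitool -I lanplus -H 192.0.2.10 -U ADMIN -P secret sol activate
```

These commands talk to remote hardware, so treat them as a configuration recipe rather than something to run as-is.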
Furthermore, this particular Supermicro board doesn’t need a dedicated network port for IPMI, so you can share IPMI with one of the LAN ports. To enable IPMI you only need to specify an IP in the BIOS, and the NIC will happily route traffic to the right device (the server itself or IPMI). It seemed to work great at home.
But at the datacenter I noticed one particular problem: there was no way to specify the default gateway for the IPMI device, except from the OS itself (which it would then reset again upon reboot). This made the IPMI feature impossible to use (except maybe from adjacent servers on the same subnet).
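For reference, setting the BMC’s network parameters from the running OS can be done in-band with ipmitool (a hedged sketch: channel 1 is an assumption, though it is the usual LAN channel on Supermicro boards, and the addresses are examples). With the old BIOS this was the only way to set a gateway, and the setting did not survive a reboot:

```shell
ipmitool lan set 1 ipaddr 192.0.2.10       # example BMC address
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.0.2.1  # the setting the BIOS lacked
ipmitool lan print 1                       # verify the result
```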
The solution turned out to be a BIOS upgrade (not an IPMI firmware upgrade) for the motherboard. My board had v1.0b, and a quick Google search showed that the latest version (v1.0c) fixed the gateway issue.
So now all I needed to do was upgrade the BIOS, except that the only method available for that was DOS. Needless to say, it took me some trouble to find a way to get DOS to boot from a USB flash drive (fortunately this method worked) and then to upgrade the BIOS with the server still in the rack.
Anyway, lesson learned: if you’re building a server with a similar motherboard, don’t forget to update the BIOS before you go to the datacenter.
The only problem that now remains is that when I do a reboot the IPMI connection “goes away” for 30-60 seconds, making it impossible to get into the BIOS (by the time IPMI works again, the server has already fully booted). This appears to be a known problem with Cisco devices (which is what my server is hooked up to). I’m not sure yet whether I’ll ask the datacenter to enable “portfast” on the port, or wait until I get a private (half) rack in the future so that I also own and operate the switch myself.
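The delay fits the usual explanation: by default, spanning-tree holds a newly linked-up port in the listening/learning states for roughly 30 seconds before forwarding traffic. On a Cisco switch, portfast skips that wait for edge ports, so the fix would be a config fragment along these lines (the interface name is just an example):

```
interface GigabitEthernet0/12
 spanning-tree portfast
```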
I’m not entirely sure how long ago I started using dedicated servers for my sites (only one for the first few years though), but it must be around 10 years or more. My account on WHT.com is over 10 years old, and the oldest posts there suggest I had a Cobalt RaQ at the time, while I’m pretty sure that wasn’t the first dedicated server I ever used.
The advantage and disadvantage of dedicated servers is that you’re just renting the hardware. On the one hand this means your host is responsible for any hardware trouble, which can be quite convenient at times. On the other hand, if you need really powerful hardware (high-end CPU, lots of RAM, fast RAID) you’re paying a hefty sum each month for that hardware.
The advantage of colocation is that you can bring your own hardware and the price will typically remain the same (though you might pay more for power usage and rack space for really big and powerful servers). The downside, of course, is that if anything goes wrong (beyond the scope of the datacenter’s “remote hands” at least) you’re responsible for it, which might mean a trip to the datacenter (which takes time, and it may be hard to get replacement parts in the middle of the night or on weekends).
For my first colocated server I settled on a Supermicro 510T-200B case with a Supermicro X9SCL-F motherboard as the base. For the CPU I picked the Intel G620T, a Sandy Bridge dual-core with roughly the same power as a Core2Duo E8400 but with only a 35W TDP. For RAM I just picked cheap 8 GB Kingston ValueRAM, and the drives are a cheap OCZ Vertex 2 60 GB SSD for the OS and a simple 2.5″ 320 GB SATA drive for backups.
The result is a quite small and low-power server which should be fast enough for most purposes. Having said that, the case was a bit tight, so next time I’ll probably spend a little more on the case and get a Supermicro 813MT-350CB instead, which also has space for four normal 3.5″ drives and uses rails instead of cage nuts.
The server is colocated (as of Wednesday August 24) at Serverius, who had a very good deal on WHT: 1U, 25 TB and 1 ampere of power (my server will use only about 25% of that) for 39 euro per month (excl. VAT). They are clearly overselling on bandwidth with this deal, but I don’t mind as I don’t really need that much anyway.
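For a sense of scale, a back-of-the-envelope check of what that deal works out to (a rough sketch, assuming decimal terabytes, a 30-day month, and the 230 V mains common in the Netherlands):

```python
TB = 10**12                      # decimal terabyte
seconds_per_month = 30 * 24 * 3600

# 25 TB/month expressed as an average sustained rate in Mbit/s
avg_mbps = 25 * TB * 8 / seconds_per_month / 1e6
print(f"25 TB/month ~ {avg_mbps:.0f} Mbit/s sustained")  # ~77 Mbit/s

# 1 A at 230 V, and the ~25% of it the server actually draws
max_watts = 1 * 230
print(f"1 A @ 230 V = {max_watts} W, ~25% of that ~ {0.25 * max_watts:.0f} W")
```

In other words, burning through the full 25 TB would mean pushing roughly 77 Mbit/s around the clock, which explains why a deal like this only works if most customers (like me) use far less.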
Now for the embarrassing part: when I showed up, it turned out I had forgotten to bring just about everything except the server itself. Serverius normally requires customers to bring their own power cable, CAT cable and cage nuts (for which you also need a screwdriver), and I didn’t bring any of that. Well, I did bring a power cable, but the wrong one: they required the extension-cable kind, not a power cable with a normal plug.
But fortunately they were courteous enough to lend me all of that. And while they seem boringly simple, cage nuts are actually quite expensive, apparently because of the specifications they have to meet. Anyway, I just have to remember to bring my own (to replace what I borrowed) next time 🙂
Although the spot they allocated me in the rack was quite hard to get to (near the top, just below their speedtest server), I was able to install the server without too much trouble. The real problem was getting IPMI to work. But for more on that, see my next post.
At this time I’ll see how things go for a few weeks, and if everything goes as well as I hope, I’ll probably start colocating most of my servers, possibly even in a private half rack. The only ones I might keep renting are those that need loads of bandwidth, as it is still simply cheaper to rent those from providers who oversell their bandwidth (or who can otherwise get better bandwidth deals than would be possible with colocation).