Configure exim4 to send mail to other hosts

If you have just installed Debian (or another Linux distribution) you might be stumped as to why sending e-mails from web software (such as forum activation e-mails) is not working. The reason is that by default the exim4 mail software is set up not to deliver any mail to remote hosts. This is of course an excellent anti-spam measure, but not very useful if you really need a server to send mail out to the world.

The solution is really simple. Edit the file /etc/exim4/update-exim4.conf.conf and change the following line:

dc_eximconfig_configtype='local'

into:

dc_eximconfig_configtype='internet'
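
After saving the file the change still has to be applied. On Debian the usual way is to regenerate the exim4 configuration and restart the daemon; the exact commands may differ slightly per release, but something like this should do it:

update-exim4.conf
/etc/init.d/exim4 restart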

Once exim4 has picked up the new configuration you’re done! To test whether it really works, first create a file called “testmail” that looks like this:

Subject: exim4 mail test
Testing...
(blank line)

The “(blank line)” must obviously be a real blank line. Just press enter a few times and save the file. Then try to send it with the following command:

sendmail -v you@workingemail.com < testmail

Change the e-mail address to your own working e-mail address (a Gmail account, for example). The output of the sendmail command will give you a very detailed overview of how the e-mail was sent (or not). If the test was successful you’ll find the mail in your mailbox within a few minutes.
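
If the mail never arrives, the exim4 logs usually tell you why; on a default Debian install the main log is /var/log/exim4/mainlog, so you can keep an eye on the delivery attempts with something like:

tail -f /var/log/exim4/mainlog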

Solving “IPv6 addrconf: prefix with wrong length 48” permanently

If you have a recent distribution of Linux, you might find the message “IPv6 addrconf: prefix with wrong length 48” repeated a lot in syslog. If you Google this error message you’ll quickly find that this is because IPv6 auto configuration (sort of like DHCP) is failing. Now if you don’t want to bother with IPv6 yet or if you use static IPv6 (like my servers do) you don’t need IPv6 auto configuration.

A quick fix to solve the problem (as mentioned on sites like these) is to run the following commands:

echo 0 > /proc/sys/net/ipv6/conf/eth0/autoconf
echo 0 > /proc/sys/net/ipv6/conf/eth0/accept_ra
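
If you prefer the sysctl command over writing to /proc directly, these should have the same effect (assuming your interface is indeed eth0):

sysctl -w net.ipv6.conf.eth0.autoconf=0
sysctl -w net.ipv6.conf.eth0.accept_ra=0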

And yes, that solves the problem – until the next reboot, that is. The permanent solution mentioned on that site, however, does not work (as also confirmed by this IPv6 howto). The reason is that referring to all network interfaces using “all” in the following lines in /etc/sysctl.conf somehow doesn’t have the desired effect:

net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.all.accept_ra = 0

The simple solution is to refer to each network interface specifically. My servers have both eth0 and eth1 (2 NICs), so I set up /etc/sysctl.d/ipv6.conf as follows:

net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.eth0.autoconf = 0
net.ipv6.conf.eth0.accept_ra = 0
net.ipv6.conf.eth1.autoconf = 0
net.ipv6.conf.eth1.accept_ra = 0

If you have only one network interface, you can omit the “eth1” lines. Alternatively you can use pre-up commands as described in the IPv6 howto, though I think my solution is prettier.
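
To apply the settings in that file right away instead of waiting for the next reboot, you should be able to load it with sysctl, along these lines:

sysctl -p /etc/sysctl.d/ipv6.conf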

How to prevent cron & PowerDNS clogging syslog

On a default Debian installation both cron and PowerDNS will log to /var/log/syslog. If you are running very frequent cron jobs (like every 5 minutes) or an active PowerDNS server (or recursor), you’ll find syslog completely clogged with mostly unimportant messages. The solution, of course, is to have these two services write their log messages to their own log files.

In Debian Linux, you’ll need to change a few configuration files. First, open /etc/rsyslog.conf and change the following line:

*.*;auth,authpriv.none          -/var/log/syslog

into this (adding local0 and cron to the list of facilities that should not be logged to syslog):

*.*;local0,cron,auth,authpriv.none          -/var/log/syslog

Then uncomment the line just below that (remove the # sign):

cron.*                         -/var/log/cron.log

If you do not run PowerDNS you can skip to the end of this post. If you do run PowerDNS (server or recursor), create the file /etc/rsyslog.d/pdns.conf (for example with the command nano -w /etc/rsyslog.d/pdns.conf) with the following contents:

local0.* -/var/log/pdns.log

Then update your PowerDNS configuration to make use of this by changing the following section in /etc/powerdns/pdns.conf and/or /etc/powerdns/recursor.conf:

#################################
# logging-facility      Facility to log messages as. 0 corresponds to local0
#
logging-facility=0

In other words, uncomment the logging-facility line and set it to 0. After this, restart PowerDNS.
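
On Debian the init scripts are typically called pdns and pdns-recursor, so depending on which one(s) you run, restarting should look something like this:

/etc/init.d/pdns restart
/etc/init.d/pdns-recursor restart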

To keep the PowerDNS log file from growing out of control, you might want to add it to the list of log files that are rotated. Edit /etc/logrotate.d/rsyslog and add /var/log/pdns.log to the list (I typically add this line below /var/log/messages, just before the opening { bracket):

/var/log/messages
/var/log/pdns.log
{
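
For reference, on a default Debian install the resulting stanza then looks roughly like this (the exact rotation options may differ between releases, so keep whatever your file already has):

/var/log/messages
/var/log/pdns.log
{
        rotate 4
        weekly
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
        postrotate
                invoke-rc.d rsyslog rotate > /dev/null
        endscript
}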

Finally, restart rsyslog by running /etc/init.d/rsyslog restart

Supermicro X9SCL/X9SCM: update the BIOS for IPMI

As I mentioned in my previous post, I didn’t have much trouble getting my server to work in the Serverius datacenter, with the exception of IPMI. This feature is built into the motherboard I used: a Supermicro X9SCL-F (the “F” part apparently indicates IPMI support).

IPMI provides a number of things for which you’d normally need extra hardware. By simply using your browser (or the Supermicro IPMIView software) you can view sensor readings (temperatures, fan speeds, etc.), reboot the server if the OS has crashed, and it even does console redirection, which means you don’t need a separate KVM-over-IP device to access the console or change BIOS settings remotely.

Furthermore, this particular Supermicro board doesn’t need a dedicated network port for IPMI, so you can share IPMI with one of the LAN ports. To enable IPMI you only need to specify an IP address in the BIOS and the NIC will happily route traffic to the right device (the server itself or IPMI). It seemed to work great at home.

But at the datacenter I noticed one particular problem: there was no way to specify the default gateway for the IPMI device, except from the OS itself (which it would then reset again upon reboot). This would make the IPMI feature impossible to use (except maybe from adjacent servers on the same subnet).

The solution turned out to be a BIOS upgrade (not an IPMI firmware upgrade) for the motherboard. My board had v1.0b, and a quick Google search showed that the latest version (v1.0c) fixed the gateway issue.

So now all I needed to do was upgrade the BIOS, except that the only method available for that was using DOS. Needless to say it took me some trouble finding a way to get DOS to boot from a USB flash drive (fortunately this method worked) and then to upgrade the BIOS with the server still in the rack.

Anyway, the lesson learned: if you’re building a server with a similar motherboard don’t forget to update the BIOS before you go to the datacenter.

The only problem that now remains is that when I reboot the server, the IPMI connection “goes away” for 30-60 seconds, making it impossible to get into the BIOS (by the time IPMI works again the server has already fully booted). This appears to be a known problem with Cisco switches (which is what my server is hooked up to). I’m not sure yet whether I’ll ask the datacenter to enable “portfast” on the port, or wait until I get a private (half) rack in the future so that I own and operate the switch myself.
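
For reference, enabling portfast on a Cisco switch generally comes down to something like the following on the port in question (the interface name here is just an example, and since it’s the datacenter’s switch they would be the ones doing it):

interface GigabitEthernet0/1
 spanning-tree portfast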

Finally entered the world of Colocation

I’m not entirely sure how long ago I started using dedicated servers for my sites (only one for the first few years though), but it must be around 10 years or more. My account on WHT.com is over 10 years old, and the oldest posts there suggest I had a Cobalt RaQ at the time, while I’m pretty sure that wasn’t the first dedicated server I ever used.

The advantage and disadvantage of dedicated servers is that you’re just renting the hardware. On the one hand this means your host is responsible for any hardware trouble, which can be quite convenient at times. On the other hand, if you need really powerful hardware (high-end CPU, lots of RAM, fast RAID) you’re paying a hefty sum each month for that hardware.

The advantage of colocation is that you can bring your own hardware and the price will typically remain the same (though you might pay more for power usage and rack space for really big and powerful servers). The downside of course is that if anything goes wrong (beyond the scope of “remote hands” from the datacenter at least) you’re responsible for it, which might mean a trip to the datacenter (which takes time, and it may be hard to get replacement parts in the middle of the night or on weekends).

For my first colocated server I settled on a Supermicro 510T-200B case with a Supermicro X9SCL-F motherboard as the base. For the CPU I picked the Intel Pentium G620T, a Sandy Bridge dual-core CPU with roughly the same performance as a Core 2 Duo E8400, but with only a 35 W TDP. For RAM I just picked cheap 8 GB Kingston ValueRAM, and the drives are a cheap 60 GB OCZ Vertex 2 SSD for the OS and a simple 2.5″ 320 GB SATA drive for backups.

The result is quite a small, low-power server that should be fast enough for most purposes. Having said that, the case was a bit tight, so next time I’ll probably spend a little more on the case and get a Supermicro 813MT-350CB instead, which also has space for four normal 3.5″ drives and uses rails instead of cage nuts.

The server is colocated (as of Wednesday, August 24) at Serverius, who had a very good deal on WHT: 1U, 25 TB of bandwidth and 1 ampere of power (my server will use only about 25% of that) for 39 euro per month (excl. VAT). They are clearly overselling on bandwidth with this deal, but I don’t mind as I don’t really need that much anyway.

Now for the embarrassing part: when I showed up, it turned out I had forgotten to bring just about everything except the server itself. Serverius normally requires customers to bring their own power cable, CAT cable and cage nuts (for which you also need a screwdriver), and I didn’t bring any of that. Well, I did bring a power cable, but the wrong kind: they required the extension-cable type, not a power cable with a normal plug.

But fortunately they were courteous enough to lend me all of that. And while they seem boringly simple, cage nuts are actually quite expensive, which apparently has to do with the specifications they have to meet. Anyway, I just have to remember to bring my own (to replace what I borrowed) next time 🙂

Although the spot they allocated me in the rack was quite hard to get to (near the top of the rack, just below their speedtest server) I was able to install the server without too much trouble. The real problem was getting IPMI to work. But for more on that, see my next post.

For now I’ll see how things go for a few weeks, and if everything goes as well as I hope, I’ll probably start colocating most of my servers, possibly even in a private half rack. The only ones I might keep renting are those that need loads of bandwidth, as it is still simply cheaper to rent those from providers who oversell their bandwidth (or who can otherwise get better bandwidth deals than would be possible with colocation).
