I recently discovered an awesome blog about Python, Linux, and System Administration. Chris Siebenmann is a sysadmin for the University of Toronto Unix Systems Group. I could spend hours reading his archives. Smart guy. Articles are short and informative to the max. Hope you enjoy it:
When a cron job generates output on either stdout or stderr, the output gets mailed to the owner of the crontab. Now, you can specify an alternative email address by setting the MAILTO environment variable in the crontab, but this applies to all jobs in the crontab. So, if you want the output of different jobs to get mailed to different users, you can redirect the stdout and stderr of each job to the mail command like this:
13 02 * * * /bin/backup 2>&1 | /usr/bin/mail -s "Cron <root@exobox> /bin/backup" firstname.lastname@example.org
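The same pattern extends naturally to multiple jobs. A minimal crontab sketch along those lines (the job paths and recipient addresses here are hypothetical):

```
# Each job pipes its own stdout and stderr to mail, so different jobs
# can notify different recipients.
13 02 * * * /bin/backup 2>&1 | /usr/bin/mail -s "backup output" backup-admin@example.org
45 03 * * * /usr/local/bin/rotate-logs 2>&1 | /usr/bin/mail -s "log rotation" web-team@example.org
```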
“mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.” And it’s really cool.
Like ping, it sends ICMP echo packets from your machine to the target machine to measure latency and packet loss along the network path, but it continuously displays updated statistics in real time as it operates.
Like traceroute, it shows the names or IP addresses of each machine along the network path, also updating these statistics for each machine.
Here’s some (frozen) sample output from the ncurses mode (terminal mode) mtr:
                              My traceroute  [v0.71]
exobox (0.0.0.0)                                        Thu Dec 21 16:15:01 2006
Keys:  Help   Display mode   Restart statistics   Order of fields   quit
                                                Packets               Pings
 Host                                    Loss%  Snt   Last   Avg  Best  Wrst StDev
 1.                                       0.0%   20    0.4   0.3   0.2   0.7   0.1
 2.                                       0.0%   20    1.6   1.4   1.0   2.7   0.4
 3.                                       0.0%   19    1.3   1.3   1.0   2.7   0.4
 4. 10.0.0.25                             0.0%   19    1.1   1.2   1.0   2.7   0.4
 5. 184.108.40.206                        0.0%   19    1.4   2.6   1.2  16.0   3.3
 6. bj141-130-121.bjtelecom.net           0.0%   19    1.4   1.6   1.4   2.5   0.3
 7. 220.127.116.11                        0.0%   19  199.3  12.0   1.3 199.3  45.4
 8. 18.104.22.168                         0.0%   19    1.7  35.7   1.3 186.5  56.2
 9. 22.214.171.124                        0.0%   19    1.8   2.0   1.5   4.8   0.8
10. 126.96.36.199                         0.0%   19  284.6 292.2 278.3 305.8   8.7
11. so-4-0-0.mpr2.lax9.us.above.net      15.8%   19  286.2 291.9 278.6 325.2  11.5
12. so-5-0-0.mpr1.iah1.us.above.net      15.8%   19  313.2 323.8 311.3 340.5   8.6
13. so-5-3-0.cr1.dfw2.us.above.net        5.3%   19  319.6 331.1 314.1 404.6  20.3
14. so-0-0-0.cr2.dfw2.us.above.net       11.1%   19  696.0 718.7 693.3 797.2  33.7
15. so-3-1-0.cr2.dca2.us.above.net       36.8%   19  347.4 354.9 342.5 368.7   8.6
16. so-0-1-0.mpr1.lhr3.uk.above.net      11.1%   19  421.5 423.6 411.1 438.2   7.8
17. so-1-0-0.mpr3.ams1.nl.above.net      11.1%   19  433.9 438.9 422.4 514.1  22.0
18. DutchDSL.above.net                   33.3%   19  423.1 432.7 418.7 463.0  11.4
19. ge-0-1-0-v189.rtr1.ams-rb.io.nl      27.8%   19  405.8 414.7 398.2 434.6  11.6
20. 188.8.131.52                         23.5%   18  404.8 418.1 399.9 485.8  21.9
Looking at the Avg column (units in ms), the above output shows that my network packets pass through Beijing Telecom’s routers to the U.S., then to the U.K., and finally to their destination in the Netherlands. A large latency increase occurs between lines 9 and 10 (presumably leaving P.R. China), and another between lines 13 and 16 (U.S. to U.K.).
mtr has some interesting display modes besides the above, where it shows the latency of each packet graphically according to a dynamic scale. In this way, the above points of really large latency can be easily detected.
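Those latency cliffs can also be spotted non-interactively. Here is a minimal sketch (the hop names and statistics below are made up, in the spirit of the table above) that flags any hop whose average latency jumps by more than 100 ms over the previous hop:

```shell
# Made-up per-hop statistics in the style of mtr's report output:
# hop  host                    Loss%  Snt  Last   Avg   Best   Wrst  StDev
report=' 9. hop-nine.example.net   0.0%  19   1.8   2.0   1.5   4.8  0.8
10. hop-ten.example.net    0.0%  19 284.6 292.2 278.3 305.8  8.7
11. hop-eleven.example.net 0.0%  19 286.2 291.9 278.6 325.2 11.5'

# Compare each hop's Avg column ($6) with the previous hop's and
# report jumps larger than 100 ms.
echo "$report" | awk 'NF >= 8 {
    if (seen && $6 - prev > 100) print "large latency jump before " $2
    prev = $6; seen = 1
}'
```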
mtr can be obtained from the mtr website, or it can be installed in Debian/Ubuntu by:
# apt-get install mtr-tiny
# apt-get install mtr
for the ncurses or X11 versions, respectively, although mtr-tiny appears to be installed by default in the Debian and Ubuntu machines I have tested. So you may already have it.
The latest comic from Dilbert speaks some truth about life as a Sysadmin:
If you’re curious or interested in getting involved in the Ubuntu community, “Ubuntu Open Week” begins tonight. It’s “a week of IRC tutorials and sessions designed to encourage more and more people to join our diverse community”. More information is here:
The times in the calendar are UTC, so just add 8 hours for China. For example, the first session on Monday, “Ubuntu Desktop Team – Sebastien Bacher” at 15:00 UTC, will actually occur at 23:00 tonight, Beijing time. Of notable interest is the “Ask Mark” session on Tuesday, featuring Ubuntu founder Mark Shuttleworth.
By the way, I plan on attending some of the earlier-in-the-evening sessions, but once it gets too late I’ll just let my IRC client log the rest of them while I’m sleeping. 😉
The latest comic from xkcd is just too funny. Check it out:
At Exoweb, our software developers use bogus email addresses of the form *@example.com (where I mean “example.com” literally, not as an example) to test their software’s ability to send email. Since I don’t want our Postfix server to attempt to deliver these messages out on the Internet, I need Postfix to handle these messages and blackhole them (make them disappear, as if sent to /dev/null). So what follows are instructions on how to blackhole an entire domain in Postfix.
First, we add a virtual_alias_maps entry to /etc/postfix/main.cf so that we can specify example.com as one of our virtual domains:

virtual_alias_maps = hash:/etc/postfix/virtual_alias
Then, in /etc/postfix/virtual_alias, add a catchall address:
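The entry itself is not reproduced above; given the explanation that follows, it would be a single catchall line mapping every address at example.com to a local alias, something like:

```
# /etc/postfix/virtual_alias
@example.com    blackhole@localhost
```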
We have to use blackhole@localhost here and not /dev/null because virtual_alias_maps cannot deliver to a file or command; it can only forward to real addresses. So we put an entry inside /etc/aliases to handle the blackhole:
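That entry is likewise not reproduced above; it would be a standard aliases-file delivery to /dev/null, something like:

```
# /etc/aliases
blackhole: /dev/null
```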
This assumes that one of your mydestination domains in main.cf is localhost, so that Postfix will actually consult the local /etc/aliases file for mail addressed to localhost.
In order to make these changes take effect, you have to rebuild the aliases database, build the virtual_alias database, and reload your Postfix configuration. Respectively:
# newaliases
# postmap /etc/postfix/virtual_alias
# postfix reload
Now, any emails you send to blackhole@localhost will disappear, and so will any emails addressed to email@example.com (provided they are relayed through your Postfix server).
Footnote: The top-level and second-level domain names that are reserved for testing can be found in RFC 2606.
CHM files, known as Microsoft Compressed HTML Help files, are a common format for eBooks and online documentation. They are basically a collection of HTML files stored in a compressed archive with the added benefit of an index.
Under Linux, you can view a CHM file with the xchm viewer. But sometimes that’s not enough. Suppose you want to edit, republish, or convert the CHM file into another format such as the Plucker eBook format for viewing on your Palm. To do so, you first need to extract the original HTML files from the CHM archive.
This can be done with CHMLIB (the CHM library) and its included helper application, extract_chmLib.
In Debian or Ubuntu:
$ sudo apt-get install libchm-bin
$ extract_chmLib book.chm outdir
Here, book.chm is the path to your CHM file and outdir is a new directory that will be created to contain the HTML extracted from the CHM file.
On other Linux distributions, you can install it from source. First download the chmlib source archive from the above website. I couldn’t get the extract_chmLib utility to compile under the latest version 0.38, so I used version 0.35 instead.
$ tar xzf chmlib-0.35.tgz
$ cd chmlib-0.35/
$ ./configure
$ make
$ make install
$ make examples
After doing the “make examples”, you will have an executable extract_chmLib in your current directory. Here is an example of running the command with no arguments and the output it produces:
$ ./extract_chmLib
usage: ./extract_chmLib <chmfile> <outdir>
After running the utility to extract the HTML files from your CHM file, the extracted files will appear in <outdir>. There won’t be an “index.html” file, unfortunately. So you’ll have to inspect the filenames and/or their contents to find the appropriate main page or Table of Contents.
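One quick, hypothetical way to hunt for the main page is to grep the extracted HTML for a table-of-contents page (the outdir tree below is fabricated for illustration):

```shell
# Fabricated example tree standing in for an extracted CHM:
mkdir -p outdir
printf '<html><title>Table of Contents</title></html>\n' > outdir/toc.html
printf '<html><title>Chapter 1</title></html>\n' > outdir/ch01.html

# Case-insensitively list extracted files mentioning a table of contents;
# these are good candidates for the book's main page.
grep -ril 'table of contents' outdir
```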
Now the HTML is yours to enjoy!
When you work in a software development shop, spaghetti code is most certainly frowned upon. What about spaghetti wiring?
So these were the remains of the small electrical fire that occurred 15 minutes before I arrived at the office today: the charred main circuit to the room that houses most of our developers, their computers, and the all-important Ice Box.
Do you think this is enough to convince the building management that the building’s wiring needs professional help? No, unfortunately, I think we’ll still have to fight them to get adequate electrical capacity and safety. 🙁
Here’s a great opinion piece from the blog of my DNS provider entitled: Want to reduce email spam to your mail server? Stop using backup spooling.
It is with regret that we have come to the following conclusion, but here it is: Offsite backup SMTP spoolers and backup mail exchangers have become worse than useless.
The problem is spam and the software that delivers it exploiting the weak authentication schemes inherent in the SMTP protocol itself. It used to be an annoyance, then it became a concern, it is now an epidemic and has resulted in the death of the offsite backup MX handler.
The author then goes on to explain what the problem is and why you won’t really miss your backup spooling. It’s a very interesting point of view (in a good way) that’s worth considering.