Category Archives: Linux

The GNU/Linux operating system

DNS-based mitigation for Samsung SwiftKey keyboard vulnerability

I was just listening to the discussion of the Samsung SwiftKey keyboard vulnerability from Security Now! episode 513, and I came up with a simple DNS-based mitigation that a user could implement to protect themselves.

The Vulnerability

Without any user interaction, the user’s phone makes a plaintext http GET request to a SwiftKey update server, and this request can be hijacked and malicious code injected into the phone by any man-in-the-middle bad actor. According to NowSecure, the discoverer of the vulnerability, the request looks like this:

GET http://skslm.swiftkey.net/samsung/downloads/v1.3-USA/az_AZ.zip

DNS-based Mitigation

With a rooted Android phone, a user could edit their /etc/hosts file to redirect the hostname of the update server (skslm.swiftkey.net) to localhost, preventing the http GET request from ever leaving the phone. In other words, the user is hijacking the request to the update server before a bad guy gets the opportunity to do the same.
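
For example, an /etc/hosts entry along these lines should do the trick (a sketch of the idea rather than a tested recipe):

# /etc/hosts -- pin the SwiftKey update host to the loopback address
127.0.0.1    skslm.swiftkey.net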

With a non-rooted phone, there are DNS Resolver apps that can be installed that do the same kind of redirection to localhost.

Will this kind of mitigation work? Since I don’t have an Android phone to test against, this remains just a thought experiment for me.

Backup Your Dropbox Files with rdiff-backup

The Problem

Teresa and I use a single Dropbox account to share files between our computers. I also use the same account to store (and sync) plain-text notes on my iPad and iPhone (I use the apps PlainText and iA Writer). In case things go wrong with these apps, the syncing, or with Dropbox itself, I want to backup my Dropbox files and keep past snapshots of the backups so I can go back in time.

The Solution

rdiff-backup can do this. It is a command-line tool written in Python that:

…backs up one directory to another, possibly over a network. The target directory ends up a copy of the source directory, but extra reverse diffs are stored in a special subdirectory of that target directory, so you can still recover files lost some time ago. The idea is to combine the best features of a mirror and an incremental backup.

To make this all happen, I have Dropbox installed, signed-in, and running on my Linux desktop/server, which runs Ubuntu 11.04 Natty with Gnome 2.

Install rdiff-backup like so:

    # aptitude install rdiff-backup

I use the directory /backup/ to hold all my backup targets, so I can run rdiff-backup like this:

    $ rdiff-backup  \
        --exclude $HOME/Dropbox/.dropbox \
        --exclude $HOME/Dropbox/.dropbox.cache \
        $HOME/Dropbox /backup/Dropbox

Every time I run rdiff-backup like this, it creates a new snapshot of my Dropbox files. Old snapshots are kept until I decide to purge them (if at all). To purge any snapshots older than two months, for example, I run this command:

    $ rdiff-backup --force --remove-older-than 2M /backup/Dropbox

I run the above two commands in an @hourly crontab script to keep this all happening automatically.
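
As a sketch of how I wire that up (the script name and location here are just illustrative choices), the crontab entry and wrapper script look roughly like this:

    # crontab entry (installed with: crontab -e)
    @hourly $HOME/bin/dropbox-backup.sh

    #!/bin/sh
    # dropbox-backup.sh (hypothetical name): snapshot Dropbox, then purge
    # increments older than two months
    rdiff-backup \
        --exclude "$HOME/Dropbox/.dropbox" \
        --exclude "$HOME/Dropbox/.dropbox.cache" \
        "$HOME/Dropbox" /backup/Dropbox
    rdiff-backup --force --remove-older-than 2M /backup/Dropbox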

Browsing Past Snapshots

rdiff-backup has its own commands for digging into the files stored in past snapshots, but they require knowing the exact filenames and backup times. Another tool, rdiff-backup-fs, solves this problem by mounting the rdiff-backup target directory as a FUSE filesystem, allowing me to grep and find my way through a directory tree of all snapshots.
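
For reference, the built-in approach that rdiff-backup-fs saves me from looks something like this (the file path is only a hypothetical illustration):

    # list the snapshot times recorded in the backup directory
    $ rdiff-backup --list-increments /backup/Dropbox

    # restore a single file as it existed 10 days ago (hypothetical path)
    $ rdiff-backup -r 10D /backup/Dropbox/notes/todo.txt /tmp/todo.txt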

After installing FUSE and rdiff-backup-fs, I mount my Dropbox snapshot tree with this command:

    $ rdiff-backup-fs ~/mnt /backup/Dropbox

Note that the order of the mount point and source arguments is reversed compared to the canonical mount command.

A long listing of the oldest snapshots looks like this:

    $ ls -lF ~/mnt/ | head -10
    total 0
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T05:00:01/
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T06:00:01/
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T07:00:01/
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T08:00:01/
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T09:00:01/
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T10:00:01/
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T11:00:01/
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T12:00:01/
    dr-xr-xr-x 1 root root 4096 2013-03-10 13:52 2013-01-06T13:00:01/

I can then explore all my snapshots at once with any tools I wish.

When done, I unmount the rdiff-backup-fs filesystem with:

    $ /bin/fusermount -u ~/mnt

Which editor should I learn?

On serverfault.com, Rory McCann asked, “What’s the best terminal editor to suggest to a Unix newbie? i.e. not vi or Emacs.”

This answer, which purposefully ignores the original poster’s restriction, says it best:

My take is still Emacs or vi. Even for a beginner.

Why?

Because time invested in learning an editor is productive only as long as you keep using that editor. All those less expressive options are poor choices for the long run, and will be abandoned eventually. At which point the time spent learning them is wasted, and the user still has to learn Emacs or vi.

In other words, the best (most expressive) tool for the job is one of Emacs or vi, and so you’ll eventually switch to one of them. It ultimately doesn’t matter which one you choose, but you would be smart to invest yourself into learning one of them.

For the record, I’m a vim user, and I love using it.

My First Bug Report

I recently submitted my first bug report to the Debian Project, regarding mod_dav and apache2. It was accepted by the maintainers of the relevant packages, and they’re taking the necessary steps to fix it.

So I’m proud of my little contribution. 🙂 Yay for me! Yay for Debian!

MTR as a combined traceroute and ping tool

“mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.” And it’s really cool.

Like ping, it sends “echo” packets from your machine to the target machine to measure latency and packet loss along the network path, but it continuously displays updated statistics in real time as it operates.

Like traceroute, it shows the name or IP address of each machine along the network path, and it keeps these statistics updated for every hop.
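
The simplest way to run it is to point it at a destination host; the interactive display then keeps updating until you quit. Something like this (the hostname is just a placeholder):

$ mtr www.example.org

# or, for a fixed number of pings and a one-shot summary:
$ mtr --report --report-cycles 20 www.example.org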

Here’s some (frozen) sample output from the ncurses mode (terminal mode) mtr:

                                My traceroute  [v0.71]
exobox (0.0.0.0)                                             Thu Dec 21 16:15:01 2006
Keys:  Help   Display mode   Restart statistics   Order of fields   quit
                                             Packets               Pings
 Host                                      Loss%   Snt   Last   Avg  Best  Wrst StDev
 1.                                         0.0%    20    0.4   0.3   0.2   0.7   0.1
 2.                                         0.0%    20    1.6   1.4   1.0   2.7   0.4
 3.                                         0.0%    19    1.3   1.3   1.0   2.7   0.4
 4. 10.0.0.25                               0.0%    19    1.1   1.2   1.0   2.7   0.4
 5. 219.142.10.17                           0.0%    19    1.4   2.6   1.2  16.0   3.3
 6. bj141-130-121.bjtelecom.net             0.0%    19    1.4   1.6   1.4   2.5   0.3
 7. 202.97.57.221                           0.0%    19  199.3  12.0   1.3 199.3  45.4
 8. 202.97.37.9                             0.0%    19    1.7  35.7   1.3 186.5  56.2
 9. 202.97.53.146                           0.0%    19    1.8   2.0   1.5   4.8   0.8
10. 202.97.61.50                            0.0%    19  284.6 292.2 278.3 305.8   8.7
11. so-4-0-0.mpr2.lax9.us.above.net        15.8%    19  286.2 291.9 278.6 325.2  11.5
12. so-5-0-0.mpr1.iah1.us.above.net        15.8%    19  313.2 323.8 311.3 340.5   8.6
13. so-5-3-0.cr1.dfw2.us.above.net          5.3%    19  319.6 331.1 314.1 404.6  20.3
14. so-0-0-0.cr2.dfw2.us.above.net         11.1%    19  696.0 718.7 693.3 797.2  33.7
15. so-3-1-0.cr2.dca2.us.above.net         36.8%    19  347.4 354.9 342.5 368.7   8.6
16. so-0-1-0.mpr1.lhr3.uk.above.net        11.1%    19  421.5 423.6 411.1 438.2   7.8
17. so-1-0-0.mpr3.ams1.nl.above.net        11.1%    19  433.9 438.9 422.4 514.1  22.0
18. DutchDSL.above.net                     33.3%    19  423.1 432.7 418.7 463.0  11.4
19. ge-0-1-0-v189.rtr1.ams-rb.io.nl        27.8%    19  405.8 414.7 398.2 434.6  11.6
20. 213.196.40.242                         23.5%    18  404.8 418.1 399.9 485.8  21.9

Looking at the Avg column (times in milliseconds), the above output shows that my network packets pass through Beijing Telecom’s routers to the U.S., then to the U.K., and finally to their destination in the Netherlands. A large latency increase occurs between hops 9 and 10 (presumably where the packets leave P.R. China), and another between hops 13 and 16 (U.S. to U.K.).

mtr has some interesting display modes besides the one above, in which it plots the latency of each packet graphically on a dynamic scale. This makes points of unusually large latency easy to spot.

mtr can be obtained from the mtr website, or it can be installed on Debian/Ubuntu with:

# apt-get install mtr-tiny

or

# apt-get install mtr

for the ncurses or X11 versions, respectively, although mtr-tiny appears to be installed by default in the Debian and Ubuntu machines I have tested. So you may already have it.

Ubuntu Open Week Begins Tonight

If you’re curious or interested in getting involved in the Ubuntu community, “Ubuntu Open Week” begins tonight. It’s “a week of IRC tutorials and sessions designed to encourage more and more people to join our diverse community”. More information is here:

https://wiki.ubuntu.com/UbuntuOpenWeek

The times in the calendar are UTC, so just add 8 hours for China. For example, the first session on Monday, “Ubuntu Desktop Team – Sebastien Bacher” at 15:00 UTC, will actually occur at 23:00 tonight, Beijing time. Of notable interest is the “Ask Mark” session on Tuesday, featuring Ubuntu founder Mark Shuttleworth.

By the way, I plan on attending some of the earlier-in-the-evening sessions, but once it gets too late I’ll just let my IRC client log the rest of them while I’m sleeping. 😉

How to convert CHM files under Linux

CHM files, known as Microsoft Compressed HTML Help files, are a common format for eBooks and online documentation. They are basically a collection of HTML files stored in a compressed archive with the added benefit of an index.

Under Linux, you can view a CHM file with the xchm viewer. But sometimes that’s not enough. Suppose you want to edit, republish, or convert the CHM file into another format such as the Plucker eBook format for viewing on your Palm. To do so, you first need to extract the original HTML files from the CHM archive.

This can be done with the CHMLIB (CHM library) and its included helper application extract_chmLib.

In Debian or Ubuntu:

$ sudo apt-get install libchm-bin
$ extract_chmLib book.chm outdir

where book.chm is the path to your CHM file and outdir is a new directory that will be created to contain the HTML extracted from the CHM file.

On other Linux distributions, you can install it from source. First, download the chmlib source archive from the CHMLIB website. I couldn’t get the extract_chmLib utility to compile under the latest version, 0.38, so I used version 0.35 instead.

$ tar xzf chmlib-0.35.tgz
$ cd chmlib-0.35/
$ ./configure
$ make
$ sudo make install
$ make examples

After running “make examples”, you will have an executable extract_chmLib in your current directory. Here is an example of running the command with no arguments and the output it produces:

$ ./extract_chmLib
usage: ./extract_chmLib <chmfile> <outdir>

After running the utility against your CHM file, the extracted HTML files will appear in <outdir>. Unfortunately, there won’t be an “index.html” file, so you’ll have to inspect the filenames and/or their contents to find the appropriate main page or table of contents.
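
A quick way to hunt for a likely starting page is something along these lines (just a rough sketch):

# look for filenames that suggest a starting page or table of contents
$ find outdir -iname '*index*' -o -iname '*toc*' -o -iname '*content*'

# or skim the <title> of each extracted page
$ grep -ri -m1 '<title>' --include='*.htm*' outdir | head -20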

Now the HTML is yours to enjoy!


A Hacker’s Vacation

[Image: Head in the clouds]

I took this week off work to enjoy a Hacker’s Vacation. That is, I’m planning to spend a lot of time hacking on my computer.

It’s more than that, actually. I desperately need some time to put my life back in order and catch up on things that I’ve been neglecting, such as housework, email, this website, hard drive spring cleaning, my Tchou Tchou’s website, the Swing website, a new server, and various little projects I have going on. Slowly, I’m getting parts of it all done. I’ll have to carry some of these tasks over to later, but at least this week will give me a good foundation to work with.

The biggest thing I want to hack on is my brain. As I mentioned above, I’ve got a new server and I need to spend some time learning how it works. I’m intimately familiar with FreeBSD, but since the new server is a virtual hosting solution, I’m constrained at this point to using Debian GNU/Linux on it. Since I’ve been using Ubuntu (which is based on Debian) on my desktop for over a year, it is fairly easy to manage. But there are lots of server-related configurations and tasks that I need to nail down for good security and management.

For general Linux information goodness, I’m following a set of tutorials from the IBM Developer Network entitled the Linux Professional Institute (LPI) exam prep, described as a “series of tutorials to help you learn Linux fundamentals and prepare for system administrator certification”. I’m not intending to write the exams—just learn the material. I’m finding that the tutorials give very good background information, covering things in enough detail to explain the process. I can then, of course, delve into the man pages and other documentation to learn more.

I’m enjoying it so far.

vnStat Network Traffic Monitor

I just discovered vnStat, a network traffic monitor for Linux. Here’s a blurb from the website:

vnStat is a network traffic monitor for Linux that keeps a log of daily network traffic for the selected interface(s). vnStat isn’t a packet sniffer. The traffic information is analyzed from the /proc filesystem, so vnStat can be used without root permissions.

It will tell you how much inbound and outbound bandwidth your Linux machine is using: hourly, daily, weekly, monthly, and yearly. That’s handy.

By inspection, I’ve been able to tell that it works like so: it installs itself as a cron job that runs every five minutes to update its internal database. Then you can type vnstat on the command line to see the stats.
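
Once it has gathered some data, the per-period breakdowns are just a flag away; for example (as I read the options):

$ vnstat -h    # hourly
$ vnstat -d    # daily
$ vnstat -w    # weekly
$ vnstat -m    # monthly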

I’d post some sample output, but since I’ve just started running it, there’s nothing to show. I’m glad to have found it, though.