Laptop Travel Essentials

Being an intrepid occasional business traveler, I’ve come to rely on my trusty MacBook as sort of an office-away-from-home. Working while away from the office presents an interesting set of challenges. Internet access (particularly Wi-Fi) is becoming ever more ubiquitous, so getting connected is easy, but there’s no guarantee that a given Internet access point is secure — in fact, it’s most likely not. Working remotely often requires access to potentially sensitive data and resources that are protected behind firewalls and the like. It’s best to keep sensitive data off laptops, as laptops are easily stolen, and a data breach could land you on the 11:00 news, or worse.

This is a collection of tips, tools, etc. that I use to work securely with my laptop while on travel. It’s geared towards Macs, but much of it is applicable to other operating systems as well. Comments, suggestions, corrections, etc. are welcome.

SSH Tunnel Manager

A lot of organizations use VPNs to facilitate off-site access to private intranets, and ours is no exception.  I’ve never been a big fan of VPNs, because they all seem to rely on OS-specific drivers (or weird Java applets) that inevitably fail to work properly with my OS or web browser.  So, I avoid our VPN and use SSH tunnels instead.  All this requires is SSH access to a host that can reach the intranet resource(s) I need.  With several well-crafted SSH tunnels, I’ve never found a need to use our VPN.
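
For example, a local forward like this one (host names hypothetical) makes an intranet web server reachable on my laptop at localhost:8080, with everything riding the encrypted SSH connection to the gateway:

$ ssh -N -L 8080:wiki.internal.example.com:80 me@gateway.example.com

The -N flag just tells SSH not to run a remote command; it only forwards the port.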

There’s one catch with SSH tunnels where laptops are concerned.  Setting up an SSH tunnel often requires feeding the SSH command a complex set of options.  When I’m on travel, I’m constantly moving from place to place and bringing my laptop in and out of sleep mode.  This causes the SSH connections to time out, and I end up having to re-initialize all my tunnels every time I want to work on something — a big pain.  This is where a good SSH tunnel manager helps.  A tunnel manager maintains a list of tunnels and lets you start and stop them with a mouse click.  There’s a decent app for OS X called (surprise) “SSH Tunnel Manager,” and PuTTY does a nice job on Windows.  For Linux, I like gSTM.  With a tunnel manager, I’m up and running in seconds after starting up the laptop, and I don’t have to remember complex SSH command-line options.

Firefox Proxy-switching Extension

Secure web browsing is a primary concern when traveling.  As such, I do all my browsing through SSH tunnels, which ensures that all my browser traffic is encrypted.  For general-purpose browsing, I use a tunnel to an ad-filtering proxy running on a server in my office.  For work-related stuff, as well as online banking and the like, I use a SOCKS proxy.  There are a couple of other configurations I use as well.  Each of these requires a different proxy configuration in Firefox.  As shipped, Firefox only lets you define a single proxy configuration; if you want to change proxies, you have to go in and manually update the settings each time.  Proxy-switching extensions let you define as many proxy configurations as you want, and switch between them quickly and conveniently.  I’ve found them to be indispensable.  There are a bunch of proxy-switching extensions out there, but my favorite is SwitchProxy, because it seems to strike the best balance between simplicity and functionality (note that the stock version of SwitchProxy doesn’t run on Firefox 3, but I found a modified version that works nicely here).
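
The SOCKS proxy, for what it’s worth, is nothing more than SSH’s dynamic forwarding, so it needs no proxy software on the far end at all (host name hypothetical):

$ ssh -N -D 1080 me@shell.example.com

One of my saved proxy configurations simply points Firefox at a SOCKS host of localhost, port 1080, and every request is carried over the tunnel.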

Foxmarks

Foxmarks is a Firefox extension that synchronizes bookmarks between different instances of Firefox.  With Foxmarks, I now have the same set of bookmarks at work, at home, and on my laptop, and when I change my bookmarks in one place, all the others stay in sync automatically.  I’ve been running separate Firefox installations on different computers forever now, and I only recently discovered Foxmarks.  It’s one of those things where once you have it, you wonder how you got along without it.

VNC

VNC, or Virtual Network Computing, is a remote desktop-sharing technology.  It’s similar to Microsoft’s Remote Desktop service, but it’s an open standard and is platform-independent.  It allows me to pull up a virtual desktop and access data on a remote server as if I were physically sitting at the server.  This is a great way to keep sensitive data off my laptop — I just manipulate it remotely.  All of the connections are made through SSH tunnels. (what else?)
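
A typical session looks something like this sketch (host name and display number are made up): forward the VNC port for display :1 over SSH, then point the VNC client at the local end of the tunnel:

$ ssh -N -L 5901:localhost:5901 me@homeserver.example.com

The client connects to localhost:5901, and nothing crosses the network in the clear.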

VNC is one of those things that I keep finding more and more uses for as time goes on.  I use it to access various GUI-based apps on my home and work PCs while traveling.  It’s particularly useful for running the occasional Windows or Linux-based app that I don’t have available on my Mac.  For example, I use GnuCash to track all of our household finances.  It’s installed on my Linux server at home.  With VNC, I can connect to my home server, run GnuCash remotely, and keep up with the finances while I’m away from home.  No need to run it locally on the Mac and worry about the data getting out of sync.

My favorite VNC client for the Mac is Chicken of the VNC.

FileVault

FileVault is a file-encryption system that ships with OS X.  It will transparently encrypt and decrypt files on the fly, using the user’s account password as a key.  I haven’t used it before, but I am going to give it a go with my new laptop.  It seems like an easy way to protect sensitive data that inadvertently finds its way onto the laptop.  In the event the laptop is stolen, the thieves will at least have to work harder to get at the data.

And there you have it.  I’m sure I’m leaving something out that will become apparent the next time I travel.  One thing I’d like to have is some sort of theft recovery software.  Haven’t yet looked into what’s available in that department.

Latest Ubuntu Upgrade

I just upgraded my Ubuntu box from 7.04 (Feisty Fawn) to 7.10 (Gutsy Gibbon), and after 3 upgrades (I started out with Dapper Drake) I remain impressed with how easy and painless the process is. This time there was a hiccup, though it’s not something I’d expect an average user to encounter.

First, a bit of background. My desktop machine gets a lot of its files via NFS from a remote server. The server runs a base of Debian Etch with a bunch of stuff from the testing/unstable trees. The two computers are on the same LAN, and the server currently runs kernel 2.6.20.1. Ubuntu 7.10 currently uses 2.6.22.

After completing the upgrade, I rebooted my machine into Ubuntu 7.10 for the first time and logged in. It took about 5 minutes for all my menus and apps to come up (some of the apps came up before I had any menus, making me wonder if the upgrade had botched something, but everything did finally appear). I quickly figured out the cause of the problem: all of my NFS mounts were timing out.

I did a few more tests and found that I had no problem mounting and unmounting NFS directories from the server.  But as soon as I ran ‘ls’ in a mounted directory, my terminal froze and I got ‘NFS server blah.blah not responding’ in the kernel log.  No amount of rebooting, re-exporting filesystems, etc. seemed to help.  I wondered if it was some sort of subtle incompatibility between the two different kernel versions, although I’d never had this kind of issue with NFS before in my almost 20 years of dealing with it.  (Wow, has it really been that long?)

I’m aware that there are two versions of NFS nowadays, the older version 3 and the newer version 4. The 2.6 kernel supports both versions, but I’ve always run version 3 because it has always worked for me, and I’ve never seen a need to change. Plus, when I go to configure my kernel, all of the NFS v4 stuff is labeled as EXPERIMENTAL, which makes me shy away from it. This time, though, rather than futzing around trying to get my old NFS v3 setup to work again, I decided to try v4. I built it into the server’s kernel and rebooted the server. I then followed the very helpful Ubuntu NFSv4 Howto, which explained the differences between v3 and v4 and walked me through the setup.  It worked, and it doesn’t hang any more.
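
In a nutshell, v4 wants a single pseudo-root export (fsid=0) with everything else nested beneath it, and clients mount paths relative to that root. A minimal sketch, with made-up paths and addresses (see the Howto for the real details):

# server's /etc/exports: /export is the pseudo-root, /export/home hangs off it
/export       192.168.1.0/24(rw,fsid=0,no_subtree_check,sync)
/export/home  192.168.1.0/24(rw,nohide,no_subtree_check,sync)

# client's /etc/fstab: the path is relative to the pseudo-root
server:/home  /home  nfs4  rw,hard,intr  0  0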

It’s a little troubling not knowing why the upgrade broke my NFS v3 setup.  Searching around on Google wasn’t too helpful.  I suspect it’s some kind of issue with the 2.6.22 kernel, but I didn’t want to spend a lot of time troubleshooting it; I just need my computer to work.  So I’m glad NFS v4 fixed it for me; otherwise I’d probably have had to downgrade back to Feisty.

NFS issue aside, the Gutsy upgrade was very smooth, and I continue to be happy with Ubuntu.

Ubuntu hard drive upgrade

I just finished upgrading the hard drive on my Ubuntu machine, and it wasn’t as easy or straightforward as I was expecting.  So I figured I’d write up some notes for the next time I do it.

First I backed everything up. Then I shut down the computer, put the new drive in, and booted up with a copy of Knoppix I had lying around. Under Knoppix, I opened up a shell and mounted my old root filesystem:

# mount /dev/hda1 /mnt

After partitioning the new drive and creating a filesystem on it, I mounted the new root filesystem on /mnt2 and copied all the files over:

# mkdir /mnt2
# mount /dev/hdb1 /mnt2
# cd /mnt
# tar cvpf - . | (cd /mnt2; tar xpf -)

Then, I installed the grub boot loader in the MBR of the new drive:

# grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit

At that point, I shut down the computer, removed the old drive, installed the new one in its place, and attempted to boot back up. Happily, it found the grub boot loader and proceeded to load the kernel. But then it hung trying to mount the root filesystem.

It turns out that a couple releases ago, Ubuntu started referring to disk partitions by UUID rather than using a specific device name such as /dev/sda1 or /dev/hda1. Both /boot/grub/menu.lst and /etc/fstab still contained UUID references for the old hard drive, so I had to go through and painstakingly replace all the old UUID references with the updated UUID for the new disk. I just used vi and did a search-and-replace, although there’s probably an easier way. Once I did this, everything booted up just fine.
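
Note for next time: the new partition’s UUID can be read with blkid (or found under /dev/disk/by-uuid), and that’s the value that belongs in /etc/fstab and menu.lst (the UUID below is made up):

# blkid /dev/hdb1
/dev/hdb1: UUID="9cafe6a2-23d1-4582-a4e0-1c1f5e7a8b3d" TYPE="ext3"

The matching fstab entry then looks like:

UUID=9cafe6a2-23d1-4582-a4e0-1c1f5e7a8b3d  /  ext3  defaults,errors=remount-ro  0  1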

I can see the advantages to using UUIDs, but it does add an extra layer of complexity when doing something like this. At least I know what to expect the next time around.

Coming soon: my adventures upgrading from Ubuntu 7.04 (Feisty Fawn) to 7.10 (Gutsy Gibbon).  There were a couple of snags, but it was mostly painless.

Server shuffle again

My server shuffling is finished, and it went off (almost) without a hitch. The one slight hiccup was installing my new Zonet USB 2.0 PCI card. When I first popped it in, it worked fine. But for some reason, after I finished all the hardware shuffling and booted back up, the Linux kernel no longer recognized the USB 2.0 EHCI host controller on the card.  I ran ‘lspci’ and the card was there, but it showed only the UHCI (USB 1.1) controller.  And of course, I had just cut the UPC off the box and mailed in the rebate submission, thus rendering the card un-returnable.  Funny how those things work.

Anyhow, before I tried anything drastic, I upgraded my Linux kernel from 2.6.20.1 to 2.6.23.14 (the latest revision as of this writing), and damned if that didn’t fix it. Not sure if this was a one-time glitch, or if the newer kernel actually fixed something related to this. In any case, I’ll keep an eye on it. I can’t complain too much: the thing only set me back 8 bucks.

The server shuffle

I’ve decided to do a bit of server shuffling this weekend. I’m basically going to do a case/motherboard swap of concerto, my 700MHz server at the office, with my 450MHz server at home. That will give me a little bit more CPU at home to run stuff like GnuCash and OpenOffice inside my VNC desktop. The motherboards in the two boxes are very similar, so this should be a really easy swap… no new kernels needed, etc. Ironically, this will put concerto back on the original hardware that ran it, which should more than suffice for what it runs now, namely Apache, MySQL and Samba. One difference between the two motherboards is that [I believe] the 700MHz board does not have an ISA slot. That means I won’t be able to use my really-old ISA 56K modem card at home any more. I don’t think I’ll miss it, though, and if I do, PCI modem cards are cheap.

Yesterday I ordered a Western Digital “My Book” 750GB external USB hard drive from newegg.com. I need something portable to use for backups of important documents, digital photos, videos, music, etc. The sale price was $175, including free shipping. That works out to just over 23 cents per gigabyte… amazing. And a few years from now, that’s probably going to seem expensive.

Of course, to get any kind of transfer speed out of a USB hard drive, USB 2.0 is a necessity. My old machines only support USB 1.1 on-board, so I also needed to buy a USB 2.0 PCI card. These are amazingly cheap now too. Grand total of $9.99 – $7.00 mail-in rebate, or $2.99. Technology is a funny thing. Compared to 10 years ago, the price of milk and gas seems sky-high nowadays.  But that same 10 years ago, I paid $3000+ for a 300MHz Pentium II with an 8 gig hard drive, which seemed unthinkably cutting-edge at the time.  Computers (and electronics for that matter) are cheap, cheap, cheap now by comparison.

Saturday update

Got a start on winterizing the pool today, with occasional breaks to shoo Andrew off the pool cover.  I drained the water down below the tile line and added chlorine and algaecide.  The water was nice and clean even after a month of neglect.  Wonder if the algaecide I added last month helped.  Anyways, tomorrow I hope to get out earlier and get the bulk of the work done.  Not sure if I’ll get to blowing out the return lines.  We’ll see.

On the calendar front… it turns out Sunbird is not buggy after all, contrary to what I assumed yesterday.  Apple’s iCal exhibits similar behavior.  It appears that if I have events with RECURRENCE-ID properties, somewhere there needs to be an event that “defines” the recurrence with an RRULE or RDATE property.  Oracle Calendar’s output is missing this “defining” event.  I thought briefly about trying to “fix” the recurrences by adding RDATEs, etc. to the iCalendar output, but I think that’s more trouble than it’s worth.  Instead, I’m just going to try rewriting the recurring events as separate events, giving them unique IDs based on each event’s start date.  I’ll try it out Monday and see how it goes.
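
To illustrate what the clients expect, here’s a minimal sketch of a well-formed recurring event in iCalendar: one VEVENT defines the series with an RRULE, and any modified instance refers back to it with the same UID plus a RECURRENCE-ID (all values below are made up):

BEGIN:VEVENT
UID:staff-meeting-42@example.com
DTSTART:20071001T140000Z
RRULE:FREQ=WEEKLY;BYDAY=MO
SUMMARY:Staff meeting
END:VEVENT
BEGIN:VEVENT
UID:staff-meeting-42@example.com
RECURRENCE-ID:20071008T140000Z
DTSTART:20071008T150000Z
SUMMARY:Staff meeting (moved an hour later)
END:VEVENT

Oracle Calendar’s output includes the RECURRENCE-ID events but omits the defining VEVENT, which is what trips up Sunbird and iCal.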

Calendaring revisited

It’s been a year or so since I gave up on my home-grown calendar sync setup.  It was nice for a while; then we upgraded our Oracle Calendar server, it broke, I tried to fix it and didn’t get very far, and that was the end of that.  Well, as it happens, there’s been some recent interest at work in an Oracle-Calendar-to-iCalendar gateway, so I decided to drag my old stuff out and try again.  And it turns out things have improved in a year’s time.  First off, the Oracle Calendar SDK seems to be more reliable.  I used to get lots of internal library errors, particularly when trying to download large chunks of calendar data, but that doesn’t seem to be happening now (I know, famous last words).  And on top of that, the iCalendar output is much cleaner.  For example, recurring events are now properly tagged with RECURRENCE-ID properties, so recurrences “just work” without any extra effort on my part.  There are still a few quirks, but by and large, it’s a huge improvement.

Also improved is Sunbird, Mozilla’s standalone calendar app.  It’s still a little rough around the edges, but it seems much more robust than previous versions.  I’d eventually like to use Sunbird as my main calendaring app everywhere, because it’s cross-platform and it allows interactive editing of subscribed WebDAV calendars (unlike Apple’s iCal).  The only stumbling block is my old, crusty Palm PDA, which only syncs with iCal.  Much as I’ve liked the Palm PDAs I’ve used over the years, I’m wondering if it isn’t time to start thinking about something different.  It’d be great to have something with functionality similar to Sunbird’s, in a PDA form factor.  Never going to get that with something that relies on desktop sync.

Fun with ssh tunnels

A couple months back, UMBC decided to block off-campus access to most of its internal hosts. Included in this bunch was concerto, which houses this blog as well as my house wiki. Although I could probably apply for and get a hole punched in the firewall for http and ssh, I decided not to bother. I’m not the most proactive guy in the world when it comes to keeping up with security patches, so the firewall thing is probably for the best. Of course, the down side to this is that I can’t use concerto as a free web hosting environment any more, which again, is probably mostly a blessing in disguise. However, there was one big thing I didn’t want to give up: ssh and web access to concerto from our home LAN. After all, there’s not much point in having a house wiki if I can’t get to it from my house! So the challenge became, how do I get this back, and make it as seamless as possible?
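
One standard trick for getting that kind of seamless access back is SSH’s ProxyCommand: as long as some gateway host is still reachable from off campus, it can relay connections to the blocked machines behind the firewall. A minimal ~/.ssh/config sketch, with hypothetical host names (and assuming nc is installed on the gateway):

Host concerto
    ProxyCommand ssh me@gateway.example.edu nc concerto.example.edu 22

With that in place, ‘ssh concerto’ hops through the gateway transparently, and web access is just a port forward layered on top.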


Fixing openoffice.org fonts

I use openoffice.org all the time on my Debian server box, most of the time through a VNC connection. The problem is, the fonts look horrible. I seem to recall that they weren’t always bad, but they’ve certainly been bad for a while. Well, today I finally sat down and fixed it.

It all started out when I upgraded the system (which I hadn’t done in forever) to get some updated packages. I was hoping that would fix my OO.o font problems, but it didn’t, so I dug a little deeper…

Basically, there are two issues I was seeing:

  1. The menu font was cartoonishly large in proportion to all of my other apps; and
  2. It appeared that anti-aliasing was not working or something, because all of the fonts had a very crude, blocky scaled look to them.

None of this prevented me from using the system, but it sure didn’t make it enjoyable. Anyhow, first I tackled the ugly-scaled-fonts problem. I noticed that I didn’t have the problem when I started OO.o directly on an X.org display. The problem was limited to the VNC server. The solution turned out to be starting Xvnc with a depth of 24 bits instead of 16. Don’t ask me why it works, but it does. It remains to be seen if the increased color depth will cause any performance degradation over my slow DSL uplink. I’ve not noticed any difference over our 100Mbps LAN at home.
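
In practice, that just means passing the depth to the vncserver wrapper when starting the session (display number and geometry are whatever suits):

$ vncserver :1 -depth 24 -geometry 1280x1024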

I solved the other problem (the oversized menu font) by adding the following line to my $HOME/.Xresources file:

Xft.dpi: 85
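
One gotcha: X resources aren’t re-read automatically, so to pick up the change in a running session, merge the file in with xrdb (or just log out and back in):

$ xrdb -merge $HOME/.Xresources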

So there you have it. As always, Google was extremely helpful in tracking down this info. References here and here.

Let’s go paperless

I’m on a paperless kick. I’ve decided that I have too much paperwork cluttering up my file cabinets, desk drawers, etc., so I’m getting rid of as much of it as I can. My goal is to shrink my paperwork collection down so that it only takes up one file cabinet (I currently have three). It’s one part of an overall downsizing theme that’s pervading our household lately, the idea being that if we get rid of as much stuff as possible now, it’ll be easier to move into a smaller, lower-maintenance house down the road.

It’s also getting easier to go paperless. More and more billers, financial institutions, etc. are offering electronic (usually PDF) statements with the option of turning off paper mailings. It took me a little while to warm up to this technology, but now I’ve accepted it wholeheartedly (the key was deciding that I trust the online delivery mechanism more than I trust our mailman).

The centerpiece of the paperless scheme is what I call a “virtual file cabinet”, which is just a fancy name for a directory hierarchy to hold PDF documents. I use ‘unison’ to keep an exact copy of the hierarchy on a different machine, which serves as an effective backup scheme.
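
The sync itself is a one-liner: unison compares the two replicas and propagates whatever has changed since the last run (paths and host name here are hypothetical):

$ unison /home/me/filecabinet ssh://backuphost//home/me/filecabinet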

I’ve settled on PDF as my document format of choice, because it works well and is used by the majority of the e-document providers I deal with. That means everything that is not PDF has to be converted to PDF. The best way I’ve found to do this is to set up a virtual “PDF printer,” which creates a PDF file in lieu of actually sending the document to a printer. Then, just send the document to the virtual queue to create a PDF. This saves a step over printing the document to a file (which creates a PostScript file that then has to be converted with ps2pdf). And some apps, like H&R Block’s TaxCut, don’t allow printing to a file, but they’ll happily send stuff to the PDF queue.

Setting up the PDF printer on my Ubuntu machine was a piece of cake, following these instructions. Condensed version:

  1. sudo apt-get install cups-pdf
  2. sudo chmod 4755 /usr/lib/cups/backend/cups-pdf
  3. Go to System -> Administration -> Printing -> New Printer
  4. Select ‘PDF Printer’
  5. Select ‘Generic’, ‘postscript color printer’ driver
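
Once the queue exists, anything printable can become a PDF, even from the command line; cups-pdf drops the output into ~/PDF by default (the queue name may differ depending on how it was set up):

$ lpr -P PDF statement.txt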

It was a little more difficult, but not overly so, to set this up on my home server box (and also configure our Windows box to print to it). See the Wiki for details.