3 cheers for backward compatibility

Well, we finally upgraded our calendar server last night, from old-and-crusty Steltor CorporateTime to its successor, new-and-shiny Oracle Collaboration Suite. And, kudos to Oracle, as it looks like they’ve kept it backward-compatible with the old CorporateTime API. That is, my homegrown OracleCalendar-to-iCalendar exporter thingy is still working. That’s nice, because I’ve come to depend on it, and this means that I won’t need to spend lots of time fixing it. Things are a little busy here right now, so if it had broken, it probably would have stayed broken for a while.

Once things slow down, and I can revisit this, I bet I can make it work even better with the newer APIs. Actually, I may not need to use the APIs at all anymore, as OCS supposedly supports CalDAV. In the meantime though, I’ll very happily continue to use the stuff I already have.

Oh, and a few days ago, I installed MediaWiki. I think I’m going to get a lot of use out of it. Read about my recent trials and tribulations with PHP iCalendar.

Party on…

Summer’s almost over

Well, here it is late summer again. And I must say — I’m ready for fall. If you’d have told me 20 years ago that I’d be writing that now, I’d have looked at you like you were crazy. But it’s true.. I’m just not crazy about summer these days. It’s not really the heat — being a profuse perspirer, I’ve always disliked the heat, but I haven’t always disliked summer. I think the main culprit is our house. Now, you’d think our house would be the ideal place to spend summer, with a big yard and a pool. And yeah, I guess it is ideal if you’re a kid and don’t have to take care of the place. But the upkeep on the place, combined with working full time and helping care for two small kids, doesn’t leave much time to enjoy the amenities. What it boils down to is, it never seems like there’s enough time in the day, and after a couple of months, it starts to get old.

Owning a pool has proven to be interesting. I do enjoy getting in the pool, particularly with my son. But it’s funny — when you spend a certain amount of time and money on the upkeep of a pool, and the pool is lightly used, you actually start to feel obligated to get in the pool to make it seem worthwhile. And when the pool sits unused for more than a few days, you start wondering why you’re putting so much into it, and then you start entertaining thoughts of having it filled in. People have told me that my best pool-ownership days are ahead of me, when the kids get to the age where they’re swimming all day, every day. I’m looking forward to that, but once the kids are past that age, I want to move — seriously. Because at that point, pool ownership just won’t be worthwhile any more.

But anyhow… I want to like summer again. I dislike it now, but I still get a little sad when it ends. Everything is a phase. Eventually, life will be less crazy and I’ll be able to kick back and enjoy summer again. We may need to move, the kids may need to get a little older. But it’ll eventually happen.

Cisco VPN client for Linux

Well.. after quite a bit of fiddling (what else is new) I managed to get Cisco’s VPN Client working on my Linux box. OIT provides a download for this, but it’s just a tar file of the client software.. no docs or any other info.

Details: I run Debian, and most of my Linux boxes have custom-built kernels; I don’t use the pre-packaged Debian kernels. For some reason, I’ve found that things “work better” like that, at least for servers. Case in point: I initially tried to build the VPN stuff on a freshly-installed box with the stock Debian kernel, and it bombed spectacularly. I then built a custom kernel, tried again, and it worked.

I run 2.4.x. Specifically, the two machines I’ve built VPN on run 2.4.32 and 2.4.33. I have no idea if the stuff works on 2.6 or not.

Installation:

  1. Untar the distribution, VPNClient.tar.gz, and cd into the resulting vpnclient directory.
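
    In shell terms, that’s just:

    tar xzf VPNClient.tar.gz
    cd vpnclient
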
  2. Become root and run the installation script:

    ./vpn_install

    It should be safe to accept the defaults for all of the prompts. One of the prompts is whether to start the VPN Service at boot time. Since I rarely use VPN, I elected not to do this. I ended up with an init script, /etc/init.d/vpnclient_init, which I need to run manually. Presumably, if you tell it to start at boot time, it’ll create the appropriate link in /etc/rc2.d or wherever.
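
    Since I told it not to start at boot, I just run the init script by hand whenever I actually need the VPN:

    /etc/init.d/vpnclient_init start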

  3. UMBC includes two VPN “profile” files, "UMBC OffCampus.pcf" and "UMBC OnCampus.pcf". Copy these into the directory /etc/opt/cisco-vpnclient/Profiles. Make sure they are set to mode 644.

    cp UMBC*.pcf /etc/opt/cisco-vpnclient/Profiles
    chmod 644 /etc/opt/cisco-vpnclient/Profiles/UMBC*

  4. Check the file /opt/cisco-vpnclient/bin/cvpnd and ensure the setuid bit is set. For some reason, after installing on two different machines, one of them had the bit set and the other didn’t. This file must be setuid root or vpnclient will not run for a non-root user.

    chmod 4111 /opt/cisco-vpnclient/bin/cvpnd
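
    A quick ls -l will verify it; with mode 4111, the listing should show an s in the owner execute position, i.e. ---s--x--x:

    ls -l /opt/cisco-vpnclient/bin/cvpnd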

  5. Try it out:

    vpnclient connect UMBC\ OnCampus
    or
    vpnclient connect UMBC\ OffCampus

Problems? Make sure all the files are in the locations they should be (no filenames misspelled etc) with the exact permissions specified above. It’s very picky about this, and the errors it gives aren’t too helpful. strace is definitely your friend here.
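
For example, to see exactly which file or permission check it’s tripping over, run it under strace (the output filename here is arbitrary):

strace -f -o /tmp/vpnclient.trace vpnclient connect UMBC\ OnCampus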

In other news.. I think I’m going to try setting up a personal wiki to document stuff like this. Using the blog for this kind of stuff does work (i.e., I’m documenting stuff that I previously wasn’t, and I have a resource I can refer to now), but the diary-like nature of the blog doesn’t lend itself too well to organizing information. With a wiki, I’ll be able to organize stuff for future reference, and I can keep the blog for the stream-of-consciousness type stuff. I think I’ll try MediaWiki initially, because I’m familiar with it and like its look. My only concern is that it might be overkill, so I’ll have to see what kind of footprint it has.

Parking Permits and EasiPipe hacking

I bit the bullet today and bought a UMBC Parking Permit. Now, for someone who works at UMBC, this may not seem too remarkable. But, this is the first time I’ve had a parking permit in about 4 years. I’ve always had philosophical issues with the University charging its employees for parking, but that’s kinda beside the point — I originally worked in a remote area of campus (TRC building) with a lot of nearby street parking. So, it was kind of a no-brainer to eschew the permit and just park on the street. A few years ago I moved to main campus, which is about a 20-minute walk from where I was originally parking. But up till now, I remained permit-less. I would either park off-campus and walk the mile or so to my office, or I’d ride my bike in, or have my wife drop me off. It worked, for a while. Now with two kids, it’s becoming too inconvenient. So, I got the permit. I’ll actually miss the occasional walks to/from my car, but it’ll be nice not to have to worry about getting to my office in bad weather. And, I don’t really feel the need to beat the system any more just to save a few hundred bucks. Sometimes convenience is worth paying for.

In other news, I hacked a bit on EasiPipe today. EasiPipe is the program that brokers connections between myUMBC and the HP3000 mainframe that serves as our SIS system of record. This is all part of the big project to move all our stuff off of SGI hardware. EasiPipe is a big piece of that. It’s written in C, and required a bit of porting to get it running under Solaris. But it wasn’t too difficult after I dusted off my long-neglected C programming skills. Since we have two clustered portal web servers, I’ve decided to try running two EasiPipe instances, one on each web server. Preparing the code for that took a bit more hacking: the HP3000 listens on a total of 20 contiguous TCP ports, each web server will use a block of 10 of those ports, and each EasiPipe instance needs to figure out which block is its own. Right now, I’m doing that by checking the hostname. I’m going to cut over to this setup tomorrow morning, so I can babysit it over the course of the day. I think it’ll work fine, but ya never know.
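
The real code is C, but the gist of the hostname check is simple enough to sketch in shell. The hostnames and port numbers here are made up for illustration:

# pick a block of 10 HP3000 ports based on which web server we're on
case `hostname` in
    portal1*) PORT_BASE=3000 ;;  # first block: ports 3000-3009
    portal2*) PORT_BASE=3010 ;;  # second block: ports 3010-3019
    *) echo "unknown host, refusing to guess a port block" >&2; exit 1 ;;
esac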

SGI, we hardly knew ye

At UMBC, we have a bunch of web apps which are running on SGI hardware. The SGIs are no longer under maintenance, no longer being upgraded, no longer getting new software builds, etc. etc. Let’s just say “we’re phasing them out.”

I can’t help but feel a little melancholy about this turn of events, having been at UMBC in 1992 when we moved into the new Engineering/Computer Science (now just “Engineering”) building. We got a big pile of money to buy computers and other equipment for the new building. Now, up to that point, our department was primarily a DEC (Digital Equipment Corp) shop. And, we were all lined up to buy lots and lots of DEC hardware with our new-building money. But, DEC’s new baby, the Alpha chip, was not quite ready for prime time in 1992, so they weren’t able to sell us Alpha-based equipment in time for the building to open. So, DEC ended up losing (and subsequently getting bought by Compaq), and SGI ended up getting our money. SGI had just released their new entry-level workstation, the Indigo, and we packed our labs full of them. We also had Crimson servers and, eventually, a 16-node Challenge XL. At the time, SGI was well-known for making high-end graphics workstations. They were featured in movies and were widely used by the entertainment industry for animation and special effects. As such, they showed up a lot in the popular press, so in general, the public was “aware” of SGI and their reputation. By extension, with our labs full of SGIs, we became cutting-edge too. It was a busy year, but it was lots of fun, particularly for a 22-year-old geek like me with no life outside of UMBC (hey, at least I admit it 🙂 )

But unfortunately, all good things must come to an end. The mid- and late-90s were not kind to SGI. Companies like ATI, nVidia and 3dfx began marketing cheap yet increasingly powerful graphics cards for garden-variety PCs. The cards got better and better, and SGI lost their edge. People began moving away from SGI’s proprietary hardware and OS, towards PCs running Windows or Linux. Then, the tech downturn hit, and that pretty much killed SGI. Now, pretty much every PC out there has graphics hardware that blows away anything SGI had back in the day.

The upshot of all this is that we no longer have any SGI workstations at UMBC. However, we still have a few SGI servers plodding along, running legacy web apps, because no one has gotten around to moving the apps off. With the hardware no longer under maintenance, it’s only a matter of time till it dies, so I’m trying to get the apps moved off before that happens. That can be a challenge, because a lot of these apps also need other kinds of attention — for example, conversion from legacy authentication systems to our current campus single sign-on scheme. And many of them aren’t happy about being moved; lots of hardcoded path names, hostnames, URLs etc.

So it’s a slow process, but I’m slowly getting it done, moving us ever closer to the day when we can shut down the last SGI machine at UMBC… the end of an era.

Ubuntu fonts

I think I’ve finally got an Ubuntu font setup that I can live with. It’s not perfect, but it’s livable. Here’s what I did so I can replicate it if necessary.

  • Install msttcorefonts package.
  • Install the MS “Tahoma” and “Tahoma Bold” fonts, neither of which is included with msttcorefonts.
  • Set the X server to 96x96 DPI.
  • Install a custom .fonts.conf that disables anti-aliasing for smaller fonts, sets some font prefs, and enables sub-pixel rendering (sketched below the list).
  • In Firefox, go to Edit->Content->Fonts & Colors->Advanced. Set Proportional font to “Sans Serif”, Size 14pt. Set Serif font to “Times New Roman”. Set Sans-Serif font to “Verdana”. Set Monospace to “Courier New” at 12pt.
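
For the record, here’s roughly what the anti-aliasing and sub-pixel parts of that .fonts.conf look like. The pixel-size cutoff of 12 is a guess at what I actually used, so treat this as a starting point rather than gospel:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- disable anti-aliasing below a certain pixel size -->
  <match target="font">
    <test name="pixelsize" compare="less_eq"><double>12</double></test>
    <edit name="antialias" mode="assign"><bool>false</bool></edit>
  </match>
  <!-- enable sub-pixel rendering (rgb ordering suits most LCDs) -->
  <match target="font">
    <edit name="rgba" mode="assign"><const>rgb</const></edit>
  </match>
</fontconfig>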

I’m pretty sure that’s it. Further references may be found in other posts in this category.

The overall result is a very Microsoft-y look, probably because of the heavy use of the Tahoma font. Some fonts are a little too small, others are a little too big (the default font in Firefox, for one). But, I can live with this until I go to a Mac on my desktop. It took a bit of tweaking, but it definitely looks nicer than my old vanilla Debian setup.

Followup 8/15.. The menu fonts in OpenOffice.org were still kinda ugly after doing all this.. I fixed this by going to Tools->Options->OpenOffice.org->View, and unchecking “Use system font for user interface.” Then when I restarted, the menus came up in Tahoma. Problem solved.

Followup 1/11/07: Installed Firefox 2 and found things required some additional tweaking. Changed Sans-Serif font from Verdana to Arial. Changed proportional font size to 16pt and fixed font size to 14pt. Re-enabled anti-aliasing for smaller fonts. Not sure I’m 100% happy with it, but it’s tolerable.

Scaling down the pool project

Over the past weekend, I came to the conclusion that I’ve bitten off more than I can chew with my swimming pool repair project this year.

The moment of enlightenment came on Saturday, when I spent most of the day working on the pool. It occurred to me that to effectively re-bed my loose coping stones, I’m going to need to grind a lot more mortar off the bond beam than I was originally planning. Otherwise, the stones are either going to be uneven, or they’re going to sit up too high. Grinding the beam down is going to require a power tool such as an electric or pneumatic chipping hammer. And, it’ll make enough of a mess that I think the pool will need to be drained. And, that means it’s not happening this year.

So, I’ve elected to put off the major repairs until spring (probably late April or early May). This summer, I’ll make repairs to the deck and caulk the expansion joint in the areas where the coping is sound. I should be able to finish that up over the next couple of weekends. Then when I close the pool, I’ll tarp the areas where the coping is off. Then I’ll drain the pool next April around the same time I would normally start up the equipment.

This past weekend, I got most of the expansion joint cleaned and filled it with foam backer rod. I learned something about backer rod in the process: After about 24 hours in the joint, it “settles” lengthwise. My butt joints now have about 1/2″ of space between them. No problem, I can fill them with little bits of backer rod. But, I’m glad I didn’t caulk right away.

With the pool empty next spring, I’ll have the opportunity to do some maintenance, such as..

  • Touch-up areas of loose or peeling paint
  • Inspect and re-caulk around light niches, return jets, main drain, etc.
  • Inspect and repair a return jet that appears to have a threaded sleeve stuck in it
  • Patch skimmers where necessary
  • Inspect shell cracks and re-putty where necessary
  • Install an overflow line (maybe)
  • And of course, repair the coping stones and tile in the deep end

Hopefully after that, I’ll be good for another 5 years.

I love pool ownership. Really, I do.

Installed Ubuntu

I tracked down a spare 8.5gig disk today (the one that came with my old P2-300 box, ironically) and installed Ubuntu on it. First problem: I installed the spare disk as an IDE slave, and the Ubuntu install totally hosed the boot loader on the master (Grub). After installation, the boot loader gave some cryptic error message and hung. So, I booted into Knoppix and reinstalled the boot loader, which allowed me to boot into my Debian OS on the master disk. I then attempted to configure my existing Grub to boot Ubuntu off the slave. But, when I booted, grub refused to recognize the slave disk. Not sure why (BIOS issue, maybe?), but I ended up copying all of the Ubuntu /boot stuff into the /boot partition on my master disk, pointing grub at that, and just booting everything from there. Once I did that I was finally able to boot Ubuntu. (One hitch with this method — kernel upgrades in Ubuntu are no longer automatic. I have to copy the kernel and initrd images into /boot on the main disk, then edit the grub.conf there to boot the new kernel. Not a big deal, as I don’t plan on running this configuration for too long — if I like Ubuntu, I’ll install it as the main OS on the computer.)
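
For reference, the stanza I added to grub.conf for this looks something like the following (the kernel version and root device are from memory and may not match your setup, so double-check them):

title  Ubuntu (slave disk)
root   (hd0,0)
kernel /vmlinuz-2.6.15-27-686 root=/dev/hdb1 ro
initrd /initrd.img-2.6.15-27-686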

Upon bootup, it immediately prompted me to install 160-odd megs of software updates, which mostly worked, but some of them apparently crapped out as I got this happy-fun-ball “some updates failed to install” window after the installation finished. Being that Ubuntu uses apt, this is somewhat to be expected, but I hope it doesn’t screw up further updates (as apt failures on my Debian boxes are wont to do). Followup — no further problems with apt so far. After installing the updates, I was prompted to reboot, which I did, which brings me to where I am now, writing this entry.

Ubuntu seems nice enough, but so far it doesn’t seem much different from other Linux desktop installations I’ve seen, all of which are fraught with quality-control issues such as these. Once configured, they work well, but there’s always that pain of setup and configuration. I guess I’m a little disappointed — after all the hype I’ve read, I was hoping Ubuntu would be more revolutionary — a Linux desktop that doesn’t really feel like a Linux desktop. Oh well. Off I go to a command-line to get my graphics card working and fix my fonts, just like every other Linux desktop….

OK.. Installing the nvidia driver was easy, actually. There’s a package (nvidia-glx) that takes care of it. After installing this, I went in and copied my configuration out of my old xorg.conf, restarted X (by way of the gdm display manager), and it came right up with my dual-head configuration.

I’m now in the process of installing some other “must-have” apps such as emacs, thunderbird, etc. Oh yeah.. and OpenAFS. Uh-oh…

Well, openafs turned out to be painless. Just install the openafs-modules-source package and follow the directions in /usr/share/doc/openafs-client/README.modules. Now to work on fonts. Installing the msttcorefonts package helped a lot. To do that I first needed to go into Synaptic (Ubuntu’s GUI front-end to apt) and enable the “multiverse” repository, which includes non-free packages. Then, I found that my X display was not set to 96x96 DPI, which is supposedly optimal for font rendering. Based on info found here and here, I tweaked the display DPI in xorg.conf by adding the following option to my “Monitor” section:

DisplaySize 783 277 # 96 DPI @ 2960x1050
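
(DisplaySize is specified in millimeters, so the math works out to 2960 pixels / (783 mm / 25.4) ≈ 96 DPI horizontally, and likewise 1050 / (277 / 25.4) ≈ 96 DPI vertically.)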

and the following to “Device”:

Option "UseEdidDpi" "false"

Next it looks like I need to tweak anti-aliasing for certain fonts (reference).

Little by little it’s coming along.

Another good tutorial for configuring fonts under Ubuntu. This one includes instructions for installing the Tahoma family, which for some reason is not included with Microsoft’s Core fonts for the web. With the MS fonts (plus Tahoma) installed, things look much better already, and apparently I can improve the look further by tweaking anti-aliasing and other stuff… might play with that a bit tomorrow.

First impressions of Ubuntu, etc.

Last Friday I tried out the latest release of Ubuntu Linux. They provide a “live” CD, which boots and runs directly from the CD just like Knoppix. My goal is to find a nice desktop-oriented version of Linux that “just works”. On the server side, I’m sticking with Debian, but I find vanilla Debian a bit lacking in the desktop department. So, as a stop-gap until I cut over to OS X completely, I thought I’d try out Ubuntu and see how I like it. Ubuntu is based on the same apt package system as Debian, so it’s familiar, and it’s touted as being very desktop-friendly.

First impressions: it looks nice. apt-get works as expected from the command line, but the default archive server has a very slow connection — I wonder if there are mirrors on Internet2 that I could use (more on that below). If not, that’s a definite drawback, as I’m not sure I could give up the blazing speed I get from debian.lcs.mit.edu. I was able to install xmms easily, my sound card was immediately recognized, and the system shares the sound card between apps. However, for some reason it wouldn’t stream MP3s from my mp3act server. Recent versions of OpenOffice and Firefox are provided. It didn’t pick up my dual-head setup, but I didn’t expect it to — I’ll need to download and install nVidia’s x.org driver manually. It looks like I’ll need to install some compilers and devel tools before I’ll be able to build the nVidia driver. But I expect it’ll work.
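
On the mirror question: if I do find one, switching should just be a matter of editing /etc/apt/sources.list and swapping in the mirror’s hostname, something like this (the hostname below is made up, and substitute your own release name if you’re not on dapper):

deb http://ubuntu.mirror.example.edu/ubuntu/ dapper main restricted universe multiverse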

As with every other version of Linux, the default fonts are butt-ugly. Why can’t someone put out a Linux distro that has nice fonts out of the box? That has always been one of my biggest gripes with Linux. There are tutorials on the ‘net to improve the look of the fonts under Ubuntu, but honestly, this shouldn’t be something I have to mess with. Linux is never going to get anywhere in the desktop market until they can get past this issue.

All of that said, I think I may try out an “official” install of Ubuntu on the hard drive, and see how it goes for a while. I’d rather not wipe out my existing Debian install, so I’ll have to scrounge around for a spare hard drive first.

In other news.. I’m thinking about finally taking the plunge and going with totally paperless bills and financial statements (where possible). My redundant disk setup gives me a more reliable place to store documents electronically, so there’s no reason not to go for it. As with everything else, I’ll see how it goes.

MySQL Replication

I’m about done with the computer shuffling that I started a month or so ago. I have a 300g drive at work and a 120g drive at home. The idea is to replicate stuff like the MP3 collection, photos, system backups, etc. in both places, to guard against losing data to a disk crash.
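
I haven’t committed to one tool for keeping the two drives in sync, but something like rsync over ssh would do the trick (the paths and hostname here are made up):

rsync -az --delete /data/mp3/ me@home.example.org:/data/mp3/

Since --delete makes the far side an exact mirror, a --dry-run first is a good sanity check.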

The next challenge is to set up an mp3act server at home that replicates the one at work. I’m going to try doing this with MySQL replication. The idea here is to replicate the mostly-static music library tables with work being the master and home being the slave. Then, each site would have its own copies of the dynamic tables like playlists and playlist history. This lets me use one-way replication and avoid setting up a dual master/slave configuration, which I don’t think would work well with the dynamic tables, particularly WRT simultaneous access etc.

Yesterday and this morning I took the first steps toward doing this, which involved locking down my MySQL server at home, then setting it up to allow remote TCP connections (comment out the bind-address option in my.cnf). Then I needed to create a slave user, which I did by doing

GRANT REPLICATION SLAVE ON *.* TO slave@homedsl IDENTIFIED BY 'password'

The grant command will create the user if it doesn’t already exist.

Then, I needed to edit /etc/mysql/my.cnf on both the master and the slave, to give each one a unique server ID. I gave the master an ID of 1 and the slave an ID of 2. This is accomplished in the [mysqld] section of the config file, with a line like this:

server-id=1

Next, I followed the instructions in the documentation (see link above) to lock the tables on the master and create tar archives of each of the databases I wanted to replicate (on my Debian box, each database has its own directory under /var/lib/mysql). I then untarred the databases into /var/lib/mysql on the slave. For each database I was copying, I added a line like this to the slave’s config file:

replicate-do-db=database-name

I found that if I wasn’t replicating all the databases from the master, the replication would fail unless I explicitly listed the databases like this. The slave expects the tables it’s replicating to already be present — it does not “automagically” create tables as part of the replication process. I wasn’t really clear on this point after reading the documentation; it was only clear after I actually tried it.

With the tables copied over and the configurations tweaked accordingly, I followed the docs to record the master’s ‘state’ info, point the slave at the master, and start the replication threads on the slave. This all worked as expected. All in all, it wasn’t too hard to set up, so I’ll see how well it works going forward.
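
For my own future reference, that last bit boils down to something like the following. The hostname, log file name, and position are placeholders; the real values come from whatever SHOW MASTER STATUS reports:

-- on the master: freeze writes and note the binlog coordinates
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
-- (copy the database directories over while the lock is held, then release it)
UNLOCK TABLES;

-- on the slave: point it at the master and start the replication threads
CHANGE MASTER TO
    MASTER_HOST='work-server.example.edu',
    MASTER_USER='slave',
    MASTER_PASSWORD='password',
    MASTER_LOG_FILE='mysql-bin.000042',
    MASTER_LOG_POS=1234;
START SLAVE;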