Blog

  • Scaling down the pool project

    Over the past weekend, I came to the conclusion that I’ve bitten off more than I can chew with my swimming pool repair project this year.

    The moment of enlightenment came on Saturday, when I spent most of the day working on the pool. It occurred to me that to effectively re-bed my loose coping stones, I’m going to need to grind a lot more mortar off the bond beam than I was originally planning. Otherwise, the stones are either going to be uneven, or they’re going to sit up too high. Grinding the beam down is going to require a power tool such as an electric or pneumatic chipping hammer. And, it’ll make enough of a mess that I think the pool will need to be drained. And, that means it’s not happening this year.

    So, I’ve elected to put off the major repairs until spring (probably late April or early May). This summer, I’ll make repairs to the deck and caulk the expansion joint in the areas where the coping is sound. I should be able to finish that up over the next couple of weekends. Then when I close the pool, I’ll tarp the areas where the coping is off. Then I’ll drain the pool next April around the same time I would normally start up the equipment.

    This past weekend, I got most of the expansion joint cleaned and filled it with foam backer rod. I learned something about backer rod in the process: After about 24 hours in the joint, it “settles” lengthwise. My butt joints now have about 1/2″ of space between them. No problem, I can fill them with little bits of backer rod. But, I’m glad I didn’t caulk right away.

    With the pool empty next spring, I’ll have the opportunity to do some maintenance, such as:

    • Touch-up areas of loose or peeling paint
    • Inspect and re-caulk around light niches, return jets, main drain, etc.
    • Inspect and repair a return jet that appears to have a threaded sleeve stuck in it
    • Patch skimmers where necessary
    • Inspect shell cracks and re-putty where necessary
    • Install an overflow line (maybe)
    • And of course, repair the coping stones and tile in the deep end

    Hopefully after that, I’ll be good for another 5 years.

    I love pool ownership. Really, I do.

  • Installed Ubuntu

    I tracked down a spare 8.5gig disk today (the one that came with my old P2-300 box, ironically) and installed Ubuntu on it. First problem: I installed the spare disk as an IDE slave, and the Ubuntu install totally hosed the boot loader (Grub) on the master. After installation, the boot loader gave a cryptic error message and hung. So, I booted into Knoppix and reinstalled the boot loader, which allowed me to boot into my Debian OS on the master disk. I then attempted to configure my existing Grub to boot Ubuntu off the slave. But, when I booted, Grub refused to recognize the slave disk. Not sure why (BIOS issue maybe?), but I ended up copying all of the Ubuntu /boot stuff into the /boot partition on my master disk, pointing Grub at that, and just booting everything from there. Once I did that, I was finally able to boot Ubuntu. (One hitch with this method — kernel upgrades in Ubuntu are no longer automatic. I have to copy the kernel and initrd images into /boot on the main disk, then edit the grub.conf there to boot the new kernel. Not a big deal, as I don’t plan on running this configuration for too long — if I like Ubuntu, I’ll install it as the main OS on the computer.)
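
    For future reference, the grub.conf entry on the master ends up looking something like this. The kernel version and partition names here are illustrative placeholders, not my exact setup:

    ```
    # Illustrative grub.conf (menu.lst) stanza on the master disk.
    # Kernel version and device names are placeholders.
    title  Ubuntu (kernel copied from slave install)
    root   (hd0,0)                                   # master's /boot partition
    kernel /vmlinuz-2.6.10-5-386 root=/dev/hdb1 ro   # root filesystem on the slave
    initrd /initrd.img-2.6.10-5-386
    ```

    Each kernel upgrade means copying the new vmlinuz/initrd pair over and updating this stanza by hand.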

    Upon bootup, it immediately prompted me to install 160-odd megs of software updates, which mostly worked, but some of them apparently crapped out as I got this happy-fun-ball “some updates failed to install” window after the installation finished. Being that Ubuntu uses apt, this is somewhat to be expected, but I hope it doesn’t screw up further updates (as apt failures on my Debian boxes are wont to do). Followup — no further problems with apt so far. After installing the updates, I was prompted to reboot, which I did, which brings me to where I am now, writing this entry.

    Ubuntu seems nice enough, but so far it doesn’t seem much different from other Linux desktop installations I’ve seen, all of which are fraught with quality-control issues such as these. Once configured, they work well, but there’s always that pain of setup and configuration. I guess I’m a little disappointed — after all the hype I’ve read, I was hoping Ubuntu would be more revolutionary — a Linux desktop that doesn’t really feel like a Linux desktop. Oh well. Off I go to a command-line to get my graphics card working and fix my fonts, just like every other Linux desktop….

    OK.. Installing the nvidia driver was easy, actually. There’s a package (nvidia-glx) that takes care of it. After installing this, I went in and copied my configuration out of my old xorg.conf, restarted X (by way of the gdm display manager), and it came right up with my dualhead configuration.

    I’m now in the process of installing some other “must-have” apps such as emacs, thunderbird, etc. Oh yeah.. and OpenAFS. Uh-oh…

    Well, openafs turned out to be painless. Just install the modules-source package and follow the directions in /usr/share/doc/openafs-client/README.modules. Now to work on fonts. Installing msttcorefonts package helped a lot. To do that I first needed to go into Synaptic (Ubuntu’s GUI front-end to apt) and enable the “multiverse” repository, which includes non-free packages. Then, I found that my X display was not set to 96x96dpi, which is supposedly optimal for font rendering. Based on info found here and here, I tweaked the display DPI in xorg.conf by adding the following option to my “Monitor” section:

    DisplaySize 783 277 # 96 DPI @ 2960x1050

    and the following to “Device”:

    Option "UseEdidDpi" "false"
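
    For reference, the DisplaySize numbers fall out of a simple formula: size in mm = pixels / target DPI × 25.4. A quick sanity check of the values above for my 2960x1050 combined desktop:

    ```shell
    # DisplaySize (mm) = pixels / target DPI * 25.4
    # Check the 2960x1050 desktop at 96 DPI:
    awk 'BEGIN { printf "DisplaySize %d %d\n", 2960/96*25.4, 1050/96*25.4 }'
    # prints: DisplaySize 783 277
    ```

    which matches the 783 277 in the Monitor section.
    
    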

    Next it looks like I need to tweak anti-aliasing for certain fonts (reference).

    Little by little it’s coming along.

    I found another good tutorial for configuring fonts under Ubuntu. This one includes instructions for installing the Tahoma family, which for some reason is not included with Microsoft’s Core fonts for the web. With the MS fonts (plus Tahoma) installed, things look much better already, and apparently I can improve the look further by tweaking anti-aliasing and other stuff… might play with that a bit tomorrow.

  • First impressions of Ubuntu, etc.

    Last Friday I tried out the latest release of Ubuntu Linux. They provide a “live” CD, which boots and runs directly from the CD just like Knoppix. My goal is to find a nice desktop-oriented version of Linux that “just works”. On the server side, I’m sticking with Debian, but I find vanilla Debian a bit lacking in the desktop department. So, as a stop-gap before cutting over to OS X completely, I thought I’d try out Ubuntu and see how I like it. Ubuntu is based on the same apt package system as Debian, so it’s familiar, and it’s touted as being very desktop-friendly.

    First impressions: it looks nice. apt-get works as expected from the command line, but the default archive server has a very slow connection — I wonder if there are mirrors on Internet2 that I could use. If not, that’s a definite drawback, as I’m not sure I could give up the blazing speed I get from debian.lcs.mit.edu. I was able to install xmms easily; my sound card was immediately recognized, and the system shares it between apps. However, for some reason streaming MP3s from my mp3act server didn’t work. Recent versions of OpenOffice and Firefox are provided. It didn’t pick up my dual-head setup, but I didn’t expect it to — I’ll need to download and install nVidia’s x.org driver manually. It looks like I’ll need to install some compilers and devel tools before I’ll be able to build the nVidia driver. But I expect it’ll work.

    As with every other version of Linux, the default fonts are butt-ugly. Why can’t someone put out a Linux distro that has nice fonts out of the box? That has always been one of my biggest gripes with Linux. There are tutorials on the ‘net to improve the look of the fonts under Ubuntu, but honestly, this shouldn’t be something I have to mess with. Linux is never going to get anywhere in the desktop market until they can get past this issue.

    All of that said, I think I may try out an “official” install of Ubuntu on the hard drive, and see how it goes for a while. I’d rather not wipe out my existing Debian install, so I’ll have to scrounge around for a spare hard drive first.

    In other news.. I’m thinking about finally taking the plunge and going with totally paperless bills and financial statements (where possible). My redundant disk setup gives me a more reliable place to store documents electronically, so there’s no reason not to go for it. As with everything else, I’ll see how it goes.

  • MySQL Replication

    I’m about done with my computer shuffling which I started a month or so ago. I have a 300g drive at work and a 120g drive at home. The idea is to replicate stuff like the MP3 collection, photos, system backups, etc. in both places, to guard against losing data to a disk crash.

    The next challenge is to set up an mp3act server at home that replicates the one at work. I’m going to try doing this with MySQL replication. The idea here is to replicate the mostly-static music library tables with work being the master and home being the slave. Then, each site would have its own copies of the dynamic tables like playlists and playlist history. This lets me use one-way replication and avoid setting up a dual master/slave configuration, which I don’t think would work well with the dynamic tables, particularly WRT simultaneous access etc.

    Yesterday and this morning I took the first steps toward doing this, which involved locking down my MySQL server at home, then setting it up to allow remote TCP connections (comment out the bind_address option in my.cnf). Then I needed to create a slave user, which I did by doing

    GRANT REPLICATION SLAVE ON *.* TO slave@homedsl IDENTIFIED BY 'password'

    The grant command will create the user if it doesn’t already exist.

    Then, I needed to edit /etc/mysql/my.cnf on both the master and the slave, to give each one a unique server ID. I gave the master an ID of 1 and the slave an ID of 2. This is accomplished in the [mysqld] section of the config file, with a line like this:

    server-id=1

    Next, I followed the instructions in the documentation (see link above) to lock the tables on the master and create tar archives of each of the databases I wanted to replicate (on my Debian box, each database has its own directory under /var/lib/mysql). I then untarred the databases into /var/lib/mysql on the slave. For each database I was copying, I added a line like this to the slave’s config file:

    replicate-do-db=database-name

    I found that if I wasn’t replicating all the databases from the master, the replication would fail unless I explicitly listed the databases like this. The slave expects the tables it’s replicating to already be present — it does not “automagically” create tables as part of the replication process. I wasn’t really clear on this point after reading the documentation; it was only clear after I actually tried it.

    With the tables copied over and the configurations tweaked accordingly, I followed the docs to record the master’s ‘state’ info, point the slave at the master, and start the replication threads on the slave. This all worked as expected. All in all, it wasn’t too hard to set up, so I’ll see how well it works going forward.
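
    For my own future reference, those last steps boil down to something like this. The host name, password, and binary log file/position here are placeholders, not my real values:

    ```
    -- On the master, while the tables are still locked:
    SHOW MASTER STATUS;          -- note the File and Position values

    -- On the slave, point it at the master and start replicating:
    CHANGE MASTER TO
        MASTER_HOST='work.host.name',
        MASTER_USER='slave',
        MASTER_PASSWORD='password',
        MASTER_LOG_FILE='mysql-bin.000001',
        MASTER_LOG_POS=98;
    START SLAVE;

    -- Verify that both replication threads are running:
    SHOW SLAVE STATUS;
    ```

    The MASTER_LOG_FILE and MASTER_LOG_POS values must match what SHOW MASTER STATUS reported when the snapshot was taken, or the slave will replay (or skip) the wrong part of the binary log.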

  • Whew… what a day

    Thursday was quite the day of triumphs and tribulations.

    It all started out with a successful swap-out of my home Linux server. It had been running on an old, trusty-but-tired Dell P2-300mhz, and I upgraded it to a slightly-less-old Dell P3-450mhz (Linux is far from perfect, but it truly shines at getting new life out of old, scavenged hardware). The upgrade was as easy as: build a new kernel, shut the old box down, pull out all the boards and peripherals, put the stuff in the new box, and boot the new box up. The result was a home server with a 50% faster processor and an extra 256mb RAM (640mb total vs 384mb). Not earth shattering, but a noticeable improvement, and it was easy to do. The trick to doing this is to transplant as much of the “guts” from the old box to the new box as possible, so the hardware configuration stays mostly the same.

    Next up was the launch of our new myUMBC portal, which so far has been very smooth, other than the usual little niggling problems that always pop up with these things. We had already been running uPortal in production for six months, and that experience definitely helped ease the transition. The centerpiece of this launch was a complete redesign of the UI, but behind the scenes we also upgraded the framework and layout manager. This gave us an opportunity to start out “clean” with a fresh set of database tables, and to redo a few things which were done sloppily with the initial launch (such as our PAGS group and channel hierarchies). It positions us very well for future releases and gives us a clean platform on which to build future improvements.

    Of course, by Murphy’s Law, the air conditioning in ECS picked yesterday to go on the fritz. So the launch was a little sweaty, but it happened anyhow. When I left the office, the A/C was finally getting back to normal, and then I get home and our power goes out. It ended up being out for around 4 hours, from 7:30pm till 11:30pm. Not all that long, but it always seems like forever when it’s actually out, and it made for a pretty sweaty (and surly) day. And of course, BGE’s automated system never gave us an estimate of when the power would be restored, so Cathy packed up the kids to go sleep at her sister’s, and pulled out 2 minutes before the power came back on. Fortunately I was eventually able to get a decent night’s sleep. I must say I’m more than ready for this summer to end. This is the first truly miserable summer we’ve had since 2002, and I had forgotten just how bad 2002 was. Oh well… t-minus 7 weeks till the Outer Banks.

  • CWebProxy channels and self-signed certs

    OK. Just so I don’t forget this when I inevitably have to do it again.

    We are starting to add some CWebProxy channels that access the portal web server via its external URL rather than one of the loopback interfaces (long story why, but there are a few issues with proxying to a localhost URL, particularly WRT inline images). These channels go through SSL, as opposed to the loopback ones which use standard HTTP. Our test portal server uses a self-signed SSL cert. That causes some problems, because the portal doesn’t have access to the server’s cert to properly negotiate the SSL connection.

    Solution: Create a local keystore containing the cert info, and point the JVM at this file via a command-line argument.

    How to do it in 5 easy steps:

    1. Find the SSL cert for the web server. On the portal servers, this is located under server-root/conf/server-name.crt. Make a temporary copy of this file. Edit the copy and remove everything except the actual cert data, keeping the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.
    2. Use the cert file to create a Java keystore file. Assuming the keystore will live at /etc/umbc/uportal-test.umbc.edu.keystore and the cert file copy is cert.txt:

      keytool -import -trustcacerts -keystore /etc/umbc/uportal-test.umbc.edu.keystore -file cert.txt -alias uportal-test

      (Note: keytool is in JAVA_HOME/bin on recent versions of the Sun JVM.)

    3. Set permissions on the keystore file so that the portal web server can read it.
    4. Point the portal web server’s JVM at the custom keystore file. With Tomcat, this is done by setting the JAVA_OPTS environment variable prior to starting Tomcat. For UMBC web servers, the place to set this is server-root/bin/config-perl.
    5. Restart Tomcat.
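
    The JVM argument in step 4 is the standard trust store property. In config-perl (or wherever JAVA_OPTS gets set for your Tomcat startup), it looks something like this, using the keystore path from step 2:

    ```shell
    # Point Tomcat's JVM at the custom trust store before startup
    JAVA_OPTS="$JAVA_OPTS -Djavax.net.ssl.trustStore=/etc/umbc/uportal-test.umbc.edu.keystore"
    export JAVA_OPTS
    ```

    
    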
  • Pool coping project drones on

    Work on the pool coping project continues slowly but surely. I didn’t intend for this to be an all-summer project, but that’s how it’s turning out. The hot weather has really slowed it down, which is not all bad, as it’s keeping me from overextending myself. I’ve done most of my recent work in the early mornings on weekends.

    Today I finally finished prepping the individual coping stones for re-mortaring. This involved using a hammer and chisel to laboriously chip old, loose mortar off the bottom of the stones. From the appearance of the mortar, it looks like someone attempted a similar repair at some point in the past. Hopefully, this one will last a bit longer — that was the idea behind saw-cutting the expansion joint, anyhow.

    Next up is to chip any loose stuff off the top of the bond beam, and finish cleaning out the expansion joint at the deep end. Also, one of the coping stones needs to be glued together. This week I’ll figure out what product I need for that. Weather permitting, next weekend I’ll try to get all this prep work finished up so that I’ll be ready to reattach the stones.

    I will update this entry as I gather info.

  • Pipe Insulation Adventures

    Now that our boiler job is complete, I’m going through and insulating all of the near-boiler pipes, which put off quite a bit of heat. First priority is the primary loop (1-1/4″ copper), followed by the secondary loop for the indirect water heater (1″ copper), followed by the DHW piping near the indirect (3/4″ copper), and lastly the secondary loops for our 3 heating zones (mix of 1″ and 3/4″ copper). I’m using the pre-formed fiberglass pipe insulation that has a white scrim jacket, as that’s what was recommended to me. Home Depot carries this stuff. I found it in the aisle with the furnace/air handler filters, thermostats, and ductwork. Here’s the catch — the stuff is tagged as being for various sizes of copper pipe, but the tags have no bearing on reality. I initially bought the stuff labeled for 1-1/4″ copper, but it was way too big for my 1-1/4″ pipe. I had to return it and get the stuff labeled for 1″ copper, which was a snug fit on the 1-1/4″ pipe. And, the stuff for 1″ pipe is actually more expensive than the stuff for 1-1/4″ pipe (by about 50¢ per 3′ length), which makes absolutely no sense.

    Fast forward to the actual installation. The insulation slips easily over the pipe, and seals shut with an adhesive strip on the scrim jacket. However, on the stuff I bought, the adhesive doesn’t hold up very well against the expansion and contraction of the pipes as they heat/cool… a lot of the seams were popping loose just a few hours after I wrapped the pipes. I’ve compensated by adding some strategically placed wire ties, but I may need to track down a better adhesive to apply to the problem areas.

    For butt joint (and possibly seam) sealing, I found some white scrim tape at Grainger, but it’s mind-numbingly expensive: Around $35 for 50 yards. That’s almost 25¢ a foot. I can’t imagine what’s in the stuff to make it that expensive.

    On a totally unrelated note, I see that Home Depot is now carrying trench drainage systems. This could come in handy for my driveway down the road….

  • Hump Day Ramblings

    I accidentally shut down my X server yesterday before I left work, so I took the opportunity to install the custom nVidia driver on my X desktop. And I must say, the proprietary driver is much nicer than the nv driver that comes with X.org. Someone at nVidia has put a lot of work into making these cards work well with Linux. The installer script produced a working config file that started right up, with acceleration enabled to boot.

    The nVidia driver has a built-in option called “TwinView” which provides multihead support via the VGA and DVI ports on the card. It replaces Xinerama, although supposedly Xinerama can still be used to provide the same functionality. However, TwinView seems to be the better alternative because it provides acceleration on both displays. It also adds Xinerama-compatible “hints” to the X server so that window managers will work the same as with Xinerama. It’s really very well done. So now I have a full 24-bit display across both monitors, with acceleration. Right now I’m still using the widescreen as my “main” display and the standard display as my “secondary” screen. I’m going to try it like this for a while, and if I don’t like it, I’ll switch them.

    The only configuration challenge was getting both displays working at their respective resolutions. I accomplished this with the following Device section:

    Section "Device"
        Identifier "nVidia"
        Driver     "nvidia"
        BusID      "PCI:1:0:0"
        Option     "TwinView"
        Option     "MetaModes" "CRT-0: 1680x1050, DFP-0: 1280x1024"
        Option     "TwinViewOrientation" "RightOf"
    EndSection

    The MetaModes line is the important one.

    While testing things out, I also learned something about VNC: it reduces the pixel depth over slow connections. To force full-color rendering, I need to do

    vncviewer -FullColor host.name:display

    I was initially scratching my head as to why it was still rendering limited colors despite the 24-bit display. That explains it, and I may want to keep full color rendering disabled to maximize performance over my slow DSL uplink.

    Also, this morning I hooked a scavenged SCSI disk up to my PC at home, mainly as a proof-of-concept: The disk uses 68-pin wide SCSI, and my controller uses 50-pin narrow SCSI. However, all I needed to make it work was a 50-to-68-pin adapter cable. I just jumpered the drive to SCSI ID 1 and plugged it in. Initially I had an external terminator on the drive case, and that hosed things. When I removed the terminator, it worked. Apparently my controller wants to terminate the bus itself. At any rate, now I have a spare 72-gig drive with its own case and power supply.

    And finally, what’s a summer blog entry without a mention of the pool? Last week, the backup valve on my Polaris 380 broke. The valve mechanism itself is a big mess of gears and turbines, which fits into a plastic enclosure. The enclosure is made up of two halves held together by a screw-on collar ring, with a couple of o-rings to prevent leaks. The screw-on collar piece is what actually broke, probably after the valve was dropped onto the concrete pool deck one too many times. The mechanism itself was undamaged. Fortunately, Polaris sells a replacement case kit separately, and it’s much less expensive than an entire valve. The annoying thing is, the case kit only includes one of the two necessary o-rings. The included o-ring seals the two case halves together. The other one seals the backup jet to the case. It’s “technically” not part of the case, but if I’ve got the valve disassembled anyhow, I might as well replace both o-rings as they’re both probably worn out. It’s a small o-ring (1/2″ O.D. x 3/8″ I.D.) and it would have been nice if Polaris had seen fit to throw one in with the case kit. Oh well. For future reference, I found a replacement o-ring at Home Depot, in the plumbing section where they have the faucet repair kits.

    Well, I guess I should get to work now.

  • Bike-to-work milestone

    Today I biked in for the 22nd time this year, which matches my total for last year (and this time last year, I hadn’t even started riding in yet). I haven’t been riding with quite the frequency that I did last year. Most of last year’s rides were concentrated in August and September, and this year’s have been spread out over April, May, June, and July. However, most of my missed rides have been for legitimate reasons (illness, travel, family issues, rain, extreme heat, appointments/errands, days off to work on projects, etc), as opposed to laziness. I hope to pick up the pace as we enter the second half of the riding season. My goal for the year is 40 rides, which I don’t think is unreasonable.