Whew… what a day

Thursday was quite the day of triumphs and tribulations.

It all started out with a successful swap-out of my home Linux server. It had been running on an old, trusty-but-tired Dell P2-300mhz, and I upgraded it to a slightly-less-old Dell P3-450mhz (Linux is far from perfect, but it truly shines at getting new life out of old, scavenged hardware). The upgrade was as easy as: build a new kernel, shut the old box down, pull out all the boards and peripherals, put the stuff in the new box, and boot the new box up. The result was a home server with a 50% faster processor and an extra 256mb RAM (640mb total vs 384mb). Not earth-shattering, but a noticeable improvement, and it was easy to do. The trick to doing this is to transplant as much of the “guts” from the old box to the new box as possible, so the hardware configuration stays mostly the same.

Next up was the launch of our new myUMBC portal, which so far has been very smooth, other than the usual little niggling problems that always pop up with these things. We had already been running uPortal in production for six months, and that experience definitely helped ease the transition. The centerpiece of this launch was a complete redesign of the UI, but behind the scenes we also upgraded the framework and layout manager. This gave us an opportunity to start out “clean” with a fresh set of database tables, and to redo a few things which were done sloppily with the initial launch (such as our PAGS group and channel hierarchies). It positions us very well for future releases and gives us a clean platform to build on.

Of course, by Murphy’s Law, the air conditioning in ECS picked yesterday to go on the fritz. So the launch was a little sweaty, but it happened anyhow. When I left the office, the A/C was finally getting back to normal, and then I get home and our power goes out. It ended up being out for around 4 hours, from 7:30pm till 11:30pm. Not all that long, but it always seems like forever when it’s actually out, and it made for a pretty sweaty (and surly) day. And of course, BGE’s automated system never gave us an estimate of when the power would be restored, so Cathy packed up the kids to go sleep at her sister’s, and pulled out 2 minutes before the power came back on. Fortunately I was eventually able to get a decent night’s sleep. I must say I’m more than ready for this summer to end. This is the first truly miserable summer we’ve had since 2002, and I had forgotten just how bad 2002 was. Oh well… t-minus 7 weeks till the Outer Banks.

CWebProxy channels and self-signed certs

OK. Just so I don’t forget this when I inevitably have to do it again.

We are starting to add some CWebProxy channels that access the portal web server via its external URL rather than one of the loopback interfaces (long story why, but there are a few issues with proxying to a localhost URL, particularly WRT inline images). These channels go through SSL, as opposed to the loopback ones which use standard HTTP. Our test portal server uses a self-signed SSL cert. That causes some problems, because the portal’s JVM doesn’t trust the self-signed cert and can’t properly negotiate the SSL connection.

Solution: Create a local keystore containing the cert info, and point the JVM at this file via a command-line argument.

How to do it in 5 easy steps:

  1. Find the SSL cert for the web server. On the portal servers, this is located under server-root/conf/server-name.crt. Make a temporary copy of this file. Edit the copy and strip out everything except the actual cert data, keeping the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines.
  2. Use the cert file to create a Java keystore file. Assuming the keystore will live at /etc/umbc/uportal-test.umbc.edu.keystore and the cert file copy is cert.txt:

    keytool -import -trustcacerts -keystore /etc/umbc/uportal-test.umbc.edu.keystore -file cert.txt -alias uportal-test

    (Note: keytool is in JAVA_HOME/bin on recent versions of the Sun JVM.)

  3. Set permissions on the keystore file so that the portal web server can read it.
  4. Point the portal web server’s JVM at the custom keystore file. With Tomcat, this is done by setting the JAVA_OPTS environment variable prior to starting Tomcat. For UMBC web servers, the place to set this is server-root/bin/config-perl (see the sketch after this list).
  5. Restart Tomcat.
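
For step 4, the relevant JVM property is javax.net.ssl.trustStore. Here’s roughly what the setting looks like in shell form (the keystore path matches the example above; exactly how config-perl sets environment variables may differ, so treat this as a sketch):

# Point the JVM at the custom truststore. If the JVM complains about the
# keystore password, also add -Djavax.net.ssl.trustStorePassword=...
JAVA_OPTS="$JAVA_OPTS -Djavax.net.ssl.trustStore=/etc/umbc/uportal-test.umbc.edu.keystore"
export JAVA_OPTS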

Pool coping project drones on

Work on the pool coping project continues slowly but surely. I didn’t intend for this to be an all-summer project, but that’s how it’s turning out. The hot weather has really slowed it down, which is not all bad, as it’s keeping me from overextending myself. I’ve done most of my recent work in the early mornings on weekends.

Today I finally finished prepping the individual coping stones for re-mortaring. This involved using a hammer and chisel to laboriously chip old, loose mortar off the bottom of the stones. From the appearance of the mortar, it looks like someone attempted a similar repair at some point in the past. Hopefully, this one will last a bit longer — that was the idea behind saw-cutting the expansion joint, anyhow.

Next up is to chip any loose stuff off the top of the bond beam, and finish cleaning out the expansion joint at the deep end. Also, one of the coping stones needs to be glued together. This week I’ll figure out what product I need for that. Weather permitting, next weekend I’ll try to get all this prep work finished up so that I’ll be ready to reattach the stones.

I will update this entry as I gather info.

Pipe Insulation Adventures

Now that our boiler job is complete, I’m going through and insulating all of the near-boiler pipes, which put off quite a bit of heat. First priority is the primary loop (1-1/4″ copper), followed by the secondary loop for the indirect water heater (1″ copper), followed by the DHW piping near the indirect (3/4″ copper), and lastly the secondary loops for our 3 heating zones (mix of 1″ and 3/4″ copper). I’m using the pre-formed fiberglass pipe insulation that has a white scrim jacket, as that’s what was recommended to me. Home Depot carries this stuff. I found it in the aisle with the furnace/air handler filters, thermostats, and ductwork. Here’s the catch — the stuff is tagged as being for various sizes of copper pipe, but the tags have no bearing on reality. I initially bought the stuff labeled for 1-1/4″ copper, but it was way too big for my 1-1/4″ pipe. I had to return it and get the stuff labeled for 1″ copper, which was a snug fit on the 1-1/4″ pipe. And, the stuff for 1″ pipe is actually more expensive than the stuff for 1-1/4″ pipe (by about 50¢ per 3′ length), which makes absolutely no sense.

Fast forward to the actual installation. The insulation slips easily over the pipe, and seals shut with an adhesive strip on the scrim jacket. However, on the stuff I bought, the adhesive doesn’t hold up very well against the expansion and contraction of the pipes as they heat/cool… a lot of the seams were popping loose just a few hours after I wrapped the pipes. I’ve compensated by adding some strategically placed wire ties, but I may need to track down a better adhesive to apply to the problem areas.

For butt joint (and possibly seam) sealing, I found some white scrim tape at Grainger, but it’s mind-numbingly expensive: Around $35 for 50 yards. That’s almost 25¢ a foot. I can’t imagine what’s in the stuff to make it that expensive.

On a totally unrelated note, I see that Home Depot is now carrying trench drainage systems. This could come in handy for my driveway down the road….

Hump Day Ramblings

I accidentally shut down my X server yesterday before I left work, so I took the opportunity to install the custom nVidia driver on my X desktop. And I must say, the proprietary driver is much nicer than the nv driver that comes with X.org. Someone at nVidia has put a lot of work into making these cards work well with Linux. The installer script produced a working config file that started right up, with acceleration enabled to boot.

The nVidia driver has a built-in option called “TwinView” which provides multihead support via the VGA and DVI ports on the card. It replaces Xinerama, although supposedly Xinerama can still be used to provide the same functionality. However, TwinView seems to be the better alternative because it provides acceleration on both displays. It also adds Xinerama-compatible “hints” to the X server so that window managers will work the same as with Xinerama. It’s really very well done. So now I have a full 24-bit display across both monitors, with acceleration. Right now I’m still using the widescreen as my “main” display and the standard display as my “secondary” screen. I’m going to try it like this for a while, and if I don’t like it I’ll switch them.

The only configuration challenge was getting both displays working at their respective resolutions. I accomplished this with the following Device section:

Section "Device"
Identifier "nVidia"
Driver     "nvidia"
BusID      "PCI:1:0:0"
Option     "TwinView"
Option     "MetaModes" "CRT-0: 1680x1050, DFP-0: 1280x1024"
Option     "TwinViewOrientation" "RightOf"
EndSection

The MetaModes line is the important one.
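
A quick way to sanity-check that both heads actually came up (assuming the MetaModes above) is to look at the combined virtual screen size, which should come out to 2960x1050 (1680+1280 wide, and the taller of the two heights):

# Prints a "dimensions:" line with the combined virtual screen size.
xdpyinfo | grep dimensions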

While testing things out, I also learned something about VNC: over slow connections, the viewer automatically drops to a lower color depth. To force full-color rendering, I need to do

vncviewer -FullColor host.name:display

I was initially scratching my head as to why it was still rendering limited colors despite the 24-bit display. That explains it. In practice I may just leave full-color rendering off anyway, to keep performance up over my slow DSL uplink.

Also, this morning I hooked a scavenged SCSI disk up to my PC at home, mainly as a proof-of-concept: The disk uses 68-pin wide SCSI, and my controller uses 50-pin narrow SCSI. However, all I needed to make it work was a 50-to-68-pin adapter cable. I just jumpered the drive to SCSI ID 1 and plugged it in. Initially I had an external terminator on the drive case, and that hosed things. When I removed the terminator, it worked. Apparently my controller wants to terminate the bus itself. At any rate, now I have a spare 72-gig drive with its own case and power supply.

And finally, what’s a summer blog entry without a mention of the pool? Last week, the backup valve on my Polaris 380 broke. The valve mechanism itself is a big mess of gears and turbines, which fits into a plastic enclosure. The enclosure is made up of two halves held together by a screw-on collar ring, with a couple of o-rings to prevent leaks. The screw-on collar piece is what actually broke, probably after the valve was dropped onto the concrete pool deck one too many times. The mechanism itself was undamaged. Fortunately, Polaris sells a replacement case kit separately, and it’s much less expensive than an entire valve. The annoying thing is, the case kit only includes one of the two necessary o-rings. The included o-ring seals the two case halves together. The other one seals the backup jet to the case. It’s “technically” not part of the case, but if I’ve got the valve disassembled anyhow, I might as well replace both o-rings as they’re both probably worn out. It’s a small o-ring (1/2″ O.D. x 3/8″ I.D.) and it would have been nice if Polaris had seen fit to throw one in with the case kit. Oh well. For future reference, I found a replacement o-ring at Home Depot, in the plumbing section where they have the faucet repair kits.

Well, I guess I should get to work now.

Bike-to-work milestone

Today I biked in for the 22nd time this year, which matches my total for last year (and this time last year, I hadn’t even started riding in yet). I haven’t been riding with quite the frequency that I did last year. Most of last year’s rides were concentrated in August and September, and this year’s have been spread out over April, May, June, and July. However, most of my missed rides have been for legitimate reasons (illness, travel, family issues, rain, extreme heat, appointments/errands, days off to work on projects, etc), as opposed to laziness. I hope to pick up the pace as we enter the second half of the riding season. My goal for the year is 40 rides, which I don’t think is unreasonable.

Success with Dell widescreen monitor and X.Org

What a difference a video card makes…

A couple months back I tried getting a Dell 2007WFP working with X, using the Intel graphics controller built into my motherboard. I didn’t have much luck. Well, now I have a new video card, with an nVidia GeForce MX 4400 chipset. And, all I had to do was specify the card, driver and resolution (1680×1050) in my xorg.conf, and it worked right away. Great. Still more work than it was with my Mac (where I just plugged it in and it worked), but not bad at all, especially considering that this is X.

Still a few things to be resolved:

  1. The new card is only working through the VGA port. I need to figure out what magic incantations it takes to get DVI working.
  2. Apparently I can’t use the onboard graphics controller and the new (AGP) card at the same time. As soon as I plugged the new card in, it was like the onboard controller didn’t exist. So, I’m still stuck using the old Matrox PCI card for my other monitor. The Matrox only has 8mb video memory, which limits me to a 16 bit display depth. The MX 4400 apparently supports multiheading via the DVI port and VGA port, so I’ll have to see if I can do this with X.
  3. The widescreen monitor is vertically smaller than my other standard-resolution flat panels. I’m getting a little squinty-eyed staring at it. I think I may make it the “secondary” display and use a physically larger screen as my “primary” display.

But all in all, I’m really happy to have the 1680×1050 working, and without much extra fiddling or hair-pulling.

On another note, I’ve restored sonata to a new drive (after its old drive crashed). I lost a few files before I backed the dying drive up. The system works fine, but fonts in some applications are screwy (screwy fonts in X is a relative term, of course…). So, I may end up reinstalling the machine anyhow…

Followup… looks like I may need to use nVidia’s driver (instead of the built-in nv driver) to get multihead support. I’ll try that out later this week.

Load center upgrades

Continuing in my grand tradition, I’m writing about yet another house project that I’d like to do… the problem, as always, is finding the time for it…

We have two circuit breaker panels which really should be replaced. They are FPE panels with known safety issues. One panel is our main house panel, and the other is a subpanel. The FPE subpanel is fed by a third subpanel, a Square-D QO type.

The Square-D subpanel has 20 slots, of which only 9 are currently in use. Because we’ve abandoned a few circuits in the FPE subpanel that it feeds, I could actually squeeze all the circuits into the Square-D if I wanted. However, if I did that, the panel would be full with no room for future expansion. So… it would probably make sense to replace both subpanels with a single 24-slot Square-D QO type.

I would need a panel, a cover, and a ground bar kit, as well as a bunch of breakers. It looks like the project would cost around $500. Probably worth it for the safety and peace of mind — maybe I should slate it for this winter.

The main house panel is a bigger project. I would need to involve BG&E to get them to shut off my power at the meter, and to tell me what kind of service I have — the panel is 150 amps, but it appears that the service may be 200 amps. If so, I’d get a 200-amp, 40-slot panel. This project would probably run closer to $1000. If I can get the subpanel project under my belt this winter, maybe I could tackle the main panel next winter. Again, the main issue is finding time and prioritizing it amongst all the other stuff that has to get done around here.

Troubleshooting cloudy pool water

For the past several years of pool ownership, I’ve always had off-and-on problems with cloudy water. I’m generally pretty good at keeping up with the water chemistry, so I’ve always been a bit curious as to why the water clouds up so regularly. The pattern is the same every year: it starts out crystal clear, then after a month or so, the water slowly starts getting hazy.

The only way to get to the bottom of this is to apply the scientific method: assume that the problem is caused by x, try a known solution for x, and see if it works. I’ve worked at this over the past few seasons, and I’ve come up with three potential causes.

Problem: Yellow algae
Cause: Lack of superchlorination

Yellow (or brown) algae presents as a fine “dirt-like” substance that accumulates on surfaces. When brushed, it dissipates easily and clouds up the water. It re-settles when the pump is off. I had big problems with yellow algae last year and the year before. At the time, I was superchlorinating very infrequently (only one or two times a season). This year, I have been superchlorinating weekly and also using a polyquat type algaecide semi-regularly, and I have not had an algae problem (yet). If this is the ticket to keeping it at bay, then I need to figure out the ideal frequency of superchlorination that will prevent algae blooms without wasting too much chlorine.

Problem: High pH
Cause: Prolonged use of hypochlorite sanitizers without adding acid to compensate

High pH and/or Alkalinity can cause cloudy water. Once this year I let the pH drift to almost 8, and the water was noticeably turbid. Adding acid cleared it up after 12 hours or so. I’ve found that supplementing the hypochlorite with a trichlor floater (in moderation, to avoid high levels of cyanuric acid) can help to keep the pH down, particularly during the hot months when the chlorine demand is high.

Problem: Inadequate filtration
Cause: Undersized pump and/or not running pump long enough

I’ll freely admit to running the pump as infrequently as I can get away with, to try to save electricity. Unfortunately it appears that I’m paying the price for this in the form of cloudy water. Currently, the pump runs around 9 hours a day (6 hours in daylight and 3 hours after dark). With turbid water, a pH of 7.4 and no visible algae, I ran the pump for 24 hours straight and there was a marked improvement in clarity. So it appears that I need more circulation. This seems odd to me, because 9 hours really should be enough to fully turn the water over and keep it from clouding up. So I’m curious if my pump and/or filtration system is undersized. When I get around to it, I’ll measure my flow rate and see what kind of numbers I’m getting. If they’re low, I may want to consider a larger pump and/or filter. Until then, I guess I’m stuck running the pump longer if I want clear water.
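
For reference, the turnover math I’ll be doing once I measure the flow rate looks like this (the volume and flow numbers below are placeholders, not my pool’s actual figures):

# Hours per turnover = pool volume (gallons) / flow rate (GPM) / 60.
# Both values here are assumed; plug in real measurements.
VOLUME=20000
GPM=40
echo "scale=1; $VOLUME / $GPM / 60" | bc    # => 8.3 hours for one full turnover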

Crash.

Well, the hard drive in sonata, my desktop machine at work, is dying a slow death. I saw the writing on the wall a few months ago, and now it’s finally biting the big one. I’m typing on the box now, but it’s slowly getting wonkier and wonkier as it churns out ever more i/o errors, read errors etc.

A new drive is forthcoming, but in the meantime, I’ve managed to back up what’s left of the old drive, so I can restore it onto the new one. And, I’ve found that when it comes to archiving drives with errors, cpio beats tar hands down. I started out doing

cd /filesystem
tar cvplf - . |
ssh other-box "cd /backup/disk; tar xpf -"

But, it dies as soon as it hits a bad spot.

I ended up doing

cd /filesystem
find . -xdev -depth -print |
cpio -o --verbose -H newc |
ssh other-box "cd /backup/disk; cpio -id -H newc"

The -H option is needed to back up files with inode numbers greater than 65535.

Cpio is even more obfuscated than tar WRT command line options, etc., but it seems to do a better job at disaster recovery.
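
One extra sanity check I’ll probably run before wiping the old drive (not part of the recipe above, just a rough comparison) is a file count on each end; the numbers should match, give or take whatever cpio couldn’t read:

# On the dying box:
cd /filesystem && find . -xdev | wc -l

# On the backup box:
cd /backup/disk && find . | wc -l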

Fun fun…