Fruddled Gruntbugglies

Enthralling readers since 2005

Author: lpaulriddle

  • Stupid hard-starting generator

    We have a generator that we bought a few years back to use during power outages. It spends 99.9% of its time sitting in the garage gathering dust (much like the snowblower). Every few months, I start the thing up and run it for 20 minutes or so, just to make sure it will work if we ever really do need it. That was one of today’s chores.

    Now, the generator has always been a little hard to start when it’s been sitting for 2 or 3 months. But it’s pretty reliable: it takes 9 or 10 yanks on the cord, then it starts up. Then you have to run it partially choked for several minutes so it doesn’t miss. Then, finally, it will warm up and run at the factory-set fuel mixture. Not exactly a finely tuned machine, but at least it works.

    Today, I figured, well, it’s January, it’s cold, so the generator will probably be even more reluctant to start. So, I got the bright idea to spray some starting fluid into the carb. I did, and it fired right up on the first pull. Then, after 10 seconds or so, it died. And refused to start up again for love or money.

    A couple dozen cord pulls, several more shots of starting fluid, and countless swear words later, I checked the oil. Hmm, seems a tad low. Topped it off with some 10W-30. Started on the first pull, and stayed running. I guess there was just enough oil in there to start it initially, but once it started sloshing around in there, it tripped the low-oil cutoff. Moral of the story: I need to check the oil every time I start the thing up. Follow-up: I later checked my records, and found that this was the first time I had tried to start it after changing the oil a couple months ago. I guess I didn’t add enough.

    My theory about the hard-starting problem.. I think over time, the fuel is slowly seeping out of the fuel pump/carb and causing it to lose prime. After 9 or 10 pulls, it re-primes itself and everything works again. I think Honda would do well to add a manual primer bulb to the engine. In any case, I think I’ll try the starting fluid again next time around, and see if I have better luck.

  • CGenericXSLT channels, parameters, and Local Connection Contexts

    I’m a bit strapped for time today, but I did take a quick look at this, to see if it looks doable. In a nutshell.. I’d like to use a local connection context to do legacy authentication and obtain an “encr” string to pass to various legacy backend services. This would allow me to create RSS-type channels that link to authenticated services, so I don’t need to use web proxy channels for everything. Initially I’d use it to connect to external services like MAP/DN, but eventually I could actually have the legacy Perl code handle the rendering for stuff like registration, and just re-skin it to look like uPortal.

    I started out by seeing if I could pass an “encr” into the RSS and have it display conditionally somehow (we don’t necessarily want it appended to every link in the RSS feed). I came up with the somewhat hackish idea of using the RSS <category> element. If I give the item a category of “myumbcauth”, I can tweak the XSLT to look for that and append extra data to the link. Then, I can pass the actual encrypted string into the XSLT using a stylesheet parameter. This all works fine. The next challenge is getting the portal to set the appropriate parameter in the stylesheet. It looks like all of the channel runtime parameters are also passed in as stylesheet parameters (in fact, I was able to read one of them, baseActionURL); the question is whether I can somehow add my own arbitrary param in there. Obviously this would have to be done somehow in the local connection context code. Anyhow, I got as far as that and now I have to run off and fight other fires, so I’ll have to come back to this later.
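
    For when I come back to this, the XSLT tweak looks roughly like the sketch below. It’s illustrative rather than the production stylesheet; the “encr” parameter name and the query-string format are my own placeholders:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Sketch: the portal would supply this as a stylesheet
           parameter; "encr" is a placeholder name. -->
      <xsl:param name="encr"/>

      <!-- Render an item's link, appending the encrypted string only
           when the item carries the "myumbcauth" category. -->
      <xsl:template match="item">
        <a>
          <xsl:attribute name="href">
            <xsl:value-of select="link"/>
            <xsl:if test="category = 'myumbcauth'">
              <xsl:text>?encr=</xsl:text>
              <xsl:value-of select="$encr"/>
            </xsl:if>
          </xsl:attribute>
          <xsl:value-of select="title"/>
        </a>
      </xsl:template>

    </xsl:stylesheet>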

  • Update on calendar stuff

    I haven’t done much with my calendar stuff recently, for a couple of reasons; one being that work has been crazy and I haven’t had much time to hack on it, and another being that it’s all working without a hitch. I don’t even have to think about the Oracle Calendar stuff; my cron job auto-downloads it several times a day, and all I need to do is fire up iCal, refresh all calendars, and sync to the Palm. I see that Missing Sync version 5.0.3 final has been released, but beta6 is working fine for me so I’m not in a fired-up hurry to upgrade. I’m starting to make more and more use of iCal’s concept of multiple calendars; I already have a calendar with various family birthdays and anniversaries, and I’m going to do one with holidays. Holidays will be a good test of how the Palm handles repeating events, and how well Missing Sync translates repeating events from the iCalendar files to the Palm. I’ve also found myself using PHP iCalendar quite a bit when I don’t have my Mac handy. All in all, I’m really happy with how this whole thing has turned out.

    What’s next? More features, of course. I’d really like to have attendee data for some of my meetings. Because of the performance issues involved with downloading attendees, I’ll have to do two separate downloads (a large range without attendee data, and a small range with attendees) and merge the two together. This could create some problems because of the way I’m munging UIDs for repeating events. Since I’m just appending ascending numbers to create unique UIDs, there’s the chance that the same event could get a different UID depending on the date range that’s being downloaded. I think before I try to do anything fancy, I need to rethink the way I’m assigning UIDs to events. This could present an interesting challenge. Anyhow, more on that later.
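
    One idea, sketched below with made-up names: derive each occurrence’s UID from the master event’s UID plus the occurrence’s start date, rather than from an ascending counter. The same occurrence then gets the same UID no matter which date range was downloaded, and merging the two downloads becomes a simple keyed update:

    # Sketch: stable UIDs for expanded occurrences of a repeating
    # event. Combining the master UID with the occurrence's DTSTART
    # makes the result independent of the download range.
    use strict;
    use warnings;

    sub occurrence_uid {
        my ($master_uid, $dtstart) = @_;   # $dtstart like '20060115T130000'
        return "$master_uid-$dtstart";
    }

    print occurrence_uid('12345@oracle-cal', '20060115T130000'), "\n";
    # prints: 12345@oracle-cal-20060115T130000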

    The Holy Grail of this project, of course, would be to have two-way syncing with Oracle Calendar, where changes I make on the Palm or in iCal will get back-propagated to Oracle Calendar. However, this increases the complexity of the project quite a bit, and I doubt I’ll ever go there. I don’t really miss the functionality all that much anyhow.

  • That time of year again.

    Well, it’s time once again to start working on taxes. And once again I find myself using H&R Block’s TaxCut product. This year, they’ve gone the consumer-friendly route of including the state product with their TaxCut Deluxe package, so you don’t have to purchase it separately and request a rebate. Good move on their part.

    Once again, the most fun part of tax time is figuring out capital gains on all of our various stock and mutual fund sales. Or more specifically, figuring out the cost basis. And it’s even more fun when there are splits and spinoffs involved with the stock you’re selling.

    Earlier in 2005 I put together a spreadsheet for each stock and mutual fund holding we own that identifies each specific lot, along with purchase date, amount invested, etc. That allows me to compute accurate cost-basis info using either the FIFO method or specific identification of lots. It seems to work pretty well, and takes a lot of the tedium out of the process. However, it doesn’t handle the case of multiple purchases on the same day. It shouldn’t be hard to modify the sheet to handle this, but so far I haven’t needed to.
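
    For the curious, the FIFO arithmetic the spreadsheet automates boils down to the following (a toy sketch with made-up lots, not our actual holdings):

    # Toy FIFO cost-basis calculation; lots and numbers are invented.
    use strict;
    use warnings;

    my @lots = (
        { date => '2003-02-10', shares => 100, cost => 1500.00 },
        { date => '2004-06-01', shares =>  50, cost =>  900.00 },
    );

    # Consume lots oldest-first, prorating a lot's cost when only
    # part of it is sold.
    sub fifo_basis {
        my ($shares_sold, @lots) = @_;
        my $basis = 0;
        for my $lot (@lots) {
            last if $shares_sold <= 0;
            my $take = $shares_sold < $lot->{shares}
                     ? $shares_sold
                     : $lot->{shares};
            $basis       += $lot->{cost} * $take / $lot->{shares};
            $shares_sold -= $take;
        }
        return $basis;
    }

    # Selling 120 shares: all of lot 1 ($1500) plus 20/50 of lot 2 ($360)
    printf "cost basis: %.2f\n", fifo_basis(120, @lots);   # 1860.00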

    At any rate, it looks like we’re due for nominal refunds from both state and fed, which is exactly what I shot for when I last filled out a W-4. Time to have another kid, so I can do that all over again!

  • Today’s database tweak..

    Well, one thing our ongoing uPortal launch has illustrated is that, contrary to popular belief, our Oracle database server does not have unlimited resources. To that end, a lot of my recent efforts have been geared towards making our installation more “database friendly”. The centerpiece of this is the connection pooling we set up on Monday. Of course, once you’ve got a nice, manageable connection pooling setup, you want to use it whenever possible. And until today, there was one big piece of the portal that still wasn’t using the pool: the “glue” that interfaces the uPortal web proxy channels to the legacy portal’s authentication scheme. uPortal calls this a local connection context, and ours goes by org.jasig.portal.security.UmbcLegacyLocalConnectionContext. The legacy portal’s session information is all database driven, so this code needs to connect to the database and create a valid legacy portal session for the user, so the web proxy channels will work and the kiddies can see their schedules and drop all their classes. This code was doing an explicit connect to the ‘myumbc’ user in the UMBC instance. Each channel needs to do this, and some of our portal tabs contain several of this type of channel. I’m not sure exactly how many times this code was getting invoked, or how many connections it was generating, etc., because I didn’t do any profiling. But it definitely had an impact.

    Anyhow, I’ve modified the code so that it pulls a connection from the pool (using RDBMServices.getConnection) and uses that instead. I needed to modify the LegacyPortalSession code a bit to support this. Also, since our connection pool uses the ‘uportal’ user (not ‘myumbc’), I needed to get our DBA to do a couple of grants so that ‘uportal’ would have access to the tables it needs.

    For better or for worse, it’s in production now, so we’ll see how it goes.

    The plan for tomorrow: Fix all of the missing or broken links that people have reported. Create a new channel exclusively for DN/MAP. And, look into local connection context usage with CGenericXSLT type channels. I recently discovered that this type of channel can use a local connection context. Depending on how it works, I may be able to use it to eliminate a couple more web proxy channels and replace them with RSS type channels. We’ll see.

  • Legacy myUMBC ACLs as PAGS Groups

    I think I’ve found a way (two ways, actually) to import program ACLs (from the BRCTL.PROG_USER_XREF SIS table) into uPortal as PAGS groups, so that we can publish uPortal channels with the exact same access lists as the respective areas in the legacy myUMBC. This would be a big win, particularly for an app like Degree Navigation/MAP. In the old portal, we control access to DN/MAP using a big, looong list of individual usernames. If the user isn’t on the list, they don’t even see a link to DN/MAP. However, with uPortal, we currently don’t have access to this list, so we have to present the DN/MAP link to a much larger set of users (basically anyone who is faculty or staff), or we’re faced with totally replicating the access list in uPortal, and maintaining two lists. Not what we want.

    Fortunately, we designed the old portal with a bit of forward thinking, and made its ACL mechanism totally database driven. That is, all ACL info is stored in the Oracle database, so some future portal could theoretically extract that data and use it down the road. The challenge, then, is to figure out how to get uPortal to do that.

    uPortal provides a very nice groups manager called PAGS, which allows us to create arbitrary groups based on what uPortal calls Person Attributes. It can extract Person Attributes directly from LDAP, as well as extract them from the results of an arbitrary RDBM query. It then presents this group of attributes as a seamless collection, regardless of the actual backend datasource for each individual attribute. It’s really very nice.

    My first thought, then, was to just have uPortal query the legacy myUMBC ACL table to get a list of each app a particular user can access, and map the results to “Person Attributes”. I tested this and it works just fine, but there’s one problem: The legacy ACL table is indexed by UMBC username, but the way we have uPortal configured, it’s currently using the LDAP GUID to do its queries. So, to do this the right way (that is, without hacking the uPortal code), we’d need a table that maps the GUID to the username, so that we could do a join against it to get our results. Currently, we don’t have LDAP GUID data anywhere in our Oracle database. Now, I don’t think getting it there would be a huge issue (we’re already doing nightly loads of usernames from LDAP to Oracle), but it still needs to happen before we could use this method.
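
    Just to sketch what that query might look like (the GUID-to-username mapping table doesn’t exist yet, and all column names here are invented; only the BRCTL.PROG_USER_XREF table is real):

    -- Hypothetical: ldap_guid_map(guid, username) is the mapping table
    -- we'd need to build. The portal supplies the user's GUID as the
    -- single bind variable.
    SELECT x.prog_name
      FROM brctl.prog_user_xref x
      JOIN ldap_guid_map m ON m.username = x.username
     WHERE m.guid = ?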

    The second method would be to import the user’s legacy ACL data into the LDAP database as an additional attribute. Then I could just pull the data directly out of LDAP, without having to worry about an RDBM query at all. This seems like a simpler solution, if it’s possible. More later..

    Note: Configuration of Person Attributes is done in the file /properties/PersonDirs.xml. When specifying an RDBM attributes query, the SQL statement must include a bind variable reference, or the code will crap out. I learned this when I tried to remove the bind variable and hardcode my own username.. no dice. To test this stuff out, subscribe to the “Person Attributes” channel, which is under the “Development” group. Then look for the attributes you defined in the config file. If they’re there, it worked. If not, not.

  • Connection pooling crash course

    Just spent the whole day tweaking our new uPortal installation and trying to get it to stay up reliably under load. It’s coming along, but not quite there yet. First lesson: Under any kind of load, you must, absolutely must, enable database connection pooling. That’s because if you don’t, it will open enough database connections to, let’s just say, really screw things up. Now, setting up connection pooling is not supposed to be that hard. But in our case, it was a huge pain. The default uPortal 2.4.3 configuration includes a file, uPortal.xml, which is used to specify the connection pooling info to Tomcat. Great, I set it up with our connection parameters, and tried it out. Hmm, doesn’t seem to work. Look a little further.. Apparently in portal.properties, I need to set the flag org.jasig.portal.RDBMServices.getDatasourceFromJndi to “true”, or it bypasses the whole connection pooling thing and just opens direct connections. I set it, and tried again. Major bombage. More poking around and I found this page describing the mechanics of Tomcat connection pooling. Apparently, the config file format (as well as the factory class name) changed from Tomcat 5.0.x to Tomcat 5.5.x. We’re running 5.5.x, and the uPortal distro’s config file is in the 5.0.x format. So, I updated the config file. Plus, for good measure, I dropped a copy of the Oracle JDBC jar file into tomcat-root/common/lib. Not sure if it really needs to be there or not. But, once I jumped through all those hoops, the connection pooling finally seems to work.
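
    For my own future reference, the working 5.5-style config boils down to something like this. The JNDI name, pool sizes, and credentials below are placeholders, not our real settings:

    <!-- Tomcat 5.5.x style: pool settings are attributes of <Resource>,
         rather than nested <ResourceParams> elements as in 5.0.x. -->
    <Context docBase="uPortal">
      <Resource name="jdbc/PortalDb" auth="Container"
                type="javax.sql.DataSource"
                driverClassName="oracle.jdbc.driver.OracleDriver"
                url="jdbc:oracle:thin:@dbhost:1521:UMBC"
                username="uportal" password="changeme"
                maxActive="20" maxIdle="5" maxWait="10000"/>
    </Context>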

    Now, we’re dealing with memory issues causing slowness, as well as a couple lingering database issues with logins to the ‘myumbc’ user…

    I hope I don’t have too many more days like this…

    Update 1/12/2006: Well, it appears that the connection pooling breaks any ant targets that use the database: This includes pubchan as well as pubfragments, etc. This is kinda bogus, but rather than tweaking portal.properties every time I want to publish a channel or fragment, it looks like I can just run these from the test tree (which uses the same set of database tables).

  • More on iDVD and DVD burning on the Mac

    Well, unfortunately, it appears that iDVD doesn’t work quite as I had predicted in a previous entry. Apparently, even though it stores the encoded video between sessions, it still needs the entire uncompressed iMovie project to be able to do anything with the project. I learned this the hard way, after I had deleted some stuff from the iMovie, and found that I could no longer go into iDVD and burn a new disc. So apparently, the encoded data that iDVD stores is only there to speed up subsequent burns, and not for archival purposes. So, this is a bit disappointing, but that’s life (I guess they figure disk space is cheap, so why wouldn’t I want to keep 15+ gigs of uncompressed video around for every tape I shoot).

    Basically, what I’m looking to do here is just archive my DVD image somehow so that I can burn extra copies down the road. Once I’ve edited the video, created the menus etc., I don’t care about making further mods to the project itself, I just want to keep a copy of my work in case a disc goes bad down the road, or whatever. It appears that iDVD isn’t my answer here.

    Fortunately, the solution turns out to be much simpler: Once I burn a project to DVD, I can just extract the image from the disc, and re-burn it to a new disc. Apple conveniently provides an article that describes how to do this.

    In practice, this seems to work, but the process had a couple hiccups. I tried it out with one of my previously-burned discs. Extracting the data onto the hard drive went without a hitch. Then I went to burn the image onto a new disc. The first attempt failed. I took the disc out of the drive, and it had a physical glitch (appeared to be a speck of something, but I couldn’t wipe it off the disc) right where the burning stopped. On the second try (with a new disc of course), the disc burned successfully, but then it went to verify it (which I’m guessing just does a byte-by-byte comparison of the image on the DVD with the image on the hard disk), and that failed. However, the resulting disc played fine all the way through on the Mac.

    My whole recent experience with DVD-R burning leaves me feeling not overly confident about the reliability of the media, but despite the glitches, I seem to end up with playable discs. Not quite sure what to make of it. At any rate, in the future, I think I’ll burn two copies of each iDVD project. One copy can be for archival purposes (to burn more copies down the road), and the other for playing. Alternatively, I could burn one copy and then extract the image, and save the image on my hard disk. Or I could do both (I believe iDVD can create disk images directly, but I haven’t tried it yet). When finished, I’ll delete the iMovie and iDVD projects. And, I’ll be sure to keep the source tapes around.

    All in all, it’s great that this technology works as well as it does, but it’s got a bit of evolving to do before I will feel like I can completely trust it!

  • Big Portal Launch Today..

    Today’s the day where we launch our new myUMBC web portal, essentially turning it loose on the unwashed masses and making the world (well, the campus at least) our big, happy beta-test community. As part of this, we’re kindly leaving the old portal around for a while, because we anticipate stuff will be broken. The new portal will live at http://my.umbc.edu, which the old portal currently occupies. That means that if we want to keep the old portal running, we have to move it to an alternate URL.

    Now, our old portal has been active at its current URL since 1999. It’s a big, old, bloated beast, and it’s very happy staying where it is. Getting this thing moved is somewhat akin to booting a 35-year-old freeloading kid out of the house. That is, you can be sure it will resist.

    In this case, it was a tedious matter of chasing down all the references to the portal’s top URL, and making sure it got changed everywhere it needed to. Then restart, wonder why it doesn’t work, and determine that the web server no longer has read access to the Webauth cookie. Then fix logout (it’s absolutely mandatory, when making any change like this, that logout stop working. It’s like death and taxes).

    Great news is, it appears to work now. Off to fix some other stuff.

  • FastCGI Weirdness

    Getting some strange behavior from FastCGI regarding signal handling..

    Platform is SunOS 5.10 on Intel, Perl 5.8.6, FCGI Perl module version 0.67. Seems like the FastCGI accept routine is somehow blocking the delivery of signals. If I set a handler for SIGTERM, then call FCGI::accept(), the signal is ignored until the accept routine exits (which happens when a new request comes in). So basically, when I send SIGTERM to the process, it ignores the signal until I go to my browser and hit the app URL. Then, the signal handler is invoked.

    The consequence of this is, basically, none of my shutdown scripts are working right, because they all work by sending SIGTERM to the FastCGI processes.

    The really weird thing here is, if I don’t set a signal handler at all, the SIGTERM immediately terminates the process. It’s only when a handler is set that I have problems. I’ve tried a couple ways of coding the FastCGI loop:

    # Old-style procedural interface
    while (FCGI::accept() >= 0) { ... }

    vs.

    # Newer object-oriented interface
    my $request = FCGI::Request();
    while ($request->Accept() >= 0) { ... }

    Same results with either method. I have no problems using an old-and-crusty version of FastCGI (0.49) on our old-and-crusty SGI hardware. I’ve glanced at the new code that does the accept, and there’s nothing there that looks like it’s holding or blocking signals. Could this be an OS thing? I dunno, but if I can’t fix it I’m going to have to come up with some kind of workaround to kill and restart the processes..
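
    One hunch worth testing: Perl 5.8 introduced “safe” signals, where a handler installed via %SIG isn’t run until the interpreter is between opcodes. A process blocked inside the C-level accept() never gives it that chance, which would explain exactly this behavior (and why the old pre-5.8 setup delivers signals immediately). If that’s the culprit, installing the handler with POSIX::sigaction should bypass the deferral. A sketch, untested on this box:

    # Sketch: install the TERM handler with POSIX::sigaction to bypass
    # Perl 5.8's deferred ("safe") signal delivery. Running nontrivial
    # Perl inside a real signal handler has the usual reentrancy risks,
    # so keep the handler short.
    use strict;
    use warnings;
    use POSIX;

    POSIX::sigaction(POSIX::SIGTERM,
                     POSIX::SigAction->new(sub { exit 0 }))
        or die "sigaction failed: $!";

    Perl 5.8.1 and later also honor the PERL_SIGNALS=unsafe environment variable, which restores immediate delivery process-wide.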