Update on calendar stuff

I haven’t done much with my calendar stuff recently, for a couple of reasons: work has been crazy and I haven’t had much time to hack on it, and it’s all working without a hitch anyhow. I don’t even have to think about the Oracle Calendar piece; my cron job auto-downloads it several times a day, and all I need to do is fire up iCal, refresh all calendars, and sync to the Palm. I see that Missing Sync version 5.0.3 final has been released, but beta6 is working fine for me, so I’m in no big hurry to upgrade. I’m making more and more use of iCal’s concept of multiple calendars; I already have a calendar with various family birthdays and anniversaries, and I’m going to do one with holidays. Holidays will be a good test of how the Palm handles repeating events, and of how well Missing Sync translates repeating events from the iCalendar files to the Palm. I’ve also found myself using PHP iCalendar quite a bit when I don’t have my Mac handy. All in all, I’m really happy with how this whole thing has turned out.

What’s next? More features, of course. I’d really like to have attendee data for some of my meetings. Because of the performance issues involved with downloading attendees, I’ll have to do two separate downloads (a large range without attendee data, and a small range with attendees) and merge the two together. This could create some problems because of the way I’m munging UIDs for repeating events. Since I’m just appending ascending numbers to create unique UIDs, there’s the chance that the same event could get a different UID depending on the date range that’s being downloaded. I think before I try to do anything fancy, I need to rethink the way I’m assigning UIDs to events. This could present an interesting challenge. Anyhow, more on that later.
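One idea (just a sketch, and the names here are hypothetical, assuming the download script keeps the master event’s ID and each occurrence’s date around): derive the UID from data that doesn’t depend on the download window, rather than from a counter.

# Sketch only: build the UID from the master event's ID plus the
# occurrence's date, so the same occurrence always gets the same UID
# no matter which date range was downloaded. Names are made up.
sub occurrence_uid {
    my ($master_id, $date) = @_;   # e.g. ("4711", "20060214")
    return $master_id . "-" . $date . '@concerto.ucs.umbc.edu';
}

That way the big no-attendee download and the small with-attendee download would assign identical UIDs to the same occurrence, and the merge becomes a simple match on UID.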

The Holy Grail of this project, of course, would be to have two-way syncing with Oracle Calendar, where changes I make on the Palm or in iCal will get back-propagated to Oracle Calendar. However, this increases the complexity of the project quite a bit, and I doubt I’ll ever go there. I don’t really miss the functionality all that much anyhow.

That time of year again.

Well, it’s time once again to start working on taxes. And once again I find myself using H&R Block’s TaxCut product. This year, they’ve gone the consumer-friendly route of including the state product in the TaxCut Deluxe package, so you don’t have to purchase it separately and request a rebate. Good move on their part.

Once again, the most fun part of tax time is figuring out capital gains on all of our various stock and mutual fund sales. Or, more specifically, figuring out the cost basis. And it’s even more fun when there are splits and spinoffs involved with the stock you’re selling.

Earlier in 2005, I put together a spreadsheet for each stock and mutual fund holding we own that identifies each specific lot, along with its purchase date, amount invested, and so on. That allows me to compute accurate cost basis info using either the FIFO method or specific identification of lots. It seems to work pretty well and takes a lot of the tedium out of the process. However, it doesn’t handle the case of multiple purchases on the same day. It shouldn’t be hard to modify the sheet to handle this, but so far I haven’t needed to.
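For what it’s worth, the FIFO logic itself is simple enough to sanity-check with a few lines of code. Here’s a rough Perl sketch (the lots and field names are made up, and it ignores real-world wrinkles like splits, spinoffs, and reinvested dividends):

#!/usr/bin/perl
use strict;
use warnings;

# Each lot: purchase date, shares bought, total dollars invested.
my @lots = (
    { date => '2003-02-14', shares => 100, cost => 2500.00 },
    { date => '2004-06-01', shares =>  50, cost => 1600.00 },
    { date => '2005-03-10', shares =>  75, cost => 2100.00 },
);

# FIFO: a sale consumes shares from the oldest lots first; its cost
# basis is each consumed lot's per-share cost times the shares taken.
sub fifo_basis {
    my ($sold, @lots) = @_;
    my $basis = 0;
    for my $lot (sort { $a->{date} cmp $b->{date} } @lots) {
        last if $sold <= 0;
        my $take = $sold < $lot->{shares} ? $sold : $lot->{shares};
        $basis += $take * $lot->{cost} / $lot->{shares};
        $sold  -= $take;
    }
    die "sold more shares than we own\n" if $sold > 0;
    return $basis;
}

# 120 shares = all of lot 1 (100 @ $25) plus 20 of lot 2 (@ $32): $3140.
printf "Basis for selling 120 shares: \$%.2f\n", fifo_basis(120, @lots);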

At any rate, it looks like we’re due for nominal refunds from both state and fed, which is exactly what I shot for when I last filled out a W-4. Time to have another kid, so I can do that all over again!

Today’s database tweak..

Well, one thing our ongoing uPortal launch has illustrated is that, contrary to popular belief, our Oracle database server does not have unlimited resources. Accordingly, a lot of my recent effort has gone into making our installation more “database friendly”. The centerpiece of this is the connection pooling we set up on Monday. Of course, once you’ve got a nice, manageable connection pooling setup, you want to use it whenever possible, and until today there was one big piece of the portal that still wasn’t using the pool: the “glue” that interfaces the uPortal web proxy channels to the legacy portal’s authentication scheme. uPortal calls this a local connection context, and ours goes by org.jasig.portal.security.UmbcLegacyLocalConnectionContext. The legacy portal’s session information is all database driven, so this code needs to connect to the database and create a valid legacy portal session for the user, so the web proxy channels will work and the kiddies can see their schedules and drop all their classes.

This code was doing an explicit connect as the ‘myumbc’ user in the UMBC instance. Each channel needs to do it, and some of our portal tabs contain several channels of this type. I’m not sure exactly how many times the code was getting invoked, or how many connections it was generating, because I didn’t do any profiling, but it definitely had an impact.

Anyhow, I’ve modified the code so that it pulls a connection from the pool (using RDBMServices.getConnection) and uses that instead. I needed to modify the LegacyPortalSession code a bit to support this. Also, since our connection pool uses the ‘uportal’ user (not ‘myumbc’), I needed to get our DBA to do a couple of grants so that ‘uportal’ would have access to the tables it needs.
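(The grants themselves were nothing exotic; hypothetically, something along these lines, with placeholder table names standing in for the legacy portal’s actual session tables:)

GRANT SELECT, INSERT, UPDATE ON myumbc.portal_session TO uportal;
GRANT SELECT ON myumbc.portal_user TO uportal;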

For better or for worse, it’s in production now, so we’ll see how it goes.

The plan for tomorrow: fix all of the missing or broken links that people have reported, create a new channel exclusively for DN/MAP, and look into local connection context usage with CGenericXSLT-type channels. I recently discovered that this type of channel can use a local connection context; depending on how that works, I may be able to use it to eliminate a couple more web proxy channels, replacing them with RSS-type channels. We’ll see.

Legacy myUMBC ACLs as PAGS Groups

I think I’ve found a way (two ways, actually) to import program ACLs (from the BRCTL.PROG_USER_XREF SIS table) into uPortal as PAGS groups, so that we can publish uPortal channels with the exact same access lists as the respective areas in the legacy myUMBC. This would be a big win, particularly for an app like Degree Navigation/MAP. In the old portal, we control access to DN/MAP using a big, looong list of individual usernames. If the user isn’t on the list, they don’t even see a link to DN/MAP. However, with uPortal, we currently don’t have access to this list, so we have to present the DN/MAP link to a much larger set of users (basically anyone who is faculty or staff), or we’re faced with totally replicating the access list in uPortal, and maintaining two lists. Not what we want.

Fortunately, we designed the old portal with a bit of forward thinking and made its ACL mechanism totally database driven. That is, all ACL info is stored in the Oracle database, so some future portal could theoretically extract that data and use it. The challenge, then, is to figure out how to get uPortal to do just that.

uPortal provides a very nice groups manager called PAGS (the Person Attributes Group Store), which allows us to create arbitrary groups based on what uPortal calls Person Attributes. uPortal can extract Person Attributes directly from LDAP, as well as from the results of an arbitrary RDBM query, and it presents them as one seamless collection, regardless of the actual backend datasource for each individual attribute. It’s really very nice.

My first thought, then, was to just have uPortal query the legacy myUMBC ACL table to get a list of the apps a particular user can access, and map the results to “Person Attributes”. I tested this and it works just fine, but there’s one problem: the legacy ACL table is indexed by UMBC username, but the way we have uPortal configured, it currently uses the LDAP GUID to do its queries. So, to do this the right way (that is, without hacking the uPortal code), we’d need a table that maps the GUID to the username, so that we could join against it to get our results. Currently we don’t have LDAP GUID data anywhere in our Oracle database; I don’t think getting it there would be a huge issue (we’re already doing nightly loads of usernames from LDAP to Oracle), but it would need to happen before we could use this method.
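With that mapping table in place, the query would end up looking something like this (just a sketch; guid_map and the column names are hypothetical, and the ? is the bind variable uPortal fills in with the user’s GUID):

SELECT x.prog_code
FROM brctl.prog_user_xref x, guid_map g
WHERE g.guid = ?
AND x.username = g.username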

The second method would be to import the user’s legacy ACL data into the LDAP database as an additional attribute. Then I could just pull the data directly out of LDAP, without having to worry about an RDBM query at all. This seems like a simpler solution, if it’s possible. More later..

Note: Configuration of Person Attributes is done in the file /properties/PersonDirs.xml. When specifying an RDBM attributes query, the SQL statement must include a bind variable reference, or the code will crap out. I learned this when I tried to remove the bind variable and hardcode my own username.. no dice. To test this stuff out, subscribe to the “Person Attributes” channel, which is under the “Development” group. Then look for the attributes you defined in the config file. If they’re there, it worked. If not, not.

Connection pooling crash course

Just spent the whole day tweaking our new uPortal installation and trying to get it to stay up reliably under load. It’s coming along, but not quite there yet. First lesson: under any kind of load, you must, absolutely must, enable database connection pooling. If you don’t, it will open enough database connections to, let’s just say, really screw things up.

Now, setting up connection pooling is not supposed to be that hard, but in our case it was a huge pain. The default uPortal 2.4.3 configuration includes a file, uPortal.xml, which is used to specify the connection pooling info to Tomcat. Great; I set it up with our connection parameters and tried it out. Hmm, doesn’t seem to work. Look a little further.. apparently in portal.properties, I need to set the flag org.jasig.portal.RDBMServices.getDatasourceFromJndi to “true”, or it bypasses the whole connection pooling thing and just opens direct connections. I set it, and tried again. Major bombage. More poking around and I found this page describing the mechanics of Tomcat connection pooling. Apparently, the config file format (as well as the factory class name) changed from Tomcat 5.0.x to Tomcat 5.5.x. We’re running 5.5.x, and the uPortal distro’s config file is in the 5.0.x format. So, I updated the config file. For good measure, I also dropped a copy of the Oracle JDBC jar file into tomcat-root/common/lib; not sure if it really needs to be there or not. But once I jumped through all those hoops, the connection pooling finally seems to work.
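For reference, the 5.5-style format collapses the old <Resource>/<ResourceParams> pair into a single <Resource> element with everything as attributes. Roughly like this (the connection parameters here are placeholders, not our real settings):

<Resource name="jdbc/PortalDb" auth="Container" type="javax.sql.DataSource"
    driverClassName="oracle.jdbc.driver.OracleDriver"
    url="jdbc:oracle:thin:@dbhost.example.edu:1521:UMBC"
    username="uportal" password="XXXXXX"
    maxActive="20" maxIdle="10" maxWait="-1"/>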

Now, we’re dealing with memory issues causing slowness, as well as a couple lingering database issues with logins to the ‘myumbc’ user…

I hope I don’t have too many more days like this…

Update 1/12/2006: Well, it appears that the connection pooling breaks any Ant targets that use the database, including pubchan, pubfragments, etc. This is kinda bogus, but rather than tweaking portal.properties every time I want to publish a channel or fragment, it looks like I can just run these from the test tree (which uses the same set of database tables).

More on iDVD and DVD burning on the Mac

Well, unfortunately, it appears that iDVD doesn’t work quite as I had predicted in a previous entry. Even though it stores the encoded video between sessions, it still needs the entire uncompressed iMovie project to be able to do anything with the project. I learned this the hard way, after I had deleted some stuff from the iMovie and found that I could no longer go into iDVD and burn a new disc. So apparently the encoded data that iDVD stores is only there to speed up subsequent burns, not for archival purposes. This is a bit disappointing, but that’s life (I guess they figure disk space is cheap, so why wouldn’t I want to keep 15+ gigs of uncompressed video around for every tape I shoot?).

Basically, what I’m looking to do here is just archive my DVD image somehow, so that I can burn extra copies down the road. Once I’ve edited the video and created the menus and so on, I don’t care about making further mods to the project itself; I just want to keep a copy of my work in case a disc goes bad or whatever. It appears that iDVD isn’t my answer here.

Fortunately, the solution turns out to be much simpler: Once I burn a project to DVD, I can just extract the image from the disc, and re-burn it to a new disc. Apple conveniently provides an article that describes how to do this.

In practice this seems to work, but the process had a couple of hiccups. I tried it out with one of my previously-burned discs. Extracting the data onto the hard drive went without a hitch. Then I went to burn the image onto a new disc. The first attempt failed; I took the disc out of the drive, and it had a physical glitch (it appeared to be a speck of something, but I couldn’t wipe it off) right where the burning stopped. On the second try (with a new disc, of course), the disc burned successfully, but then the software went to verify it (which I’m guessing just does a byte-by-byte comparison of the image on the DVD with the image on the hard disk), and that failed. However, the resulting disc played fine all the way through on the Mac.

My whole recent experience with DVD-R burning leaves me feeling not overly confident about the reliability of the media, but despite the glitches, I seem to end up with playable discs. Not quite sure what to make of it. At any rate, in the future, I think I’ll burn two copies of each iDVD project. One copy can be for archival purposes (to burn more copies down the road), and the other for playing. Alternatively, I could burn one copy and then extract the image, and save the image on my hard disk. Or I could do both (I believe iDVD can create disk images directly, but I haven’t tried it yet). When finished, I’ll delete the iMovie and iDVD projects. And, I’ll be sure to keep the source tapes around.

All in all, it’s great that this technology works as well as it does, but it’s got a bit of evolving to do before I will feel like I can completely trust it!

Big Portal Launch Today..

Today’s the day we launch our new myUMBC web portal, essentially turning it loose on the unwashed masses and making the world (well, the campus at least) our big, happy beta-test community. As part of this, we’re kindly leaving the old portal around for a while, because we anticipate stuff will be broken. The new portal will live at http://my.umbc.edu, the URL the old portal currently occupies. That means that if we want to keep the old portal running, we have to move it to an alternate URL.

Now, our old portal has been active at its current URL since 1999. It’s a big, old, bloated beast, and it’s very happy staying where it is. Getting this thing moved is somewhat akin to booting a 35-year-old freeloading kid out of the house: you can be sure it will resist.

In this case, it was a tedious matter of chasing down all the references to the portal’s top URL and making sure each one got changed. Then restart, wonder why it doesn’t work, and determine that the web server no longer has read access to the Webauth cookie. Then fix logout (it’s absolutely mandatory, when making any change like this, that logout stop working; it’s like death and taxes).

Great news is, it appears to work now. Off to fix some other stuff.

FastCGI Weirdness

Getting some strange behavior from FastCGI regarding signal handling..

Platform is SunOS 5.10 on Intel, Perl 5.8.6, mod_fastcgi version 0.67. Seems like the FastCGI accept routine is somehow blocking the delivery of signals. If I set a handler for SIGTERM, then call FCGI::accept(), the signal is ignored until the accept routine exits (which happens when a new request comes in). So basically, when I send SIGTERM to the process, it ignores the signal until I go to my browser and hit the app URL. Then, the signal handler is invoked.

The consequence of this is that none of my shutdown scripts work right, because they all operate by sending SIGTERM to the FastCGI processes.

The really weird thing here is that if I don’t set a signal handler at all, SIGTERM terminates the process immediately. It’s only when a handler is set that I have problems. I’ve tried a couple of ways of coding the FastCGI loop:

while (FCGI::accept() >= 0) { ... }

vs.

my $request = FCGI::Request();
while ($request->Accept() >= 0) { ... }

Same results with either method. I have no problems using an old-and-crusty version of FastCGI (0.49) on our old-and-crusty SGI hardware. I’ve glanced at the new code that does the accept, and there’s nothing there that looks like it’s holding or blocking signals. Could this be an OS thing? I dunno, but if I can’t fix it I’m going to have to come up with some kind of workaround to kill and restart the processes..
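One theory I want to check out (pure speculation on my part at this point): Perl 5.8 introduced deferred “safe signals”, where the handler doesn’t actually run until the interpreter reaches a safe point, and a process blocked inside a C-level call like FCGI::accept() doesn’t reach one until the call returns. That would also square with the old setup working, if the old SGI box is running a pre-5.8 Perl. If that’s the cause, installing the handler with POSIX::sigaction (which bypasses the deferral) might work around it. A sketch, untested:

use FCGI;
use POSIX qw(SIGTERM);

my $shutdown = 0;

# Install the handler directly with sigaction, rather than via %SIG, so
# it can (in theory) fire even while FCGI::accept() is blocked in C code.
POSIX::sigaction(SIGTERM, POSIX::SigAction->new(sub { $shutdown = 1 }))
    or die "sigaction failed: $!";

while (!$shutdown && FCGI::accept() >= 0) {
    # ... handle the request ...
}

There’s also supposedly a PERL_SIGNALS=unsafe environment variable (new in 5.8.1) that restores the old pre-5.8 signal behavior wholesale, which might be an even simpler thing to try.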

DVD Playback weirdness on the Mac

Well, I burned my second DVD today. I used the same parameters as my first disc, and the burn process was smooth. (One thing I forgot: in iMovie, when you go to auto-create an iDVD project, there’s no way to export only a subset of the actual content in iMovie. You have to physically delete the stuff you don’t want, then export. A bit of a pain, but it shouldn’t be an issue for me beyond these first two discs; I’ll just do one iMovie per disc from now on.)

After the burn, I popped the disc in the Mac and played some of it back. In one spot, the player app locked up and the drive seemed to be stuck seeking back and forth. I ended up powering down and rebooting. Tried again; it froze up in the same spot. Bad media, maybe? Then I took the disc home and tried it in my standalone DVD player. The same passage that froze the Mac played fine in the standalone player. I haven’t viewed the rest of the disc yet, but I’ll do that tonight and see how it fares.

Seems a little odd that the Mac SuperDrive would have problems playing back media burned on the same drive.. Will have to check this out further.

Update 1/5/06: The entire disc played fine on my standalone Sony DVD Player. Not quite sure why the SuperDrive is having problems with it.

Followup: I just burned the exact same movie to another DVD. Playing it back in the Mac now. So far, no lockups (in particular, it did not lock up at the same spot it did with the other disc). I guess the SuperDrive must just not like the other disc. Odd, because the discs are the same brand (Fujifilm) and came from the same pack-of-50. Who knows?

I did learn something about iDVD today: when you go to burn a project for a second time, it re-encodes the menus and audio, but reuses the encoded video from the first run. This is nice, because it makes subsequent burns go much faster. I was wondering about this at first, because after the initial encoding it leaves the encoded MPEGs (4+ gigs worth) in the project directory. When I went to re-burn and it started re-encoding the menus, I wondered whether it was going to go through the whole 2+ hour encoding process again, and if so, why it had bothered saving all that encoded data from the previous run. Well, now I understand.

This also means that if I want to re-burn the discs at a later date, I should be able to safely delete the (huge) captured video data and just save the iDVD project.

Sync stuff working great

Today I gave the Mark/Space iCal sync conduit a really good workout, and it came through without missing a beat. First off, I added the LOCATION field to the Oracle Calendar download. I’m not sure why I wasn’t pulling it down originally; I guess it was just an oversight. As a result, I have location data for my meetings, and a lot of existing entries picked up the additional field.

Next, I created a new calendar with a bunch of repeating events (various birthdays and anniversary dates). Then I deleted all of the birthdays and anniversaries that I had stored in Oracle Calendar. Finally, I added a couple of new events via the Palm.

In short: Add a new iCalendar field to several dozen existing events, add a new calendar consisting of repeating events, delete a dozen or so existing events, and copy a couple events from the Palm to the Mac, all in one sync. The Mark/Space conduit handled everything without a hitch. The deletions propagated, the new fields were added, and the new calendar was added with the proper recurrence rules.

This is really the way things are supposed to work when you pay for software: It should work, work well, and work reliably, and if it doesn’t, the company should stand behind the product and either make it work, or refund your money. So far, it looks like the $40 for Missing Sync was money well spent.

This also makes me curious about event recurrences. The iCalendar spec allows for some pretty fancy recurrence rules, and I’m not sure the Palm Date Book allows that much flexibility. If I were to define a really fancy recurrence (say, an event that happens the second Tuesday of every month except March, or whatever), I wonder how that would propagate to the Palm. One of these days I’ll have to try it.
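(For the record, if I’m reading the spec right, that example would be a rule like the following, where BYMONTH just lists every month except March:)

RRULE:FREQ=MONTHLY;BYDAY=2TU;BYMONTH=1,2,4,5,6,7,8,9,10,11,12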

Incidentally, I created my birthday/anniversary calendar from scratch with a text editor. Then I copied it to my web server and subscribed to it with iCal. It worked fine, and seems to be a good way to handle relatively static calendars like this. The only requirement is that each event needs to have a unique UID. I used UIDs of the form:

YYYYMMDDTHHMMSSZ-#@concerto.ucs.umbc.edu

where # is an ascending number. For the date stamp, I used the approximate time that I created the file.