First attempt at publishing exported Oracle Calendar data

Made a first stab at publishing exported output from Oracle Calendar tonight. I ran my CAPI program for a one-week time window, deleted the MIME headers at the beginning and end of the output, and slapped it up on my web server without any further mods. It actually sorta worked.
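(The MIME-header surgery was just hand-editing this time, but it's one-liner territory. Something like this would do it, assuming the wrapper is everything outside the BEGIN:VCALENDAR/END:VCALENDAR pair; the file names here are made up.)

    perl -ne 'print if /^BEGIN:VCALENDAR/ .. /^END:VCALENDAR/' capi-export.out > week.ics

Observations: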

  • The iCalendar data appears to come through in DOS format (CRLF line endings), although neither iCal nor PHP iCalendar seems to have problems reading it. So it appears there’s no immediate need to convert.
  • PHP iCalendar shows the times incorrectly. The exported data has the times in GMT (or “Zulu” time). iCal correctly converts them to EST, but PHP iCalendar does not. I probably need to add timezone data to the file (see the sketch just after this list).
  • One of my meetings has a long DESCRIPTION property, and the text is kind of screwed up: there are a lot of backslash-escaped characters, a few quoted-printable ‘=20’s thrown in, and, most importantly, an extra carriage return in the middle of the value. Both apps truncate the description at that spot.
  • Attendees are missing (which was expected), as are alarms. Hopefully, getting alarms is just a matter of adding the appropriate alarm properties (VALARM, presumably) to my list of properties to export. We’ll see.
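About that timezone item above: if the fix turns out to be rewriting the UTC timestamps as local times with a TZID and splicing in a matching VTIMEZONE (my guess at what PHP iCalendar wants, not something I’ve verified yet), the stock US-Eastern definition from RFC 2445 is probably what I’d drop in:

    BEGIN:VTIMEZONE
    TZID:US-Eastern
    BEGIN:STANDARD
    DTSTART:19981025T020000
    RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=-1SU;BYMONTH=10
    TZOFFSETFROM:-0400
    TZOFFSETTO:-0500
    TZNAME:EST
    END:STANDARD
    BEGIN:DAYLIGHT
    DTSTART:19990404T020000
    RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=1SU;BYMONTH=4
    TZOFFSETFROM:-0500
    TZOFFSETTO:-0400
    TZNAME:EDT
    END:DAYLIGHT
    END:VTIMEZONE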

So it’s obvious that I’ll need to write some Perl code to massage the exported data a bit, but I already expected that. The important thing is, we’re definitely making progress here.
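Here’s roughly the kind of massaging I have in mind. This is a sketch, not the final script; in particular, the line-gluing heuristic for the broken DESCRIPTION is a guess, and the backslashed characters get left alone since they’re legal iCalendar escaping.

    #!/usr/bin/perl -w
    use strict;

    # Rough cleanup for the CAPI export: normalize line endings, undo
    # quoted-printable soft breaks and '=20' spaces, unfold folded lines,
    # glue back property values split by stray carriage returns, then
    # re-fold everything to spec.
    my @out;
    while (defined(my $line = <>)) {
        $line =~ s/\r?\n\z//;              # normalize DOS (CRLF) endings
        while ($line =~ s/=\z//) {         # quoted-printable soft break:
            my $more = <>;                 # glue the next physical line on
            last unless defined $more;
            $more =~ s/\r?\n\z//;
            $line .= $more;
        }
        $line =~ s/=20/ /g;                # decode '=20' spaces (blunt, but
                                           # fine for this data)
        if (@out and $line =~ /^[ \t]/) {  # RFC 2445 folded line: unfold it
            (my $cont = $line) =~ s/^[ \t]//;
            $out[-1] .= $cont;
        }
        elsif (@out and $line !~ /^[A-Za-z0-9-]+[;:]/) {
            $out[-1] .= $line;             # doesn't look like the start of a
        }                                  # property, so assume a stray CR
                                           # split the previous line
        else {
            push @out, $line;
        }
    }

    for my $line (@out) {                  # re-fold long lines per RFC 2445
        while (length($line) > 75) {
            print substr($line, 0, 75), "\r\n";
            $line = ' ' . substr($line, 75);
        }
        print $line, "\r\n";
    }

It could run standalone on the exported file, or in a pipe right after the MIME-stripping one-liner above.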

My Fun Day.

Today was lotsa fun. It started out with the National Student Clearinghouse. I decided to get a “real” development instance going where I could connect to them as a student, demo it to Academic Services, etc. That meant wrestling with their stupid referrer-based security scheme again. I took my existing clearinghouse script, which was working fine, and added Webauth authentication to it. The idea: have the script verify the user’s Webauth credentials, then do an LDAP query to get the student ID, then pass that to the remote site. That way, my local script has some authentication built in. Well, that broke it.

On the initial authentication attempt, Webauth adds a query parameter called WebAuthExtAction (which the client is supposed to decode and use to set a cookie). Great, but that changes the HTTP Referer string, which breaks the clearinghouse crap. On the plus side, they’ve changed their site so it actually tells you what’s going on now rather than just booting you out. I have to give them props for that; it saved me some head-scratching.

First attempt at a fix: check for a WebAuthExtAction parameter, and if it exists, append it to the initial referrer string I send them. Nope; that makes the referrer string too long, and the clearinghouse code can’t deal with it. Second attempt: look for the WebAuthExtAction parameter, and if it’s there, redirect the browser back to the same script, omitting the parameter. Bloody convoluted, but it works.

Fortunately, in production we won’t have to deal with this, because the prod code will run from the same web server as the portal, and the user will always have valid creds when they come to the site. Aargh.
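For posterity, the second attempt boils down to something like this. It’s a sketch from memory, assuming plain CGI.pm; the real script obviously does the Webauth and LDAP work too.

    #!/usr/bin/perl -w
    use strict;
    use CGI;

    my $q = CGI->new;

    # If Webauth tacked its WebAuthExtAction parameter onto the URL,
    # bounce the browser back to this same script without it, so the
    # Referer we end up sending the clearinghouse stays clean and short.
    if (defined $q->param('WebAuthExtAction')) {
        $q->delete('WebAuthExtAction');
        my $self = $q->url(-path_info => 1);
        my $rest = $q->query_string;       # whatever parameters remain
        print $q->redirect($rest ? "$self?$rest" : $self);
        exit;
    }

    # ...normal clearinghouse hand-off continues from here...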

Then there was fun with myUMBC itself. In an attempt to speed things up on the myUMBC web server, I decided to redo the Webauth ticket-logout script it was using and make it part of the myUMBC app itself. That way, logouts go to the FastCGI processes, reducing overhead (the script needs to connect to the database, among other things) and hopefully speeding the machine up. This eventually worked OK, but of course it broke things at first: it turns out I was short-circuiting the FastCGI loop without resetting certain global variables, which is a big no-no. That was good for a few choice expletives.
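The shape of the bug, roughly. The FCGI accept loop is real; the variable and sub names are invented for illustration.

    #!/usr/bin/perl -w
    use strict;
    use FCGI;

    # Globals like these persist across requests in a FastCGI process.
    # (These particular variables are made up for the example.)
    our ($logged_in, $ticket, @errors);

    my $request = FCGI::Request();

    while ($request->Accept() >= 0) {
        # Reset per-request state FIRST, so a short-circuit out of the
        # loop body (like the new ticket-logout path) can't leak one
        # request's state into the next. Skipping this is exactly the
        # no-no I committed.
        ($logged_in, $ticket, @errors) = (0, undef, ());

        if (logout_requested()) {
            do_ticket_logout();
            next;                  # safe now: nothing stale left behind
        }

        handle_request();
    }

    # Stubs standing in for the real app code.
    sub logout_requested { 0 }
    sub do_ticket_logout { }
    sub handle_request   { print "Content-type: text/plain\r\n\r\nok\n" }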

When does Christmas break start again?