Fruddled Gruntbugglies

Enthralling readers since 2005

Author: lpaulriddle

  • Legacy myUMBC ACLs as PAGS Groups

    I think I’ve found a way (two ways, actually) to import program ACLs (from the BRCTL.PROG_USER_XREF SIS table) into uPortal as PAGS groups, so that we can publish uPortal channels with the exact same access lists as the respective areas in the legacy myUMBC. This would be a big win, particularly for an app like Degree Navigation/MAP. In the old portal, we control access to DN/MAP using a big, looong list of individual usernames. If the user isn’t on the list, they don’t even see a link to DN/MAP. However, with uPortal, we currently don’t have access to this list, so we have to present the DN/MAP link to a much larger set of users (basically anyone who is faculty or staff), or we’re faced with totally replicating the access list in uPortal, and maintaining two lists. Not what we want.

    Fortunately, we designed the old portal with a bit of forward thinking, and made its ACL mechanism totally database driven. That is, all ACL info is stored in the Oracle database, so some future portal could theoretically extract that data and use it down the road. The challenge, then, is to figure out how to get uPortal to do that.

    uPortal provides a very nice groups manager called PAGS, which allows us to create arbitrary groups based on what uPortal calls Person Attributes. It can extract Person Attributes directly from LDAP, as well as extracting them from the results of an arbitrary RDBM query. It then presents this group of attributes as a seamless collection, regardless of the actual backend datasource for each individual attribute. It’s really very nice.

    My first thought, then, was to just have uPortal query the legacy myUMBC ACL table to get a list of each app a particular user can access, and map the results to “Person Attributes”. I tested this and it works just fine, but there’s one problem: The legacy ACL table is indexed by UMBC username, but the way we have uPortal configured, it’s currently using the LDAP GUID to do its queries. So, to do this the right way (that is, without hacking the uPortal code), we’d need a table that maps the GUID to the username, so that we could do a join against it to get our results. Currently, we don’t have LDAP GUID data anywhere in our Oracle database. Now, I don’t think getting it there would be a huge issue (we’re already doing nightly loads of usernames from LDAP to Oracle), but it still needs to happen before we could use this method.
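
    Just to make that concrete, the query I have in mind would look something like the sketch below, wrapped in a throwaway Perl DBI test script. The GUID-to-username table (called ldap_guid_map here) doesn’t exist yet, and the column names on BRCTL.PROG_USER_XREF are from memory, so treat all of the names as placeholders.

    #!/usr/bin/perl
    # Throwaway test of the ACL lookup uPortal would run as an RDBM
    # attributes query.  ldap_guid_map (guid -> username) is a table we'd
    # still have to create; the column names here are guesses.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Oracle:SID', 'username', 'password',
                           { RaiseError => 1 });

    my $sql = q{
        SELECT x.prog_name
          FROM brctl.prog_user_xref x,
               ldap_guid_map        g
         WHERE g.guid     = ?        -- bind variable: the key uPortal passes in
           AND x.username = g.username
    };

    my $progs = $dbh->selectcol_arrayref($sql, {}, $ARGV[0]);
    print "Accessible programs: @$progs\n";
    $dbh->disconnect;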

    The second method would be to import the user’s legacy ACL data into the LDAP database as an additional attribute. Then I could just pull the data directly out of LDAP, without having to worry about an RDBM query at all. This seems like a simpler solution, if it’s possible. More later..
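
    For the record, the LDAP side of that could be as simple as a nightly Net::LDAP job along these lines (a sketch only: the umbcLegacyAcl attribute is made up and would need a schema change, and the DNs and hostname are placeholders):

    #!/usr/bin/perl
    # Sketch of a nightly job that copies a user's legacy ACL entries into
    # an LDAP attribute, so uPortal's LDAP person-attribute source can read
    # them directly.  'umbcLegacyAcl', the DNs and the host are made up.
    use strict;
    use warnings;
    use Net::LDAP;

    my ($username, @progs) = @ARGV;

    my $ldap = Net::LDAP->new('ldap.example.edu') or die $@;
    $ldap->bind('cn=admin,dc=example,dc=edu', password => 'secret');

    my $mesg = $ldap->modify(
        "uid=$username,ou=people,dc=example,dc=edu",
        replace => { umbcLegacyAcl => \@progs },
    );
    die $mesg->error if $mesg->code;
    $ldap->unbind;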

    Note: Configuration of Person Attributes is done in the file /properties/PersonDirs.xml. When specifying an RDBM attributes query, the SQL statement must include a bind variable reference, or the code will crap out. I learned this when I tried to remove the bind variable and hardcode my own username.. no dice. To test this stuff out, subscribe to the “Person Attributes” channel, which is under the “Development” group. Then look for the attributes you defined in the config file. If they’re there, it worked. If not, not.

  • Connection pooling crash course

    Just spent the whole day tweaking our new uPortal installation and trying to get it to stay up reliably under load. It’s coming along, but not quite there yet. First lesson: under any kind of load, you must, absolutely must, enable database connection pooling. If you don’t, the portal will open enough direct database connections to, let’s just say, really screw things up. Now, setting up connection pooling is not supposed to be that hard, but in our case it was a huge pain.

    The default uPortal 2.4.3 distribution includes a file, uPortal.xml, which is used to specify the connection pooling info to Tomcat. Great, I set it up with our connection parameters, and tried it out. Hmm, doesn’t seem to work. Look a little further.. Apparently in portal.properties, I need to set the flag org.jasig.portal.RDBMServices.getDatasourceFromJndi to “true”, or it bypasses the whole connection pooling thing and just opens direct connections. I set it, and tried again. Major bombage.

    More poking around and I found this page describing the mechanics of Tomcat connection pooling. Apparently, the config file format (as well as the factory class name) changed from Tomcat 5.0.x to Tomcat 5.5.x. We’re running 5.5.x, and the uPortal distro’s config file is in the 5.0.x format. So, I updated the config file. Plus, for good measure, I dropped a copy of the Oracle JDBC jar file into tomcat-root/common/lib. Not sure if it really needs to be there or not. But, once I jumped through all those hoops, the connection pooling finally seems to work.
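
    For anyone else who hits this, the Tomcat 5.5-style context descriptor ends up looking roughly like the sketch below; the JNDI name and connection details are placeholders, so match them to whatever your portal.properties actually expects. The big change from 5.0.x is that the old ResourceParams block collapses into attributes on the Resource element:

    <!-- conf/Catalina/localhost/uPortal.xml, Tomcat 5.5.x style (sketch; names and credentials are placeholders) -->
    <Context docBase="uPortal">
      <Resource name="jdbc/PortalDb" auth="Container"
                type="javax.sql.DataSource"
                driverClassName="oracle.jdbc.driver.OracleDriver"
                url="jdbc:oracle:thin:@dbhost.example.edu:1521:SID"
                username="portal" password="secret"
                maxActive="50" maxIdle="10" maxWait="5000"/>
    </Context>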

    Now, we’re dealing with memory issues causing slowness, as well as a couple lingering database issues with logins to the ‘myumbc’ user…

    I hope I don’t have too many more days like this…

    Update 1/12/2006: Well, it appears that the connection pooling breaks any Ant targets that use the database, including pubchan, pubfragments, etc. This is kinda bogus, but rather than tweaking portal.properties every time I want to publish a channel or fragment, it looks like I can just run these from the test tree (which uses the same set of database tables).

  • More on iDVD and DVD burning on the Mac

    Well, unfortunately, it appears that iDVD doesn’t work quite as I had predicted in a previous entry. Apparently, even though it stores the encoded video between sessions, it still needs the entire uncompressed iMovie project to be able to do anything with the project. I learned this the hard way, after I had deleted some stuff from the iMovie project and found that I could no longer go into iDVD and burn a new disc. So apparently, the encoded data that iDVD stores is only there to speed up subsequent burns, and not for archival purposes. This is a bit disappointing, but that’s life (I guess they figure disk space is cheap, so why wouldn’t I want to keep 15+ gigs of uncompressed video around for every tape I shoot).

    Basically, what I’m looking to do here is just archive my DVD image somehow so that I can burn extra copies down the road. Once I’ve edited the video, created the menus, etc., I don’t care about making further mods to the project itself; I just want to keep a copy of my work in case a disc goes bad down the road, or whatever. It appears that iDVD isn’t my answer here.

    Fortunately, the solution turns out to be much simpler: Once I burn a project to DVD, I can just extract the image from the disc, and re-burn it to a new disc. Apple conveniently provides an article that describes how to do this.

    In practice, this seems to work, but the process had a couple hiccups. I tried it out with one of my previously-burned discs. Extracting the data onto the hard drive went without a hitch. Then I went to burn the image onto a new disc. The first attempt failed. I took the disc out of the drive, and it had a physical glitch (appeared to be a speck of something, but I couldn’t wipe it off the disc) right where the burning stopped. On the second try (with a new disc of course), the disc burned successfully, but then it went to verify it (which I’m guessing just does a byte-by-byte comparison of the image on the DVD with the image on the hard disk), and that failed. However, the resulting disc played fine all the way through on the Mac.

    My whole recent experience with DVD-R burning leaves me feeling not overly confident about the reliability of the media, but despite the glitches, I seem to end up with playable discs. Not quite sure what to make of it. At any rate, in the future, I think I’ll burn two copies of each iDVD project. One copy can be for archival purposes (to burn more copies down the road), and the other for playing. Alternatively, I could burn one copy and then extract the image, and save the image on my hard disk. Or I could do both (I believe iDVD can create disk images directly, but I haven’t tried it yet). When finished, I’ll delete the iMovie and iDVD projects. And, I’ll be sure to keep the source tapes around.

    All in all, it’s great that this technology works as well as it does, but it’s got a bit of evolving to do before I will feel like I can completely trust it!

  • Big Portal Launch Today..

    Today’s the day we launch our new myUMBC web portal, essentially turning it loose on the unwashed masses and making the world (well, the campus at least) our big, happy beta-test community. As part of this, we’re kindly leaving the old portal around for a while, because we anticipate stuff will be broken. The new portal will live at http://my.umbc.edu, which the old portal occupies for now. That means that if we want to keep the old portal running, we have to move it to an alternate URL.

    Now, our old portal has been active at its current URL since 1999. It’s a big, old, bloated beast, and it’s very happy staying where it is. Getting this thing moved is somewhat akin to booting a 35-year-old freeloading kid out of the house. That is, you can be sure it will resist.

    In this case, it was a tedious matter of chasing down all the references to the portal’s top URL and making sure each one got changed. Then restart, wonder why it doesn’t work, and determine that the web server no longer has read access to the Webauth cookie. Then fix logout (it’s absolutely mandatory, when you make any change like this, that logout stops working; it’s like death and taxes).

    The great news is, it appears to work now. Off to fix some other stuff.

  • FastCGI Weirdness

    Getting some strange behavior from FastCGI regarding signal handling..

    Platform is SunOS 5.10 on Intel, Perl 5.8.6, mod_fastcgi version 0.67. Seems like the FastCGI accept routine is somehow blocking the delivery of signals. If I set a handler for SIGTERM, then call FCGI::accept(), the signal is ignored until the accept routine exits (which happens when a new request comes in). So basically, when I send SIGTERM to the process, it ignores the signal until I go to my browser and hit the app URL. Then, the signal handler is invoked.

    The consequence of this is, basically, none of my shutdown scripts are working right, because they all work by sending SIGTERM to the FastCGI processes.

    The really weird thing here is, if I don’t set a signal handler at all, SIGTERM immediately terminates the process. It’s only when a handler is set that I have problems. I’ve tried a couple of ways of coding the FastCGI loop:

    while (FCGI::accept() >= 0) { ... }

    vs.

    my $request = FCGI::Request();
    while ($request->Accept() >= 0) { ... }

    Same results with either method. I have no problems using an old-and-crusty version of FastCGI (0.49) on our old-and-crusty SGI hardware. I’ve glanced at the new code that does the accept, and there’s nothing there that looks like it’s holding or blocking signals. Could this be an OS thing? I dunno, but if I can’t fix it I’m going to have to come up with some kind of workaround to kill and restart the processes..
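
    One theory I want to test: Perl 5.8 defers %SIG handlers until the interpreter reaches a “safe” point, and it may never reach one while FCGI’s accept is blocked down in C, which would explain exactly this behavior. If that’s the cause, installing the handler at the C level with POSIX::sigaction should get it delivered right away. Here’s a sketch of what I have in mind (untested; I haven’t verified that the blocked accept actually returns once the handler fires):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX;
    use FCGI;

    # Perl 5.8 normally defers %SIG handlers until the interpreter reaches
    # a "safe" point, which never happens while FCGI's accept() is blocked
    # in C.  Handlers installed via POSIX::sigaction are delivered
    # immediately by default, so SIGTERM should at least get to run.
    my $exit_requested = 0;
    sub handle_term { $exit_requested = 1 }

    POSIX::sigaction(POSIX::SIGTERM, POSIX::SigAction->new('main::handle_term'))
        or die "sigaction failed: $!";

    my $request = FCGI::Request();
    while (!$exit_requested && $request->Accept() >= 0) {
        # ... normal request handling here ...
        last if $exit_requested;
    }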

  • DVD Playback weirdness on the Mac

    Well, I burned my second DVD today. I used the same parameters as my first disc, and the burn process was smooth (one thing I forgot.. in iMovie, when you go to auto-create an iDVD project, there’s no way to export only a subset of the actual content in iMovie. You have to physically delete the stuff you don’t want, then export. Bit of a pain, but shouldn’t be an issue for me any more beyond these first two discs — I’ll just do one iMovie per disc from now on).

    After the burn, I popped the disc in the Mac and played some of it back. In one spot, it locked up: the player app froze and the drive seemed to be stuck seeking back and forth. I ended up powering down and rebooting. Tried again, and it froze up in the same spot. Bad media, maybe? Then I took the disc home and tried it in my standalone DVD player. The same passage that froze the Mac played fine in the standalone player. I haven’t viewed the rest of the disc yet, but I’ll do that tonight and see how it fares.

    Seems a little odd that the Mac SuperDrive would have problems playing back media burned on the same drive.. Will have to check this out further.

    Update 1/5/06: The entire disc played fine on my standalone Sony DVD Player. Not quite sure why the SuperDrive is having problems with it.

    Followup: I just burned the exact same movie to another DVD. Playing it back in the Mac now. So far, no lockups (in particular, it did not lock up at the same spot it did with the other disc). I guess the SuperDrive must just not like the other disc. Odd, because the discs are the same brand (Fujifilm) and came from the same pack-of-50. Who knows?

    I did learn something about iDVD today… when you go to burn a project for a second time, it re-encodes the menus and audio, but reuses the encoded video from the first run. This is nice, because it makes subsequent burns go much faster. I was wondering about this at first, because after the initial encoding it leaves the encoded MPEGs (4+ gigs worth) in the project directory. When I initially went to re-burn, and it started re-encoding the menus, I was wondering if it was going to go through the whole 2+ hour encoding process again, and if so, why did it bother saving all that encoded data from the previous run? Well, now I understand.

    This also means that if I want to re-burn the discs at a later date, I should be able to safely delete the (huge) captured video data and just save the iDVD project.

  • Sync stuff working great

    Today I gave the Mark/Space iCal sync conduit a really good workout, and it came through without missing a beat. First off, I added the LOCATION field to the Oracle Calendar download. I’m not sure why I wasn’t pulling it down originally; I guess it was just an oversight. As a result, I now have location data for my meetings, and a lot of existing entries now have the additional field.

    Next, I created a new calendar that has a bunch of repeating events (various birthdays and anniversary dates). Then I deleted all of the birthdays and anniversaries that I had stored in Oracle Calendar. Finally, I added a couple of new events via the Palm.

    In short: Add a new iCalendar field to several dozen existing events, add a new calendar consisting of repeating events, delete a dozen or so existing events, and copy a couple events from the Palm to the Mac, all in one sync. The Mark/Space conduit handled everything without a hitch. The deletions propagated, the new fields were added, and the new calendar was added with the proper recurrence rules.

    This is really the way things are supposed to work when you pay for software: It should work, work well, and work reliably, and if it doesn’t, the company should stand behind the product and either make it work, or refund your money. So far, it looks like the $40 for Missing Sync was money well spent.

    This also makes me curious about event recurrences. The iCalendar spec allows for some pretty fancy recurrence rules, and I’m not sure if the Palm Date Book allows that much flexibility. If I were to define a really fancy recurrence (say, an event that happens the second Tuesday of every month except March, or whatever), I wonder how that would propagate to the Palm. One of these days I’ll have to try it.
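
    For what it’s worth, that example is easy enough to write in iCalendar syntax; the recurrence rule would look something like this (second Tuesday of every month, skipping March):

    RRULE:FREQ=MONTHLY;BYDAY=2TU;BYMONTH=1,2,4,5,6,7,8,9,10,11,12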

    Incidentally, I created my birthday/anniversary calendar from scratch with a text editor. Then I copied it to my web server and subscribed to it with iCal. It worked fine, and seems to be a good way to handle relatively static calendars like this. The only requirement is that each event needs to have a unique UID. I used UIDs of the form:

    YYYYMMDDTHHMMSSZ-#@concerto.ucs.umbc.edu

    Where # is an ascending number. For the date stamp, I used the approximate time that I created the file.
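
    If I ever get around to scripting the UID generation, the scheme amounts to something like this little Perl snippet:

    use strict;
    use warnings;
    use POSIX qw(strftime);

    # One timestamp (roughly when the file was created) plus an ascending
    # counter is enough to keep every UID in the file unique.
    my $stamp = strftime('%Y%m%dT%H%M%SZ', gmtime);
    my $n     = 0;
    sub next_uid { sprintf '%s-%d@concerto.ucs.umbc.edu', $stamp, ++$n }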

  • Calendar stuff is up and running

    I’m now up and running with the automatic Oracle Calendar export stuff. The other day I tested the Palm sync stuff out, to make sure event deletions were propagating properly to the Palm. Initially, they weren’t. However, after I followed the instructions that I got from Mark/Space support, everything worked fine. These instructions are worth repeating here, as they may come in handy down the road:

    Go to Missing Sync and hold the ‘option’ key down while you double click on the ‘Mark/Space Events’ conduit. When the settings window appears, click on the ‘Advanced Options’ button and then click on the ‘Unregister Sync Client’ button.

    Then sync. When you do, you will get a dialog box with an orange iSync icon on it. Check the box to erase the device, then click the ‘Allow’ button.

    This appears to erase all the existing calendar data on the Palm, and download totally fresh data from iCal. In any event, upgrading to 5.0.3b6 and then following these instructions solved my sync issues.

    The next step was to automate the export of the Oracle Calendar data. I did this with a cron script. Right now I have the script run every day at 10am, 1pm, 4pm, and 7pm. It connects to the calendar server, downloads updated data, and updates the iCalendar file on the web server. (This runs on my Linux box at work, concerto.ucs.umbc.edu, as user www-data.)
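
    For the record, the crontab entry driving this is about as simple as it gets (the script path here is just a placeholder for the real download-and-rewrite script):

    # min hour dom mon dow  command           (10am, 1pm, 4pm and 7pm, daily)
    0 10,13,16,19 * * * /home/www-data/bin/oracle-cal-export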

    So far, this appears to be working fine. I plan on keeping an eye on it for awhile to see if there are any issues. With this piece working, I can start to focus on improvements. Here’s the current wish list:

    • Add timezone data to the iCalendar file
    • Download attendee data for a small window of time (maybe 1 month or so starting from current date). Don’t want attendee data for everything, as it makes the iCalendar download take too long.
    • Do I want to treat “declined” events any differently from “accepted”? Right now, there’s no differentiation between the two in the downloaded calendar. Addressing this would mean keying off the STATUS field somehow.
    • Split different types of events into separate calendars. Various event types include “Meetings”, “Daily Notes”, “Day Events”, and “Holidays”. I might also want to separate out “Daily Notes” and “Day Events” that are created by users other than myself; this might be a way of handling the “accepted/declined” issue.
    • Revisit handling of alarms, if necessary.

    A few of these will require tweaks to the actual download process, and the others involve rewriting the iCalendar output in various ways.

  • VNC at Home

    Yesterday I played around a little bit with VNC (Virtual Network Computing) on our home network. VNC can be used (among other things) to pull up a remote desktop on a local machine and treat it as if you were sitting at the remote machine. One of its appeals is that it’s multi-platform, unlike similar technologies like Microsoft’s Remote Desktop Services. At home, VNC’s basic appeal is convenience. For example, while sitting upstairs, I can pull up the display of my basement Linux server to make a quick edit in GnuCash. Another example: When I’m doing our taxes, I can work downstairs in the office (where all our records are), pull up the Windows desktop upstairs, and run the tax software there.

    Installation on the Windows box was straightforward. I just downloaded and installed the distribution from RealVNC, and it installed both the viewer and server. No surprises, and it seems to work great.

    For the Mac.. My only Mac is a laptop, and I can’t really see wanting to connect to it with VNC. Still, I looked into it anyhow. First I tried a product called OSXvnc, which seems to work OK, but then I learned that MacOS 10.4 (Tiger) has a VNC server built in, so I can use that if I ever need it. What I’m really interested in for the Mac is a good VNC client. And that’s the weird thing.. Apple includes built-in VNC server support, but they don’t supply a VNC client. And there seems to be no one “de facto” VNC client that most people use on the Mac. There’s one called VNCThing, which was very hard to track down, but appears to work. It appears to be orphanware, though. There’s another one out there called Chicken of the VNC, which appears to be a little less stale. Once I’m ready, I’ll probably try that one out.

    Next up: The Linux box. First the easy stuff: RealVNC supplies a Linux VNC client that works fine. That brings us to the server. In grand Linux tradition, there is more than one way to do a VNC server, and none of the methods is a perfect solution.

    • Method 1: Creates a totally new VNC X “session” (for example, if the main desktop is host:0, the first VNC session would be host:1). Advantage: modular and efficient. Disadvantage: Doesn’t export your main desktop, so everything has to be done inside the virtual VNC session. To access that, you need to use a VNC viewer even on the host PC. Fine for remote access, but not real efficient for working on the host.
    • Method 2: Export the main X11 desktop using a polling server such as x11vnc. Advantage: works fine, no extra configuration required. Disadvantage: slow.
    • Method 3: Export the main X11 desktop using the VNC module supplied for XFree86 v4. Advantage: ties into the X server itself, so no polling is required and it’s very efficient. Disadvantage: Doesn’t work with direct rendering (DRI module) enabled. If I start the server with both VNC and DRI enabled, the server freezes the first time I try to access it remotely. This seems to be a compatibility issue between VNC and my particular video driver/card (ATI Rage Pro or somesuch, r128 driver). If I disable DRI, it works fine. So, if I want to use this solution, I have to give up direct rendering (which really isn’t the end of the world).

    This is sort of a microcosm of what’s wrong with the current state of desktop Linux. Lots of potential, but lots of interoperability issues. I still haven’t decided which method to use, but I’m sure I’ll settle on one eventually.

    Now that I’ve got VNC working at home, the next step is to get it so I can access my home Linux desktop from work. To do this I think I want to tunnel the VNC connection through SSH. Google turns up lots of tutorials on how to do this, so once I’m back in the office, I’ll try it out.
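
    The recipe in most of those tutorials boils down to forwarding a local port to the VNC port on the home box, something like this (hostnames are placeholders, and I haven’t actually tried it yet):

    # From the office: forward local port 5901 to the VNC server at home
    # (display :0 listens on port 5900).  Hostname is a placeholder.
    ssh -L 5901:localhost:5900 user@home-box.example.com

    # Then point a VNC viewer at display :1 on the local machine:
    vncviewer localhost:1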

  • Chores Chores Chores, and a Broken Timer Switch

    Today was a “get stuff done around the house” kind of day, where I basically knocked as many items off my to-do list as possible. Among the fun stuff accomplished:

    I finished winterizing my chipper/shredder, pressure washer and trimmer. The chipper/shredder takes the most time, because I like to break it down, clean debris out of the blade housing, inspect the blade, and lubricate the metal flails. I also clean it off with a blow gun. For the others, it’s just a matter of adding some oil to the cylinder. I like to do this particularly with the chipper/shredder and pressure washer, because they can go long periods of time without being used.

    I drained 2-3 inches of water out of the pool, to get it back below the tile line. This is one of those thankless busy-work type winter chores. However, I’ll take this any day over a pool that is losing water. This winter, I decided to just pump the water back behind the deck, rather than running it all the way out to the side street. It’s much less of a hassle. I thought it would be faster, too, but it still seems to take forever. Best guess is around an inch an hour with my dinky 1/6hp utility pump.

    I noticed that our timer switch, which controls the front porch light, had stopped working. I only caught it because I happened to drive by the house around 2:30pm and saw that the porch light was on. I checked the switch and found the display said “No Op”. The switch is basically dead, and the light won’t turn off. I checked the trusty internet, and apparently these switches are basically garbage. Wish I had checked before I bought it. It’s a bit of a surprise, given that I’ve used lots of Intermatic products before and generally been happy with the quality. However, this particular model seems to be a dud, which leaves me without a timer for the front light. I pulled the switch out so the fixture would go off. I guess I need to find a new switch. The challenge with this particular setup is that it’s a 3-way switch, and let me tell you, 3-way timer switches are haaaaard to come by. I have a standard single-pole timer switch that I’m not using, so for now I may put that in and just forgo using the remote switch (we never use it anyhow). Long term, I may check into an X10-type switch, but there’s a limit to how much money I’m willing to pour into this. If a working 3-way setup turns out to be cost prohibitive, I’ll probably just live with the single pole.