New Router Follow-up

Today, I finally got the house to myself for long enough to take the home network down and swap our Verizon FiOS router out with a new Ubiquiti Unifi Cloud Gateway Ultra. I had spent a good while researching how to do this, and a week or so ago, I wrote out a step-by-step plan. As with everything in life, it didn’t go quite according to script, but overall, it went smoothly. Over the next few days, we’ll see if anything needs adjusting, but for now, the network is back up, connected, and ostensibly working fine. Here’s how it actually went down.

  1. I gathered everything noted in step 1 of last week’s list: MAC addresses for static leases, local DNS names, UI device SSH credentials, Ethernet adapter and cable for laptop. I also downloaded the Unifi app for my phone, and made sure I had access to my ui.com login credentials. Of these, the only things that proved essential were the Ethernet adapter/cable and my ui.com credentials. In particular, I did not need the phone app (read on).
  2. I created a full backup of my old Docker-based Unifi controller. It was about 13 MB.
  3. My original list included a step to remove the FiOS router from the Unifi controller’s device list before creating a backup of the settings. This was not necessary, or even possible, as it turns out that the Unifi controller doesn’t treat third-party routers as managed network devices. Therefore, there was no device to remove in the first place.
  4. I created a settings-only backup of the old controller, which was only 25 KB. However, I ended up not using it.
  5. I powered the FiOS gateway down and unplugged it. I did not do anything special to release its DHCP lease. I ended up having to briefly reconnect it to identify the cable running to the ONT (I have a bunch of disconnected CAT-6 cables in my wiring closet). Lesson learned: mark the cable somehow before disconnecting it and walking away. 😀
  6. I powered the UCG-Ultra up and connected the ONT cable to its WAN port. My original list had me connecting the Ethernet first, but the setup guide says to connect the power first. In reality, I suspect it doesn’t matter much.
  7. The display on the UCG lit up with the Unifi logo, then a progress meter appeared at the bottom, and after a few minutes, it displayed a message that it was ready to configure and reachable at IP 192.168.1.1.
  8. This is where I thought I needed to connect with the mobile app over Bluetooth, but it turns out that’s not necessary. The initial setup can be done over Ethernet using a laptop connected to one of the UCG’s LAN ports. All I needed to do was connect and point the laptop to http://192.168.1.1. Initially, I didn’t think it was working, as all I saw was a black screen. It turned out that it either doesn’t like Firefox, or doesn’t like one of my Firefox extensions or settings. When I tried with Chrome, it brought up a splash screen and let me proceed to configuration. I chose the option to restore from a backup, which prompted me to log in with my ui.com credentials, and then dumped me into the UCG’s web interface.
  9. The setup updated the UCG’s firmware/controller to version 8.2.93, which is the latest version as of this writing, and also the same version that I was running on the old Docker controller. I had this listed as a separate step, but it all happened automatically. It’s worth noting that during the upgrade, it displayed a screen saying it would take “about 5 minutes”, but seemed to stay there indefinitely. After 10 or 15 minutes, I tried re-connecting, and found that it had completed.
  10. At some point during this whole process, external Internet connectivity started working on my laptop. I can’t remember quite when, but I’m pretty sure it was before I restored the backup from the old controller. I suspect it was right after I “adopted” the UCG to my Unifi account. Initially, my Firefox browser displayed a “captive portal” banner similar to what I’m used to seeing on public guest WiFi portals.
  11. I restored the full backup from the old controller, which took a couple of minutes, and required a restart of the UCG. Again, the web browser experience through the restart wasn’t the smoothest, but it came back up just fine after a couple of minutes.
  12. At this point, I didn’t have any of the downstream network equipment connected to the UCG. I had planned to manually add static DHCP leases for the devices that needed them, but this wasn’t possible — after the restore, the UCG already “knew” about the device I tried to add, and told me that its MAC address already existed. I couldn’t find a way to edit the reserved lease while the device was disconnected. So, I just moved on:
  13. I connected my downstream gear to the LAN ports on the UCG, and after a few minutes, everything was working and had external connectivity, including the WiFi, with the same IP addresses they had before I swapped the router out. I’m not sure what will happen with regard to the DHCP leases, but I’m assuming the router will just treat them as new leases. The UCG’s default DHCP lease lifetime is 86400 seconds (24 hours). At this point, I was also able to go into the devices that needed static IPs and mark them as “Fixed IP Address” (accessible by selecting the device and then clicking the settings icon). I assume that will do what I need.
  14. Next up was to go to the old controller and override the set-inform URL for the Unifi gear, so it would all start talking to the new controller. However, to my bemusement, I found that everything had already moved over without my doing anything. I thought maybe this was a new “feature” or something, but it turns out that it happened completely by accident. I logged into one of my APs via SSH and took a look at the log file. The original inform address was configured with a local DNS name (rather than an IP address), and the DNS name was in turn configured into the old FiOS router. When I took the old router offline, the devices could no longer resolve the DNS name. After several unsuccessful retries, they eventually fell back on 192.168.1.1, which is where I wanted them anyhow — a happy coincidence.
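
For reference, this is roughly what the poking around looked like from the AP’s shell (the address is an example; the credentials are the device SSH credentials from step 1):

ssh admin@192.168.1.10               # one of the APs; address is an example
grep -i inform /var/log/messages     # shows the failed inform attempts, then the fallback
info                                 # confirms the inform URL the device is using now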

Still to do:

  • Factory-reset the old FiOS router in preparation to return it to Verizon. I’m hoping this can be done via a physical button on the router. If not, I’ll need to somehow hook it up to an isolated network so I can connect to the web interface.
  • Figure out a local DNS strategy. I want to eventually route all of our DNS traffic through a Pi-hole, but I’m not sure if I want to manage local DNS names there, or on the UCG, and I’m not sure if I want the UCG in front of the Pi-hole, or vice versa. The FiOS router didn’t allow me to change the DNS server(s) it handed out via DHCP, so some of these configurations wouldn’t have been possible previously. I’ll have to think about this a bit.
  • Finally cancel our landline phone. I think I can get a 1G/1G FiOS connection for less than I’m paying for 512/512 FiOS + an essentially useless landline.

New Router

As part of an ongoing project to get rid of our landline phone (which nobody calls any more except spammers), I am looking to replace our Verizon FiOS Quantum Gateway router with my own router. There’s nothing wrong with the FiOS router, but when I contact Verizon to switch our plan to internet-only, I want to get rid of the monthly router rental fee. All of our other networking equipment is Ubiquiti Unifi, so I’ve decided to stay in that ecosystem and go with a Unifi Cloud Gateway Ultra. The FiOS router is currently connected to the ONT via Ethernet (we switched off coax a few years ago), so “in theory”, it should be a drop-in replacement. Our network is fairly simple, with a couple of switches and a couple of WiFi access points, and the Unifi controller software currently runs on a local LAN host in a Docker container. The main things that need to happen seem to be:

  • Replacing the FiOS router with the UCG-Ultra and verifying that internet works
  • Recreating DHCP and DNS server settings. This includes:
    • IPv4 DHCP address range
    • Static DHCP leases
    • Local DNS hostnames
  • Migrating all of the Unifi devices from the self-hosted controller to the UCG’s built-in controller

Based on my research, this is the tentative plan:

  1. Write down/have handy the following:
    • List of MAC and IP addresses for static leases in FiOS gateway
    • List of names and IP addresses for local DNS entries in FiOS gateway
    • SSH login credentials (username/password) for Unifi gear — stored in controller under Settings > System > Advanced > Device Authentication (or go into settings and search for “passwords”)
    • Laptop with Ethernet adapter and cable
    • Unifi app on phone for initial setup (which apparently uses Bluetooth)
  2. Create a full backup of self-hosted controller and download to laptop
  3. Remove FiOS gateway from device list in self-hosted controller (maybe not necessary, as it’s not a Unifi router)
  4. Create a settings-only backup of self-hosted controller and download to laptop
  5. Release WAN DHCP lease on FiOS router and immediately unplug it from the network
  6. Connect UCG-Ultra to ONT, leaving downstream equipment unplugged for now
  7. Power up UCG-Ultra, wait for display to indicate WAN connectivity(?)
  8. Adopt UCG-Ultra to UI account using app
  9. Update UCG firmware and network controller (self-hosted controller running 8.2.93 as of this writing)
  10. Plug laptop into a UCG LAN port and make sure it gets a connection
  11. Restore controller backup (TBD: use full backup or just settings backup?)
  12. Configure DHCP and DNS server settings, including IPv4 range, static leases, and local DNS names
    • To add clients: click “Client Devices” (left sidebar), then, on the next page, click the Add icon at the top right. The dialog has entries for MAC address and device alias/name, and checkboxes for “Fixed IP Address” and “Local DNS Entry”.
  13. Connect downstream network devices and make sure everything works
  14. Go to old self-hosted controller and override set-inform address for Unifi gear
    • System (left sidebar) > Advanced (tab) > “Inform Host” setting > check “override” > enter UCG-Ultra’s IP address
    • Can manually change the inform address on the APs and PoE switch by connecting to each via SSH and using the set-inform command (see the sketch after this list). However, this is not possible on the Flex Mini switch. It needs the old controller online so it can pick up the new inform IP from there. If that’s not possible, it will need to be factory reset and re-adopted.
  15. Wait for all of the Unifi gear to hopefully connect to the new controller
  16. Shut down the old controller
  17. Factory reset the FiOS router before returning to VZ
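
For step 14’s manual fallback, it should be a one-liner per device (addresses are examples; the credentials are the device SSH credentials gathered in step 1):

ssh admin@192.168.1.20                        # AP or PoE switch
set-inform http://192.168.1.1:8080/inform     # point it at the UCG’s built-in controller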

If this all goes according to plan, it hopefully won’t take too long. I’ll find out soon enough!

I’m baaaack..

A few years back, I set lpaulriddle.com up on Ubuntu Linux running on an AWS EC2 instance. It ran just fine there, but, to be honest, it was kind of a mess. I was dreading the day when I would eventually have to update it or move it somewhere else, because I didn’t document anything that I did while configuring it, and thus, it would take forever to get everything working again.

Last summer, I decided to bite the bullet and redo everything on the site to run in Docker containers. That way, I’d have a repeatable build/deploy process that I could easily move around independently of the underlying support framework, be it ECS, another EC2 instance running Docker, or whatever. It’s still a work in progress, but it’s inching closer to completion. One of the first things I did was to move the MariaDB instance that hosts this blog’s database tables into a container. This worked mostly OK: the blog still rendered just fine, and I could click around and read all of the posts the same as always. However, when I logged in at /wp-admin, it gave me a permission error, and I could not get to the dashboard. That effectively locked me out of the blog, preventing me from writing new posts, among other things.
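
The database move itself boils down to something like this (a sketch, not my exact commands; the container name, volume path, and password are placeholders):

# dump the existing database
mysqldump -u root -p wordpress > wordpress.sql

# run MariaDB in a container, keeping the data directory on a host volume
docker run -d --name blog-db \
    -e MYSQL_ROOT_PASSWORD=changeme \
    -e MYSQL_DATABASE=wordpress \
    -v /srv/blog-db:/var/lib/mysql \
    mariadb:10

# once the container is up, load the dump into it
docker exec -i blog-db mysql -uroot -pchangeme wordpress < wordpress.sql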

About 4 months later, I finally got around to fixing it. Since I planned to move WordPress into a Docker container anyhow, I decided to start over with a fresh database, and just import all of my original blog content into the new instance. The catch was that I needed to somehow get into my old instance one last time to export the data. After some searching around, I found a snippet of PHP that I could add to my theme to bypass the permissions checks. That did the trick: I finally got back in, exported the data, and brought everything back up in a new, shiny Docker container. The blog is now powered by an Nginx front-end that talks to WordPress over an FPM proxy. Fun stuff.

Now that I can post again, I’ll try to write some more as the spirit moves me. As you can imagine, 2020 has been an interesting year with some pretty big changes to my daily routine.

Cloud

For a long time, I have been running an Ubuntu desktop in my basement “office”. However, lately, I’ve been using it less and less, in favor of my laptop. It runs a local web server which hosts a private wiki that we use for household stuff (recipes, scanned documents, etc.); it also has two monitors, which occasionally comes in handy when I’m doing something that requires a lot of screen real estate. And, it runs Gnucash, my financial software of choice. But for the most part, it functions as a print and file server, and that’s about it.

A couple of months ago, I had an epiphany. It occurred to me that I don’t need this PC in the basement. I might use it one or two days a month, but for the most part, it sits there sucking up power. So, I came up with a plan:

  • Get a DVI adapter cable for my laptop so I can run it with an external monitor, which should solve my screen real estate issue.
  • Spin up an AWS EC2 t2.nano instance to run the wiki, and possibly, Gnucash.
  • Retire the PC, and get a Raspberry Pi or similar device to take over as print server and file server.

So far, I’ve got the AWS instance up and running, and moving the wiki over to it was surprisingly easy. I have to worry a little bit more about security now, as the AWS instance is available from anywhere on the Internet. My old web server was only accessible over our home LAN.

This AWS instance is the first “personal” web server I’ve ever had. Previously, I used public web space on a server hosted by my employer (a University). But, I’m trying to migrate things away from there, in an effort to separate my “work” and “personal” online identities. To that end, I’m also using the new AWS instance to host all of the content that was previously on the University’s server.

Lastly, I used to host this blog at wordpress.com, but now that I’ve got my own server, I figured there was no reason not to host my own WordPress instance. So, I moved the blog over as well.

I have to give a shout-out to Let’s Encrypt, a free online Certificate Authority. Before they came around, I would have had to shell out big bucks for an SSL certificate.

I thought Gnucash was going to be a sticking point. It’s not really a cloud-friendly app. I didn’t really want to install a full X-Windows environment on a t2.nano instance, just to have somewhere to run Gnucash. That seemed like killing a fly with a sledgehammer. Initially, I tried running it on the Mac via X11 forwarding. I set up XQuartz on the Mac, installed gnucash on the t2.nano, and tried it out. I was not happy with the performance at all. I ended up running the Mac-native version of Gnucash, and storing the data file on Dropbox. That seems to work OK, and gives me a centralized repository for the data file, while allowing me to run Gnucash on multiple Mac desktops (provided I remember to exit when I’m finished with it — it does not deal well with multiple instances accessing the data file simultaneously).
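
For the record, the X11 attempt was just SSH forwarding, letting XQuartz handle the local display (the host name is a placeholder):

ssh -X ubuntu@my-t2nano.example.com gnucash    # runs remotely, displays locally on the Mac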

Speaking of Dropbox, I’ve just started using that as well. Although there are a couple of annoying things about it, I think it’s going to work well for me. It fits in well with how I like to work (read: it works well with the shell) and also supports Linux natively, which was a must-have for me. I’ll likely write something up about Dropbox once I’ve used it for a little longer.

For now, I still have the PC sitting in the basement. I still have to buy a Raspberry Pi, install Linux on it, and set it up as a print server. It’ll also run a 3TB USB disk that I’ll use as an offline backup for my Dropbox files, as well as VMs, and other assorted things that are too large for Dropbox. Stay tuned!!

Enumerating Contract Bridge Auctions with Lisp

I’ve been a fan of Contract Bridge for a long time.  I’m really bad at it, but all the same, I find it fascinating and compelling.  One of the interesting facts about Bridge is the astronomical number of possible auctions.  For any given deal, there are over 128 × 10^45 possible auctions. The exact number is:

128,745,650,347,030,683,120,231,926,111,609,371,363,122,697,557
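
In case you’re wondering where that number comes from: an auction consists of some subset of the 35 bids in increasing order, with 4 choices of passes before the first bid, 21 ways to fill each gap between consecutive bids (7 double/redouble patterns times 3 pass patterns), and 7 patterns after the final bid, plus the lone passed-out auction.  In closed form:

1 + Σ_{n=1}^{35} C(35,n) · 4 · 21^(n-1) · 7  =  1 + (4/3) · (22^35 - 1)  ≈  1.287 × 10^47

(The 4 pass prefixes, the 7-row double/redouble matrix, and the 3 continuation prefixes all show up in the program below.)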

That’s a lot!  Being the nerd that I am, I decided to dust off my LISP skills (mostly neglected since college) and write a program to enumerate them.  To wit:

(defparameter allbids
      '("1C" "1D" "1H" "1S" "1NT"
        "2C" "2D" "2H" "2S" "2NT"
        "3C" "3D" "3H" "3S" "3NT"
        "4C" "4D" "4H" "4S" "4NT"
        "5C" "5D" "5H" "5S" "5NT"
        "6C" "6D" "6H" "6S" "6NT"
        "7C" "7D" "7H" "7S" "7NT"))

(defun printMore (bidList bidStr)
  (if (null bidList) nil
    (progn
      (mapcar #'(lambda (p)
                  (printAuctions bidList (concatenate 'string bidStr p (car bidList))))
              '(" " " P " " P P "))
      (printMore (cdr bidList) bidStr))))

(defun printAuctions (bidList bidStr)
  (let* ((matrix
          '(nil
            " P P Dbl"
            " P P Dbl Rdbl"
            " P P Dbl P P Rdbl"
            " Dbl"
            " Dbl Rdbl"
            " Dbl P P Rdbl"))
         (bidMatrix (mapcar #'(lambda (x)
                                (concatenate 'string bidStr x)) matrix)))
    (dolist (x bidMatrix)
      (print (concatenate 'string x " P P P"))
      (printMore (cdr bidList) x))))

(defun printSomeAuctions (startBid &optional (prefix nil))
  (let ((bidPos (position startBid allbids :test #'equal)))
    (if bidPos
        (let ((bidList (nthcdr bidPos allbids)))
          (printAuctions bidList (concatenate 'string prefix (car bidList)))))))

(defun printAllAuctions ()
  (progn
    (print "P P P P")
    (mapcar #'(lambda (p)
                (printSomeAuctions "1C" p))
            '(nil "P " "P P " "P P P "))))

(printAllAuctions) will iterate through and print out every possible Bridge auction.  Don’t hold your breath waiting for it to finish, though.  The computer I was using, a Linux box running CLISP, printed out around 14,000 auctions per second.  At that rate, it will take 291.4 × 10^33 years to complete.  That’s over 21 septillion (21 × 10^24) times the age of the known universe.
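
For the record, the arithmetic behind that estimate:

1.287 × 10^47 auctions ÷ (1.4 × 10^4 auctions/sec × 3.15 × 10^7 sec/year) ≈ 2.9 × 10^35 years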

ZyXEL WAP3205 – Not Recommended

Last Fall, I got it into my head that I needed to upgrade my home network’s wireless access point (WAP).  I’d been using an old, but trustworthy, Netgear WG602V2 since around 2001-2002, and while it worked, I was hoping to get something with a bit more range, that supported 802.11n and various newer features.  I decided to try out the ZyXEL WAP3205.

The ZyXEL started out OK, although it did not seem like much of an upgrade over the Netgear.  The range and data throughput weren’t noticeably better.  The problems started after a few months, when I upgraded my MacBook Pro to Mountain Lion.  When I woke my laptop from sleep mode, the wi-fi would no longer automatically re-connect.  I had to manually re-join the network every time.  A pain, but not a show stopper.

The next problem started when I began playing around with AirPlay/AirPrint, both of which use Apple’s Bonjour service, which uses multicasting.  With the ZyXEL, Bonjour was flaky at best: sometimes it worked, sometimes it didn’t.  I couldn’t figure out any rhyme or reason to it, other than that the WAP was definitely the culprit, as Bonjour services worked fine over wired connections.

I read on a web site somewhere that the latest firmware on the WAP3205 addressed some issues with Bonjour.  I was skeptical, because the firmware release notes didn’t mention anything about Bonjour, but I went ahead and updated anyway.  This turned out to be a disaster.  Not only did the new firmware not fix the Bonjour issues, it also messed up the networking on the WAP somehow.  After upgrading, the wired ethernet interface on the WAP started randomly freezing up.  The wireless was still active, but the WAP stopped responding to pings.  This happened a couple of times.  Another time, the interface stayed up for several hours, then froze up my entire LAN.  None of my wired devices could connect to anything else on the LAN.  When I unplugged the WAP3205, LAN connectivity instantly came back.  Word of warning to WAP3205 owners: don’t install firmware version 1.00(BFR.7)C0 (released November 2012).  This is the version that caused the instability with the LAN interface.  I’d recommend waiting until a newer firmware revision is released before updating.  Caveat Emptor.

After the LAN freeze-up, I ditched the WAP3205 and went back to my old Netgear.  With the Netgear, Bonjour works great, I’m able to use AirPlay/AirPrint without any issues, and when my laptop wakes from sleep, the wi-fi reconnects without any problems.  The Netgear isn’t perfect, though.  I’m not able to get AirPlay mirroring working.  The mirroring starts up and works for a few seconds, but then it shuts itself off.  I had the same issue with the ZyXEL, so I’m not sure if the WAP is to blame for this or not.  Searching the net hasn’t turned up a good explanation for this behavior so far, but I’m going to keep looking for a fix.

In short: If you need a reliable wi-fi access point that works with Bonjour, stay away from the ZyXEL WAP3205!

Another GIMP Trick

Recently, I had occasion to convert a few shapes extracted from a flash movie to PNG format.  I used the excellent swftools suite to extract the shapes from the movie, and then I used  Gnash to render the shapes and save PNG format screen shots. This works great, but unfortunately, the resulting image is missing the alpha channel, and its background is white.  I wanted a way to restore the shape’s transparent background.

One easy way to restore transparency is to use GIMP to select all the white background pixels and “erase” them to make them transparent.  Unfortunately, that’s not quite good enough.  That’s because anti-aliased images have “semi-transparent” pixels around the edges, which show the white background underneath.  If you just erase the white pixels, the semi-transparent pixels will leave artifacts around the image:

The above image is shown on a black background to highlight the problem.  Note the white artifacts around the edge of the circle.

To truly restore transparency and get rid of the artifacts, we need two images, one on a white background, and another on a black background.  Then we can compare the images and average out the differences between the semi-transparent areas, thereby eliminating the artifacts.  For flash shapes, it’s relatively easy to generate a container movie that displays the shape on a black background.  You can do it with the “swfc” utility provided with swftools, and a script like this:

.flash filename="blackbg.swf" bbox=autocrop
   .swf temp "shape.swf"
   .put foo=temp
.end

Load the two images into GIMP using the “Open as Layers” dialog from the File menu.  Then duplicate each layer so that you have two copies of each image.  Order the layers so that the 2 layers with black backgrounds are on top of the white layers:

For clarity, I’ve renamed the layers according to their background colors.  Next, you want to hide “black” and “white” and select “black copy”.  Then set the opacity of “black copy” to 50.  The resulting image should be on a gray background, representing the average between black and white:

Now, merge the visible layers together (right-click on “black copy” and select “merge down”) to create a single layer containing the averaged background.  Move this layer to the top:

Now, we want to find the differences between the black and white layers and use this to create a layer mask, which we’ll paste over the averaged layer.  Hide “average” and show “black” and “white”.  Select “black”, click on the “Mode” drop-down box, and select “Difference.”  The result should look something like this:

The amount of white corresponds to how much the two images differ.  The gray areas correspond to the anti-aliased pixels along the edge of the circle.

Now we’ll use this image to apply transparency to the top, averaged layer.  Press Ctrl-A to select the image, then Edit – Copy Visible (or Shift-Ctrl-C).  It’s important to “Copy Visible” and not just “Copy”, so we get the visual representation of the differences between the two layers.  Otherwise it’ll only copy the active layer.

Hide the two bottom layers, so only the top “average” layer is visible.  On the Layers dialog, right-click the top layer and select “Add Layer Mask.”  Select the first option to initialize the mask to white (full opacity), and click “Add.”

Make sure the top layer is selected.  Right-click on it in the layers dialog again and ensure that “Edit Layer Mask” is checked.  Then, paste the clipboard into the layer mask with Ctrl-V or Edit – Paste.  Finally, invert the layer mask with Colors – Invert.

Here’s the result, shown on a red background to illustrate that the artifacts are gone.

And there you have it.  Hopefully someone will find this useful!

Update…  I found myself having to do this with a very large number of images.  After spending a couple mind-numbing hours doing repetitive operations with GIMP, I figured out a way to script this using ImageMagick:

# produce averaged image
convert black.png -channel a -evaluate set 50% out$$.png
convert white.png out$$.png -flatten avg$$.png
rm out$$.png

# generate alpha mask
composite -compose difference white.png black.png out$$.png
convert -negate out$$.png mask$$.png
rm out$$.png

# apply mask to averaged image
composite mask$$.png -alpha off -compose Copy_Opacity avg$$.png output.png
rm mask$$.png avg$$.png

This works great, and looks to be a huge time saver.
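
To chew through a whole directory of image pairs, the same steps wrap easily in a loop.  A sketch, assuming the pairs are named like foo-white.png and foo-black.png (the naming convention is mine):

#!/bin/sh
for w in *-white.png; do
    b="${w%-white.png}-black.png"
    out="${w%-white.png}.png"

    # produce averaged image
    convert "$b" -channel a -evaluate set 50% out$$.png
    convert "$w" out$$.png -flatten avg$$.png
    rm out$$.png

    # generate alpha mask
    composite -compose difference "$w" "$b" out$$.png
    convert -negate out$$.png mask$$.png
    rm out$$.png

    # apply mask to averaged image
    composite mask$$.png -alpha off -compose Copy_Opacity avg$$.png "$out"
    rm mask$$.png avg$$.png
done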

Text Effects with GIMP

As part of my fledgling hobby/future side career doing game development for the iPhone, I’m becoming sort of an inadvertent GIMP expert.  I’m not a graphic artist, and I don’t do any original artwork for the games I code.  However, I often need to edit and re-touch existing artwork, which is where GIMP really shines.

One of my games has a nice, eye-catching title logo:

Hurry Up Bob! Logo

This logo came to me as a PNG image.  I wanted to add some extra text with the same look, so I decided to try to mimic it with GIMP.  Most of my GIMP knowledge comes from reading tutorials on the net, so I figured I’d “give back” and share how I did it.

The first step was to install the font in GIMP.  The font used here is “Addled Thin.”  I looked online and found a .ttf for the font, dropped it into GIMP’s fonts directory, and restarted GIMP.

Next, I created a text layer with the text I wanted.  The text size is 96px.  To set the text color, I used the color picker tool and selected the foreground color of the text, which is #FBAE5C in RGB notation.

Next, create the brown outline around the text.  Use the select by color tool to select the text, then choose Select » Grow.  Grow the selection by 5 pixels and click “OK”.  Then create a new layer and order it so it’s below the text layer.  Go back to the color picker and select the brown outline color from the original image (#5F3813).  Select the new layer and choose the bucket fill tool.  On the tool options, select the radio button to “Fill whole selection.”  Fill the enlarged selection with the new color.  This should give you outlined text:

Outlined text

Now move the text layer up relative to the outline, to create an offset look.  I moved it up 2 pixels.

Outline with offset

Now, we want to repeat this drill to create the black outer border.  Hopefully, you still have the original enlarged outline selection active.  Grow this selection by another 5 pixels, create a third layer, fill it with the dark outer border color (#14100D), and offset it by 2 pixels relative to the other two layers.

Dual offset border

Starting to look pretty good.  Next we want to use GIMP’s built-in drop shadow effect to create a shadow.  Before doing this, merge all of the layers together by choosing Image » Merge Visible Layers (or Ctrl-M).  Then choose Filters » Light and Shadow » Drop Shadow.  I set “Offset X” to 5, “Offset Y” to 5, “Blur Radius” to 5, and left the color as black and the opacity at 80.

Drop Shadow

Finally, add in the coarse gradient effect from the original text.  To do this, I selected a chunk of the gradient from one of the lowercase ‘r’s on the original, and copied it to the clipboard.  Then I used the Select by Color tool to select the original text again, and did Edit » Paste Into several times to recreate the gradient inside the selected text.

Text with gradient and shadow

One thing to note:  if you look at the original text, the words are all rotated at various angles, but the gradient is always horizontal.  If you want the new text rotated, you’ll want to rotate it before adding the gradient.

And there you have it:  A pretty close approximation of the original text effect.  Here it is pasted into the game artwork:

Finished artwork

Moving Ubuntu to a New Hard Drive

Well, I guess it had to happen some time…  the system disk on my home Ubuntu server started going south recently.  Just a few errors here and there, but once it starts, it only gets worse.  So I thought I’d write down the steps I took to move my system to a new disk, partly for my own reference, and partly in hopes that someone else will find it useful.

First, grab a copy of a “live boot” Linux distro that will run off a CD.  I use Knoppix, but there are others available too.  Attach the new disk to the system, boot off the CD, and mount both the old and new disks.  Make sure the old disk is mounted with the ‘errors=continue’ option so that it’ll keep going when errors are encountered.
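
For example (device names are illustrative; adjust for your disks):

mkdir -p /oldroot /newroot
mount -o ro,errors=continue /dev/sda1 /oldroot   # failing disk; keep reading past errors
mount /dev/sdb1 /newroot                         # new disk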

Use “tar” to copy the root filesystem from the old drive to the new.  Example:

cd /oldroot

tar cvpf - . | (cd /newroot; tar xpf -)

You might want to capture the output of the tar command as well, so you can go back over it and see where it found errors on the old disk.  That way you get an idea if any important files might be corrupted.
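
With GNU tar, the file listing goes to stderr when the archive is written to stdout, so one way to capture it (along with any error messages) is:

tar cvpf - . 2>/tmp/tar-copy.log | (cd /newroot; tar xpf -)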

When the tar command completes, make sure you have a more-or-less complete copy of the old root filesystem.  An easy way to do this is to run ‘df’ and make sure both copies take up roughly the same amount of disk space.

If your old disk has multiple partitions, you’ll want to copy these as well.

Shut down, remove the old disk, and jumper the new one (if necessary) so it appears as the same device.  Reboot into Knoppix and re-mount the new disk.

Install the grub boot loader on the new disk:

/sbin/grub-install --root-directory=/newroot /dev/sda

Some Linux versions refer to disks by UUID rather than device name.  If this is the case, you’ll need to go into /etc/fstab and /boot/grub/menu.lst and change the UUID to reference the new disk.  You can find the new disk’s UUID by looking in /dev/disk/by-uuid.
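
For example:

ls -l /dev/disk/by-uuid    # symlinks map each UUID to its device name
blkid /dev/sda1            # or query the new partition directly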

My old disk had a swap partition, and I didn’t create one on the new disk.  Instead, I commented the swap partition out in /etc/fstab, booted the system without swap initially, then created a swap area on the filesystem:

dd if=/dev/zero of=/swap bs=1024 count=4194304

mkswap /swap

swapon /swap

This gave me a 4-gig swap area.  To automatically add it at boot time, add to /etc/fstab:

/swap swap swap defaults 0 0

I’m sure I’ve left something out somewhere, but that’s the general idea.

Perl rocks

I’m doing a bit of tidying-up of my online music library for consistency..  editing tags, renaming files, that kind of thing.  My library consists mainly of FLAC files ripped from my CD collection.  My music player of choice on my Ubuntu box is Banshee.  Banshee has an “Edit Metadata” feature which looks very handy on the surface, but it appears to have a bug where it doesn’t actually save the metadata edits back to the file.  It does, however, update the metadata in Banshee’s internal database, so in Banshee, it appears that the changes have “taken”, but when I play the music files elsewhere it becomes apparent that the changes haven’t been saved out to the files.  Of course, I didn’t discover this problem until I had made edits to 250 files or so.  Nothing against Banshee here of course..  it’s free and no warranty was ever implied or expected.  But, I did have some files to fix.

Fortunately, as I mentioned earlier, the edits I made were saved in Banshee’s internal SQLite database.  So all I really needed to do was whip something up to compare the database with the actual tags in the files.  First, I dumped Banshee’s SQLite database to a flat file:

sqlite3 banshee.db 'select * from Tracks'

Then I wrote a quick Perl script that extracted the FLAC tags from each of the files in the database dump and compared them to the corresponding fields in the SQLite table:

#!/usr/bin/perl

use URI::Escape;
use Audio::FLAC::Header;

print "Key\tPath\tFLAC tag\tDB tag\n";
while (<>) {
    chomp;
    my %dbTags = ();
    my($path);
    ($path, $dbTags{ARTIST}, $dbTags{ALBUM}, $dbTags{TITLE},
     $dbTags{GENRE}, $dbTags{DATE}) =
        (split(/\|/))[1, 3, 5, 9, 10, 11];
    next unless ($path =~ /^file:\/\//);
    next unless ($path =~ /\.flac$/);
    next if ($path =~ /\/untagged\//);

    $path =~ s/^file:\/\///;
    $path = uri_unescape($path);
    if (! -f $path) {
        print STDERR "Can't find $path\n";
        next;
    }

    my $flac = Audio::FLAC::Header->new($path);
    my $tags = $flac->tags();

    # Strip extra date info..
    $tags->{DATE} = substr($tags->{DATE}, 0, 4);

    for (keys %dbTags) {
        if ($tags->{$_} ne $dbTags{$_}) {
            print "$_\t$path\t$tags->{$_}\t$dbTags{$_}\n";
        }
    }
}

exit 0;

This script outputs a tab-separated file that shows all of the discrepancies, suitable for loading into your spreadsheet of choice.  I only had to make a small number of edits, so I made the changes manually with EasyTag.  But if I had wanted to take this farther, I could have had the Audio::FLAC::Header module actually save the corrections to the files.
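
For reference, the whole pipeline is a one-liner (the script name is mine; sqlite3’s default column separator is the ‘|’ that the script splits on):

sqlite3 banshee.db 'select * from Tracks' | ./check_tags.pl > tag-diffs.tsv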

Yet another reason to love Perl.