MySQL Replication

I’m about done with the computer shuffling I started a month or so ago. I have a 300GB drive at work and a 120GB drive at home. The idea is to replicate things like the MP3 collection, photos, and system backups in both places, to guard against losing data to a disk crash.

The next challenge is to set up an mp3act server at home that replicates the one at work. I’m going to try doing this with MySQL replication. The idea here is to replicate the mostly-static music library tables, with work being the master and home being the slave. Then, each site would have its own copies of the dynamic tables like playlists and playlist history. This lets me use one-way replication and avoid setting up a dual master/slave configuration, which I don’t think would work well with the dynamic tables, particularly with respect to simultaneous access.

Yesterday and this morning I took the first steps toward doing this, which involved locking down my MySQL server at home, then setting it up to allow remote TCP connections (by commenting out the bind-address option in my.cnf). Then I needed to create a slave user, which I did with:

GRANT REPLICATION SLAVE ON *.* TO slave@homedsl IDENTIFIED BY 'password'

The grant command will create the user if it doesn’t already exist.

Then, I needed to edit /etc/mysql/my.cnf on both the master and the slave, to give each one a unique server ID. I gave the master an ID of 1 and the slave an ID of 2. This goes in the [mysqld] section of the config file, with a line like this:

server-id=1
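Putting it together, the relevant my.cnf fragments on the two boxes end up looking roughly like this. (A sketch, not my exact files; note that the master also needs binary logging enabled, since replication works by shipping the master’s binary log to the slave.)

```ini
# master: /etc/mysql/my.cnf, [mysqld] section
server-id = 1
log-bin   = /var/log/mysql/mysql-bin.log   # the slave replays this binary log

# slave: /etc/mysql/my.cnf, [mysqld] section
server-id = 2
```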

Next, I followed the instructions in the MySQL documentation to lock the tables on the master and create tar archives of each of the databases I wanted to replicate (on my Debian box, each database has its own directory under /var/lib/mysql). I then untarred the databases into /var/lib/mysql on the slave. For each database I was copying, I added a line like this to the slave’s config file:

replicate-do-db=database-name

I found that if I wasn’t replicating all the databases from the master, the replication would fail unless I explicitly listed the databases like this. The slave expects the tables it’s replicating to already be present — it does not “automagically” create tables as part of the replication process. I wasn’t really clear on this point after reading the documentation; it was only clear after I actually tried it.
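For reference, the master-side snapshot steps amount to something like this (the database name is a placeholder; note that the read lock is only held while this client session stays open, so don’t exit mysql until the copy is done):

```sql
-- in a mysql session that stays open while the files are copied:
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- note the binlog File and Position for the slave
-- (in another shell: tar up /var/lib/mysql/<dbname> and copy it to the slave)
UNLOCK TABLES;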

With the tables copied over and the configurations tweaked accordingly, I followed the docs to record the master’s ‘state’ info, point the slave at the master, and start the replication threads on the slave. This all worked as expected. All in all, it wasn’t too hard to set up, so I’ll see how well it works going forward.
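Those final slave-side steps, sketched out (the hostname and binlog coordinates here are placeholders — use the values recorded from SHOW MASTER STATUS on the master):

```sql
-- on the slave:
CHANGE MASTER TO
    MASTER_HOST='work.example.com',     -- placeholder hostname
    MASTER_USER='slave',
    MASTER_PASSWORD='password',
    MASTER_LOG_FILE='mysql-bin.000001', -- placeholder coordinates
    MASTER_LOG_POS=98;
START SLAVE;
SHOW SLAVE STATUS\G   -- Slave_IO_Running and Slave_SQL_Running should both say Yes
```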

Whew… what a day

Thursday was quite the day of triumphs and tribulations.

It all started out with a successful swap-out of my home Linux server. It had been running on an old, trusty-but-tired Dell P2-300mhz, and I upgraded it to a slightly-less-old Dell P3-450mhz (Linux is far from perfect, but it truly shines at getting new life out of old, scavenged hardware). The upgrade was as easy as: build a new kernel, shut the old box down, pull out all the boards and peripherals, put the stuff in the new box, and boot the new box up. The result was a home server with a 50% faster processor and an extra 256mb RAM (640mb total vs 384mb). Not earth shattering, but a noticeable improvement, and it was easy to do. The trick to doing this is to transplant as much of the “guts” from the old box to the new box as possible, so the hardware configuration stays mostly the same.

Next up was the launch of our new myUMBC portal, which so far has been very smooth, other than the usual little niggling problems that always pop up with these things. We had already been running uPortal in production for six months, and that experience definitely helped ease the transition. The centerpiece of this launch was a complete redesign of the UI, but behind the scenes we also upgraded the framework and layout manager. This gave us an opportunity to start out “clean” with a fresh set of database tables, and to redo a few things which were done sloppily with the initial launch (such as our PAGS group and channel hierarchies). It positions us very well for future releases and gives us a clean platform on which to build future improvements.

Of course, by Murphy’s Law, the air conditioning in ECS picked yesterday to go on the fritz. So the launch was a little sweaty, but it happened anyhow. When I left the office, the A/C was finally getting back to normal, and then I get home and our power goes out. It ended up being out for around 4 hours, from 7:30pm till 11:30pm. Not all that long, but it always seems like forever when it’s actually out, and it made for a pretty sweaty (and surly) day. And of course, BGE’s automated system never gave us an estimate of when the power would be restored, so Cathy packed up the kids to go sleep at her sister’s, and pulled out 2 minutes before the power came back on. Fortunately I was eventually able to get a decent night’s sleep. I must say I’m more than ready for this summer to end. This is the first truly miserable summer we’ve had since 2002, and I had forgotten just how bad 2002 was. Oh well… t-minus 7 weeks till the Outer Banks.

Hump Day Ramblings

I accidentally shut down my X server yesterday before I left work, so I took the opportunity to install the custom nVidia driver on my X desktop. And I must say, the proprietary driver is much nicer than the nv driver that comes with X.org. Someone at nVidia has put a lot of work into making these cards work well with Linux. The installer script produced a working config file that started right up, with acceleration enabled to boot.

The nVidia driver has a built-in option called “TwinView” which provides multihead support via the VGA and DVI ports on the card. It replaces Xinerama, although supposedly Xinerama can still be used to provide the same functionality. However, TwinView seems to be the better alternative because it provides acceleration on both displays. It also adds Xinerama-compatible “hints” to the X server so that window managers will work the same as with Xinerama. It’s really very well done. So now I have a full 24-bit display across both monitors, with acceleration. Right now I’m still using the widescreen as my “main” display and the standard display as my “secondary” screen. I’m going to try it like this for a while, and if I don’t like it I’ll switch them.

The only configuration challenge was getting both displays working at their respective resolutions. I accomplished this with the following Device section:

Section "Device"
    Identifier "nVidia"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
    Option     "TwinView"
    Option     "MetaModes" "CRT-0: 1680x1050, DFP-0: 1280x1024"
    Option     "TwinViewOrientation" "RightOf"
EndSection

The MetaModes line is the important one: it tells the driver which resolution to run on each head (CRT-0 is the VGA port, DFP-0 is the DVI port).


While testing things out, I also learned something about VNC: it drops to a lower pixel depth over slow connections. To force full-color rendering, I need to do

vncviewer -FullColor host.name:display

I was initially scratching my head as to why VNC was still rendering limited colors despite the 24-bit display. That explains it, and I may just leave full-color rendering disabled to maximize performance over my slow DSL uplink.

Also, this morning I hooked a scavenged SCSI disk up to my PC at home, mainly as a proof-of-concept: The disk uses 68-pin wide SCSI, and my controller uses 50-pin narrow SCSI. However, all I needed to make it work was a 50-to-68-pin adapter cable. I just jumpered the drive to SCSI ID 1 and plugged it in. Initially I had an external terminator on the drive case, and that hosed things. When I removed the terminator, it worked. Apparently my controller wants to terminate the bus itself. At any rate, now I have a spare 72-gig drive with its own case and power supply.

And finally, what’s a summer blog entry without a mention of the pool? Last week, the backup valve on my Polaris 380 broke. The valve mechanism itself is a big mess of gears and turbines, which fits into a plastic enclosure. The enclosure is made up of two halves held together by a screw-on collar ring, with a couple of o-rings to prevent leaks. The screw-on collar piece is what actually broke, probably after the valve was dropped onto the concrete pool deck one too many times. The mechanism itself was undamaged. Fortunately, Polaris sells a replacement case kit separately, and it’s much less expensive than an entire valve. The annoying thing is, the case kit only includes one of the two necessary o-rings. The included o-ring seals the two case halves together. The other one seals the backup jet to the case. It’s “technically” not part of the case, but if I’ve got the valve disassembled anyhow, I might as well replace both o-rings as they’re both probably worn out. It’s a small o-ring (1/2″ O.D. x 3/8″ I.D.) and it would have been nice if Polaris had seen fit to throw one in with the case kit. Oh well. For future reference, I found a replacement o-ring at Home Depot, in the plumbing section where they have the faucet repair kits.

Well, I guess I should get to work now.

Success with Dell widescreen monitor and X.Org

What a difference a video card makes…

A couple months back I tried getting a Dell 2007WFP working with X, using the Intel graphics controller built into my motherboard. I didn’t have much luck. Well, now I have a new video card, with an nVidia GeForce MX 4400 chipset. And, all I had to do was specify the card, driver and resolution (1680×1050) in my xorg.conf, and it worked right away. Great. Still more work than it was with my Mac (where I just plugged it in and it worked), but not bad at all, especially considering that this is X.
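For the record, “specify the card, driver and resolution” boils down to xorg.conf entries along these lines (a sketch, not my exact file; the identifiers are my own, and this uses the stock nv driver):

```
Section "Device"
    Identifier "GeForce"
    Driver     "nv"
EndSection

Section "Screen"
    Identifier   "Screen0"
    Device       "GeForce"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1680x1050"
    EndSubSection
EndSection
```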

Still a few things to be resolved:

  1. The new card is only working through the VGA port. I need to figure out what magic incantations are required to use DVI.
  2. Apparently I can’t use the onboard graphics controller and the new (AGP) card at the same time. As soon as I plugged the new card in, it was as if the onboard controller didn’t exist. So, I’m still stuck using the old Matrox PCI card for my other monitor. The Matrox only has 8mb of video memory, which limits me to a 16-bit display depth. The MX 4400 apparently supports multiheading via its DVI and VGA ports, so I’ll have to see if I can do this with X.
  3. The widescreen monitor is vertically smaller than my other standard-resolution flat panels. I’m getting a little squinty-eyed staring at it. I think I may make it the “secondary” display and use a physically larger screen as my “primary” display.

But all in all, I’m really happy to have the 1680×1050 working, and without much extra fiddling or hair-pulling.

On another note, I’ve restored sonata to a new drive (after its old drive crashed). I lost a few files before I backed the dying drive up. The system works fine, but fonts in some applications are screwy (screwy fonts in X is a relative term, of course…). So, I may end up reinstalling the machine anyhow…

Followup… looks like I may need to use nVidia’s driver (instead of the built-in nv driver) to get multihead support. I’ll try that out later this week.

Crash.

Well, the hard drive in sonata, my desktop machine at work, is dying a slow death. I saw the writing on the wall a few months ago, and now it’s finally biting the big one. I’m typing on the box now, but it’s slowly getting wonkier and wonkier as it churns out ever more I/O and read errors.

A new drive is forthcoming, but in the meantime, I’ve managed to back up what’s left of the old drive, so I can restore it onto the new one. And, I’ve found that when it comes to archiving drives with errors, cpio beats tar hands down. I started out doing

cd /filesystem
tar cvplf - . |
ssh other-box "cd /backup/disk; tar xpf -"

But, it dies as soon as it hits a bad spot.

I ended up doing

cd /filesystem
find . -xdev -depth -print |
cpio -o --verbose -H newc |
ssh other-box "cd /backup/disk; cpio -id -H newc"

The -H newc option is needed to back up files with inode numbers greater than 65535.

cpio’s command-line options are even more obfuscated than tar’s, but it seems to do a better job at disaster recovery.

Fun fun…

FP using new concerto

I’ve got my “new” P3-750 box booted up as concerto.ucs.umbc.edu. This post will hopefully confirm that b2evolution is working right. Things are looking pretty good. My calendar and photo album appear to be working. I set up my Oracle Calendar download stuff and it looks good. The RRBC mailing list is back up and running.

The new box actually has slightly less RAM than the old one: 512M vs 576M. The old box will be going home to replace my old 300mhz P2 server, and I decided that I could use the extra RAM there more than here. So, the home machine gets 640M, and concerto gets 512M.

Next, I need to do some memory, disk and video-card shuffling amongst the machines, so off I go to shut everything down…

sshdfilter config

I’m beginning to think I need to set up a wiki for this stuff… but later.

I’m trying to get sshdfilter up and running on my new Debian box, and of course I didn’t document the process when I did it on 3 previous machines a while back, so here goes.

  1. Install sshdfilter script in /usr/local/sbin
  2. Edit /etc/init.d/ssh. Look for two lines that look something like

    start-stop-daemon --start [...] /usr/sbin/sshd -- $SSHD_OPTS

    Replace them with

    start-stop-daemon --start --quiet --exec /usr/local/sbin/sshdfilter -- $SSHD_OPTS &

    Don’t forget the trailing ampersand!

  3. Create an executable file /usr/local/etc/iptables.sh:


    #!/bin/sh
    modprobe ip_tables
    iptables -N SSHD
    iptables -A INPUT -p tcp -m tcp --dport 22 -j SSHD
    exit 0

  4. Modify /etc/network/interfaces. Under interface eth0, add the following line:

    pre-up /usr/local/etc/iptables.sh

And that should do it.

Reinstalling ‘grub’ boot loader

OK. Documenting this here for the next time I have to do this.

I’m working on setting up a computer that dual-boots into Linux and XP.

Rule 1: Always put Windows on the first primary partition on the drive. Linux can go pretty much anywhere else.

Rule 2: Always install Windows first, then Linux, so the boot loader will get set up properly. I knew this, but chose to do things the other way around anyhow (yep, I’m stupid that way). And of course, the XP install hosed the Linux boot loader, so I had to manually restore it, which was a big pain.

Here’s how I reinstalled the boot loader, for the next time I ignore my own advice…

  1. Boot into the Debian netinst CD, or Knoppix, or Tom’s Root Boot, or whatever flavor of standalone Linux you prefer.

    With netinst, you’ll need to walk through the install process until it gets to the disk partitioning part (this ensures that the disk devices are loaded). Then, hit ALT-F2 to get a shell.

  2. Create a mount point, say /disk, and mount your root filesystem there. Example: mount /dev/hda5 /disk
  3. chroot /disk
  4. Mount any additional filesystems you might need, like /boot, etc.
  5. grub-install /dev/hda

That’s all I needed to do, but it took several unsuccessful attempts to arrive at this.


Lots of fun stuff

A smorgasbord of various topics today.

I biked in for the first time since 6/21 today. A week of bad weather at the end of June, followed by a 5-day Independence Day weekend, then more unsettled weather the following week, all combined to keep me off the bike for awhile. Our boiler job starts tomorrow, which will potentially affect later rides this week, so I figured today was do-or-die if I’m going to get back into a routine. So, I did.

I also signed up for online bill-payment through our brokerage, now that they’ve kindly made it free. I haven’t used a bill payment service since the late 90s, and I’ve heard they’ve come a long way. I hope to try it out later in the week — I need to wait for some material to arrive snail-mail first.

And, lastly, I’m going to do a bit of computer shuffling..


Currently, I have

Name           Location  CPU        OS          RAM    Disk   Use
sonata         office    P4 2.4ghz  Linux       512mb  150gb  Desktop
concerto       office    P3 450mhz  Linux       576mb  8gb    Server
doze           office    P3 700mhz  Windows XP  384mb  20gb   Windows Desktop
snorkelwacker  home      P2 300mhz  Linux       384mb  16gb   Server

Now that I’m running Remedy on a centrally-maintained Windows 2003 server, and I’ve switched from SQL Navigator to Oracle SQL Developer, I no longer need a full-time Windows desktop in my office. I actually will only need Windows when I’m watching my son, so he can play games. So, I’m thinking I’ll take the 700mhz box, add some memory to it, and make it my office server box. It’ll run Linux full time and dual-boot into XP on the rare occasion that Michael is here. Then I’ll take the 450mhz box home, make it my home server, and put my ancient 300mhz box out to pasture. That will buy me a bit of extra performance at home, and more memory (the 300mhz box is maxed out at 384mb).

For the future, I’m moving away from Linux on the desktop in favor of OS X. So, my next new desktop computer will most likely be a Mac, which will then free up the 2.4ghz box. Then, the bubble-down process will begin again. Fun fun!

Dell Widescreen Monitor + Linux: Utter & Complete Frustration

I just wasted an entire day trying to get a new Dell 2007WFP working with my 3-year-old Dell GX270 desktop, which runs Debian. The result: total failure. It works at 1280×1024 (and looks all stretched out), but flatly refuses to work at the monitor’s optimal resolution of 1680×1050.

In the xorg.conf file, first I tried just adding a new “Screen” resolution for 1680×1050. This failed with “no such mode” or somesuch. After much head scratching and googling around, I learned that I needed to patch my video BIOS (Intel 810 series) to add the new mode, which I did via a utility called 915resolution. That part seems to work.

With 1680×1050 programmed into the BIOS, I then tried several different ModeLine entries in the config file, with several distinct modes of failure. At some settings, X would refuse to use the mode, complaining about “vrefresh out of range” or something similar. At others, when I started X, the monitor would go into power-save mode as if it weren’t receiving a signal. At still others, the monitor would display a message saying the signal was out of range and it couldn’t use it. I fiddled with turning DDC on and off, to no effect. Everything I tried resulted in one of these three failure modes. I’m pretty sure I’ve got good ModeLine numbers, as they match what’s reported by DDC in the X.org log.
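For reference, this is the sort of thing I was trying. The BIOS mode number varies from machine to machine, and the modeline is a standard CVT reduced-blanking timing for 1680×1050 @ 60 Hz — treat the exact numbers as an example, not gospel:

```
# overwrite BIOS mode 0x5c with 1680x1050 (the mode number is machine-specific;
# run 915resolution -l first to list the modes your BIOS actually has)
915resolution 5c 1680 1050

# and in the xorg.conf Monitor section (CVT reduced-blanking, 60 Hz):
Modeline "1680x1050" 119.00 1680 1728 1760 1840 1050 1053 1059 1080 +HSync -VSync
```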

I’m pretty much at my wit’s end here, so I’ve given up for now and am back to using my old dual-monitor xinerama setup. Needless to say, I’m not too high on the whole Linux/X desktop scene right now. I’m ready to punt the thing and go to a Mac.