Saturday, March 24, 2012
building the command-line reliably from user provided input
Back to the programmer who made the wrong assumption that surnames don't contain spaces. Our friend is trying to process a dataset of names by calling a utility for each one and he's using shell scripts. His first attempt didn't go so well.
name=`get_next_surname`
process_surname $name
The process utility expects just one argument--a surname. But when get_next_surname returned Mozes Kriebel, the shell's word splitting turned this into two arguments: "Mozes" and "Kriebel".
The programmer would soon overcome this problem by adding double quotes to prevent word splitting:
name=`get_next_surname`
process_surname "$name"
That will do for today's lesson. But wait, there is more!
Let's assume that the process_surname utility is actually richer in functionality, and may accept some options with arguments.
if [ "$name" != "$origname" ]; then
option="--nee $origname"
fi
process_surname $option "$name"
See what happened? If not, here's a short run-down. The first line checks whether the name differs from the original name (assume, for the sake of the example, that a name change could happen through marriage). Here the double quotes are required, as in the original script, to defend against word splitting: if $name were unquoted, the test would read 'Mozes Kriebel != ...', which is a syntax error. The second line sets the --nee option, to pass to the processing utility later on. This is also a quoted string.
The catch is in the final line. Observant readers will spot the lack of quotes around $option. This is not a mistake! If $option were quoted, it would be passed to process_surname in its entirety as $1, i.e. including the space following '--nee' and the original surname after that. If the utility scans its arguments looking for an exact match of '--nee', it won't find it. So we need the shell's word splitting to separate '--nee' from what comes after it.
The problem is now clear. If $origname happens to be 'Jemig de Pemig' there seems to be no way to preserve the spaces on passing it as an argument to --nee.
I won't dwell on my journey along the Path of Many Misconceptions About the Shell, but I will show you just about the simplest way to do this generally.
set --
if [ "$name" != "$origname" ]; then
set -- --nee "$origname"
fi
process_surname "$@" "$name"
This is one of the few times I found a use for setting the positional parameters. The magic bit is in the use of "$@", which expands to the positional parameters, with quotes around each individual parameter. There is no other construct in the shell that does this. The set on line 3 made $1 equal to "--nee" and $2 equal to "Jemig de Pemig". The last line is then equivalent to
process_surname "--nee" "Jemig de Pemig" "Mozes Kriebel"
which is exactly what we need.
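To see the mechanism in isolation, here is a self-contained sketch. Since process_surname is a made-up utility, a stand-in function (count_args, my own invention) prints each argument on its own line, which makes the shell's word splitting visible:

```shell
#!/bin/sh
# Stand-in for the hypothetical process_surname utility: it prints each
# argument between angle brackets on its own line, so we can see exactly
# how the shell split the command line.
count_args() {
    for arg in "$@"; do
        printf '<%s>\n' "$arg"
    done
}

name="Mozes Kriebel"
origname="Jemig de Pemig"

set --                              # clear the positional parameters
if [ "$name" != "$origname" ]; then
    set -- --nee "$origname"        # $1='--nee', $2='Jemig de Pemig'
fi
count_args "$@" "$name"
# prints:
# <--nee>
# <Jemig de Pemig>
# <Mozes Kriebel>
```

With an unchanged name, "$@" expands to nothing at all, and count_args receives just the surname; the quoted "$@" never produces a stray empty argument.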
Thursday, July 21, 2011
CREAM did it, using bugs in path length constraints, in OpenSSL/Globus
Wednesday, May 11, 2011
notes from a dirty system installation
But Sven asked me to transfer a system from a virtual machine to a physical box, for some reason that I won't mention here now. He would provide me with the (small) disk image of the virtual machine, and the physical box was something old that had already seen some use and was hooked up with the network.
I quickly realised that this was going to be an interesting exercise. I would need to write the image to disk, which meant that I had to boot into a ramdisk of sorts. The first problem that presented itself was that the box would only do PXE boot, and as the network was not under my control I would have to involve other system administrators.
The box had a previous installation on it (Backtrack, which is Debian based), and I figured I might as well try to do everything from within this installation.
After adding my ssh key to /root/.ssh/authorized_keys (and turning off the firewall) I could get out of the noise of the machine room and work from the peaceful quiet of my office. By inspecting /proc/partitions I found out the machine had 2 disks, and Sven agreed that we should set up a (software) RAID1 mirror set.
Now the system was running from /dev/hda1, and I couldn't mess with that disk live. (You should try this some day if you feel in a particular evil mood; run dd if=/dev/zero of=/dev/sda in the background while you continue to work. Observe how the system develops amnesia, dementia and finally something close to mad cow disease.) I decided to do something dirty: I would create a RAID1 set with just one disk. The mdadm program thinks this is a bad idea, so you have to --force the issue. The command-line was mdadm --create --level=1 -n 1 --force /dev/md0 /dev/hdb1 or something. Of course I first repartitioned /dev/hdb to have just a single partition of type raid autodetect. Next step: losetup -f vmdisk.img to treat the disk image as a block device, and kpartx -a /dev/loop0 to have mapped devices for each of the partitions inside. Now a simple dd if=/dev/mapper/loop0p1 of=/dev/md0 was all I needed to write the image.
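The sequence above, condensed (a sketch from memory; the device names are specific to this box, and mdadm's exact option spelling may differ per version):

```
mdadm --create --level=1 -n 1 --force /dev/md0 /dev/hdb1  # degraded one-disk mirror
losetup -f vmdisk.img                  # expose the image as /dev/loop0
kpartx -a /dev/loop0                   # map its partitions as /dev/mapper/loop0p*
dd if=/dev/mapper/loop0p1 of=/dev/md0  # write the root partition onto the array
```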
The next step was to boot into the newly written system (a Debian 5). This involved some grubbery: after mount /dev/md0 /mnt and chroot /mnt I could navigate the system as if it were already there. I had to edit /boot/grub/menu.lst to set the root device to (hd1,0) and root=/dev/md0, and I had to install grub on the first bootable disk, which was (hd0,0). After cloning /etc/network/interfaces from the present system and setting up ssh keys again to ensure access, I rebooted with fingers crossed.
Call it luck, but it worked. I was now running the cloned system from a RAID1 root device with only one disk. I did a resize2fs /dev/md0 because the image I originally wrote to it was really small compared to the disk. Now it was /dev/hda's turn to be added to the RAID set. After repartitioning to have the same size as its counterpart (the two disks weren't the same size), I added it with mdadm --manage --add /dev/md0 /dev/hda1 which unfortunately didn't work as expected, as the new addition just became a spare.
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hda1[1](S) hdb1[0]
120053632 blocks [1/1] [U]
Notice the (S) which indicates that hda1 is a spare. It won't be used until another disk fails, but as this set unfortunately only has a single disk, a single disk failure means game over.
The final command to activate the spare was mdadm --grow --raid-devices=2 /dev/md0. This enlarges the raid set, and the spare will now be activated. Indeed, the system started to recover immediately:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hda1[2] hdb1[0]
120053632 blocks [2/1] [U_]
[>....................] recovery = 0.0% (84608/120053632) finish=70.8min speed=28202K/sec
It took a while, but eventually I had a mirrored two-disk raid set!
Wednesday, April 21, 2010
Moving to Ubuntu 10.04 (Lucid) on my Macbook 2,1
It was really Willem who set me on to this; I don't think I would have had the guts to do this to my Mac if he hadn't been adamant that it would work. And it does! It did require quite a few tweaks, though, as the Ubuntu community page testifies.
But overall I was happily running 9.10 (Karmic); I learned to live with the few quirks, like having to log off every time I came into work and attached the external monitor (it would crash the video driver if I didn't). And suspend/resume was just really slow.
Now that Lucid is just around the corner, and after my previous success with installing it on my wife's new desktop, I got feisty and tried it on the Mac. First, I used the live CD and that worked so splendidly that I just had to do the actual upgrade. It lasted all night, but in the end it worked like a charm.
So it's worth making a few notes at this point.
- The new theme did not appear the first time I booted. This was to be expected because I did an upgrade, not a fresh install. Most things should preferably stay as they were.
- The ssh-agent environment variables were gone from my terminal shells. Why? I don't know, but a somewhat related bug report suggests the use of the keychain package.
- Thunderbird 3.0 is chugging away on indexing all my mail. I could turn it off but I think I'll let it run for a while and check out the new search capability.
- Sound didn't immediately work on the live CD but this was resolved by killing the pulseaudio daemon. Sound does work after the upgrade.
- The volume and brightness buttons work too. Very nice.
- Attaching the external monitor turns off the desktop effects. This may be related to the crash bug I mentioned earlier. But you can turn them back on right after.
Friday, April 9, 2010
My new inspiron 560
I decided to go all-in and finally buy myself (and my wife) a new desktop machine. It replaces a machine that I got for a token fee, an old box that my company was going to recycle. At the time I thought it would be good enough for running Linux, even though it was slow, with a small disk and little memory; Linux is frugal, right? Well, the Linux desktop has grown up considerably in the last decade, and the combined memory footprint of Evolution, Gnome and Firefox rivals that of mainstream systems.
So the new machine which now proudly hums away on my desk (more on the humming later) is an all-black, all-shiny, DELL Inspiron 560. The cheapest DELL has to offer through their website, for just over 400 euros including delivery to my home. Even though it is cheap, it is a step up from what I had: it has a dual-core 2.7 GHz CPU, 4 GB memory, 320 GB disk. Funny detail: black was the cheapest color. Reminds me of the Ford model T days.
My wife is the principal user of the desktop, since I do most of my work on a laptop that I carry everywhere. To save myself from a support nightmare, I switched to Ubuntu LTS releases some time ago; in mid-2009 we made the switch to Ubuntu 8.04. So Ubuntu 8.04 was the first and most logical choice for the new box, the smoothest possible transition I could imagine.
After popping in a fresh Ubuntu 8.04.4 CD and starting up, most things looked alright. Except there was no network. This puzzled me somewhat as a device clearly showed up in the output of ifconfig; there were just no packets coming through at all. A quick search for the precise hardware spec revealed a known issue with the driver, and the workaround to download and use the Realtek provided (open-source) driver. I was worried that this problem would just keep coming back with every kernel update, but it fixed the immediate issue.
Meanwhile, my attention was drawn to another, quite severe problem. The machine was making quite a bit more noise than I expected, to the point of being irritating. There was clearly a fan spinning loud and hard in there. My first suspicion went to the case fan near the rear of the box, but this turned out to be wrong. To make matters worse, the loud fan had a bearing problem and started to make horrible rumbling sounds.
Since I had already done away with Windows completely, I had no way to verify whether this was caused by some software defect. It started as soon as the computer was turned on, so in any case it wasn't due to something Linux did. It was a depressing conclusion: I had bought a lemon. Just the thought of having to spend time and energy getting this fixed (imagine explaining to a support person over the phone that you run Linux, not Windows...) caused agony.
Luckily DELL shipped a bootable diagnostics CD with the computer, and it allows you to run several tests to verify the correct workings of the machine. Two tests were especially interesting: a CPU fan test and a case fan test. Both tests drive up the fans to a high RPM, and then down again. I should explain that while I was running from the diagnostics disc, the terrible noise persisted.
The CPU fan test revealed that the CPU does indeed have a fan (or maybe more than one) and that it can be heard, but only at high RPM. At low settings (normal when the CPU is not under load) it can hardly be made out (1700 RPM or so). The case fan was even quieter; it's a big one, so it runs at only 500 RPM nominally, but can be stepped up to 1500 or so if the case runs hot. Both tests produced distinct sounds, and the noisy fan stayed noisy throughout. It could only mean one thing: the video card fan. The card is an Nvidia GeForce 310.
The Ubuntu 8.04 system worked, but not very smoothly. The network driver was a kludge, I couldn't get anything but the VESA driver working for video (or I didn't try hard enough), and the noise made the whole thing just unworkable. I decided that maybe, just maybe, the upcoming LTS release, Ubuntu 10.04, would be a better option. This proved to be sheer lucidity.
Ubuntu 10.04 is not even out, but you can already get beta 2, and this is actually encouraged: the experience of more people trying the system at this stage helps iron out the remaining wrinkles. From the moment I popped in the CD and started, I was amazed by the ground they've covered since the last release (I'm running 9.10 on my laptop, so I'm close to the cutting edge there). The installation was smooth, with very few questions asked, and in no time at all I had a new OS running plus network. (Still, noise.)

I logged on, clicked around appreciatively, and then selected 'maximum visual effects'. This triggered the system to ask whether I wanted to install the proprietary Nvidia drivers (of course, you silly!). After the installation there was some hiccup about not being able to switch or load drivers (some kind of fb kernel driver got in the way? I couldn't tell), but a reboot set things straight. And how! As soon as the Nvidia driver loaded, the computer became silent (relatively speaking).

And this makes all the difference. At first I had regretted my choice of this particular desktop machine; now I find it very good value for money. I should probably still chase up DELL about the resonance in the video card fan, but it no longer prevents me from enjoying the new computer. Funnily enough, during startup and shutdown the noise can still be heard, that is, before loading and after unloading the Nvidia driver.
And Ubuntu 10.04 is just fine, even at beta 2. I was so confident that I replaced the old desktop with the new one on the very day I had to leave for four days to visit the last EGEE User Forum, after I had rsync'd all user data and tested that my wife could still read her e-mail.
Friday, February 12, 2010
Chicken and egg: install rpm using rpm
What do you do when a colleague has deleted the rpm and yum packages from a CentOS system (by mucking around with the sqlite package)? Reinstall them, of course. Hmm, but how, when rpm itself is absent?
Setting up a local rpm installation
The solution is to copy rpm from another computer with (approximately) the same operating system. The following files are required (substitute lib64 for lib when you're on a 32-bit system), put them in a temporary directory on the target system:
- /bin/rpm --> bin/
- /usr/lib64/librpm*.so --> lib/
- /usr/lib64/libsqlite*.so --> lib/
- /usr/lib/rpm/macros --> lib/rpm
Some configuration files are expected to be present, though, and rpm needs to be told to look for them in the correct location. This is done with a little wrapper script (named rpm.sh) like this:
#!/bin/sh
here=`dirname "$0"`
export LD_LIBRARY_PATH="$here/lib"
# temporarily replace any existing ~/.rpmmacros with the copied macros file
[ -e ~/.rpmmacros ] && mv ~/.rpmmacros ~/.rpmmacros.orig
cp "$here/lib/rpm/macros" ~/.rpmmacros
"$here/bin/rpm" --rcfile "$here/lib/rpmrc" --define "_rpmlock_path /var/lock/rpm" "$@"
# restore the original macros file, if there was one
[ -e ~/.rpmmacros.orig ] && mv ~/.rpmmacros.orig ~/.rpmmacros
You can then use the temporary rpm installation by going to the temporary directory and running ./rpm.sh. It will still work on the system's package database.
Installing rpm's RPMs
This is easy now. First download rpm and required packages from a CentOS mirror. You need the packages for rpm, rpm-libs and sqlite (make sure you choose the right platform, i386 or x86_64). Then do a ./rpm.sh -i *.rpm so all these packages are installed at once. Now you can run the system's rpm again, phew!
Installing Yum
You may still need to get yum back. This is done similarly: download the packages yum, yum-fastestmirror, yum-metadata-parser, rpm-python and python-sqlite, then do a ./rpm.sh -i *.rpm for these, and you can install packages easily again.
Wednesday, July 1, 2009
User SSH configuration for virtualised remote hosts
But sometimes it is convenient to use graphical tools available on the desktop to do something on the remote guest machine. This would be possible with a direct SSH connection. The straightforward solution would be to use SSH port forwarding.
There is a more convenient way to make the remote guests appear as ordinary hosts from the desktop via ssh (without resorting to a VPN or the like): using the per-user ssh configuration file, ~/.ssh/config:
Host coolhost.testdomain coolhost
    Hostname coolhost.testdomain
    Protocol 2
    User root
    # avoid often changing host fingerprint prompt
    CheckHostIP no
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    ForwardX11 yes
    # route through the Xen host
    ProxyCommand ssh -q -A xenhost.domain nc %h %p 2>/dev/null
The key line here is the last one: it opens an ssh connection to the Xen host, and uses netcat to open a connection onward to the guest's ssh socket.
The configuration above also disables SSH's host key check. Usually one would really want that check, but as I'm generating and destroying machines all the time, and the connection to the xenhost is verified already, it doesn't add much security here.
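With this stanza in ~/.ssh/config, the guest behaves like any directly reachable host (hostnames as in the example above):

```
ssh coolhost                    # transparently proxied through xenhost.domain
scp notes.txt coolhost:/tmp/    # scp, rsync, etc. pick up the same config
```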