fogbound.net




Wed, 10 Oct 2007

Further eAccelerator weirdness

— SjG @ 4:34 pm

As I described back in this article, I was getting segfault errors from eAccelerator.

I’m experiencing it on another, similarly-equipped GoDaddy VPS server. Same software versions, even, for Apache, PHP, eAccelerator, and the OS, although this is not an old CMS Made Simple install.

Still, no good solutions out there, as far as I can google.

Here are the clues I’ve found:

  • The syntax function fun_name ($arg = array (blah)) is fatal.
  • The order in which PHP extensions are loaded might matter.
  • There are issues with files that get included or required multiple times via different paths.
  • Software versions are just Too Old.

This is teh suckage.

Next thing I’ll try, when I have the time, is to upgrade the VPS to Fedora Core 6, and see if pushing those version numbers up a bit helps.


Mon, 9 Jul 2007

Preventing “Overlapping” cron Processes

— SjG @ 9:43 am

I have a number of very time-consuming processes that get run by cron on various machines. Some of these processes would cause problems if they “overlapped,” i.e., if a new one got started before the old one was done.

Now, there are plenty of ways for a process to make sure it’s the only instance running, but often I don’t want to modify the process’s source to add that (for many packages, I’d rather not patch, merge, and recompile every time a new version comes out). So I write a simple shell script to run the process; cron calls my shell script, and the script prevents the overlap.

This uses the magic of “pgrep” — unfortunately, different versions of pgrep have different flags, so the code I originally wrote (which used the “-c” flag, which counts the matching processes) didn’t port to most systems. It’s easy enough to pipe the output through a “wc -l”.

I did have to move the pgrep exec out of my if statement, though, since the comparison was going against the return code, not the output. Doh!

#!/bin/bash

# Count how many instances of the long-running process are alive.
# (pgrep's -c flag isn't portable, so pipe through wc -l instead. Also make
# sure this wrapper's own filename doesn't match the pattern, or pgrep will
# count the wrapper itself and the real process will never get started.)
RUNNING_PROCS=`pgrep -f longRunningProcess | wc -l`
if [ "$RUNNING_PROCS" -gt "0" ]
then
        echo "`date` longRunningProcess still running. I'll let it finish."
else
        echo "`date` Starting longRunningProcess."
        /path/to/longRunningProcess -flags
fi
echo "----------------------------"


Wed, 27 Jun 2007

Unix: How to find files lacking certain strings

— SjG @ 4:10 pm

So, I’m working on a convoluted web site, and a problem comes up. It seems that some vitally important code was not included in some pages (for the sake of argument, let’s say it’s a copyright string). This particular site has an ungodly mix of files, including .htm, .html, and .jsp files. Some of the .jsp files are actual pages, and others are stubs to be included in other .jsp pages. The majority of the full .jsp pages include a “footer.jsp” that has the desired string, so they’re good. But I need to generate a list of the full pages, of whatever sort, that lack this string.

The inverse of this problem is easy, and is the kind of thing I use all the time:
find . \( -name \*.htm -o -name \*.html -o -name \*.jsp \) -exec grep -il "myString" {} \;

Initially, I thought the -v flag to grep would work for me, but grep -vl returns every file it sees: -v selects the lines that don’t match, so -l lists any file containing at least one non-matching line, not the files with no match at all. Then there’s the problem that I need to match “full” pages rather than included .jsp stubs.
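(A quick illustration of the -vl behavior, on a hypothetical two-line test file: as long as any line fails to match, -l still reports the file.)

$ printf 'myString\nsomething else\n' > test.txt
$ grep -vl "myString" test.txt
test.txt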

So here’s how the Mighty Power of Unix came to my rescue:

find . -name \*.htm -o -name \*.html -o -name \*.jsp | xargs grep -il "</html>" | sort -u > full_pages.txt

provides me with a list of pages that are not mere inclusions, if you accept my assumption that an inclusion won’t match the closing HTML tag.

Then I generate a list of full pages that either contain the magic string or include the footer.jsp that would supply it:
find . -name \*.htm -o -name \*.html -o -name \*.jsp | xargs grep -il "</html>" | xargs grep -le "uniqueCopyrightTag\|footer\.jsp" | sort -u > pages_with_string.txt

Then I compare the files to find out which full pages lack both the magic string and the include:
comm -3 pages_with_string.txt full_pages.txt

Wow. There it is!

I bet there’s an easier way. Post an example in the comments if you know of one!
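One candidate that occurs to me, at least with GNU grep: the -L flag lists the files that do not contain a match, which should collapse the last two steps into a single pipeline. Something like this (untested against this particular site):

find . -name \*.htm -o -name \*.html -o -name \*.jsp | xargs grep -il "</html>" | xargs grep -Le "uniqueCopyrightTag\|footer\.jsp"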

NOTE: All commands are on a single line, regardless of whether they wrap in this particular display.


Sun, 25 Mar 2007

Backups, cont.

— SjG @ 9:50 pm

OK. I’m a bonehead. The link I provided to my backup script tarball was broken. The link is fixed.

But wait! A new version of the scripts will be posted in a few days. It’s got some bug fixes and some new features. With it, the little birds really do sing more cheerfully, and the colors really will be brighter.

(As an aside … I don’t know why none of the people who clicked on the broken link bothered to send me an email or leave a comment to tell me there was a problem. Could that all have been robot traffic?)


Thu, 8 Mar 2007

Automated Backups – Updated!

— SjG @ 3:50 pm

[Update — fixed the link!]

Automated Backups are a good thing. Automated Backups make the little birds sing, the rainbows shine, and little fauns gambol about in beautiful green forests. When computers are backed up, the butterflies flutter, the flowers bloom, and the fruit from the trees taste just a little sweeter. But when computers are not backed up, the universe becomes angry.

An angry universe is not a good thing. An angry universe makes little birds cry. An angry universe makes Cthulhu come and visit.

So. Automated backups. I’m partial to rdiff-backup because it allows me not only to back up data, but also to keep previous versions available. Backing up nightly doesn’t help if you accidentally overwrite the contents of a file and don’t notice for a day or two. But with rdiff-backup, you can restore the version from before the error.
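For instance, a nightly run and a point-in-time restore look something like this (the host and paths are invented for illustration):

rdiff-backup /var/www user@backuphost::/backups/www
rdiff-backup -r 2D user@backuphost::/backups/www/index.html /tmp/index.html.two-days-ago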

Unfortunately, rdiff-backup is really designed for server-to-server backups, where each end of the transaction has shell access. Enter duplicity, a related project. It’s designed more for storing backups on servers that you don’t control and/or don’t trust. It allows encryption of your backup sets, and supports a wider variety of protocols (ftp, scp, s3, etc.).
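The invocation is similarly compact; something like the following pushes a GnuPG-encrypted backup over ssh, and reversing the arguments restores it (again, the host and paths are invented):

duplicity /var/www scp://user@backuphost/backups/www
duplicity scp://user@backuphost/backups/www /tmp/www-restored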

So with a combination of these two tools, you can back up pretty much any POSIX-ish server to pretty much anything that you can ftp or ssh into. Still, it’d be nice if you could:

  • Check that the backups completed successfully, and get email confirming that success or warning on a failure.
  • Configure all of your various backups via a simple text file, rather than remembering the different command-line formats.
  • Create groups of options that can be applied to backup tasks.
  • Issue commands on the backup source and destinations before and/or after the backup (good for dumping databases into a flat file, for example, and then deleting it after it’s backed up).
  • Get email confirmation on completion of backups.
  • Have some tools to simplify the securing of the backup process.

For these reasons, I put together this backup script, which is basically a Ruby wrapper for rdiff-backup and duplicity. It’s almost entirely configured via two human-readable YAML files.

It’s flexible, reasonably simple to use, and comes without any guarantees whatsoever. Feel free to use it yourself!

DISCLAIMER: it’s as-is. Not to be used in place of a certified Cthulhu-deterrent. Use at your own risk. To quote the duplicity page: “[it] is not stable yet. It is thought to have a few bugs, but will work for normal usage, and should continue to work fine until you depend on it for your business or to protect important personal data.” — that goes for me too, only double.