fogbound.net




Mon, 20 Oct 2008

Fixing crap PHP applications

— SjG @ 1:18 pm

I regularly end up in the situation where I have to fix a crap PHP application.

The latest one has lots and lots of PHP 3.x-era code that references hashes without quoting the index, e.g.,


$foo = $bar[baz]

Now, the PHP interpreter does actually understand this. It figures out that, since the constant baz is not defined, the programmer probably meant the string ‘baz’ as the index. It does, however, throw a well-deserved warning.
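
Here’s a minimal illustration of the two forms (the array and values are mine, not from the app in question):

<?php
$bar = array('baz' => 42);

$foo = $bar[baz];    // unquoted index: PHP (of this era) falls back to the string 'baz',
                     // but complains along the lines of "Use of undefined constant baz"
$foo = $bar['baz'];  // quoted index: same element, no complaint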

The code I’m trying to fix throws lots and lots of warnings. Rather than wade through all the warnings to find which ones are important, I started with the following:


find . -name \*.php -exec grep -ne '\[[^\$0-9\'\''\"]' {} \;

The thousands of lines of output convinced me bigger guns were needed.

So we got ugly.

find . -name \*.php -exec perl -p -i.bak -e 's/\[([^\$\d\'\''\"]+)\]/\[\'\''$1\'\''\]/g' {} \;

Note that that’s one line, and that WordPress seems to want to change some single quotes into back-ticks. Don’t be fooled!

All those extra backslashes and single quotes are there to smuggle single quotes into the regex without bash considering them problematic. Once the shell quoting is stripped away, the expression Perl actually receives is s/\[([^\$\d\'\"]+)\]/\[\'$1\'\]/g — which wraps any bracketed index containing no dollar sign, digit, or quote in single quotes.

Also note that this could be catastrophic if you have regular expressions in the code you’re operating on — do a diff with the backup version, and merge back the regexes.

I’m sure there are far more elegant solutions… primary among them, not using crap PHP apps in the first place!


Sat, 13 Sep 2008

Generating Plausible Fake User Data

— SjG @ 6:45 pm

So it’s a familiar problem: you’re developing a data-driven application, and you want to optimize the queries that will run against your database (I’ll have more interesting stuff on this later). The catch, of course, is that to really optimize those queries, you need a lot of sample data.

So I needed to test some address-lookup code against a huge collection of users. But because there was the possibility of having to demo the prototype, I really didn’t want 100,000 users named “Foo McBar” living at “10101 Binary Place.” So, with the help of the almighty Internet, the all-frobnicating Perl, and the all-knowing US Bureau of the Census, I created a quick, semi-flexible script to generate people with plausible names and addresses that, if not Google-mappable, at least agree on city/state/zip.

The city/state/zip combinations come from a collection of 250 random zip codes; if you have good zip code data, you can easily extend this to be complete. Names are generated from the most popular forenames and surnames, with a probabilistic bias towards the most common ones. The script also lets you specify “pick one of n items” fields, pick a number from a range, generate plausible email addresses and not-very-plausible phone numbers (with or without extensions), and export the results as CSV or tab-delimited.
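
One way to get that bias — sketched here in PHP for illustration rather than the script’s Perl, with made-up names and weights — is simple cumulative-frequency sampling:

<?php
// hypothetical illustration of frequency-biased selection -- not the actual script
$surnames = array('Smith' => 880, 'Johnson' => 688, 'Williams' => 569, 'Abernathy' => 7);

function weighted_pick($weights)
{
    $roll = mt_rand(1, array_sum($weights)); // pick a point in the cumulative range
    foreach ($weights as $item => $weight) {
        $roll -= $weight;
        if ($roll <= 0) {
            return $item; // heavier weights cover wider slices of the range
        }
    }
}

echo weighted_pick($surnames), "\n"; // "Smith" most of the time, "Abernathy" rarely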

In principle, this should be easy to adapt to other countries, although you’ll need lists of common first names, surnames, street names, and a way of mapping cities to regions, states, districts, cantons, or whatever’s appropriate.

You can grab a copy of it here. It requires a Perl interpreter with the Text::CSV and Getopt::Long CPAN modules.

Usage: user-data-maker.pl [OPTIONS]
   -t, --header : header, a colon-delimited list of column headers
   -f, --format : format string, a colon-delimited list of column contents
       data types:
         fn - first name
         ln - last name
         a1 - street address
         a2 - apartment number
         c - city*
         s - state*
         z - zip 5*
         e - email address
         pne - phone (US), no extension
         pwe - phone (US), with extension
         [a,b,c] - one of a, b, or c
         {a,b,c} - one of a, b, or c in decreasing probability
         [x-y] - a number between x and y, inclusive

         * city, state, and zip will agree to create a valid address.
           If you need multiple addresses, use the code ! to reset the
           synch. The reset works on a left-to-right scan of the format string.

   -n, --number : number of records to create

   Flags:
  -c, --csv : output CSV format (otherwise, tab-delimited).
  -v, --(no)verbose : verbose mode (default false)

Example:


Viajante:samuelg$ user-data-maker.pl --header "First:Last:Age:Email" --format "fn:ln:[10-100]:e" -n 5 --c
First,Last,Age,Email
Margot,Sawyer,33,Margot.Sawyer@netscape.com
Francisco,Cantrell,18,Cantrell@sbcglobal.com
Lynetta,Orozco,28,Lynetta@mac.com
Latrice,Dunlap,41,Latrice.Dunlap@sbcglobal.com
Anissa,Fitzgerald,59,Anissa@hotmail.com

or, more exotically:


Viajante:samuelg$ user-data-maker.pl --header "First Name:Last Name:Address:City:State:Zip:Super Power" --format "fn:ln:a1:c:s:z:[Invisibility,Invincibility,X-Ray Vision,Flight,Likes Squirrels]" -n 5 -c
"First Name","Last Name",Address,City,State,Zip,"Super Power"
Roseanna,Best,"8821 7th Str.",Manati,PR,00674,Flight
Euna,Crawford,"8195 Lee Str.","Fort Washington",PA,19034,Invincibility
Ted,Williams,"7140 Birch Ave.",Monroe,CT,06468,Invincibility
Mariano,Miranda,"2657 1st Way",Lyford,TX,78569,Flight
Tammy,Flowers,"2135 Washington Blvd.",Duluth,MN,55806,"Likes Squirrels"

Enjoy!


Fri, 18 Jul 2008

Using Regular Expressions for HTML Processing in PHP

— SjG @ 4:16 pm

Well, not really. This is just one example of a bad approach.

The problem: an HTML file is read, but needs to be entity-escaped. However, not all entities need escaping. Specifically, double quotes within anchor tags need to be left alone.

The right solution: process the HTML via a DOM parser and escape the nodes that are not anchor tags. Oh, but did I mention these HTML files may be crappy, non-validating files, or even just snippets?

The next solution: Use a regular expression. Yes, this is ugly. Yes, it also works 🙂

Originally, I tried using variable-length lookahead, but ran into problems (PHP 4.x). Fortunately, PHP provides another tool that is perfect for this sort of thing: preg_replace_callback. Here’s the code:

function pre_esc_quotes($inner)
{
    // $inner[0] is the whole <a ...> tag; hide its double quotes behind a sentinel
    return preg_replace('/"/', 'QUOTE', $inner[0]);
}
function post_esc_quotes($inner)
{
    // put the literal double quotes back inside the (now entity-escaped) anchor tag
    return preg_replace('/QUOTE/', '"', $inner[0]);
}
// 1. stash the quotes in every opening <a> tag
$tmp = preg_replace_callback('/<a([^>]*?)>/s', 'pre_esc_quotes', $raw_html);
// 2. entity-escape everything
$tmp = htmlentities($tmp);
// 3. the anchor tags are now escaped too, so match them as &lt;a ...&gt; and restore the quotes
echo preg_replace_callback('/&lt;a([^>]*?)&gt;/s', 'post_esc_quotes', $tmp);

This, of course, presumes that the string “QUOTE” won’t show up anywhere in your raw HTML. Consider replacing it with an opaque sentinel (like “JHG54JHGH76699597569” or something similarly long and improbable).

This code is furthermore inefficient in a number of ways. It’s not something you should use. But it does show how preg_replace_callback avoids some scary regex work.
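
For completeness, here’s roughly how you’d exercise it, assuming the two callbacks above are already defined (the sample markup is mine):

<?php
$raw_html = '<p>Fish & "chips"</p> <a href="http://example.com/">a link</a>';

$tmp = preg_replace_callback('/<a([^>]*?)>/s', 'pre_esc_quotes', $raw_html);
$tmp = htmlentities($tmp);
echo preg_replace_callback('/&lt;a([^>]*?)&gt;/s', 'post_esc_quotes', $tmp);
// the quotes around "chips" come out as &quot;, while the href's quotes survive as literal "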


Tue, 20 May 2008

Email Round-Robin using Procmail

— SjG @ 2:56 pm

The need arose to have a specific email address round-robin (i.e., cycle through a collection of destination email addresses).

A solution was achieved using procmail and a little Perl script. It probably could be done more easily and/or better, but I figured other people might find this interesting.

So, first, an alias was created in /etc/aliases (used by postfix in this case, but it should work for sendmail, and variants should work for other MTAs):

rrtest:         |"/usr/bin/procmail -m /etc/postfix/roundrobin_procmail.rc"

Then, the following file was saved as /etc/postfix/roundrobin_procmail.rc:

:0 w:/tmp/rrlock
{
        :0
                dest=|/etc/postfix/rr.pl
        :0
                ! ${dest}
}

And then, of course, we need the perl program. Here’s /etc/postfix/rr.pl:

#!/bin/perl
# ----------------------------------------------------------
@recipients = (
'address1@sample.com',
'address2@sample.com',
'address3@sample.com'
);

$index_file = 'rr-index.txt';

# ----------------------------------------------------------

$index_exists = 1;
open(IN,";
        close(IN);
        $index++;
        }
else
        {
        $index = 0;
        }


if ($index > $#recipients)
        {
        $index = 0;
        }
open(OUT,">/tmp/${index_file}");
print OUT "$index\n";
close(OUT);

print STDOUT $recipients[$index];

exit 0;

Elegant? Not really. But it seems to work 🙂


Mon, 24 Mar 2008

Open Source Software Development, Rant #1

— SjG @ 3:15 pm

Loath as I am to admit it, I know why Microsoft products all suffer from creeping featuritis. It’s because users are so damn creative.

In developing modules for CMS Made Simple, I’m continuously receiving feature requests. Some are reasonable. Many are not.

Reasonable:
“Could you extend your can opener to handle sardine tins as well as standard cylindrical cans?”

Unreasonable:
“I know it’s supposed to be a can opener, but I find it works well in extricating people from burning wreckage, so I was wondering if you could add a fire-hose feature, and maybe a siren or flashing lights.”

The skill I need to develop is saying “no” in an acceptable way. It’s easy when the requester phrases the question like “add this, or I won’t use your stupid system!” Yeah. Well. Golly, I’ll be awfully sad to see ’em go. Similarly, the ever-popular “it’s embarrassing to tell my client that I can’t provide them feature Y because you didn’t implement it!” always brings me copious, bitter tears at the thought of their shame and tragedy. Cry me a river indeed.

It’s a bit harder when the request is along the lines of “to be a truly professional system, it really should have feature Z,” because then I have to assess whether or not it really would be a professional grade feature.

Hardest yet is when someone requests a feature and gives at least a basic explanation of why it would be good for the project as a whole (in addition to their specific need). Even if I can’t see that I would use the feature myself, this will often sway me and I’ll add features, even against my better judgment.

Then, of course, there’s cash, which has a peculiar way of getting features added, no matter how ridiculous.