fogbound.net





Wed, 30 Jan 2013

Failures in Typographical Experimentation

— SjG @ 8:35 pm

This started with an idea.

Perhaps it would be interesting to create a family of typefaces where the density of the characters was related to the frequency of their use. This font, to be called Densitas, would have variants based upon the text analyzed. For example, Densitas Shakespeare would use the collected works of Shakespeare for the character frequency corpus, while Densitas Brontë would use the works of the Brontë sisters for the corpus. For aesthetic purposes, perhaps the initial faces could be selected based on relevance to the source corpus as well.

What would this accomplish? It might reveal something interesting about the differences in usage between authors. It might end up being environmentally friendly by using less ink on more common characters. It might enhance readability. After all, it’s popularly understood that we tend to look at the shapes of words rather than the constituent letters. De-emphasizing the more common shapes may even make it easier to process text.

Any time I have an idea of this nature, I start thinking about code and design and try to avoid thinking about the end result. As my Father is wont to say, it is just as difficult to create something ugly as it is to create something beautiful. If I think too much on the end result, I will obsess over whether it will be worth the effort, and never get to the actual work. If I just dive in, I may find myself wasting a lot of time, but at least I will learn something.

This turns out to be one of those experiences. I thought it was an interesting idea. The end result is mediocre at best, dull perhaps, a waste of time. Still, I learned something in the process.

Step one was to write a character frequency analyzer. This code does a few things (a rough sketch follows the list):

  • read a text file
  • compute the character frequencies
  • scale the results across the frequency range, so the least frequent character has a value of zero and the most frequent character has a value of one
  • map the characters to glyph names
  • write out a chunk of code to substitute into the next step
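Before reaching for the real thing (cf.py, linked below), the core of the analysis can be sketched in a few lines of Python. This is an illustration, not the actual program; the glyph-name mapping and the code-generation step are omitted:

from collections import Counter

def scaled_frequencies(path):
    # Count every character in the text.
    with open(path) as f:
        counts = Counter(f.read())
    lo, hi = min(counts.values()), max(counts.values())
    # Scale so the rarest character maps to 0.0 and the most common to 1.0.
    return dict((ch, float(n - lo) / (hi - lo)) for ch, n in counts.items())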

The next step is a FontLab Studio/RoboFab script, hence glyph names instead of raw character names. Since FontLab/RoboFab scripts are in Python, I figured I’d write this in Python as well (I don’t really know Python, but that kind of ignorance never stops me from writing code).

I ended up with this program: cf.py

I ran it against the plaintext of The Complete Works of William Shakespeare from Project Gutenberg (after stripping out the Project Gutenberg-specific text, which I believe is permitted since I’m not redistributing the text, merely crunching it with code).

The FontLab/RoboFab script accepts two font sources, and interpolates each glyph according to the frequency computed in the previous step, so that the less frequently used glyphs are darkest. For my test, I used the current state of a sans-serif font I’ve been developing1. I have it in several weights, so I interpolated between the lightest and heaviest. The code to do this interpolation looks like shakespeare_weighter.py.
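In outline, the interpolation amounts to something like the following. This is a hedged sketch rather than the actual shakespeare_weighter.py: the file names are stand-ins, and FREQ is the glyph-name-to-frequency table pasted in from the previous step.

from robofab.world import OpenFont

light = OpenFont("Grotesque-Light.vfb")    # hypothetical master files
heavy = OpenFont("Grotesque-Heavy.vfb")
target = OpenFont("Grotesque-Densitas.vfb")

for name, freq in FREQ.items():   # freq: 0.0 = rarest .. 1.0 = most common
    if light.has_key(name) and heavy.has_key(name):
        glyph = target.newGlyph(name)
        # Factor 0.0 lands on the heavy master (rare = dark),
        # factor 1.0 on the light master (common = light).
        glyph.interpolate(freq, heavy[name], light[name])
target.update()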

The results were unimpressive, to say the least:


There are some obvious problems: the distribution of densities is too stark; there seem to be only two or three densities in use. Similarly, kerning gets badly disrupted by the differing densities. But first things first: why is the density contrast so extreme? Looking at the weighted frequency data answers that question:

For this chart, punctuation and other glyphs have been omitted.

So the next approach is to make the differences more gradual. Instead of scaling by pure letter frequency, we use a gradient based on frequency ranking. In other words, the least common glyph is the darkest, the next least common glyph is one increment lighter, and so on, until the most common glyph is the lightest. The code to compute this looks like cf2.py.
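The heart of that ranking step might look like this (again an illustrative sketch, not cf2.py itself):

def rank_weights(counts):
    ordered = sorted(counts, key=counts.get)   # rarest character first
    step = 1.0 / (len(ordered) - 1)
    # Rarest gets 0.0 (darkest); the most common gets 1.0 (lightest).
    return dict((ch, i * step) for i, ch in enumerate(ordered))

The output distribution from this approach looks like this: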

Looks more promising, does it not? We substitute the values into our FontLab/RoboFab script (like this: shakespeare_weighter2.py), and run it. Alas, the end results are still pretty dull:



For the last try, we’ll do a few things differently. First, the thing that probably jumped out at you when you saw the first distribution graph: we’ll ignore all non-alphabetical characters when doing the frequency calculation. For the sake of readability, we’ll set all non-alphabetical characters to the median value. Secondly, we’ll take accented characters and consider them the same weight as their non-accented versions, so, for example, “á” and “ä” are the same density as “a.” Lastly — and this might be the big shift — we won’t interpolate between two weights of a font based on the frequency, but instead we will effectively halftone each glyph with a screen density based on the frequency.
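A sketch of those first two adjustments might look like this (an illustrative reconstruction; cf3.py, linked below, is the real thing, and the MEDIAN constant here is an assumption):

import unicodedata

MEDIAN = 0.5   # non-alphabetic characters get pinned here for readability

def base_letter(ch):
    # NFD decomposition splits off combining accents, so the first
    # code point of an accented letter is its plain base letter.
    return unicodedata.normalize("NFD", ch)[0]

def letter_weights(counts):
    # Fold accented characters onto their base letters; keep only letters.
    folded = {}
    for ch, n in counts.items():
        base = base_letter(ch)
        if base.isalpha():
            folded[base] = folded.get(base, 0) + n
    ordered = sorted(folded, key=folded.get)   # rarest first
    step = 1.0 / (len(ordered) - 1)
    weights = dict((ch, i * step) for i, ch in enumerate(ordered))
    # Accented characters inherit the weight of their base letter;
    # everything non-alphabetic gets the median.
    return lambda ch: weights.get(base_letter(ch), MEDIAN)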

To build the halftone, we use the RoboFab halftoneGlyph() pen for inspiration, but take a much blunter approach: we impose a grid over the glyph, determine which points on the grid are inside, and replace those points with squares. The size of the squares is the same across a given glyph, and is based on the frequency. This process converts a nice, smooth glyph into a rougher, pixellated gray version of itself.
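Under the same assumptions as before (a FontLab/RoboFab environment; the grid pitch and the size floor are invented parameters), the blunt halftone might be sketched as:

GRID = 40   # sampling pitch in font units, chosen arbitrarily

def halftone(glyph, weight, grid=GRID):
    # Collect the grid points that fall inside the glyph's outline.
    xmin, ymin, xmax, ymax = glyph.box
    hits = []
    x = xmin
    while x < xmax:
        y = ymin
        while y < ymax:
            if glyph.pointInside((x, y)):
                hits.append((x, y))
            y += grid
        x += grid
    # Frequent (light) glyphs get small squares, rare (dark) ones large;
    # the floor keeps the lightest glyphs from vanishing entirely.
    size = grid * (1.0 - 0.7 * weight)
    glyph.clear()
    pen = glyph.getPen()
    for x, y in hits:
        pen.moveTo((x, y))
        pen.lineTo((x + size, y))
        pen.lineTo((x + size, y + size))
        pen.lineTo((x, y + size))
        pen.closePath()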

The revised frequency computation code is here (cf3.py), and the resulting frequency graph looks like this:

From this, we generate the final FontLab/RoboFab script (this one: shakespeare_weighter3.py), and run it.

And yet again, we look at the results, and sigh. All this work, and really nothing to show for it. There are a number of problems. The font stresses most rendering engines with its very high contour count, and either gets blurred into oblivion or converted into a plaid checkerboard nightmare when viewed on a display. The differences in shades are only apparent when the characters are enormous, even when printing. And, of course, aesthetically, it’s nothing to write home about.


The lack of results is dispiriting enough that I’ll resort to quoting that reprobate Thomas Edison: “Results! Why, man, I have gotten a lot of results! I know several thousand things that won’t work.”

I can’t claim to know thousands of things that won’t work, but I do have another handful to add to the collection.

1 The font will be released as WL Hope Grotesque, when and if I ever complete it to my satisfaction.


Tue, 20 Dec 2011

Kerning Pairs

— SjG @ 11:22 pm

I’ve been playing around with font creation for a couple of projects (more on that will be posted here at some point). One of the more surprising aspects of computer typography is its sheer complexity — I may once have naively thought that it was just a matter of splatting characters … er … glyphs out to some display device based on simple shapes, but I was sadly mistaken. In fact, TrueType and its successor OpenType not only use complex mathematical equations for creating the curves that define font outlines, but they also contain rules for scaling, hints for rendering these “mathematically perfect” curves on a bit-mapped display, and metrics for spacing character combinations. OpenType has its own internal language for doing such complex tasks as replacing some glyph pairs with ligatures, or doing fancy substitutions of glyphs depending on the surrounding glyphs or other rules. This allows ambitious font designers to do such things as imitate handwriting or handle non-Roman languages naturally (for example, in Semitic languages, the same letter may be written quite differently if it’s at the beginning or end of a word, and sometimes also depending on where it is in the sentence).

There’s a lifetime of complexity in typography, and, as yet, I’ve only been swimming in the shallow end. Still, I was deep enough to be playing with kerning pairs. Kerning involves moving letters so they fit together nicely. For a visual demonstration and nice game, take a look here. This does more to explain kerning than anything I could write.

The program I’m using for font creation has a facility for creating kerning pair metrics. You can type in a pair of letters, and then adjust the spacing for that particular pair. Of course, you can’t really go through and tune them all1: consider the case where you have only upper case letters and the digits zero through nine. Neglecting accented characters, and remembering that kerning pairs are ordered (“VA” is not “AV”), we’re talking 36 glyphs, or 1,296 possible pairs. Now throw in lower case, punctuation, etc., and you have an enormous list of possible combinations to tune.

But think about it for a moment. There are character combinations that will want tuning in just about every kind of Roman-character-based font, like “VA” or “To” or “ij”. Equally, depending on your language, there are character combinations that will almost never occur. For example, in English, you’ll almost never see a lowercase letter followed immediately by an uppercase one, or sequences like “Yq” or “Td” or “zn”.

So in the interest of selecting kerning pairs intelligently, I wrote a script to analyze character combinations. My target audience is English-speakers, so for my source data, I used English-language texts. But which English texts to use? Being an absurdist, I selected Emma by Jane Austen, At The Mountains of Madness by H. P. Lovecraft, The Adventures of Tom Sawyer, by Mark Twain, An Inquiry into the Nature and Causes of the Wealth of Nations by Adam Smith, Alice, or The Mysteries, Complete by Edward Bulwer Lytton, Tales of the Jazz Age by F. Scott Fitzgerald, Tarzan of the Apes by Edgar Rice Burroughs, An Unsocial Socialist by George Bernard Shaw, the collected writings of Thomas Jefferson, the complete works of William Shakespeare, the Project Gutenberg license text, and the Unix version of the English Dictionary that lives in /usr/share/dict/words.

To analyze the data, I loaded up the text, and stripped out all but the letters, digits, and the following punctuation: period, single-quote, double-quotes, exclamation mark, question mark, comma, semicolon, colon, left parenthesis, and right parenthesis2. I took all of the two-character combinations, and filtered out all pairs where one character was a space. Then I simply counted the number of instances.
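Roughly, that counting boils down to something like this (a Python sketch for illustration, not the actual script):

import re
from collections import Counter

# Keep letters, digits, and the listed punctuation; everything else
# becomes a space so that word boundaries survive.
STRIP = re.compile(r"[^A-Za-z0-9.'\"!?,;:()]")

def pair_counts(text):
    cleaned = STRIP.sub(" ", text)
    pairs = Counter()
    for a, b in zip(cleaned, cleaned[1:]):
        if a != " " and b != " ":   # drop pairs that span a space
            pairs[a + b] += 1
    return pairs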

Of course, the statistical analysis doesn’t match the experience of reading. While the frequency of combinations consisting of an uppercase character followed by a lowercase one is low, those combinations are arguably more important than all-lowercase ones. After all, they start each sentence, and are very visually prominent. Additionally, the shapes of the letters increase the propensity of these combinations to need kerning adjustments. With these thoughts in mind, I generated a file of statistics from the same texts, but based solely on combinations containing an uppercase character.
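Filtering the counts from the earlier sketch down to caps-containing pairs is then a one-liner (again assuming the hypothetical pair_counts() above):

caps_pairs = dict((p, n) for p, n in pair_counts(text).items()
                  if any(c.isupper() for c in p))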

You can download the lists for your own nefarious purposes. Here’s the complete list, and here’s the list containing caps. In the complete list, there is what appears to be bad data. Keep in mind that the text contained such things as Roman numeral chapter headers, older-style numeric abbreviations (e.g., “3dly” and “23d”), some currency abbreviations (e.g., “1s.6d” or “1/6d”, both of which stand for 1 shilling and sixpence), and poetic contractions (e.g., “oer,” “stol’n,” or “capdv’d”). I also see what I suspect are errors due to imperfect OCR of the original texts.

Last, but not least, I have two files which are my collection of The 128 Vitally Important Kerning Pairs and The 255 Important Kerning Pairs With One Repeat which comprise the most common combinations from the other two files as a single text for examination when testing a font.

1 Ideally, the way you define the spacing of the glyphs themselves saves you from having to tune all combinations. Most should start out looking pretty good. But you do, of course, want your font to lay out perfectly, hence the rest of this discussion.

2 This was admittedly an arbitrary choice of allowable punctuation. I also excluded accented characters like ü and à which would obviously need to be taken into consideration for many European languages. Since my focus was on English, I deemed them rare enough to ignore.


Tue, 18 Oct 2011

Publishing Old Projects

— SjG @ 9:52 pm

I’ve been publishing a bunch of old projects that I may have posted here, or simply left on my hard drive to suffer the slings and arrows of outrageous bit-rot. Most of these are projects that I created for some specific purpose or another, and have either coded to the point where I’m satisfied with them, or abandoned them.

I’m publishing this stuff in the hopes that it’ll be useful to somebody somewhere. In some cases, the code’s primary use may be as an example of how not to accomplish a task. In other cases, the projects are being used in mission-critical operations, and so are reasonably robust.

I’ll be maintaining them on GitHub, if you want to get creative with the definition of “maintaining.”


Mon, 4 Oct 2010

More Plausible User Data

— SjG @ 4:44 pm

Back a few years ago, I posted a quick’n’dirty tool for generating plausible user data. I had a need for some improvements, so I’m posting the new version here.

The new version supports back-references, composite fields, and SQL output. So, for example, you could do:

./user-data-maker.pl -t id:lname:fname:city:state_code:zip:company -f i:ln:fn:c:s:z:/1+^+[Cars,Trucks,Boats,Planes,Motorcycles,Ships,Trains]+^+of+^+/3 -s -m tbl_dealer -n 5

and get the following output:
-- generated data from ./user-data-maker.pl
INSERT INTO tbl_dealer (id,lname,fname,city,state_code,zip,company) VALUES (0,'Nelson','Leslee','Akron','OH',44311,'Nelson Boats of Akron');
INSERT INTO tbl_dealer (id,lname,fname,city,state_code,zip,company) VALUES (1,'Bowen','Beatriz','Miami','FL',33176,'Bowen Trucks of Miami');
INSERT INTO tbl_dealer (id,lname,fname,city,state_code,zip,company) VALUES (2,'Hammond','Raymond','Ninilchik','AK',99639,'Hammond Motorcycles of Ninilchik');
INSERT INTO tbl_dealer (id,lname,fname,city,state_code,zip,company) VALUES (3,'Kim','Arielle','Columbus','MI',48063,'Kim Ships of Columbus');
INSERT INTO tbl_dealer (id,lname,fname,city,state_code,zip,company) VALUES (4,'Estrada','Warner','Iuka','IL',62849,'Estrada Cars of Iuka');

Nothing earth-shattering, but useful to me. Maybe to you too!

Download it here: user-data-maker.pl.gz


Sat, 13 Sep 2008

Generating Plausible Fake User Data

— SjG @ 6:45 pm

So it’s a familiar problem, where you’re developing a data-driven application, and you want to optimize the queries that will run against your database (I’ll have more interesting stuff on this later). The problem, of course, is that to really optimize those queries, you need a lot of sample data.

So I needed to do some address lookup code against a huge collection of users. But because there was the possibility of having to demo the prototype, I really didn’t want 100,000 users named “Foo McBar” living at “10101 Binary Place.” So, with the help of the almighty Internet, the all-frobnicating Perl, and the all-knowing US Bureau of the Census, I created a quick, semi-flexible script to generate people with plausible names and addresses that, if not Google-mappable, at least had agreement on city/state/zip. The city/state/zip is a collection of 250 random zip codes. If you have good zip code data, you can easily extend this to be complete! Names are generated from the most popular forenames and surnames, with a probabilistic bias towards the most common ones. The script also allows you to specify “pick one of n item” type fields, pick a number from a range, plausible email addresses, not-very-plausible phone numbers with or without extensions, and the ability to export as CSV or tab-delimited.

In principle, this should be easy to adapt to other countries, although you’ll need lists of common first names, surnames, street names, and a way of mapping cities to regions, states, districts, cantons, or whatever’s appropriate.

You can grab a copy of it here. It requires a Perl interpreter with the Text::CSV and Getopt::Long CPAN modules.

Usage: user-data-maker.pl [OPTIONS]
   -t, --header : header, a colon-delimited list of column headers
   -f, --format : format string, a colon-delimited list of column contents
       data types:
         fn - first name
         ln - last name
         a1 - street address
         a2 - apartment number
         c - city*
         s - state*
         z - zip 5*
         e - email address
         pne - phone (US), no extension
         pwe - phone (US), with extension
         [a,b,c] - one of a, b, or c
         {a,b,c} - one of a, b, or c in decreasing probability
         [x-y] - a number between x and y, inclusive

         * city, state, and zip will agree to create a valid address.
           If you need multiple addresses, use the code ! to reset the
           synch. The reset works on a left-to-right scan of the format string.

   -n, --number : number of records to create

   Flags:
   -c, --csv : output CSV format (otherwise, tab-delimited).
   -v, --(no)verbose : verbose mode (default false)

Example:


Viajante:samuelg$ user-data-maker.pl --header "First:Last:Age:Email" --format "fn:ln:[10-100]:e" -n 5 --c
First,Last,Age,Email
Margot,Sawyer,33,Margot.Sawyer@netscape.com
Francisco,Cantrell,18,Cantrell@sbcglobal.com
Lynetta,Orozco,28,Lynetta@mac.com
Latrice,Dunlap,41,Latrice.Dunlap@sbcglobal.com
Anissa,Fitzgerald,59,Anissa@hotmail.com

or, more exotically:


Viajante:samuelg$ user-data-maker.pl --header "First Name:Last Name:Address:City:State:Zip:Super Power" --format "fn:ln:a1:c:s:z:[Invisibility,Invincibility,X-Ray Vision,Flight,Likes Squirrels]" -n 5 -c
"First Name","Last Name",Address,City,State,Zip,"Super Power"
Roseanna,Best,"8821 7th Str.",Manati,PR,00674,Flight
Euna,Crawford,"8195 Lee Str.","Fort Washington",PA,19034,Invincibility
Ted,Williams,"7140 Birch Ave.",Monroe,CT,06468,Invincibility
Mariano,Miranda,"2657 1st Way",Lyford,TX,78569,Flight
Tammy,Flowers,"2135 Washington Blvd.",Duluth,MN,55806,"Likes Squirrels"

Enjoy!

