fogbound.net




Wed, 7 Sep 2011

Goldstein’s Laws, and Personal Adventures in Epistemology.

— SjG @ 7:28 pm

When I was a kid, my parents attended a lecture on the threat of religious cults. This must have been the early to mid 70s, in the heyday of the Moonies and countless counter-cultural communes. Being somewhat concerned about my credulity (and perhaps that of my siblings) and our inclination to trust the things we read, my father codified Goldstein’s First and Second Laws1:

  1. It ain’t necessarily so.
  2. But it might be.

The exact language for The First, of course, comes from Porgy & Bess, one of Dad’s favorites, but the sentiment often extended to a much more general skepticism than merely doubt. I can’t remember how many times I’d repeat something I’d heard, or read, or just happened to believe, only to be told “apply the First, son.” While this was first and foremost a warning to reconsider the “facts” involved in the subject at hand, it could also invite an invocation of The Second.

The thing is, though, you can’t just call upon The Second. Simply replying “apply the Second, Dad” is a wholly inadequate defense — akin to “is so!” on the playground. No, invoking The Second really requires offering more support for the questioned ideas, and is often the entrée into a much longer discussion. Goldstein’s first two laws are a prompting towards skepticism, or at least the application of critical thinking.

In my teens and twenties, I sometimes complained that the first two of Goldstein’s Laws were really a recipe for ambivalence. How could I hold a strong opinion on anything? This is a question that rarely plagues self-identified “skeptics,” nor did it really stop me from holding strong opinions. Yet this question is a key lesson from these laws. Questioning everything is not only exhausting, it’s impossible. You have to make some base assumptions to live by, assumptions you don’t continuously question. As Hume suggests, there are things you have to just accept as known. But how do you know what you can safely accept?

Along related lines, I once told my grandmother Ilse that I had known something, and later discovered that it was incorrect. “Then you didn’t really know it,” she corrected me. “You believed it.” With her definition, you can’t know something that’s untrue, because knowledge is understanding of what is true. Belief, on the other hand, does not require factual truth2. Given Goldstein’s First and Second, this definition suggested to me that I didn’t actually know anything, but that I simply had a collection of beliefs.

As I’ve gotten older, I worry less about what I can believe. Goldstein’s Laws, along with the inevitable lessons of experience, have imbued me with a healthy sense of skepticism. But it’s less about the beliefs themselves than it is the process of belief, or rather, the process of examination. In a sense, ideas are like shiny pebbles on the beach: fascinating to pick up, turn this way and that, examine and admire, but, in the end, few are worth keeping.

Skeptic or not, I can still get taken in by the better Internet hoaxes, and I’m not immune to the influence of the siren-song of advertising — but I do feel like I have a foundation I can rely on. I can comfortably consider new ideas without the need to immediately accept or reject them, and it seems that half of gullibility is in the speed of judgment3.

There is also the realization that Truth is a slippery creature, and often somewhat difficult to grasp. While I am convinced that there are, in fact, absolutes, most of the subjects where the issue of truth arises are less well defined. For example, there are hard, physical truths: pure gold melts at 1064.43 °C, an electron has a charge of 1.60217646 × 10⁻¹⁹ coulombs, and the planet earth weighs around 5.972 × 10²⁴ kg4. Similarly, there are defined and a priori truths: the internal angles of a triangle on a plane will add up to 180 degrees, e^(πi) = -1, and there are no prime numbers between 3 and 5. There are unprovable beliefs that are identified as Truths: Science will eventually explain the workings of the universe, or (my) Religion tells people the only right way to live their lives. Then there are the Truths that are considered true by convention or repetition but which have no bearing on reality: the Democrats are on the side of the poor, or the Republicans are fiscally responsible.

But most of the time, when we’re talking about Truth, we’re more concerned about a human dimension, whether it’s a recounting of history (“the compass was invented in China” or “the American Civil War was fought to free the slaves”), a character description (“Kurt Vonnegut was a misanthropist” or “Marie Antoinette was willfully ignorant”), or even self description (“I can’t paint” or “I’ll never understand quantum mechanics”). Many of these Truths are best replied to along the lines of “well, yes, but …” because there is some element that may be true, delivered in a thick coating of supposition. This, of course, is the infamous nuance problem, in that most things are surprisingly complicated, and a simple statement can’t adequately address that complexity.

Rather than argue about the Truths or dissecting the degrees of truth in complex issues, I find that I’m increasingly interested in less tangible things that don’t easily accept labels like “true” or “untrue.” These are matters like wonder, beauty, love, and even melancholy. They are emotional, or induce emotion. Unlike the shiny idea pebbles mentioned earlier, they are more experiential — the ephemeral process of picking them up, turning them about, examining them and admiring them is effectively the same as keeping them.


1 For the sake of completeness, there are two other formal laws:

  1. Not all errors must be corrected and not all insults must be avenged.
  2. But some of them must.

These latter two laws are also often accompanied by a paraphrase of Ecclesiastes 9:11: “the race is not always to the swift nor the battle to the strong.”
2 Her definition, to be fair, is the one more or less accepted by philosophers as far back as Plato. It may date earlier still.
3 The human brain is highly optimized for pattern recognition, and the optimization is biased towards speed over accuracy (although, in general, it’s very good at both). While this touches on a much larger subject, it’s clear that rapid decision-making is key for survival in some contexts, yet it introduces an as-yet unpatched vulnerability in human consciousness — a vulnerability which is systematically exploited in everything from advertising to politics to religion.
4 If these measurements are inexact, it should not invalidate the idea that there are physical truths — the measurement is not the fact. Perhaps this measurement of the earth’s weight is wrong, but the earth still has a weight. In the face of imprecise measurement, we could potentially argue whether or not these things are invariant. Furthermore, solid physical truths can get a little slippery when you get outside of the “classical” range of physics. For example, in Quantum Mechanics, a lot of measurements replace absolutes with probability functions. That, however, is a topic for other discussions.


Fri, 19 Aug 2011

Timelapse Photography and the Evolution of Hardware

— SjG @ 10:29 pm

We’ve had some new hatches in the Butterfly Fort, and there are at least six chrysalids which will be eclosing in the next couple of days1. This reawakens my interest in time-lapse photography.

I used to have a setup with a Harbortronics D2000 which I hooked up to my Nikon Coolpix 950 (and, later, Coolpix 995). It was good for a lot of things, but I succeeded in burning a nice streak across the sensor of the camera when the sun passed directly through the scene — the Coolpix line didn’t have a physical shutter, so the lens focused the sun onto the delicate sensor for the full time it was in view.

I’ve been using the Brinno Gardenwatch Cam that I received as a gift a few years ago. It’s a dedicated, all-in-one time-lapse device. Once I learned a few things, I was able to use it successfully. First off, it really needs to be run on Nickel-metal hydride batteries. Next, you have to listen carefully when turning it on, because it’s not always obvious when you’ve powered it on and then off again by holding down the button a bit too long.

The Gardenwatch Cam does a decent job. It creates AVI format movies. It has 7 interval settings ranging from 1 minute to 24 hours. It has two focus settings, one for close up, and one for landscape.

With the monarchs, though, I want to be able to get in closer, and have sharper images than the Gardenwatch Cam will give me. I still have a Nikon D70 which served me well for many years, but it has since been supplanted by a D90. I also have a small assortment of lenses that I’ve accumulated over the past fifteen years. I’m thinking that the six megapixels of the D70 should be far more than adequate for doing some nice macro time-lapse work.

So the only problem is intervalometry — how do I trigger the camera to take pictures? Nikon sells intervalometers for most cameras, but the D70 is notably excluded from that list. There are a number of people making kits (or generously giving away their designs). I thought I might be able to rig something together.

I was successful. Taking an ancient Gateway Solo 9300 notebook that I’d bought for a king’s ransom back in 1998 or 1999, I installed Ubuntu 11.04 desktop on it. This was a mistake. The machine has a 366MHz Pentium II processor, and 128 MB of memory — lesser specs than your iPhone2. It got part way through the boot sequence, and locked up (terminal swapping? driver issues? I don’t know). I then installed a version of Ubuntu 8 Server for which I happened to have a CD. After installation, I booted into Matrix-esque screen garbage, but after some fighting with boot parameters I was able to get it running cleanly. Next, I installed gPhoto2. Putting the D70 into PTP mode, I hooked it up to the notebook with a USB cable.

Voilà! Now, all it took was a few commands:

Find the camera:

gphoto2 --auto-detect

Store pictures on the camera’s compact flash card:

gphoto2 --set-config capturetarget=1

And take pictures at a 30-second interval:

gphoto2 --capture-image --interval 30
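For an unattended run, the three steps above can be wrapped in one small script. This is a sketch: the interval and frame count are example values (not from my actual setup), and the gphoto2 check just lets the script fall through harmlessly on a machine without it installed.

```shell
#!/bin/sh
# Wrapper for the three gphoto2 steps above. INTERVAL and FRAMES are
# example values: 960 frames at 30 seconds is an eight-hour run.
INTERVAL=30
FRAMES=960
echo "planned run: $((INTERVAL * FRAMES / 3600)) hours"

if command -v gphoto2 >/dev/null 2>&1; then
  gphoto2 --auto-detect                  # is the camera visible over USB?
  gphoto2 --set-config capturetarget=1   # store images on the CF card
  gphoto2 --capture-image --interval "$INTERVAL" --frames "$FRAMES"
fi
```

The --frames cap means a forgotten session stops on its own instead of shooting until the battery dies or the card fills.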

If I get any worthy results, I’ll post ’em on Archie’s Garden.

1 There sure are a lot of great insect words. “Eclose” and “chrysalid” are just two among many.
2 The first two generations of the iPhone had 128 MB of memory, and a 412 MHz ARM processor.


Wed, 17 Aug 2011

Adventures in Geocoding

— SjG @ 5:34 pm

An iPhone locator app I’ve been building had a weird thing happen: if you denied it access to Location Services and entered a valid ZIP code, it would work — but an invalid ZIP code would always home in on Lancaster, Pennsylvania. At first, I thought it was a bug in how I was sending coordinates to MKMapView, but I was quickly able to confirm that the problem originated in my server-side geolocation service.

My server-side geolocation service uses Google’s deprecated Geocoding API version 2. The problem arose from sloppy coding on my side, coupled with Google’s map intelligence, and the v2 API’s reporting. Here’s how:

My code would assemble an address string from street number, city, state, and ZIP code (if provided). In this particular case, however, it was only receiving a ZIP code. But my code was crappy, and the address string it assembled looked like “null, null, null 90066”. If the ZIP code is legit (like 90066), Google’s geolocator is smart enough to figure out that it’s a ZIP code, ignore the “nulls,” and do the right thing. But an interesting thing happens when the ZIP code is not legit. Google’s algorithm evidently tries its best to match up the provided address with something you might be looking for. Perhaps due to previous searches with the API key we were using, perhaps for other unknown reasons, those “nulls” were matched to Noll Airport, East Hempfield, Lancaster, PA 17603.

Interestingly, the Geo Address Accuracy resolution value returned for that specific match is 9, or “Premise level accuracy” — the highest level of accuracy. Again, my crappy code had assumed that the Geo Address Accuracy was a confidence factor, not a resolution indicator. So that guess appears to be a really good fix, when, in fact, it’s no better than any other guess.

Revising the code to leave out the “nulls” produced another interesting result. Again, legitimate ZIP codes were found right off. Bad ZIP codes ended up being matched against other numeric codes, so, for example, “90000” ended up matching Belfort, France, and “91000” ended up matching Tawau, Malaysia. These results all come back with a Geo Address Accuracy of 5, or ZIP-code level.
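The fix amounts to joining only the fields that are actually present. Here’s a sketch of the idea — the function name and the comma-joined format are mine, and the real service isn’t written in shell:

```shell
#!/bin/sh
# Join only non-empty, non-"null" address parts, so a ZIP-only lookup
# sends "90066" to the geocoder instead of "null, null, null 90066".
# Illustrative only; field order and separators are assumptions.
join_address() {
  joined=""
  for part in "$@"; do
    if [ -n "$part" ] && [ "$part" != "null" ]; then
      joined="${joined:+$joined, }$part"
    fi
  done
  echo "$joined"
}

join_address "null" "null" "null" "90066"   # -> 90066
```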

One solution to this problem is to validate ZIP codes before submission to Google. Another solution would be to upgrade to the v3 API, where there’s more information about what’s going on.
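Pre-validation is cheap. A minimal format check for US ZIP codes (five digits, optionally ZIP+4) might look like the following — note it only rejects malformed input, and can’t tell whether a well-formed code is actually assigned:

```shell
#!/bin/sh
# Reject anything that isn't shaped like a US ZIP code before it
# ever reaches the geocoder. Format check only; "90000" would still
# pass even though it isn't a real ZIP.
is_zip() {
  echo "$1" | grep -Eq '^[0-9]{5}(-[0-9]{4})?$'
}

is_zip "90066" && echo "ok"         # prints "ok"
is_zip "9000X" || echo "rejected"   # prints "rejected"
```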


Wed, 10 Aug 2011

pfSense saves the day

— SjG @ 7:48 am

Several years ago, we replaced our commodity hardware firewall (a Sonicwall SOHO from ’01) with pfSense running on an unused Dell 4100 desktop from that same year.

pfSense was a little confusing to configure the first time through (doing 1-to-1 NAT with virtual IPs and CARP was initially confusing, but the pfSense forums and The Google came to our rescue). Once in place, though, it did a great job. And when I say a great job, I mean that we could pretty much forget about its existence. It just hummed away in the background, and everything worked. When we needed to check up on our ISP, we discovered that quality of service logging was already supported, as well as pretty graphs of various connection properties. Very nice!

Over the last weekend, the 4100 locked up, and our connection was interrupted. Rebooting gave a firmware error about a bad disk in drive A: — but there was no disk in the drive. Power cycling, opening the machine, wiggling some cables, and blowing out some dust brought it back up, and all was well. Except it wasn’t, really. The machine spontaneously rebooted a number of times over the next few days, and occasionally got into the “bad disk in drive A:” boot failure, requiring a hard power cycle. As I watched on the console, I saw the kernel fault out after too many memory checksum errors. The old machine was giving up the ghost.

After commissioning another old desktop (an ’07 vintage Dell, this time), I was able to install pfSense on it. I had to disable some of the extraneous hardware in the BIOS, but after about an hour I had it installed, booting, and ready to go. I was able to simply dump the configuration from the old firewall, load it into the new machine, reassign the LAN and WAN interfaces to the proper devices, and swap the boxes out. Voilà! Back in business!

With any luck, I won’t have to repeat this process for another five years.


Thu, 4 Aug 2011

Mac OS Automator for the Win!

— SjG @ 3:59 pm

I’m accustomed to having a hot-key in my text editor for inserting a time-stamp. Now I have a plain-text note-taking application that I want to use for managing my time, but it has no bells or whistles. It doesn’t allow the creation of macros and it doesn’t have a time-stamp function.

All is not lost! Using Automator, I created a service which calls a shell script to generate a nicely formatted time-stamp. I haven’t found a way to assign the service to a hot-key, but in many text input areas, a contextual services menu can be brought up with a simple right-click of the mouse.

Simple, nice, and convenient.

Here’s how to do it:
Fire up the Automator application. Create a new “Service” workflow:
[screenshot: creating a new “Service” workflow]

For the operation, double-click on “Run Shell Script” and set it up as shown in the image below:
[screenshot: the “Run Shell Script” configuration]
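For reference, a one-line script produces stamps in the same format as the ones below; the exact script in the screenshot may differ slightly, but this is the gist:

```shell
#!/bin/sh
# Print a time-stamp like "2011-08-04 16:58:06".
date "+%Y-%m-%d %H:%M:%S"
```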

You’re done! Now you can insert 2011-08-04 16:58:06 time-stamps 2011-08-04 16:58:13 everywhere 2011-08-04 16:58:20.

Note: this is under Snow Leopard / Mac OS X 10.6.8. It probably will work under anything from Leopard onward.