fogbound.net




Tue, 23 Jun 2015

The File Format Future Problem

— SjG @ 4:26 pm

Trying to find inspiration for a current geometric art project, I went to look at some old work I’d done. Well, or I tried to.

See, back in the 1990s, I spent lots and lots of time doing geometric art using my favorite vector graphics package. You know, MacDraw II. And then, oh happy day, I upgraded to MacDraw Pro, and later yet to Claris Draw. I don’t recall whether the file format changed between MacDraw II and MacDraw Pro, but there were definitely changes when it became Claris Draw.

ClarisDraw continued to run on Mac OS X until Apple dropped support for the Classic environment in 10.5.

Now, five years ago, I started looking for a way to use those old files, and discovered Intaglio, which would read them — mostly. Some of the really large files didn’t work so well. Even though I had purchased a license, an upgrade to Intaglio came out that would have required me to buy it again just to get some bug fixes. It didn’t seem like new versions were being released, and support was half-hearted at best [1]. So I gave up.

A year ago, I tried a number of other programs. I needed to convert an old architectural diagram, and found that EazyDraw’s Retro version would read the formats. What’s more, they had a neat “rent-for-nine-months” license for just $20. I bought that, converted a few files, planned to convert all my old backlog, and promptly got distracted and let the nine-month license expire before doing anything more.

So tonight, I bought another nine-month license, and went through and converted a few hundred files.

Opening the old files is not a perfect process. One class of problems is fonts that I no longer have. Another seems to be positioning of elements (particularly in the oldest MacDraw files), which sometimes all pile up in one corner — but the elements are all there, and could be rearranged to restore the original if I wanted to put in the work. In some cases, complex elements (like groups) appear to have acquired a background color. That too is easily remedied.

But what to convert to?

My vector graphics program of choice these days is Affinity Designer, and of course I can’t convert directly to their format. SVG worked well, until I discovered that ClarisDraw layers got ignored and left out of the converted file. Also, if you enable SVG Tidy, enough precision gets stripped from point coordinates that lines can shift around.

In the end, PDF results in the cleanest transfer. It preserves all the geometry and groupings.

But things do get lost. This is probably a function of the lack of a universal standard for vector images. One program may support collecting geometric objects into layers. Another may not. In my conversion process, I’m losing the original layers. The geometry, however, is preserved in a way that would let me move things onto different layers in Affinity Designer, so I’m OK with that.

But this is related to a larger problem. Future-proofing is hard. I say this as a fool with boxes of 5.25″ floppies in a DSDD 40-track hard-sector format readable by only a handful of TRS-80s from thirty-plus years ago. But even if the media were still good, and even if I could find hardware to read it, what then? What would I do with text files created in Electric Pencil? I have to face the fact that the games I wrote in Z-80 assembler are gone, buried in the sands of time. I mean, maybe I could find a way to read the disks (if they’re not completely faded away), and maybe I could find a TRS-80 emulator that supports some of the hacks I did, and maybe it would all work. But the amount of time required would be substantial. And for what? To revisit some nostalgia of my misspent youth?

But some of my old stuff I’d like to keep around. You could argue that I should have been paying attention and migrating data as I go. Guilty as charged. But it’s hard. And stuff inevitably falls through the cracks. For example, I use Apple’s Aperture for organizing and editing my photography (digital workflow, digital asset management, DAM … whatever you want to call it). Originally, I used a clumsy system of directories. Then I graduated to iView MediaPro. It was the same problem as MacDraw/ClarisDraw — the company behind the software shifted priorities, and support lagged. Then it was acquired by Microsoft, and any future Mac support looked questionable [2]. In any case, I went through a painful process to export my edits, labels, keywords, and projects from MediaPro into Aperture. But now Apple has ended development of Aperture. There’s a script to migrate to Adobe’s Lightroom. It’ll bring across my keywords and captions and maybe albums or projects, but it doesn’t preserve the nondestructive edits — and how could it, when there’s no real correspondence between many of the edits available in the two programs?

I have close to 50,000 pictures, all of which are organized into albums and projects, most of which have keywords, maybe a third of which have edits, and a small set of which are organized into books. So I face a monumental task to migrate, and in the process I lose the “nondestructive” nature of my edits. I’ll have to export an edited version and an original if I think I’ll ever want to re-edit an image. Furthermore, my organizational approach will need to be revisited, and some of the work I’ve put into organizing will vanish. So I procrastinate. Aperture’s still working (for the time being). I’ll wait until I’m forced to do something.

This could turn into a rant supporting RMS’s philosophy of using pure Free / Open Source software. But that’s not really the solution either. I could just stop updating my Mac’s software, and I’d be able to continue to run the application as-is. I’d probably want to disconnect it from the Internet, because a lot of software fixes are security-related. It would be inconvenient, but it’d work. Until I had a hardware failure of some kind. These issues are not exclusive to non-Free software. Free software changes and evolves too. I have a backup server dependent on Fedora Core 6 for one of the drivers. It works, but if I want to do any security patches, I’m on my own. With Free software, I’m guaranteed to be able to maintain a working system, but I still have to be willing to do all the work. There’s no panacea.

And on the pedestal these words appear:
‘My name is Ozymandias, King of Kings:
Look on my works, ye mighty, and despair!’
Nothing beside remains.
– Shelley

[1] I’m happy to report that today there seems to be new life in Intaglio. There are posts in their support forum, and new versions have been released.
[2] Now, it looks like MediaPro’s been spun off from Microsoft again.


Fri, 19 Jun 2015

Failed Javascript Experiment

— SjG @ 5:47 pm

I was thinking about textures that are traditionally called Islamic or Moorish tilings.

One simple pattern is built by placing circles on a staggered grid, placing points around their circumferences, and then connecting those points to points on neighboring circles in a pre-defined pattern. Here’s one example:

[Image: detail of the basic pattern]
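
For the curious, the construction boils down to something like this. It’s a rough Python sketch, not my original JavaScript: the grid size, radius, number of points, and the nearest-point connection rule are all placeholder choices, not the exact rule used in the image above.

import math

ROWS, COLS = 6, 8          # grid size (arbitrary)
R = 40                     # circle radius (arbitrary)
POINTS = 12                # points per circumference
SPACING = 2 * R            # grid spacing

def circle_points(cx, cy):
    # evenly spaced points around a circle centered at (cx, cy)
    return [(cx + R * math.cos(2 * math.pi * k / POINTS),
             cy + R * math.sin(2 * math.pi * k / POINTS))
            for k in range(POINTS)]

# staggered grid: every other row is shifted by half the spacing
centers = [((col + (0.5 if row % 2 else 0)) * SPACING, row * SPACING)
           for row in range(ROWS) for col in range(COLS)]

segments = []
for (cx, cy) in centers:
    for (nx, ny) in centers:
        if 0 < math.hypot(nx - cx, ny - cy) <= SPACING * 1.2:   # neighbors only
            for p in circle_points(cx, cy):
                # hypothetical rule: join each point to the nearest
                # point on the neighboring circle
                q = min(circle_points(nx, ny),
                        key=lambda q: math.hypot(q[0] - p[0], q[1] - p[1]))
                segments.append((p, q))

# dump as SVG so the result can be eyeballed in a browser
with open("tiling.svg", "w") as f:
    f.write('<svg xmlns="http://www.w3.org/2000/svg" width="800" height="600">\n')
    for (x1, y1), (x2, y2) in segments:
        f.write(f'<line x1="{x1:.1f}" y1="{y1:.1f}" x2="{x2:.1f}" y2="{y2:.1f}" '
                'stroke="black" stroke-width="0.5"/>\n')
    f.write('</svg>\n')

Varying the radius from row to row is then a one-line change to how R is chosen.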

I was thinking – hey, what if I vary the radius of those circles from row to row?

Three hours of clumsy Javascripting later, I found the answer:

[Image: resulting pattern (click on image to see detail)]

Unfortunately, it often takes hours of coding to learn that an idea’s not much good.

You can play with some of the variables yourself, or (horrors) look at the source code to see how it works.

Updated:
I couldn’t leave well enough alone, and have added a few features. And I’m getting slightly more interesting stuff now.
[Image: updated pattern (click on image to see detail)]


Tue, 16 Jun 2015

Localized images in Silverstripe with Fluent

— SjG @ 10:50 am

Say you’re building a web site using Silverstripe. And say you need to localize it, and you opted to use the fluent add-on. You have a nice normal page set up, along with a slick user-selected side image. But now, just to finish this hypothetical, say you not only need text on the page to be localized, but you need the image to be localizable too (e.g., a different image depending on someone’s country or language).

Here’s what I ended up doing, and what seems to work for Silverstripe 3.1.

My page model looks like this:


class Page extends SiteTree {
   private static $db = array();
   private static $has_one = array(
      'SideImage'=>'Image'
   );

   public function getCMSFields() {
      $this->beforeUpdateCMSFields(function($fields) {
         $fields->addFieldToTab('Root.Images', UploadField::create('SideImage','Image for Right Panel'));
      });
      return parent::getCMSFields();
   }
}

The page template has a simple image inclusion:


<% if $SideImage %>
   $SideImage.setWidth(310)
<% end_if %>

In my _config/config.yml, I then added


Page:
  extensions:
    - 'FluentFilteredExtension'
  translate:
    - 'SideImageID'
    - 'SideImage'

Run http://www.yoursite.com/dev/build to rebuild the database.

Now your Page admin will have an Images tab where you can set the Side Image. If you set the Side Image while in the default locale, that image will show for all locales. But if you use the locale menu item to select a different locale, you can override the Side Image for the page.

Slick!


Fri, 5 Jun 2015

pfSense Can’t See the Outside World

— SjG @ 11:00 am

We had a static IP address change on a network that had been in operation for about six years. Since we have gradually been migrating services off to third-party hosting, we no longer need a block of local static IP addresses. To save some ca$h, we are down to one static IP — but that necessitated getting a new IP.

At midnight, the change occurred.

I went into the pfSense admin, got rid of all my 1-to-1 NAT mappings, virtual IPs, and all the firewall rules that protected the no-longer-extant servers. And I couldn’t see the outside world.

I couldn’t even ping the gateway.

When I plugged a Mac into the same cable, however, and set the network parameters by hand, I had immediate, glorious interweb access everywhere.

It was perplexing. The pfSense firewall was configured exactly the same as the Mac. Why u no work firewall?

After a bunch of nonsense, I found the problem. I’d set the WAN interface to our new IP address, and specified it as a single IPv4. I thought I was setting the netmask correctly for a single IP:

IPv4 WAN Address: xxx.xxx.xxx.xxx/32

It turns out, I needed to shorten that prefix. A /32 means all 32 bits of the address are the network portion, so the subnet contains only my own address and nothing else.

For a single IP address, I used /24 (leaving the entire last byte as the host portion), although /31 should probably work and would lock it to the specific address.
Edit: The key is that the netmask has to leave the gateway in the same subnet as your IP. Doh! You can see I don’t do this kind of stuff often enough to know what I’m talking about.
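
A quick way to sanity-check that, using Python’s standard ipaddress module and made-up addresses (203.0.113.42 for the WAN IP, 203.0.113.1 for the gateway):

# Hypothetical addresses: the gateway must fall inside the WAN interface's subnet.
from ipaddress import ip_address, ip_interface

gateway = ip_address("203.0.113.1")

for prefix in (32, 31, 24):
    wan = ip_interface(f"203.0.113.42/{prefix}")
    print(wan, "contains gateway?", gateway in wan.network)

# /32 -> False (the subnet holds only the WAN address itself)
# /31 -> False for this pair (a /31 only spans .42 and .43)
# /24 -> True  (the whole 203.0.113.0/24 block, gateway included)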


Mon, 25 May 2015

Half-toning and Engraving

— SjG @ 11:40 am

Consider a photograph or painting. We see a range of colors (or shades of gray) reflected back at us that together form an image. But since our eyes and brains are highly optimized for teasing coherent images out of what we see, a great deal of manipulation can be done to those colors and we will still see the original. Artists have exploited this capability for millennia, using areas of light and dark paint to hint at the details which our brains happily provide.

With the introduction of printing, techniques for “half-toning” emerged to convert a continuous-tone image into a two-color image (normally black ink on white paper) that maintains, as much as possible, the appearance of the original. There are many photographic processes for making such conversions. I’ll discuss one simple example here, using digital rather than photographic techniques.

We start with a piece of a photograph. This picture is sepia-toned and not extremely contrasty.
[Image: original photograph]

The first step is to convert the picture to pure gray tones. When we do this, we also adjust the contrast by shifting the tones so that the darkest shade in the image is assigned to pure black, the lightest shade is assigned to pure white, and all other shades are adjusted according to their position between those two extremes.
[Image: grayscale, contrast-stretched version]
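
In code, that first step looks something like this — a Pillow sketch, not the tool actually used for these images, and “original.jpg” is just a placeholder filename:

# Convert to grayscale, then linearly stretch so the darkest pixel
# becomes black and the lightest becomes white.
from PIL import Image

img = Image.open("original.jpg").convert("L")    # "L" = 8-bit grayscale
lo, hi = img.getextrema()                        # darkest and lightest values
stretched = img.point(lambda v: (v - lo) * 255 // max(hi - lo, 1))
stretched.save("gray.png")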

The second step involves superimposing a pattern over the picture. The pattern should be a continuously repeating variation between light and dark. When we superimpose, we multiply the two images, so that the darker of the pattern or the photo dominates at each pixel. In this case, I used simple straight lines, whose luminance cycles as a sine wave between pure white and pure black.

[Image: sine-wave line screen]
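
Here’s a rough sketch of that screen and the multiply step, again with Pillow; the line spacing and orientation here are arbitrary choices, not the exact settings used above:

# Build a line screen whose luminance varies sinusoidally down the image,
# then multiply it with the grayscale photo so the darker value wins.
import math
from PIL import Image, ImageChops

gray = Image.open("gray.png")
w, h = gray.size
period = 8                                       # line spacing in pixels

screen = Image.new("L", (w, h))
screen.putdata([int(127.5 * (1 + math.sin(2 * math.pi * (i // w) / period)))
                for i in range(w * h)])

combined = ImageChops.multiply(gray, screen)     # (a * b) / 255 per pixel
combined.save("screened.png")

Rescaling the screen so it only ranges from mid-gray to white (128–255 instead of 0–255) gives the half-density variant discussed a little further down.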

The remaining step is to look at each pixel and decide whether it should be black or white. We do this by simply comparing it to a threshold: is it lighter than, say, 50%? If so, it’s white. Darker? Then it’s black. But 50% may not be the best place to position our threshold, so we can try various thresholds and compare the results.
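
The thresholding itself is essentially a one-liner per threshold; a sketch of trying a few values:

# Everything lighter than the threshold becomes white, everything darker black.
from PIL import Image

screened = Image.open("screened.png")
for threshold in (96, 128, 160):
    bw = screened.point(lambda v: 255 if v > threshold else 0, mode="1")
    bw.save(f"halftone-{threshold}.png")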

Here are a few observations that may be relevant at this point. At each of these steps, we made decisions that I glossed over. For example, when we adjusted the contrast of the image, we chose a linear conversion. We could, instead, have used different curves to emphasize bright areas, dark areas, or middle tones. We could have used histogram equalization, which adjusts the image so that roughly the same number of pixels falls in each shade used (often used to bring out details).

Similarly, our overlay pattern needn’t go from pure black to pure white; by changing the range of the overlay pattern, we are doing the equivalent of adjusting the tonal curves of the original image. We can also strongly influence how the final output looks. With a pattern that includes shades darker than our threshold, we will end up with the pattern throughout the final image (as in this case, where the final image has lines across all parts of it). With a pattern of only half maximum density, the lighter areas will not show the pattern:
[Image: result with a half-density screen]

The overlay pattern can be many shapes other than lines (like concentric circles), and there can even be multiple overlays. Traditional newspaper half-toning uses two linear patterns like the one we used, but set at an angle with respect to one another, thereby creating diamond patterns. Newspapers chose this diamond pattern because the size of the pattern relative to the detail in the image determines how much detail winds up in the final image.

I tried to use the above techniques for generating 3D halftones or etchings. While it’s probably a project best suited for use with a laser cutter, I don’t have a laser cutter. I do, however, have a Nomad CNC router!

I wrote a short script that analyzes an image file, and converts it into a set of 3D ridges. My first approach looked at the image row by row, and created a groove with a thickness inversely proportional to the luminosity of the pixels in the row.

[Images: rendered result and detail (click to view)]
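
A sketch of that row-by-row idea follows. It is not the original script, and the row pitch and maximum groove width are made-up numbers:

# Each image row becomes one groove whose width at every point along the row
# is proportional to how dark the pixel is there (darker pixel -> wider cut).
from PIL import Image

ROW_PITCH_MM = 2.0        # assumed spacing between groove centerlines
MAX_WIDTH_MM = 1.8        # assumed widest groove

img = Image.open("gray.png")
w, h = img.size
px = img.load()

grooves = []
for row in range(h):
    y = row * ROW_PITCH_MM
    # width profile along this row: 255 (white) -> 0, 0 (black) -> MAX_WIDTH_MM
    widths = [(255 - px[x, row]) / 255 * MAX_WIDTH_MM for x in range(w)]
    grooves.append((y, widths))

# 'grooves' would then feed a toolpath generator; here we just report extremes.
for y, widths in grooves[:5]:
    print(f"y={y:.1f} mm  widest cut {max(widths):.2f} mm  narrowest {min(widths):.2f} mm")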

This worked well in theory, but neglected to take into account some limits of my machine: the work area is 20cm x 20cm, and the smallest end-mill (cutting bit) I have is 1mm in diameter. That functionally limits my smallest detail to somewhere around 1.05mm. Add the fact that the wood stock I had on hand was around 8cm on its narrow dimension, and the result was an image I couldn’t carve.

My next algorithm analyzes three rows of the image at a time. As it steps along the rows, it uses the average of the three pixels at each column (call them a, b, and c, where a is the top row). If the combined density is greater than 50%, a 1mm ridge is created. The ridge is thickened on the top by the average density of a and b, and thickened on the bottom by the average density of b and c.

[Images: rendered result and detail (click to view)]
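
Sketched out, the three-row variant looks something like this. Again, this is not the original script, and the amount of extra thickening per side is a made-up parameter:

# Rows are taken three at a time (a = top, b = middle, c = bottom).  Where the
# average density of a, b, c exceeds 50%, a 1mm base ridge is kept, widened on
# top by the a/b density and on the bottom by the b/c density.
from PIL import Image

BASE_RIDGE_MM = 1.0
EXTRA_MM = 0.5            # assumed maximum extra thickening per side

def density(v):
    # 0.0 = pure white, 1.0 = pure black
    return (255 - v) / 255

img = Image.open("gray.png")
w, h = img.size
px = img.load()

ridges = []
for row in range(0, h - 2, 3):
    profile = []
    for x in range(w):
        a, b, c = (density(px[x, row + i]) for i in range(3))
        if (a + b + c) / 3 > 0.5:
            top = ((a + b) / 2) * EXTRA_MM
            bottom = ((b + c) / 2) * EXTRA_MM
            profile.append((x, BASE_RIDGE_MM + top + bottom))
        else:
            profile.append((x, 0.0))          # no ridge at this column
    ridges.append(profile)
# 'ridges' would then be turned into toolpaths for the router.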

This algorithm produces something that’s within the resolution I can carve, but it loses an enormous amount of detail. Furthermore, it requires harder wood than the birch plywood I tested on. I did some minor tweaking of the threshold, and here’s what I got:

[Photo: test carving in birch plywood]

So at this point, I have a set of 0.5mm cutters on order, and need to track down some good hardwood stock to try carving. As always, details will be posted here if notable.