fogbound.net




Fri, 19 Jun 2015

Failed Javascript Experiment

— SjG @ 5:47 pm

I was thinking about the patterns traditionally called Islamic or Moorish tilings.

One simple pattern is built by placing circles on a staggered grid, placing points around their circumferences, and then connecting points to neighboring circles in a pre-defined pattern. Here’s one example:

[Image: basic pattern example]
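
For the curious, the construction is easy to sketch in JavaScript on a canvas. This isn't my actual script (the source-code link below has that); the spacing, radius, point count, and connection rule here are arbitrary stand-ins:

// Minimal sketch: circles on a staggered grid, points on each circumference,
// each point connected to the matching point on the right-hand neighbor.
// All parameters (spacing, radius, point count, connection rule) are
// arbitrary stand-ins, not the values behind the image above.
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 400;
document.body.appendChild(canvas);
const ctx = canvas.getContext('2d');

const spacing = 50;    // distance between circle centers
const pointCount = 8;  // points per circle

function circlePoints(cx, cy, r) {
  const pts = [];
  for (let i = 0; i < pointCount; i++) {
    const a = (2 * Math.PI * i) / pointCount;
    pts.push([cx + r * Math.cos(a), cy + r * Math.sin(a)]);
  }
  return pts;
}

ctx.beginPath();
for (let row = 0; row * spacing <= canvas.height; row++) {
  const radius = 20;                        // the experiment: vary this per row
  const offset = (row % 2) * spacing / 2;   // stagger alternate rows
  for (let col = -1; col * spacing <= canvas.width; col++) {
    const pts = circlePoints(col * spacing + offset, row * spacing, radius);
    const right = circlePoints((col + 1) * spacing + offset, row * spacing, radius);
    for (let i = 0; i < pointCount; i++) {
      ctx.moveTo(pts[i][0], pts[i][1]);
      ctx.lineTo(right[i][0], right[i][1]);
    }
  }
}
ctx.stroke();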

I was thinking – hey, what if I vary the radius of those circles from row to row?

Three hours of clumsy Javascripting later, I found the answer:

[Image: resulting pattern (click on image to see detail)]

Unfortunately, it often takes hours of coding to learn that an idea’s not much good.

You can play with some of the variables yourself, or (horrors) look at the source code to see how it works.

Updated:
I couldn’t leave well enough alone, and have added a few features. And I’m getting slightly more interesting stuff now.
[Image: updated pattern (click on image to see detail)]


Tue, 16 Jun 2015

Localized images in Silverstripe with Fluent

— SjG @ 10:50 am

Say you’re building a web site using Silverstripe. And say you need to localize it, and you opted to use the fluent add-on. You have a nice normal page set up, along with a slick user-selected side image. But now, just to finish this hypothetical, say you not only need text on the page to be localized, but you need the image to be localizable too (e.g., a different image depending on someone’s country or language).

Here’s what I ended up doing, and what seems to work for Silverstripe 3.1.

My page model looks like this:


class Page extends SiteTree {
   private static $db = array();

   private static $has_one = array(
      'SideImage' => 'Image'
   );

   public function getCMSFields() {
      $this->beforeUpdateCMSFields(function($fields) {
         // Add the upload field on its own "Images" tab in the CMS.
         $fields->addFieldToTab('Root.Images', UploadField::create('SideImage', 'Image for Right Panel'));
      });
      return parent::getCMSFields();
   }
}

The page template has a simple image inclusion:


<% if $SideImage %>
   $SideImage.setWidth(310)
<% end_if %>

In my _config/config.yml, I then added


Page:
  extensions:
    - 'FluentFilteredExtension'
  translate:
    - 'SideImageID'
    - 'SideImage'

Run http://www.yoursite.com/dev/build

Now your Page admin will have an Images tab where you can set the Side Image. If you set the Side Image while in the default locale, that image will show for all locales. But if you use the locale menu item to select a different locale, you can override the Side Image for the page.

Slick!


Fri, 5 Jun 2015

pfSense Can’t See the Outside World

— SjG @ 11:00 am

We had a static IP address change on a network that had been in operation for about six years. Since we have gradually been migrating services off to third-party hosting, we no longer need a block of local static IP addresses. To save some ca$h, we dropped down to a single static IP — which necessitated getting a new address.

At midnight, the change occurred.

I went into the pfSense admin, got rid of all my 1-to-1 NAT mappings, virtual IPs, and all the firewall rules that protected the no-longer-extant servers. And I couldn’t see the outside world.

I couldn’t even ping the gateway.

When I plugged a Mac into the same cable and set the network parameters by hand, however, I had immediate, glorious interweb access everywhere.

It was perplexing. The pfSense firewall was configured exactly the same as the Mac. Why u no work firewall?

After a bunch of nonsense, I found the problem. I’d set the WAN interface to our new IP address, and specified it as a single IPv4. I thought I was setting the netmask correctly for a single IP:

IPv4 WAN Address: xxx.xxx.xxx.xxx/32

It turns out I needed to reduce that netmask. A /32 means that *all* 32 bits of the address are the network portion: there's no room left in the subnet for any other host (like the gateway).

For a single IP address, I used /24 (leaving the entire last byte as the host portion), although /31 should probably work too, and would lock things down to just two addresses.

Edit: The key is that the netmask has to leave the gateway in the same subnet as your IP. Doh! You can see I don't do this kind of stuff enough to know what I'm talking about.
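
The subnet math is easy to script if you want to sanity-check a netmask choice. A quick illustrative JavaScript snippet (the addresses are placeholders, not our real ones):

// Check whether a gateway falls inside the subnet implied by ip/prefix.
function toInt(ip) {
  return ip.split('.').reduce((n, octet) => (n << 8) + Number(octet), 0) >>> 0;
}

function sameSubnet(ip, gateway, prefix) {
  const mask = prefix === 0 ? 0 : (0xFFFFFFFF << (32 - prefix)) >>> 0;
  return (toInt(ip) & mask) === (toInt(gateway) & mask);
}

console.log(sameSubnet('203.0.113.42', '203.0.113.1', 24)); // true
console.log(sameSubnet('203.0.113.42', '203.0.113.1', 32)); // false: /32 leaves no room for a gateway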


Mon, 25 May 2015

Half-toning and Engraving

— SjG @ 11:40 am

Consider a photograph or painting. We see a range of reflected colors (or shades of gray) that form an image. But since our eyes and brains are highly optimized for teasing coherent images out of what we see, a great deal of manipulation can be done to those colors and we will still see the original. Artists have exploited this capability for millennia, using areas of light and dark paint to hint at details which our brains happily provide.

With the introduction of printing, techniques for "half-toning" emerged to convert a continuous-tone image into a two-color (normally black ink and white paper) image that preserves, as much as possible, the appearance of the original. There are many photographic processes for making such conversions. I'll discuss one simple example here, using digital instead of photographic techniques.

We start with a piece of a photograph. This picture is sepia-toned and not extremely contrasty.
[Image: original photograph]

The first step is to convert the picture to pure gray tones. When we do this, we also adjust the contrast by shifting the tones so that the darkest shade in the image is assigned to pure black, the lightest shade is assigned to pure white, and all other shades are adjusted according to their position between the two.
[Image: grayscale conversion]
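
As a concrete illustration, here's how that step might look in JavaScript against canvas ImageData. It's not the actual code behind these images, and the Rec. 601 luminance weights are just one common choice:

// Sketch: convert canvas image data to gray, then stretch contrast so the
// darkest pixel maps to black and the lightest to white (linear rescale).
function grayStretch(imageData) {
  const d = imageData.data;
  const lum = new Float32Array(d.length / 4);
  let min = 255, max = 0;
  // First pass: compute luminance per pixel and find its range.
  for (let i = 0; i < d.length; i += 4) {
    const y = 0.299 * d[i] + 0.587 * d[i + 1] + 0.114 * d[i + 2];
    lum[i / 4] = y;
    if (y < min) min = y;
    if (y > max) max = y;
  }
  // Second pass: rescale linearly onto [0, 255].
  const scale = max > min ? 255 / (max - min) : 1;
  for (let i = 0; i < d.length; i += 4) {
    const y = (lum[i / 4] - min) * scale;
    d[i] = d[i + 1] = d[i + 2] = y;
  }
  return imageData;
}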

The second step involves superimposing a pattern over the picture. The pattern should be a continuous, repeating variation between light and dark; when we superimpose, we multiply the two images, so that the darker of the pattern or the photo dominates. In this case, I used simple straight lines, whose luminance follows a sine wave cycling between pure white and pure black.

[Image: sine-wave line screen]
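
A sketch of the multiply step, again not the original code; the eight-pixel line period is an arbitrary choice:

// Sketch: multiply the grayscale image by a horizontal-line screen whose
// luminance is a sine wave cycling once per `period` rows.
function applyScreen(imageData, period = 8) {
  const { data, width, height } = imageData;
  for (let y = 0; y < height; y++) {
    // Sine wave between 0 (black) and 1 (white).
    const s = 0.5 + 0.5 * Math.sin((2 * Math.PI * y) / period);
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      // Multiplying normalized values keeps the darker of the two dominant.
      const v = (data[i] / 255) * s * 255;
      data[i] = data[i + 1] = data[i + 2] = v;
    }
  }
  return imageData;
}

Squeezing s into a narrower range (say 0.5 to 1.0) gives the half-density variant discussed further down.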

The remaining step is to look at each pixel and decide whether it should be black or white. We do this by simply comparing it to a threshold. Is it lighter than, say, 50%? If so, it's white. Darker? Then it's black. But 50% may not be the best place to position our threshold, so we can try various thresholds and compare the results.
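
In code, this step is tiny. A sketch against the same canvas image data, with 128 standing in for the 50% cutoff:

// Sketch: binarize the screened image against a threshold (0-255).
function threshold(imageData, cutoff = 128) {
  const d = imageData.data;
  for (let i = 0; i < d.length; i += 4) {
    const v = d[i] >= cutoff ? 255 : 0;  // lighter than cutoff -> white
    d[i] = d[i + 1] = d[i + 2] = v;
  }
  return imageData;
}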

Here are a few observations that may be relevant at this point. At each of these steps, we made decisions which I glossed over. For example, when we adjusted the contrast of this image, we chose a linear conversion. We could, instead, have used different curves to emphasize bright areas, dark areas, or middle tones. We could have used histogram equalization, which adjusts the image so that roughly the same number of pixels falls in each shade used (often used to bring out details).

Similarly, our overlay pattern needn't go from pure black to pure white; by changing the range of the overlay pattern, we do the equivalent of adjusting the tonal curves of the original image. The pattern also has a strong influence on how the final output looks. With a pattern that includes shades darker than our threshold, we end up with the pattern throughout the final image (as in our case, where the final image has lines across all parts of it). With a pattern of only half the maximum density, the lighter areas will not show the pattern:
[Image: half-density screen result]

The overlay pattern can be many shapes other than lines (like concentric circles), and there can even be multiple overlays. Traditional newspaper half-toning uses two linear patterns like the one we used, but set at an angle with respect to one another, thereby creating diamond patterns. Newspapers chose this diamond pattern because the size of the pattern relative to the detail in the image determines how much detail winds up in the final image.

I tried to use the above techniques for generating 3D halftones or etchings. While it’s probably a project best suited for use with a laser cutter, I don’t have a laser cutter. I do, however, have a Nomad CNC router!

I wrote a short script that analyzes an image file and converts it into a set of 3D ridges. My first approach looked at the image row by row, and created a groove whose thickness was inversely proportional to the luminosity of the pixels in the row.
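
The script itself isn't posted here, but the first approach amounts to something like this sketch (JavaScript purely for illustration; the maximum groove width is a placeholder, and turning the widths into actual 3D geometry is omitted):

// Sketch of the first approach: one groove per image row, where the groove
// width at each column grows as the pixels get darker. `gray` is a flat
// array of 0-255 luminance values; maxGroove (in mm) is a placeholder.
function rowGrooves(gray, width, height, maxGroove) {
  const grooves = [];
  for (let y = 0; y < height; y++) {
    const widths = [];
    for (let x = 0; x < width; x++) {
      const lum = gray[y * width + x] / 255; // 0 = black, 1 = white
      widths.push((1 - lum) * maxGroove);    // darker pixel -> wider groove
    }
    grooves.push(widths);
  }
  return grooves; // per-row groove widths, to be turned into 3D geometry
}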

[Images: result and detail renderings (click to view)]

This worked well in theory, but neglected some limits of my machine: the work area is 20cm x 20cm, and the smallest end-mill (cutting bit) I have is 1mm in diameter. That functionally limits my smallest detail to somewhere around 1.05mm. Add the fact that the wood stock I had on hand was around 8cm on its narrow dimension, and the result was an image I couldn't carve.

My next algorithm analyzes three rows of the image at a time. As it steps along the rows, it uses the average of the three pixels at each column (call them a, b, and c, where a is in the top row). If the combined density is greater than 50%, a 1mm ridge is created. The ridge is thickened on the top by the average density of a and b, and on the bottom by the average density of b and c.
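
A sketch of that second approach, under the same caveats as before (a, b, c and the 50% cutoff come straight from the description above; the output format is an illustrative guess):

// Sketch of the second approach: step through the image three rows at a
// time; a, b, c are the densities in the top, middle, and bottom rows at
// the current column.
function threeRowRidges(gray, width, height) {
  const ridges = [];
  for (let y = 0; y + 2 < height; y += 3) {
    const ridge = [];
    for (let x = 0; x < width; x++) {
      // Densities: 0 = white, 1 = black.
      const a = 1 - gray[y * width + x] / 255;
      const b = 1 - gray[(y + 1) * width + x] / 255;
      const c = 1 - gray[(y + 2) * width + x] / 255;
      if ((a + b + c) / 3 > 0.5) {
        // Base 1mm ridge, thickened toward the top and bottom rows.
        ridge.push({ x, base: 1.0, top: (a + b) / 2, bottom: (b + c) / 2 });
      } else {
        ridge.push(null); // no ridge at this column
      }
    }
    ridges.push(ridge);
  }
  return ridges;
}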

[Images: result and detail renderings (click to view)]

This algorithm provides something that’s within the resolution I can carve, but loses an enormous amount of detail. Furthermore, it requires harder wood than the birch plywood I tested on. I did some minor tweaking of the threshold, and here’s what I got:

[Photo: test carving in birch plywood]

So at this point, I have a set of 0.5mm cutters on order, and need to track down some good hardwood stock to try carving. As always, details will be posted here if notable.


Sun, 8 Feb 2015

Further Adventures in Makering

— SjG @ 6:16 pm

When I last posted, I was playing with plastic and metal prints from Shapeways.com. So here’s an experience printing in a different material — gold-plated brass. As with last time, I used MoI modeler. I wanted to make a butterfly pendant for my wife for our anniversary. My initial try used an actual photograph of a monarch butterfly that I had taken in our garden. I traced the outlines, and created extruded forms for the wings. I then created the shape of the body, and a ring for the necklace chain to pass through.

When I uploaded it to Shapeways, however, I immediately ran into printability problems. I had misunderstood the limitations for different materials. When they say that the minimum dimension for an unsupported wire is 1mm in a given material, I had for some reason interpreted that as a minimum cross-section of 1mm². So, for example, I thought I could get away with a 0.5mm width if the thickness was 2mm. This interpretation was not based on their actual requirements. So it was back to the drawing board.

It turned out that using the actual dimensions was not going to be a successful approach. In the end, I bit the bullet and did a complete redesign inspired by — but not matching — an actual butterfly. Since I was now dispensing with reality, I chose to make it more dramatic by having the wings all spread apart rather than overlapping as monarchs actually hold theirs. The design that would be printable looks like this:
[Image: final printable design]

I ordered the print three weeks before our anniversary, since the estimated turn-around was 10 days. In the end, it shipped five days before our anniversary, but didn't actually arrive until the day after. Bummer. Then, while it was en route, Shapeways sent out an email extolling their new approach to precious-metal-plated items. Same price, better results. Well, thanks, guys. I know there has to be a transition between approaches at some point, but I wish it hadn't happened between the time I placed my order and received my print.

Still, as even this shaky iPhone picture shows, the final result was OK.

[Photo: the finished pendant]

(click on images for larger versions)

Coming up soon: some posts on a completely different path I’ve taken to 3D production. Here’s a hint.