Consider a photograph or painting. We see a range of reflected colors (or shades of gray) that form an image. But since our eyes and brain are highly optimized for teasing coherent images out of what we see, a great deal of manipulation can be done to those colors and we will still see the original. Artists have exploited this capability for millennia, using areas of light and dark paint to hint at the details that our brains happily provide.
With the introduction of printing, techniques for “half-toning” emerged to convert a continuous-tone image into a two-color image (normally black ink on white paper) that preserved, as much as possible, the appearance of the original. There are many photographic processes for making such conversions. I’ll discuss one simple example here, using digital instead of photographic techniques.
We start with a piece of a photograph. This picture is sepia toned and not very contrasty.
The first step is to convert the picture to purely gray tones. When we do this, we also adjust the contrast by shifting the tones so that the darkest shade in the image is assigned to pure black and the lightest shade to pure white, with all other shades adjusted according to their position between the two.
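This linear contrast stretch is easy to sketch in code. Here’s a minimal version, assuming gray values as plain numbers (the function name and list representation are mine, not from any particular imaging library):

```python
def stretch_contrast(pixels):
    """Linearly remap gray values so the darkest becomes 0.0 (black)
    and the lightest becomes 1.0 (white)."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: nothing to stretch
        return [0.0 for _ in pixels]
    return [(p - lo) / (hi - lo) for p in pixels]

print(stretch_contrast([64, 128, 192]))  # -> [0.0, 0.5, 1.0]
```

Every other shade lands proportionally between the extremes, which is exactly the linear conversion described above.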
The second step involves superimposing a pattern over the picture. The pattern should be a continuous repeating variation between light and dark — when we superimpose, we multiply the two images, so that the darker of the pattern or the photo dominates. In this case, I used simple straight lines, made with a sine-wave luminance cycle between pure white and pure black.
The remaining step is to look at each pixel, and decide whether it should be black or white. We do this by simply comparing it to a threshold. Is it lighter than, say, 50%? If so, then it’s white. Darker? Then it’s black. But 50% may not be the best place to position our threshold. We can try various thresholds to see how it comes out:
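The decision step is a one-liner; here it is sketched as a function so the threshold can be varied, with 0.5 matching the 50% starting point above (ties go to black, one of the glossed-over choices):

```python
def threshold(pixels, cutoff=0.5):
    """Each 0.0-1.0 gray pixel becomes white (1.0) if lighter
    than the cutoff, black (0.0) otherwise."""
    return [1.0 if p > cutoff else 0.0 for p in pixels]

print(threshold([0.2, 0.5, 0.9]))       # -> [0.0, 0.0, 1.0]
print(threshold([0.2, 0.5, 0.9], 0.4))  # -> [0.0, 1.0, 1.0]
```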
Here are a few observations that may be relevant at this point. At each of these steps, we made decisions that I glossed over. For example, when we adjusted the contrast of this image, we chose a linear conversion. We could, instead, have used different curves to emphasize bright areas, dark areas, or middle tones. We could have used histogram equalization, which adjusts the image so that there are roughly the same number of pixels at each shade used (often done to bring out details).
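For the curious, histogram equalization can be sketched in a few lines too. This minimal version (my own, for 8-bit gray values) remaps each level to its cumulative frequency, which spreads the used shades out roughly evenly:

```python
def equalize(pixels, levels=256):
    """Remap integer gray values (0..levels-1) by their cumulative
    frequency, flattening the histogram of used shades."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0          # cumulative distribution of shades
    for count in hist:
        total += count
        cdf.append(total)
    n = len(pixels)
    return [round((cdf[p] / n) * (levels - 1)) for p in pixels]
```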
Similarly, our overlay pattern needn’t go from pure black to pure white; by changing the range of the overlay pattern we are doing the equivalent of adjusting the tonal curves of the original image. This also gives us a strong influence on how the final output looks. With a pattern that includes shades darker than our threshold, the pattern will appear throughout the final image (as in this case, where our final image has lines across all parts of it). With a pattern of only half the maximum density, the lighter areas will not show the pattern:
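Compressing the pattern’s range is a simple remapping. As a sketch (names mine): with max_density=0.5, the pattern’s darkest value becomes 0.5 gray instead of black, so it can’t by itself push a light area below a 50% threshold:

```python
def limit_density(pattern_value, max_density=0.5):
    """Rescale a 0.0-1.0 pattern value so pure black (0.0) is lifted
    to (1 - max_density) while pure white (1.0) stays white."""
    return (1.0 - max_density) + max_density * pattern_value
```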
The overlay pattern can be many shapes other than lines (like concentric circles), and there can even be multiple overlays. Traditional newspaper half-toning uses two linear patterns like the one we used, but set at an angle with respect to one another, thereby creating diamond patterns. Newspapers chose this diamond pattern because the size of the pattern relative to the detail in the image determines how much detail winds up in the final image.
I tried to use the above techniques for generating 3D halftones or etchings. While it’s probably a project best suited for use with a laser cutter, I don’t have a laser cutter. I do, however, have a Nomad CNC router!
I wrote a short script that analyzes an image file, and converts it into a set of 3D ridges. My first approach looked at the image row by row, and created a groove with a thickness inversely proportional to the luminosity of the pixels in the row.
[Images: result and detail]
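The core of that first approach can be sketched as follows — this is my reconstruction of the idea, not the original script, assuming 0.0–1.0 gray values and a maximum groove width in millimeters:

```python
def groove_widths(row, max_width=1.0):
    """Per-pixel cut width for one image row: pure black (0.0) gets
    the full max_width, pure white (1.0) gets no cut at all."""
    return [(1.0 - p) * max_width for p in row]
```

Each row of widths then drives one pass of the cutter, with the groove swelling wherever the row is dark.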
This worked well in theory, but neglected some limits of my machine: the work area is 20cm x 20cm, and the smallest end-mill (cutting bit) I have is 1mm in diameter, which functionally limits my smallest detail to somewhere around 1.05mm. Add the fact that the wood stock I had on hand was around 8cm on its narrow dimension, and the result is an image that I can’t carve.
My next algorithm analyzes three rows of the image at a time. As it steps along the rows, it uses the average of the three pixels in each column (call them a, b, and c, where a is the top row). If the combined density is greater than 50%, a 1mm ridge is created. The ridge is thickened on the top by the average density of a and b, and on the bottom by the average density of b and c.
[Images: result and detail]
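Here is a sketch of that three-row pass, under the assumption that “density” means darkness (1.0 minus luminance on a 0.0–1.0 scale); the output format and names are illustrative rather than the original script:

```python
def ridge_profile(row_a, row_b, row_c, base_mm=1.0):
    """For each column, return (top_extra, bottom_extra) thickening in mm
    around a 1 mm base ridge, or None where no ridge is cut."""
    profile = []
    for a, b, c in zip(row_a, row_b, row_c):
        da, db, dc = 1.0 - a, 1.0 - b, 1.0 - c      # per-pixel densities
        if (da + db + dc) / 3 > 0.5:                # combined density > 50%
            profile.append(((da + db) / 2 * base_mm,  # thicken toward row a
                            (db + dc) / 2 * base_mm)) # thicken toward row c
        else:
            profile.append(None)
    return profile
```

A fully black column yields the widest ridge; a mostly white one yields no ridge at all, which is why so much fine detail falls away.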
This algorithm provides something that’s within the resolution I can carve, but loses an enormous amount of detail. Furthermore, it requires harder wood than the birch plywood I tested on. I did some minor tweaking of the threshold, and here’s what I got:
So at this point, I have a set of 0.5mm cutters on order, and need to track down some good hardwood stock to try carving. As always, details will be posted here if notable.