fogbound.net




Mon, 29 Jun 2015

Failed Javascript Experiment, cont.

— SjG @ 4:38 pm

I got a few questions about that Javascript experiment, so I thought I'd add some explanation. In the process, I added some features to it.

So what’re all those variables, I was asked. And what do they mean?

To start with, this whole experiment was based on a traditional Islamic tiling pattern. I don’t know the actual origin, but I’m sure it goes way back.

So, step 1, you tile a plane with equilateral triangles. The number of triangles that fit across our page is what I called “Spacing.”

exp-step1

Next, you draw a circle at each intersection of lines. The size of the circle is “Radius” — in this model, a radius of 100% means that the neighboring circles touch.

exp-step2
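In code, steps 1 and 2 might look something like this. This is a rough sketch of the idea, not the experiment's actual source: the function names and the exact grid math are mine.

```javascript
// Sketch: compute circle centers for an equilateral-triangle tiling of a
// width x height canvas. "spacing" is the number of triangles that fit
// across the page, so the horizontal step between centers is width/spacing;
// rows sit sqrt(3)/2 of that step apart, and every other row is staggered
// by half a step.
function triangleGridCenters(width, height, spacing) {
  const dx = width / spacing;          // horizontal distance between centers
  const dy = dx * Math.sqrt(3) / 2;    // vertical distance between rows
  const centers = [];
  for (let row = 0; row * dy <= height; row++) {
    const offset = (row % 2) ? dx / 2 : 0;  // stagger odd rows
    for (let x = offset; x <= width; x += dx) {
      centers.push({ x: x, y: row * dy });
    }
  }
  return centers;
}

// With a Radius of 100%, each circle's radius is half the horizontal
// spacing, so neighboring circles just touch.
function circleRadius(width, spacing, radiusPercent) {
  return (width / spacing) / 2 * (radiusPercent / 100);
}
```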

Last, you divide each circle into twelve equal pieces. Then, according to some predetermined pattern, you connect the points on neighboring circles. Here’s a traditional pattern:

exper-step3
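Step 3 can be sketched the same way: place twelve evenly spaced points on each circle, then treat a connection pattern as a list of index pairs. Again, this is illustrative rather than the experiment's source, and the pattern below is a made-up example, not the traditional one pictured above.

```javascript
// Place `count` evenly spaced points around a circle (30-degree steps
// when count is 12).
function circlePoints(cx, cy, radius, count) {
  const pts = [];
  for (let i = 0; i < count; i++) {
    const angle = (2 * Math.PI * i) / count;
    pts.push({ x: cx + radius * Math.cos(angle),
               y: cy + radius * Math.sin(angle) });
  }
  return pts;
}

// A connection pattern is just a list of [from, to] point indices;
// drawing code would look up each pair's coordinates and draw a line.
// This particular pattern is arbitrary, for illustration only.
const examplePattern = [[0, 5], [1, 6], [2, 7], [3, 8]];

function patternSegments(points, pattern) {
  return pattern.map(([a, b]) => [points[a], points[b]]);
}
```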

The only other variables of interest are “Column Radius Growth” and “Row Radius Growth,” the percentages by which the radius of each circle changes with its column or row number (counting from the upper left-hand corner).
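One plausible reading of those growth variables, as a sketch (this is my reconstruction, not the experiment's code — it assumes the percentage compounds per column and per row, which the actual script may or may not do):

```javascript
// Scale a base radius by a per-column and per-row growth percentage,
// compounding from the upper left-hand corner (col 0, row 0).
// Assumption: growth compounds multiplicatively per step.
function grownRadius(baseRadius, col, row, colGrowthPct, rowGrowthPct) {
  const colScale = Math.pow(1 + colGrowthPct / 100, col);
  const rowScale = Math.pow(1 + rowGrowthPct / 100, row);
  return baseRadius * colScale * rowScale;
}
```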

This time around, I’ve added a few different connection patterns.


Fri, 19 Jun 2015

Failed Javascript Experiment

— SjG @ 5:47 pm

I was thinking about textures that are traditionally called Islamic or Moorish tilings.

One simple pattern is built by placing circles on a staggered grid, placing points around their circumferences, and then connecting points to neighboring circles in a pre-defined pattern. Here’s one example:

basic-detail

I was thinking – hey, what if I vary the radius of those circles from row to row?

Three hours of clumsy Javascripting later, I found the answer:

pattern (click on image to see detail)

Unfortunately, it often takes hours of coding to learn that an idea’s not much good.

You can play with some of the variables yourself, or (horrors) look at the source code to see how it works.

Updated:
I couldn’t leave well enough alone, and have added a few features. And I’m getting slightly more interesting stuff now.
pattern2 (click on image to see detail)


Mon, 25 May 2015

Half-toning and Engraving

— SjG @ 11:40 am

Consider a photograph or painting. We see a range of reflected colors (or shades of gray) that form an image. But since our eyes and brains are highly optimized for teasing coherent images out of what we see, a great deal of manipulation can be done to those colors and we will still see the original. Artists have exploited this capability for millennia, using areas of light and dark paint to hint at the details which our brains happily provide.

With the introduction of printing, techniques for “half-toning” emerged to convert a continuous-tone image into a two-color (normally black ink and white paper) image that preserves, as much as possible, the appearance of the original. There are many photographic processes for making such conversions. I’ll discuss one simple example here, using digital instead of photographic techniques.

We start with a piece of a photograph. This picture is sepia-toned and not extremely contrasty.
original

The first step is to convert the picture to purely gray tones. When we do this, we also adjust the contrast by shifting the tones so that the darkest shade in the image is assigned to pure black, the lightest shade is assigned to pure white, and all other shades are adjusted according to their position between the two.
gray
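That contrast stretch is simple enough to sketch in a few lines of Javascript. This is a minimal illustration of the idea (operating on a flat array of gray values), not the actual tool I used:

```javascript
// Linearly stretch gray values (0..255) so the darkest pixel becomes
// pure black (0) and the lightest becomes pure white (255).
function stretchContrast(pixels) {
  const min = Math.min(...pixels);
  const max = Math.max(...pixels);
  if (max === min) return pixels.map(() => 0);  // flat image: nothing to stretch
  return pixels.map(p => Math.round((p - min) * 255 / (max - min)));
}
```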

The second step involves superimposing a pattern over the picture. The pattern should be a continuous, repeating variation between light and dark. When we superimpose, we multiply the two images, so that the darker of the pattern or the photo dominates. In this case, I used simple straight lines whose luminance follows a sine-wave cycle between pure white and pure black.

screen
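In code, the line screen and the multiply blend might look like this (a sketch of the idea, with a `period` parameter I've introduced for illustration):

```javascript
// Luminance of a horizontal-line screen at row y: a sine wave that
// cycles between pure black (0) and pure white (255) once per `period`
// rows.
function lineScreen(y, period) {
  const s = (Math.sin(2 * Math.PI * y / period) + 1) / 2;  // 0..1
  return Math.round(s * 255);
}

// Multiply blend: normalize both values to 0..1, multiply, and scale
// back. The result is never lighter than either input, so the darker
// of the pattern or the photo dominates.
function multiplyBlend(a, b) {
  return Math.round((a / 255) * (b / 255) * 255);
}
```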

The remaining step is to look at each pixel and decide whether it should be black or white. We do this by simply comparing it to a threshold: is it lighter than, say, 50%? If so, then it’s white. Darker? Then it’s black. But 50% may not be the best place to position our threshold, so we can try various thresholds and see how the image comes out.
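The thresholding step is a one-liner (again a sketch on a flat array of gray values):

```javascript
// Compare each gray value (0..255) to a cutoff: lighter than the cutoff
// becomes pure white (255), darker or equal becomes pure black (0).
function threshold(pixels, cutoff) {
  return pixels.map(p => (p > cutoff ? 255 : 0));
}
```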

Here are a few observations that may be relevant at this point. At each of these steps, we made decisions that I glossed over. For example, when we adjusted the contrast of the image, we chose a linear conversion. We could instead have used different curves to emphasize bright areas, dark areas, or middle tones. We could also have used histogram equalization, which adjusts the image so that roughly the same number of pixels is assigned to each shade (often used to bring out details).
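For the curious, histogram equalization can be sketched in a few lines too. This is a textbook CDF-based version for illustration, not anything from the pipeline above:

```javascript
// Histogram equalization on gray values (0..255): remap each value
// through the cumulative distribution so shades end up with roughly
// equal pixel counts.
function equalize(pixels) {
  const hist = new Array(256).fill(0);
  for (const p of pixels) hist[p]++;
  let sum = 0;
  const cdf = hist.map(h => (sum += h));   // cumulative histogram
  const cdfMin = cdf.find(c => c > 0);     // first nonzero bin
  const n = pixels.length;
  if (n === cdfMin) return pixels.slice(); // flat image: nothing to do
  return pixels.map(p => Math.round(((cdf[p] - cdfMin) / (n - cdfMin)) * 255));
}
```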

Similarly, our overlay pattern needn’t go from pure black to pure white; by changing the range of the overlay pattern, we are doing the equivalent of adjusting the tonal curves of the original image. The overlay also has a strong influence on how the final output looks. With a pattern that includes shades darker than our threshold, the pattern will appear throughout the final image (as in this case, where our final image has lines across all parts of it). With a pattern of only half the maximum density, the lighter areas will not show the pattern:
screen-half

The overlay pattern can be many shapes other than lines (like concentric circles), and there can even be multiple overlays. Traditional newspaper half-toning uses two linear patterns like the one we used, but set at an angle with respect to one another, thereby creating diamond patterns. Newspapers chose this diamond pattern because the size of the pattern relative to the detail in the image determines how much detail winds up in the final image.

I tried to use the above techniques for generating 3D halftones or etchings. While it’s probably a project best suited for use with a laser cutter, I don’t have a laser cutter. I do, however, have a Nomad CNC router!

I wrote a short script that analyzes an image file, and converts it into a set of 3D ridges. My first approach looked at the image row by row, and created a groove with a thickness inversely proportional to the luminosity of the pixels in the row.
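The first approach's core calculation can be sketched like this. This is my reconstruction of the idea described above, not the actual script; the 0-to-255 gray convention and the `maxThicknessMm` parameter are assumptions:

```javascript
// For one row of gray pixels (0 = black, 255 = white), compute a ridge
// thickness in millimeters per pixel, inversely proportional to
// luminosity: pure black gets the full thickness, pure white gets none.
function rowThicknesses(row, maxThicknessMm) {
  return row.map(p => (1 - p / 255) * maxThicknessMm);
}
```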

2015.05.25-12.18.41-[3D] 2015.05.25-12.03.47-[3D]
Result (click to view) Detail (click to view)

This works well in theory, but it neglects some limits of my machine: the work area is 20cm x 20cm, and the smallest end-mill (cutting bit) I have is 1mm in diameter, which functionally limits my smallest detail to somewhere around 1.05mm. Add the fact that the wood stock I had on hand was around 8cm on its narrow dimension, and the result is an image I can’t carve.

My next algorithm analyzes three rows of the image at a time. As it steps along the rows, it averages the three pixels at each column (call them a, b, and c, where a is the top row). If the combined density is greater than 50%, a 1mm ridge is created. The ridge is thickened on the top by the average density of a and b, and thickened on the bottom by the average density of b and c.
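The per-column decision described above can be sketched as follows. This is my reconstruction from the description, not the script itself, and it assumes "density" means darkness on a 0-to-1 scale (1 minus the gray value over 255):

```javascript
// a, b, c are the gray values (0 = black, 255 = white) of the top,
// middle, and bottom rows at one column. Returns null if the spot is
// too light to carve, otherwise the ridge dimensions in millimeters.
function ridgeAt(a, b, c, ridgeMm) {
  const d = g => 1 - g / 255;                 // pixel density (darkness)
  const avg = (d(a) + d(b) + d(c)) / 3;       // combined density
  if (avg <= 0.5) return null;                // lighter than 50%: no ridge
  return {
    base: ridgeMm,                            // the 1mm ridge itself
    top: ridgeMm * (d(a) + d(b)) / 2,         // thickening above, from a and b
    bottom: ridgeMm * (d(b) + d(c)) / 2,      // thickening below, from b and c
  };
}
```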

2015.05.25-12.13.16-[3D]

2015.05.25-12.09.26-[3D]
Result (click to view) Detail (click to view)

This algorithm provides something that’s within the resolution I can carve, but loses an enormous amount of detail. Furthermore, it requires harder wood than the birch plywood I tested on. I did some minor tweaking of the threshold, and here’s what I got:

Photo May 24, 12 07 22 PM

So at this point, I have a set of 0.5mm cutters on order, and need to track down some good hardwood stock to try carving. As always, details will be posted here if notable.


Sun, 8 Feb 2015

Further Adventures in Makering

— SjG @ 6:16 pm

When I last posted, I was playing with plastic and metal prints from Shapeways.com. So here’s an experience printing in a different material — gold-plated brass. As with last time, I used MoI modeler. I wanted to make a butterfly pendant for my wife for our anniversary. My initial try used an actual photograph of a monarch butterfly that I had taken in our garden. I traced the outlines, and created extruded forms for the wings. I then created the shape of the body, and a ring for the necklace chain to pass through.

When I uploaded it to Shapeways, however, I immediately ran into printability problems. I had misunderstood the limitations for different materials. When they say that the minimum dimension for an unsupported wire is 1mm in a given material, I had for some reason interpreted that as a minimum cross-section of 1mm². So, for example, I thought I could get away with a 0.5mm width if the thickness was 2mm. This interpretation was not based on their actual requirements. So it was back to the drawing board.

It turns out that using the actual dimensions was not going to be a successful approach. In the end, I bit the bullet and did a complete redesign inspired by — but not matching — an actual butterfly. Since I was now dispensing with reality, I chose to make it more dramatic by having the wings all spread apart rather than overlapping as monarchs actually hold their wings. The design that would be printable looks like this:
Screen-Shot-2015-01-13

I ordered the print three weeks before our anniversary, since the estimated turn-around was 10 days. In the end, it shipped five days before our anniversary, but it didn’t actually arrive until the day after. Bummer. Then, while it was en route, Shapeways sent out an email extolling their new approach to precious-metal-plated items. Same price, better results. Well, thanks, guys. I mean, I know there has to be a transition between approaches at some point, but I wish it wasn’t between the time I’d placed an order and received my print.

Still, as even this shaky iPhone picture shows, the final result was OK.

Photo-Feb-08

(click on images for larger versions)

Coming up soon: some posts on a completely different path I’ve taken to 3D production. Here’s a hint.


Fri, 26 Dec 2014

Makering of Physical Digital Stuff

— SjG @ 6:11 pm

I’ve been playing with moving from digital space to physical space. Thus far, I’ve been doing it the easy way — I’ve been building models using OpenSCAD or Moment of Inspiration, and have been relying on Shapeways to perform the actual translation from digital to physical.

The first project was to create a stocking-stuffer for Pastafarians. The Flying Spaghetti Monster was modeled in MoI, and printed in plastic by Shapeways. For the “Strong and Flexible Plastic” material, Shapeways uses selective sintering rather than the extruder technology used by most inexpensive desktop 3D printers. This technology allows larger unsupported shapes to be printed, so my FSM is easily printable on their machines while it would not be on a home 3D printer (probably not impossible, but it would involve printing a lot of extra supports).

Anyway, here’s the FSM printed in purple amidst some whole-wheat penne (created using a commercial wheat extrusion system).
DSC_2043

The next challenge was jewelry. Starting with a sketch, the idea was a ring with a colored pattern. This would involve mating halves, each printed in a different color or material. The sketch was quickly modeled in MoI.

sketch moi

I added only a tiny, tiny tolerance between the two shapes. I created the “cut out” portion of each half using a boolean subtraction of the other half, and then did an overall reduction of one half by a tiny percentage. In retrospect, of course, the approach used to create tolerance was incorrect for a number of reasons. But at the time, I measured up the resultant output file (using Autodesk’s MeshMixer) and it looked like it was good.

meshmix02 meshmix01

Yup. I was going to print a physical model with a tolerance of 3.34 microns (those measurements in MeshMixer are in millimeters). What could possibly go wrong with expecting that kind of resolution?

I had Shapeways print these in steel, one half with a “stainless” finish and the other with a “matte bronze” finish. Their process for printing steel involves printing layers using a liquid binder and a fine steel powder, then infusing the combined structure with (presumably liquid) brass. Pretty crazy! They claim an accuracy of ±1% with a layer thickness of 0.1mm.

Here’s what I got back:
IMG_5016
Pretty nice!

It’s no big surprise that they don’t interlock as originally envisioned. For Christmas, I received a very nice micrometer set (thanks to E & Simon!), so I tried measuring my rings. I’m still learning proper micrometer technique, and it’s an interesting challenge to measure complicated shapes.

IMG_5021 IMG_5019

These measurements are farther off than I expected, even after reviewing my initial assumptions and realizing they were unrealistic. I see three possibilities to explain this:
1. My measuring technique still needs refinement.
2. My digital modeling tools are less accurate than I thought.
3. Shapeways is less accurate than they report.

In all likelihood, it’s a combination of the first two (and maybe all three). I strongly suspect I am not correctly measuring the very edge of a flared shape. MoI is not really designed for high-precision engineering models: it’s NURBS-based, and I had to convert the model into a polygonal STL file for printing. I did my measurement of the resultant STL file in MeshMixer.

In any case, this may not have produced the exact end results I desired, but it’s been educational. As I learn more, I’ll post more here.