fogbound.net




Fri, 5 Jun 2015

pfSense Can’t See the Outside World

— SjG @ 11:00 am

We had a static IP address change on a network that had been in operation for about six years. Since we have gradually been migrating services off to third-party hosting, we no longer need a block of local static IP addresses. To save some ca$h, we are down to one static IP — but that necessitated getting a new IP.

At midnight, the change occurred.

I went into the pfSense admin, got rid of all my 1-to-1 NAT mappings, virtual IPs, and all the firewall rules that protected the no-longer-extant servers. And I couldn’t see the outside world.

I couldn’t even ping the gateway.

Plugging a Mac into the same cable and setting the network parameters, however, gave me immediate glorious interweb access everywhere.

It was perplexing. The pfSense firewall was configured exactly the same as the Mac. Why u no work firewall?

After a bunch of nonsense, I found the problem. I’d set the WAN interface to our new IP address, and specified it as a single IPv4. I thought I was setting the netmask correctly for a single IP:

IPv4 WAN Address: xxx.xxx.xxx.xxx/32

It turns out I needed to reduce that netmask. A /32 means *all* of the address is the network portion, leaving a subnet that contains nothing but my own address.

For a single IP address, I used /24 (leaving the entire last byte as the host portion), although a /31 should probably work and would lock things down to just my address and the gateway.
Edit: The key is that the netmask has to leave the gateway in the same subnet as your IP. Doh! You can see I don’t do this kind of stuff enough to know what I’m talking about.
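
For what it’s worth, here’s a quick way to sanity-check that sort of thing with Python’s ipaddress module. This is just a sketch, and the addresses below are made-up documentation addresses rather than our real ones:

from ipaddress import ip_address, ip_interface

wan = ip_interface("203.0.113.42/32")   # the mistaken /32 setting
gateway = ip_address("203.0.113.1")     # hypothetical gateway address

print(gateway in wan.network)  # False: a /32 subnet contains only the WAN address itself

wan = ip_interface("203.0.113.42/24")   # a netmask that puts the gateway in the same subnet
print(gateway in wan.network)  # True: now the gateway is reachable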


Mon, 25 May 2015

Half-toning and Engraving

— SjG @ 11:40 am

Consider a photograph or painting. We see a range of colors (or shades of gray) reflected that form an image. But since our eyes and brain are highly optimized for teasing coherent images out of what we see, a great deal of manipulation can be done to those colors and we will still see the original. Artists have exploited this capability for millennia, using areas of light and dark paints to hint at the details which our brains happily provide.

With the introduction of printing, techniques for “half-toning” emerged to convert a continuous image into a two-color (normally black ink and white paper) image that maintained as much as possible the appearance of the original continuous tone image. There are many photographic processes for making such conversions. I’ll discuss one simple example here, using digital instead of photographic techniques.

We start with a piece of a photograph. This picture is sepia toned and not extremely contrasty.
original

The first step we take is to convert the picture to purely gray tones. When we do this, we also adjust the contrast by shifting the tones so that the darkest shade in the image becomes pure black, the lightest shade becomes pure white, and all other shades are scaled according to their position between those two extremes.
gray
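
For the digitally inclined, here’s roughly what that step looks like in Python with Pillow and NumPy. This is a sketch rather than the exact processing I used, and "photo.jpg" is a placeholder filename:

import numpy as np
from PIL import Image

# Convert to grayscale, then stretch the contrast linearly so the darkest
# shade maps to pure black (0) and the lightest to pure white (255).
gray = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)
lo, hi = gray.min(), gray.max()
stretched = (gray - lo) / (hi - lo) * 255.0

Image.fromarray(stretched.astype(np.uint8)).save("gray.png")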

The second step involves superimposing a pattern over the picture. The pattern should be a continuous, repeating variation between light and dark. When we superimpose, we multiply the two images, so that the darker of the pattern or the photo dominates. In this case, I used simple straight lines, whose luminance follows a sine wave cycling between pure white and pure black.

screen
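
In code, the overlay step amounts to generating the sine-wave line pattern and multiplying it into the grayscale image. Again a sketch; it assumes the "gray.png" from the previous step, and the line period of 8 pixels is arbitrary:

import numpy as np
from PIL import Image

gray = np.asarray(Image.open("gray.png"), dtype=float) / 255.0

# Horizontal line screen: luminance cycles sinusoidally between 0 (black)
# and 1 (white) every `period` rows.
period = 8
rows = np.arange(gray.shape[0])
screen = 0.5 + 0.5 * np.sin(2 * np.pi * rows / period)

# Multiplying means the darker of the two values dominates at each pixel.
screened = gray * screen[:, np.newaxis]
Image.fromarray((screened * 255).astype(np.uint8)).save("screen.png")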

The remaining step is to look at each pixel, and decide whether it should be black or white. We do this by simply comparing it to a threshold. Is it lighter than, say, 50%? If so, then it’s white. Darker? Then it’s black. But 50% may not be the best place to position our threshold. We can try various thresholds to see how it comes out:
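
The comparison itself is a one-liner; here’s a sketch, again assuming the "screen.png" from the previous step:

import numpy as np
from PIL import Image

screened = np.asarray(Image.open("screen.png"), dtype=float) / 255.0

threshold = 0.5   # lighter than this becomes white, darker becomes black
halftone = np.where(screened > threshold, 255, 0).astype(np.uint8)

Image.fromarray(halftone).save("halftone.png")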

Here are a few observations that may be relevant at this point. At each of these steps, we made decisions that I glossed over. For example, when we adjusted the contrast of this image, we chose a linear conversion. We could, instead, have used different curves to emphasize bright areas, dark areas, or middle tones. We could also have used histogram equalization, which adjusts the image so that roughly the same number of pixels fall into each shade used (often done to bring out details).

Similarly, our overlay pattern needn’t go from pure black to pure white; by changing the range of the overlay pattern we are effectively adjusting the tonal curves of the original image, and we can strongly influence how the final output looks. With a pattern that includes shades darker than our threshold, the pattern will end up throughout the final image (as in this case, where the final image has lines across all of it). With a pattern that only reaches half of the maximum density, the lighter areas will not show the pattern:
screen-half

The overlay pattern can be many shapes other than lines (like concentric circles), and there can even be multiple overlays. Traditional newspaper half-toning uses two linear patterns like the one we used, but set at an angle with respect to one another, thereby creating diamond patterns. Newspapers chose this diamond pattern because the size of the pattern relative to the detail in the image determines how much detail winds up in the final image.

I tried to use the above techniques for generating 3D halftones or etchings. While it’s probably a project best suited for use with a laser cutter, I don’t have a laser cutter. I do, however, have a Nomad CNC router!

I wrote a short script that analyzes an image file, and converts it into a set of 3D ridges. My first approach looked at the image row by row, and created a groove with a thickness inversely proportional to the luminosity of the pixels in the row.
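
This isn’t the actual script, but the row-by-row idea boils down to something like the following sketch, where each pixel’s groove width grows as the pixel gets darker. The 1mm maximum width is an assumed value, and turning the widths into actual toolpaths is left out:

import numpy as np
from PIL import Image

gray = np.asarray(Image.open("photo.png").convert("L"), dtype=float) / 255.0
max_width = 1.0   # mm; widest groove, used for a fully black pixel (assumed value)

# One groove per image row; its width at each column tracks how dark that pixel is.
grooves = []
for r, row in enumerate(gray):
    widths = (1.0 - row) * max_width   # darker pixel -> wider groove
    grooves.append((r, widths))
# Converting these widths into CNC toolpaths / G-code is omitted here.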

2015.05.25-12.18.41-[3D] 2015.05.25-12.03.47-[3D]
Result (click to view) Detail (click to view)

This worked well in theory, but neglected to take into consideration some limits of my machine: the work area is 20cm x 20cm, and the smallest end-mill (cutting bit) I have is 1mm in diameter. That functionally limits my smallest detail to somewhere around 1.05mm. Add the fact that the wood stock I had on hand was around 8cm on its narrow dimension, and the result is an image that I can’t carve.

My next algorithm analyzes three rows of the image at a time. As it steps along the rows, it uses the average of the three pixels at each column (call them a, b, and c, where a is the top row). If the combined density is greater than 50%, a 1mm ridge is created. The ridge is thickened on the top by the average density of a and b, and thickened on the bottom by the average density of b and c.
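
Again, this is a sketch of the idea rather than the script itself; the 0.5mm scale applied to the top and bottom thickening is an assumption:

import numpy as np
from PIL import Image

# Work in "density" (0 = white, 1 = black) rather than luminosity.
density = 1.0 - np.asarray(Image.open("photo.png").convert("L"), dtype=float) / 255.0

segments = []   # (row, column, extra mm on top, extra mm on bottom)
for r in range(0, density.shape[0] - 2, 3):
    a, b, c = density[r], density[r + 1], density[r + 2]
    for col in range(density.shape[1]):
        if (a[col] + b[col] + c[col]) / 3.0 > 0.5:    # dark enough: emit a 1mm ridge here
            top = ((a[col] + b[col]) / 2.0) * 0.5     # thicken the top by the a/b average
            bottom = ((b[col] + c[col]) / 2.0) * 0.5  # thicken the bottom by the b/c average
            segments.append((r, col, top, bottom))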

2015.05.25-12.13.16-[3D]

2015.05.25-12.09.26-[3D]
Result (click to view) Detail (click to view)

This algorithm provides something that’s within the resolution I can carve, but loses an enormous amount of detail. Furthermore, it requires harder wood than the birch plywood I tested on. I did some minor tweaking of the threshold, and here’s what I got:

Photo May 24, 12 07 22 PM

So at this point, I have a set of 0.5mm cutters on order, and need to track down some good hardwood stock to try carving. As always, details will be posted here if notable.


Sun, 8 Feb 2015

Further Adventures in Makering

— SjG @ 6:16 pm

When I last posted, I was playing with plastic and metal prints from Shapeways.com. So here’s an experience printing in a different material — gold-plated brass. As with last time, I used MoI modeler. I wanted to make a butterfly pendant for my wife for our anniversary. My initial try used an actual photograph of a monarch butterfly that I had taken in our garden. I traced the outlines, and created extruded forms for the wings. I then created the shape of the body, and a ring for the necklace chain to pass through.

When I uploaded it to Shapeways, however, I immediately ran into printability problems. I had misunderstood the limitations for different materials. When they say that the minimum dimension for an unsupported wire is 1mm in a given material, I had for some reason interpreted that as a minimum cross-section of 1mm². So, for example, I thought I could get away with a 0.5mm width if the thickness was 2mm. That interpretation was not based on their actual requirements. So it was back to the drawing board.

It turns out that using the actual dimensions was not going to be a successful approach. In the end, I bit the bullet, and did a complete redesign inspired by — but not matching — an actual butterfly. Since I was now dispensing with reality, I chose to make it more dramatic by having the wings all spread apart rather than overlapping as monarchs actually hold their wings. The design that would be printable looks like this:
Screen-Shot-2015-01-13

I ordered the print three weeks before our anniversary, since the estimated turn-around was 10 days. In the end, it shipped five days before our anniversary, but it didn’t actually arrive until the day after. Bummer. Then, while it was en route, Shapeways sent out an email extolling their new approach to precious-metal-plated items. Same price, better results. Well, thanks, guys. I mean, I know there has to be a transition between approaches at some point, but I wish it hadn’t fallen between the time I placed my order and the time I received my print.

Still, as even this shaky iPhone picture shows, the final result was OK.

Photo-Feb-08

(click on images for larger versions)

Coming up soon: some posts on a completely different path I’ve taken to 3D production. Here’s a hint.


Fri, 26 Dec 2014

Makering of Physical Digital Stuff

— SjG @ 6:11 pm

I’ve been playing with moving from digital space to physical space. Thus far, I’ve been doing it the easy way — I’ve been building models using OpenSCAD or Moment of Inspiration, and have been relying on Shapeways to perform the actual translation from digital to physical.

The first project was to create a stocking-stuffer for Pastafarians. The Flying Spaghetti Monster was modeled in MoI, and printed in plastic by Shapeways. For the “Strong and Flexible Plastic” material, Shapeways uses selective sintering rather than the extruder technology used by most inexpensive desktop 3D printers. This technology allows larger unsupported shapes to be printed, so my FSM is easily printable on their machines, while it would not be on a home 3D printer (probably not impossible, but it would involve printing a lot of extra supports).

Anyway, here’s the FSM printed in purple amidst some whole-wheat penne (created using a commercial wheat extrusion system).
DSC_2043

The next challenge was jewelry. Starting with a sketch, the idea was a ring with a colored pattern. This would involve mating halves, each printed in a different color or material. The sketch was quickly modeled in MoI.

sketch moi

I added only a tiny, tiny tolerance between the two shapes. I created the “cut out” portion of each half using a boolean subtraction of the other half, and then did an overall reduction of one half by a tiny percentage. In retrospect, of course, the approach used to create tolerance was incorrect for a number of reasons. But at the time, I measured up the resultant output file (using Autodesk’s MeshMixer) and it looked like it was good.

meshmix02 meshmix01

Yup. I was going to print a physical model with a tolerance of 3.34 microns (those measurements in MeshMixer are in millimeters). What could possibly go wrong with expecting that kind of resolution?

I had Shapeways print these in steel; one half with “stainless” finish and the other with “matte bronze” finish. Their process for printing steel involves printing layers using a liquid binder and a fine steel powder, then infusing the combined structure with (presumably liquid) brass. Pretty crazy! They claim an accuracy of ±1% with a layer thickness of 0.1mm.

Here’s what I got back:
IMG_5016
Pretty nice!

It’s no big surprise that they don’t interlock as originally envisioned. For Christmas, I received a very nice micrometer set (thanks to E & Simon!), so I tried measuring my rings. I’m still learning proper micrometer technique, and it’s an interesting challenge to measure complicated shapes.

IMG_5021 IMG_5019

These measurements are farther off than I expected, even after reviewing my initial assumptions and realizing they were unrealistic. I see three possibilities to explain this:
1. My measuring technique still needs refinement.
2. My digital modeling tools are less accurate than I thought.
3. Shapeways is less accurate than they report.

In all likelihood, it’s a combination of the first two (and maybe all three). I strongly suspect I am not measuring the very edge dimension of a flared shape correctly. MoI is not really designed for high-precision engineering models. It’s NURBS-based, and I had to convert the model into a polygonal STL file for printing. I did my measurement of the resultant STL file in MeshMixer.

In any case, this may not have produced the exact end results I desired, but it’s been educational. As I learn more, I’ll post more here.


Tue, 2 Dec 2014

Compiling Kannel 1.4.4 under CentOS 7.0

— SjG @ 4:28 pm

This took me a while to get to work. If you follow these steps in order, it should work nicely.


# yum install mariadb-devel
# yum install libxml2-devel
# yum install bison
# yum install byacc
# cd /usr/local/src
# wget http://kannel.org/download/1.4.4/gateway-1.4.4.tar.gz
# tar xzvf gateway-1.4.4.tar.gz
# cd gateway-1.4.4
# ./configure --prefix=/usr/local/kannel --with-mysql --with-mysql-dir=/usr/include/mysql --disable-wap
# make

There are a few tricks here. First, just having libxml2 installed is not enough: you need the libxml2 development headers and so on. It should be obvious, but it tricked me. Next, if you run ./configure before you have some of the dependencies installed (e.g., Bison), you will be left with modified source files that will still fail even after you install the dependency. Thus it’s important to install all of that stuff before you run ./configure.

This stuff isn’t really that hard, but it can be time-consuming to track down why it’s not working.