fogbound.net




Sat, 20 Jul 2019

Maccabeam™ Part 1: Simulating candle-light with pseudo-random sequences

— SjG @ 3:05 pm

Background: Introduction

There are many components to the Maccabeam™ Menorah project. In describing the build, I’ll address each component separately. This first posting will cover the simulation of candle-light.

The warm flicker of candlelight is a familiar sight, and unsurprisingly there are a lot of examples and techniques on the internet for simulating it. You could buy pre-made candle-light LEDs from Evil Mad Scientist or JMFX, you could read Tim’s excellent analysis and implement something based upon it, or you could look up any of the myriad examples and how-tos on YouTube or Hackaday or elsewhere. Being perverse, I decided to implement my own approach.

My requirement is that the light source be binary: either on or off. If I were to use LEDs, for example, I could vary the drive current to change the brightness. But I’ll be using laser diodes, which are either illuminated or not.

Since I’m using a Teensy microcontroller, I can use pulse width modulation (PWM) to control brightness. That’s a fancy way of saying the light is switched on and off very rapidly to simulate intermediate intensity. The higher the duty cycle (i.e., the larger the percentage of the time the light is “on”), the brighter our eyes perceive the illumination. This relies on the property of the human visual system known as “persistence of vision”: our eyes and visual cortex integrate the incoming signal over time. It’s what lets us look at a screen that rapidly flashes changing images and see motion (i.e., movies), or watch a quickly moving point of illumination paint out a pattern and see an image (raster displays, old tube-style TVs, etc.).
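
For illustration, here’s a minimal Arduino-style sketch of PWM brightness control (the pin number is a hypothetical stand-in, not the Maccabeam™’s actual wiring):

const int lightPin = 3; // hypothetical PWM-capable pin

void setup()
{
   pinMode(lightPin, OUTPUT);
}

void loop()
{
   analogWrite(lightPin, 64);  // ~25% duty cycle reads as dim
   delay(1000);
   analogWrite(lightPin, 255); // 100% duty cycle reads as full brightness
   delay(1000);
}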

So now the question is: how do we make a pattern of on and off that looks most like a candle flicker? Like many other people presented with this challenge, my first thought was to turn to a pseudo-random sequence generator; specifically, to a linear shift-register sequence. Why? A couple of reasons, but the primary one is that in the late sixties my father worked at JPL and assisted Solomon Golomb, who wrote the definitive book on the subject. Thus, when I was in high school, my father helped me with an electronics project where we implemented one.

So, what’s a linear shift-register sequence? It starts with a shift register: a circuit with a series of cells or registers, each of which holds a bit with a value of either one or zero. Every time an external signal (like a clock pulse) arrives, every value gets moved over to the neighboring cell. A linear shift-register sequence adds circuitry so that after each shift, the input bit is populated using some algorithmic function of the register’s previous state. Depending on the function, this can create a long sequence that seems nearly random.

Consider the following 8-bit shift register:

8-bit linear shift register

At each tick of the clock, a new value is computed by XOR-ing the values of bits 3 and 7 (these are called the “taps”). The contents of each bit shift one position to the left, and the newly computed value populates bit number 0. In this case, the combining function is exclusive-or (XOR), which has the following truth table:

Input 1 | Input 2 | Output
   0    |    0    |    0
   0    |    1    |    1
   1    |    0    |    1
   1    |    1    |    0

So imagine only bit number 0 is set and the others are empty. The sequence will go as follows (a space is added between bits 3 and 4 to enhance readability):

0000 0001 (0 xor 0 -> 0, so we'll inject a 0) 
0000 0010 (0 xor 0 -> 0, so we'll inject a 0)
0000 0100 (0 xor 0 -> 0, so we'll inject a 0)
0000 1000 (0 xor 1 -> 1, so we'll inject a 1)
0001 0001 (0 xor 0 -> 0, so we'll inject a 0)
0010 0010 ...
0100 0100 
1000 1000 
0001 0000 
0010 0000 
0100 0000 
1000 0000 
0000 0001 

This particular configuration is considered “non-maximal”: even though 8 bits can represent 256 combinations, this sequence repeats after every 12 clock cycles. If you move the tap, you can get better sequences. However, it turns out there is no maximal single-tap sequence for an 8-bit register; you can get up to a 217-step sequence if you put the tap at bit number 2 or 4. If you add more taps, you can achieve the full 255 states (the all-zero state is omitted, since it would always stay zero).
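
As a sanity check, here’s a throwaway C program (not part of the Maccabeam™ firmware) that steps the 8-bit register above and counts how long the cycle is:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
   uint8_t srs = 1; // start with only bit 0 set, as in the walk-through
   int steps = 0;

   do {
      // the new bit is the XOR of the taps at bits 3 and 7
      uint8_t new_bit = ((srs >> 3) & 1) ^ ((srs >> 7) & 1);
      srs = (uint8_t)((srs << 1) | new_bit);
      steps++;
   } while (srs != 1); // stop when we return to the starting state

   printf("sequence repeats after %d steps\n", steps); // prints 12
   return 0;
}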

For our simulation, we want more than 255 steps anyway, so we go up to a 16-bit shift register and use taps that will yield a 65,535-step sequence. Several pages on the web (I like Burton Rosenberg’s page, as well as the Wikipedia reference) will help you find the right taps for any reasonable register you wish to create.

16-bit linear shift register

This can be implemented in hardware with a handful of chips, or in software with a few lines of code like:

// code is unforgivably formatted, as good C should be
#include <stdint.h>

int main(void)
{
   uint16_t srs = 1; // 16 bits for our shift register; the seed must be non-zero

   while (1)
   {
      // XOR the taps at bits 15, 13, 12, and 10 (a maximal-length set)
      // and shift the result in at bit 0
      srs = (srs << 1) |
            (((srs & 0x8000) == 0x8000) ^
             ((srs & 0x2000) == 0x2000) ^
             ((srs & 0x1000) == 0x1000) ^
             ((srs & 0x0400) == 0x0400));
   }
   return 0;
}

For the Maccabeam™, we’ll have a maximum of nine candles going at any one time. With a stream of pseudo-random bits flying by in our 16-bit register, we could just pick nine arbitrary bits and route each one to a laser diode. The problem is that the values keep shifting left, so there will be a visible pattern: if candle number 1 is driven by bit 5 and candle number 2 is driven by bit 8, candle 2 will always flicker three clock cycles after candle 1. Even if the order of the candles is different from the order of the bits, our brains are very good at detecting repeating patterns. It just won’t look very good.

To break up this visible pattern, we drive each candle by OR-ing two bits together. The spacing between each candle’s two bits is as close to unique as we can make it; of course, given that we only have 16 bits to play with and nine candles, there will be some overlap.

Candle OR pattern

So you can see that there are two pairs of candles that will always be illuminated simultaneously: candle 2 and candle 8, and candle 5 and candle S (the shamos/shamas/shamash; here’s Wikipedia’s explanation of menorah candles).

There’s a question of timing, too. How frequently does the shift register shift? What looks good may or may not be what’s most like actual candle light. My first stab uses a 40ms cycle time. Adjustments may be in order.
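
Putting the pieces together, the candle-update loop might look something like the following Arduino-style sketch. The pin numbers and bit pairings here are illustrative stand-ins; the real pairings follow the diagram above.

#include <stdint.h>

const int candlePins[9] = {2, 3, 4, 5, 6, 7, 8, 9, 10};  // hypothetical pins
const int tapA[9] = {0, 1, 2, 3, 4, 5, 6, 7, 8};         // first bit per candle
const int tapB[9] = {5, 9, 14, 6, 12, 15, 13, 11, 10};   // second bit, varied spacing

uint16_t srs = 1; // the 16-bit shift register from above

void setup()
{
   for (int i = 0; i < 9; i++)
      pinMode(candlePins[i], OUTPUT);
}

void loop()
{
   // advance the shift register one step
   srs = (srs << 1) |
         (((srs & 0x8000) == 0x8000) ^
          ((srs & 0x2000) == 0x2000) ^
          ((srs & 0x1000) == 0x1000) ^
          ((srs & 0x0400) == 0x0400));

   // a candle is lit if either of its two assigned bits is set
   for (int i = 0; i < 9; i++)
   {
      int lit = ((srs >> tapA[i]) & 1) | ((srs >> tapB[i]) & 1);
      digitalWrite(candlePins[i], lit ? HIGH : LOW);
   }

   delay(40); // 40 ms cycle time, per the first stab above
}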

I’m not at the point where I’m ready to hook up the lasers (still practicing the proper intonation of the cry “fire ze laaaaasers!”), but below is some sample video of this approach driving lowly LEDs.

Blinky lights!

Tue, 7 May 2019

The Maccabeam™

— SjG @ 5:43 pm

Back in 2017, I laser-cut a menorah out of poplar. When the family showed up for Hanukkah, I mentioned my “laser menorah.” My nephew’s eyes lit up with excitement, but I could see his disappointment upon presentation of the actual product.

Subsequently, I’ve been building something closer to what he probably envisioned in the first place.

I’m slow at building things, and have lots of other commitments taking up time. I spend maybe an hour or two a week on the project, and tend to forget a lot of important details between sessions. There is a whole lot of learning and re-learning. However, I thought that documenting the various processes here would be good for me (my external memory), and may be of interest to others.

The first step of any project, of course, is to give it a good name and maybe a logo. Since it’s a laser menorah, I’m calling it the Maccabeam™, and my initial version of the logo looks like this:

The Official Maccabeam™ logo

So there are a lot of things to talk about here. I’ll post a lot of circuit design ideas, physical design ideas, and details on the software that drives it. I’ll also probably post some ambivalent thoughts on the whole holiday of Hanukkah1. But for now, I’ll start with the list of requirements I’ve been using for the project:

  1. Instead of candles, I’ll be using lasers!
  2. The lasers will probably be illuminating vials of olive oil rather than shining on the ceiling.
  3. There will be more lights, too. Color LEDs! NeoPixels!
  4. The whole thing will be driven by an embedded controller I can program. Since I like the Teensy and Paul & Robin seem like the kind of people I want to support, I’ll go with a Teensy LC. Update: I have ended up using a Teensy 3.2 because I needed more memory.
  5. Since it has a microcontroller, it should take advantage of those smarts, and not just rely on an on/off switch.
  6. Hey, if it’s gonna be smart, it should use a GPS receiver to figure out the location and date, and automatically run itself on Hanukkah.
  7. It will need some kind of display so you can tell what it’s doing, what time it is, how long until Hanukkah, etc.
  8. It’ll be cool if it could play some music too.
  9. OK, maybe I don’t want an on/off switch, but I do need a switch to trigger a simulation mode. That way, I can show it off to people at any time of year.
  10. [update 13 May 2019] Oh, I forgot an important one. The Maccabeam™ wants to be stand-alone. It doesn’t want any dependencies on the Internet, wireless networks, or the like. It should only depend on a source of electricity and a constellation of 20-some-odd highly sophisticated satellites and their ground support network.

So with that set of requirements, I got started. I hope to write something here about each of those requirements as I complete the build.

Update 1: The Flickering Candles.
Update 2: Some Physical Structure
Update 3: When Exactly is Hanukkah?
Update 4: A Typical Code Problem
Update 5: Oil and Lasers
Update 6: Final Product Gallery

1 I mean, it’s celebrating the victory of a family of intolerant religious fanatics over both their foreign imperial enemies and their more moderate coreligionists. Their victory established the shaky Hasmonean dynasty whose infighting and collapse resulted in Herod’s rise to power in the region, etc.


Sun, 20 May 2018

Moving away from Aperture

— SjG @ 7:39 pm

I’ve finally moved away from Aperture.

I thought I’d share a few thoughts.

Back in the day, when I was still largely shooting film and occasionally scanning prints or negatives, I organized things by directories. I’d have descriptive names like “hike_at_paseo_miramar”. Over time, this became unwieldy, and I had the startlingly brilliant and utterly original idea of organizing by date as well, so I had a directory tree that looked like:

photos
  -> 1997-08-02
  -> 1997-08-04-pasadena
  -> 1998
    -> 1998-02-01-hike
    -> 1998-02-07-beach
  -> birthday_party
  -> hike_at_paseo_miramar

You can see the problem. I found I could keep this going by using Adobe Bridge, but I needed a better system. For a while, I organized my pictures on a web server with a cobbled-together collection of perl programs. That was manageable for about two months. Eventually, I tracked down some novel software to organize photos: iView Media Pro. It used the now-familiar idiom of folders (which represented where the files physically lived on disk), virtual albums for grouping, and keywords.

iView was great until it wasn’t. The company didn’t have the resources to support it to the degree it needed, and occasional bugs (like the one that wiped out keywords for a few thousand pictures) were frustrating. When the product was sold and the team migrated to Microsoft, I figured it was time to move on. I went with Apple’s $300 “Professional” product: Aperture.

I had to write some code to export my keywords and organization from iView into Aperture, but that left me with a system that worked. It served me well for ten years. For the last two of those years, though, a cloud hung over me: Apple had end-of-lifed the product, and, while it continued to run, it received no new features or bug-fixes. With each new version of Mac OS, there was the risk that I’d no longer be able to run it.

I did a trial of Capture One, and was impressed with the RAW processing and the workflow. Where it failed for me, though, was in the cataloging. At this point, I had about 55,000 photos in my catalog. Because I like to have them all “with me” at all times, I kept my library on my notebook. I realize this is a strange requirement for most people, but I like the ability to bring up a collection of pictures from a given trip or a given event when I get together with someone else who participated.

Capture One had a hard time keeping up with that many photos. I considered breaking the catalog into separate, smaller collections, but was still resistant to any little impediment to finding the picture I want at a given moment. Ironically, Phase One, the company that creates/sells Capture One, acquired iView Media Pro from Microsoft somewhere along the line. I contemplated moving back to it, but realized I’d need to get that plus a RAW-processing package, etc., and decided against it.

The 10,000 ton elephant in the room, of course, is Adobe Lightroom. It dominates the photo processing/organizing space. I didn’t want to move to it because of the software subscription model. I don’t want to be beholden to Adobe for all eternity.

Still, after trials of several products, I have been forced to surrender. Lightroom is the only product I could find that worked as well as Aperture.

Lightroom will import Aperture libraries, but the non-destructive editing from Aperture does not come across. Some adjustments might, and a lot of metadata will, but those painstaking edits do not. I invested in ApertureExporter, which helps the export process by building a copy of the catalog and creating extra copies of edited images where it “bakes in” the changes. So you’re left with the original (in case you wish to re-edit) and a JPG or TIFF version with all your changes. I left it to run all night, and the result was a nice clean export.

Importing into Lightroom was not difficult. It was interesting to see all the cores of the CPU go to nearly 100% utilization while LR generated previews. I opted for large previews (typical screen resolution). This allows me to have the versions to show off to people (as I mentioned above) while keeping the originals on an external drive that may or may not be attached at any given moment. LR has something called “smart” previews that you can even edit while offline. I didn’t think it made sense for me, but I may revisit this decision later.

I was graciously given a nice tutorial on the basics of using Lightroom by Tomas Fjetland, which helped get me up to speed.

Some observations. Some things in Lightroom are much easier than in Aperture. Lens correction is a single click, instead of requiring a separate plugin. Some things in Lightroom are harder. Having separate panels that you have to switch through for “Library” functions (cataloging and managing keywords) and “Developing” (adjustments and edits) seems unnecessary, especially as the switch doesn’t really result in more efficient use of screen real-estate. Tagging keywords requires a lot more clicking. An auto-completed keyword requires one hit of the return key to accept the auto-completion, and a second hit to apply it to the image.

I’m still pretty slow at the editing tools, and the paradigms are not identical. All that old muscle memory in my fingers is slow to retrain. Overall, I’m pleased with some of the results I’m getting, but it will still be a while before I’m as quick or adept at Lightroom. No doubt I’ll revisit this posting at some later date.


Sat, 9 Dec 2017

Laser Menorah

— SjG @ 6:49 pm

You know, this title is misleading. The reality is a whole lot more boring. Maybe next year, I should take inspiration from the title.

This is more of a lazy Saturday afternoon project. I wanted to use some designs that I’ve been kicking around. So I took a sea of hexagons and a tree in Affinity Designer and mucked about for a bit until I had something whose look I more or less liked.

Next, I grabbed a slice of poplar (available in 8″ x 24″ x 0.25″ slabs at Home Depot, as “Hobby Poplar”) and drove over to CRASH Space. While it was mega-take-apart-day, I scurried over to the laser cutter. I converted the design to PDF, loaded it up in Corel Draw, used the Epilog printer-driver, and sent it to the laser cutter. The poplar cuts very nicely.

Here’s a link to the PDF of the laser-cut portion, if you want to cut a copy yourself: 2017-12-09-hexonorah-cut.pdf

I brought the pieces home, sanded lightly, drilled a few holes, and mounted the vertical piece onto the base, carefully mis-aligning it with the major axis of the elliptical base. Ah well.

I drilled holes where I would mount the candle holders themselves (after all, poplar is pretty, but not ideal as a holder for things on fire). For the actual sockets, I used some nice quarter-inch brass compression caps (also from Home Depot). I drilled a center hole, pushed through a brad, and then soldered it with a torch.

Next, I let things cool, dried off the sockets, and put it all together.

The final result is not as attractive as I had imagined it. It’s a little … I dunno, squat? Perhaps the next iteration will have more dramatic tree-like branches emerging to hold the candles.

OK. Next year, forget the design. We’ll just go with lasers.


Wed, 27 Sep 2017

Seasonal Palettes

— SjG @ 7:43 pm

Over the years, I’ve written various JavaScript mandala-generators. I like giving variety to the color sets used, and in the past, I’ve hand-crafted collections of colors which I’ve given descriptive names like “Earthy,” “Angst,” and “Scorchio.”

For a new project, I wanted seasonal palettes. Being a northern-hemisphere dweller, I think of January as cool colors, May as yellows and greens, August as ambers and oranges, etc. Rather than hand assemble them, I thought this would be a good use for the Interwebs.

So I wrote a bash/php/ImageMagick script that would hit flickr.com with a seasonal search term to bring back the first twenty-five matching pictures. It then made a composite of the pictures, did a pixelation process, reduced the colors to a minimum set, and built a palette from them.

Pleading fair use, here’s a visual of that process, using the example where the search terms were “Landscape July”:

1. Images are brought down, each scaled to fit in a 64 x 64 pixel square, and then they’re all combined into a single image.

2. The combined image is pixelated by scaling to 5% of the original size, then scaling back up to a larger size.

3. To get a little more punch and a little less mud, the pixelated image has its histogram equalized.

4. For good measure, the script then reduces the image to 32 colors.
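
In ImageMagick terms, the pipeline looks roughly like the following bash sketch (a reconstruction from memory; it assumes the twenty-five fetched images are already sitting in ./images, and skips the flickr query and PHP glue the real script handles):

#!/bin/bash
# 1. scale each image to fit a 64 x 64 square and tile them into one composite
montage ./images/*.jpg -resize 64x64 -geometry +0+0 composite.png

# 2. pixelate: shrink to 5%, then scale back up to a larger size
convert composite.png -scale 5% -scale 320x320 pixelated.png

# 3. equalize the histogram for more punch and less mud
convert pixelated.png -equalize equalized.png

# 4. reduce to 32 colors to form the palette
convert equalized.png -colors 32 palette.png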

Now, some of this may be redundant. For example, we could easily skip step 2, since we’re reducing colors in step 4. However, this way we sort of reduce the color space before we equalize the histogram. Maybe I should experiment with other paths here.

In any case, the results for my first search term, “($month) Landscape”, were not very good:

I tried some other search terms for good measure.

Here’s “($month) colors”:

Here’s “($month) thoughts”:

And finally, here’s “($month) skies”:

I have a few conclusions. First, it’s obvious that a hand-created set of palettes would be better: the pictures Flickr returned for each search term didn’t match my expectations very well. Perhaps I’d have done better with season names instead of month names. Lastly, finding the best palette from an image is a problem that Google tells me many have worked on. I’m assuming others have probably done better than I.

But it’s a curious question — what are the “characteristic” colors from an image? My approach largely comes down to the number of pixels of a given general color. Are there lots of blues? My approach will have at least some blue. But if an accent color is “important,” whatever that means, my approach will probably lose it.
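
For what it’s worth, ImageMagick can show that pixel-count view directly. This one-liner (with a hypothetical file name) lists an image’s colors sorted by how many pixels map to each:

convert palette.png -format "%c" histogram:info:- | sort -rn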

In any case, it’s probably back to mandalas and hand-crafted palettes for the next project.