fogbound.net




Mon, 24 Oct 2022

Spider mating rituals

— SjG @ 12:48 pm

Here’s video of Phidippus adumbrata jumping spiders mating.

The male has two primary concerns: he wants to mate, but he doesn’t want to get eaten. His elaborate dance is not only gauging interest, but possibly also determining his risk level in approaching the female.

Jumping spiders, in general, are very visually oriented creatures. They have excellent color vision. According to some studies, the arm-waving behavior is not purely visual, however, but also vibrational, and the dance differs depending on whether the male is approaching the female in her nest or out in the open.

Anyway, spoiler alert, this male succeeded in mating and avoided being eaten (at least for the time being).

And they say romance is dead

Fri, 21 Oct 2022

The Future

— SjG @ 5:11 pm

Hey, with Twitter about to crash out hard, maybe this blog will get a little more attention. Or maybe not.


Sun, 4 Sep 2022

Illustration, in History and in the Future

— SjG @ 7:51 am

We’ve been watching many episodes of Pete Beard’s YouTube channel on the unsung heroes of illustration. Beard does a nice job of giving you the life details of these people, along with showing you representative samples of their work.

For a period that lasted roughly 150 years, a talented illustrator could become rich and famous. The increasing general literacy rates and growth of leisure time created an expanding market for books in the late 18th century. There were also technological breakthroughs in printing, which made adding illustrations to books more affordable, and publishers found that the illustrated books sold better. And while the concept of periodical magazines goes back to the 17th century, it was in the later part of the 19th century that illustrated magazines exploded in popularity.

Magazines were the YouTube or Twitter of their day, the place where culture was discussed and developed, and where art movements gained their momentum. They commissioned illustrations for their covers and their content, and once an illustrator had a reputation, they could make a good living. Once an artist was well known, they were sought after for advertising and poster work. Some did merch, too.

It’s fascinating to see the enormous quantity and quality of work. As someone who has dabbled in art, I find it a little overwhelming. You can view a lot of the work online. Beard’s channel is a good introduction, but then you can go down the rabbit hole and look at scans of Jugend magazine, the outstanding Illustration History site from the Norman Rockwell Museum, commercial sites like Granger, and so on. More than you can possibly absorb is just a web search away.

The growth of photography and techniques for photographic printing changed the equation in the 1950s. Illustration took a back seat, though there were many localized renaissances, like 60s music posters and 80s ’zine culture.

Today, another revolution is happening. Over the past year, “Artificial Intelligence” art has gone from being a fairly obscure discipline to the point where it is producing stunning results. I’ve been playing with Stable Diffusion, and the results are at once impressive, unsettling, and fascinating. This system works on a mathematical model that was fed a huge volume of images and text from the internet. Given a text prompt or a guiding image, it uses that model to create a set of mathematically weighted values representing how it “understands” your request. It then uses random noise and repeated filters to create an image that satisfies those mathematical weights.

The creation of the initial model is extremely computationally expensive and involved. The creation of images using that model, however, is something that can be done on any reasonably modern home computer.
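If you want to try it yourself, generating an image really is just a command line away once the model weights are downloaded. Something along these lines, assuming you’re using the sample script from the CompVis stable-diffusion repository (the script name and flags may differ between releases):

$ python scripts/txt2img.py --prompt "an art nouveau magazine cover in the style of Jugend, 1900" --plms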

There’s debate about “who creates the art” when using a system like this. After all, it only “knows” the art it was fed in the initial model creation. So, in that sense, it can’t create anything “new.” And yet, the way it combines things is at a low enough level that there isn’t any reproduction of any of the original work. That being said, it’s still biased by the work it was trained on. It “understands” beauty and ugliness based on the descriptions of the images it was created with. This bakes in a lot of other subtle prejudices, ranging from abstract ideas like what constitutes “futuristic” to more fraught ideas like what traits define racial terms. It’s telling that the initial release of the software put a “not safe for work” filter in place, because such a high percentage of the images of women in the initial model were nude.

People are already using this software to generate printed images and even movies. There’s definitely an art to creating the source prompts to get the results you want, but this is getting easier and easier. I think it will be a matter of weeks or months before the clip-art business is just a front-end to this kind of software. It will be fascinating to see where this ends up.


Thu, 16 Jun 2022

Ugh. Change.

— SjG @ 10:36 am

It looks like some update to WordPress has broken all of the image galleries on this site.

Oh Joy! Now I’ll need to find time to redo the theme so it all works again. I mean, I just did a redesign in … 2014. Sigh.


Tue, 17 May 2022

Linux Command Line Magic

— SjG @ 12:24 pm

In day-to-day operations, circumstances often arise where you need simple answers to fairly complicated situations. In the best scenario, the information is available to you in some structured way, like in a database, and you can come up with a query (e.g., “what percentage of our customers in January spent more than $7.50 on two consecutive Wednesdays” is something you could probably query). In other scenarios, the information is not as readily available or not in a structured format.

One nice thing about Linux and Unix-like operating systems is that the filesystem can be interrogated by chaining various tools to make it cough up the information you need.

For example, I needed to copy the assets from a digital asset management (DAM) system to a staging server to test a major code change. The wrinkle is that the DAM is located on a server with limited monthly bandwidth. So my challenge: what was the right number of files to copy down without exceeding the bandwidth cap?

So, to start out with, I use some simple commands to determine what I’m dealing with:

$ ls -1 asset_storage | wc -l
10384

$ du -hs asset_storage
409G	asset_storage

So that first command lists all the files in the “asset_storage” directory, with the -1 flag saying to list one file per line; that list is then piped into the word-count command with the -l flag, which says to count lines. The second command tells me the storage requirement, with the -h flag asking for human-readable units.
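As an aside, ls skips dotfiles and also counts any subdirectory names; if that matters, asking find for just the regular files at the top level is a little more precise:

$ find asset_storage -maxdepth 1 -type f | wc -l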

I’ve got a problem. Over 10,000 files totalling over 400G of storage, and say my data cap is 5G. The first instinct is to say, “well, the average file size is 40M, so I may only be able to copy 125 files.” However, we know that’s wrong. There are some big video files and many small image thumbnails in there. So what if I only copy the smaller files?

$ find asset_storage -size -10M -print0 | xargs -0 du -hc | tail -n1
630M	total

Look at that beautiful sequence. Just look at it! The find command looks in the asset_storage directory for files smaller than 10M. The list it creates gets passed into the disk usage command via the super-useful xargs command. xargs takes a list that’s output from one command and uses that list as input parameters to another command. To be safe with weird characters (i.e., things that could cause trouble by being interpreted by the shell, like single quotes or parens or dollar signs), we use the -print0 flag on find (which makes it put a null terminator after each result) and the -0 flag on xargs (which tells it to expect those null terminators). So this takes the list of small files and passes it to the disk usage command with the -h (human-readable) and -c (cumulative total) flags. The du command gives output for each file and for the sum total, but we only want the sum, so we pipe it into the tail command to just give us that last value.

So if we only include files under 10M, we can transfer them without getting close to our data cap. But what percentage of the files will be included?

$ find asset_storage -size -10M -print | wc -l
7708

Again, the find command looks in the asset_storage directory for files smaller than 10M and each line is passed into the word count as before. So if we include only files smaller than 10M, we get 7,708 of the 10,384 files, or just under 75% of them! Hooray!

But when I started to create the tar file to transfer the files, something was wrong! The tar file was 2G and growing! Control C! Control C! What’s going on here?

What was wrong? Well, this is where it gets into the weeds a bit. It took me longer than I’d like to admit to track down. The operating system limits how long a command’s argument list can be, and xargs imposes its own limits on top of that. If the list it receives exceeds those limits, xargs splits the input and invokes the destination command multiple times, each with a chunk of the list. So in my example above, the find command’s output was exceeding the xargs buffer, and the du command was called multiple times:

$ find asset_storage -size -10M -print0 | xargs -0 du -hc | grep -i total
6.1G	total
630M	total
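
Those two totals are two separate invocations of du. You can reproduce the effect in miniature by deliberately forcing xargs to chunk its input with the -n flag, which caps the number of arguments per invocation:

$ echo a b c d e | xargs -n 2 echo
a b
c d
e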

My tail command was seeing that second total, and missing the first one! To make the computation work the way I’d wanted, I had to allocate more command line length to xargs (the size you can set is system dependent, and can be found with xargs --show-limits):

$ find asset_storage -size -10M -print0 | xargs -0 -s2000000 du -hc | grep -i total
6.6G	total
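
Another way around the limit, if you have GNU coreutils, is to skip xargs entirely and let du read the null-separated list straight from standard input:

$ find asset_storage -size -10M -print0 | du -ch --files0-from=- | tail -n1

Since du sees the whole list in a single invocation, there’s only ever one total line for tail to pick up.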

Playing with the file size threshold, I was finally able to determine that my ideal target was files under 5M, which still gave me 68% of the files and kept the final transfer down to about 3G.
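
If you want to hunt for that threshold a little more systematically, a quick loop does the job (a rough sketch, using the GNU du form from above):

$ for limit in 1M 2M 5M 10M; do
>   count=$(find asset_storage -size -"$limit" | wc -l)
>   size=$(find asset_storage -size -"$limit" -print0 | du -ch --files0-from=- | tail -n1 | cut -f1)
>   echo "$limit: $count files, $size"
> done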

In summary, do it this way:

$ find asset_storage -size -5M -print0 | xargs -0 -s2000000 du -hc | tail -n1
2.9G	total

$ find asset_storage -size -5M -print | wc -l
7094

$ find asset_storage -size -5M -print0 | xargs -0 -s2000000 tar czf dam_image_backup.tgz
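
One caveat with that last pipeline: if the argument list ever outgrows the xargs limit again, tar will be invoked more than once, and each invocation will silently overwrite the archive. With GNU tar you can avoid xargs here altogether by having tar read the null-separated file list from standard input:

$ find asset_storage -size -5M -print0 | tar czf dam_image_backup.tgz --null -T -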