fogbound.net




Sun, 4 Sep 2022

Illustration, in History and in the Future

— SjG @ 7:51 am

We’ve been watching many episodes of Pete Beard’s YouTube channel on the unsung heroes of illustration. Beard does a nice job of giving you the life details of these people, along with showing you representative samples of their work.

For a period that lasted roughly 150 years, a talented illustrator could become rich and famous. Rising literacy rates and the growth of leisure time created an expanding market for books in the late 18th century. There were also technological breakthroughs in printing that made adding illustrations to books more affordable, and publishers found that illustrated books sold better. And while the concept of periodical magazines goes back to the 17th century, it was in the later part of the 19th century that illustrated magazines exploded in popularity.

Magazines were the YouTube or Twitter of their day, the place where culture was discussed and developed, and where art movements gained their momentum. They commissioned illustrations for their covers and their content, and an illustrator with a reputation could make a good living. Once an artist was well known, they became sought after for advertising and poster work. Some did merch, too.

It’s fascinating to see the enormous quantity and quality of work. As someone who has dabbled in art, it’s a little overwhelming. You can view a lot of the work online. Beard’s channel is a good introduction, but then you can go down the rabbit hole and look at scans of Jugend magazine, the outstanding Illustration History site from the Norman Rockwell Museum, commercial sites like Granger, and so on. More than you can possibly absorb is just a web search away.

The growth of photography and of photographic printing techniques changed the equation in the 1950s. Illustration took a back seat, though there were localized renaissances like 60s music posters and 80s ‘zine culture.

Today, another revolution is happening. Over the past year, “Artificial Intelligence” art has gone from being a fairly obscure discipline to producing stunning results. I’ve been playing with Stable Diffusion, and the results are impressive, unsettling, and fascinating all at once. This system works on a mathematical model that was fed a huge volume of images and text from the internet. Given a text prompt or a guiding image, it uses that model to create a set of mathematically weighted values representing how it “understands” your request. It then uses random noise and repeated filters to create an image that satisfies those mathematical weights.

The creation of the initial model is extremely computationally expensive and involved. The creation of images using that model, however, is something that can be done on any reasonably modern home computer.
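In practice, once you’ve downloaded the model weights, generating an image locally is a single command. With the public CompVis release it looks something like the sketch below; the script name, flags, and the prompt here are illustrative, and they vary between versions and forks:

$ python scripts/txt2img.py \
    --prompt "a fog-shrouded mountain village, ink and watercolor" \
    --plms --seed 42 --n_samples 1

Run it again with the same prompt and seed and you get the same image back; change the seed and you get a different interpretation of the same prompt.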

There’s debate about “who creates the art” when using a system like this. After all, it only “knows” the art it was fed in the initial model creation. So, in that sense, it can’t create anything “new” — and yet, the way it combines things is at a low enough level that there isn’t any reproduction of the original works. That being said, it’s still biased by the work it was trained on. It “understands” beauty and ugliness based on the descriptions of the images it was trained with. This bakes in a lot of other subtle prejudices, ranging from abstract ideas like what constitutes “futuristic” to more fraught ones like what traits define racial terms. It’s telling that the initial release of the software put a “not safe for work” filter in place, because such a high percentage of the images of women in the initial model were nude.

People are already using this software to generate printed images and even movies. There’s definitely an art to creating the source prompts to get the results you want, but this is getting easier and easier. I think it will be a matter of weeks or months before the clip-art business is just a front-end to this kind of software. It will be fascinating to see where this ends up.


Thu, 16 Jun 2022

Ugh. Change.

— SjG @ 10:36 am

It looks like some update to WordPress has broken all of the image galleries on this site.

Oh Joy! Now I’ll need to find time to redo the theme so it all works again. I mean, I just did a redesign in … 2014. Sigh.


Tue, 17 May 2022

Linux Command Line Magic

— SjG @ 12:24 pm

In day-to-day operations, circumstances often arise where you need simple answers to fairly complicated questions. In the best scenario, the information is available to you in some structured way, like in a database, and you can come up with a query (e.g., “what percentage of our customers in January spent more than $7.50 on two consecutive Wednesdays?” is something you could probably answer with a query). In other scenarios, the information is not as readily available or not in a structured format.

One nice thing about Linux and Unix-like operating systems is that the filesystem can be interrogated by chaining various tools together to make it cough up the information you need.

For example, I needed to copy the assets from a digital asset management (DAM) system to a staging server to test a major code change. The wrinkle is that the DAM is located on a server with limited monthly bandwidth. So my challenge: what was the right number of files to copy down without exceeding the bandwidth cap?

So, to start out with, I use some simple commands to determine what I’m dealing with:

$ ls -1 asset_storage | wc -l
10384

$ du -hs asset_storage
409G	asset_storage

So that first command lists all the files in the “asset_storage” directory, with the -1 flag saying to list one file per line; that output is then piped into the word-count command with the -l flag, which says to count lines. The second command tells me the storage requirement, with the -h flag asking for human-readable units.

I’ve got a problem. Over 10,000 files totalling over 400G of storage, and say my data cap is 5G. The first instinct is to say, “well, the average file size is 40M, so I may only be able to copy 125 files.” However, we know that’s wrong. There are some big video files and many small image thumbnails in there. So what if I only copy the smaller files?

$ find asset_storage -size -10M -print0 | xargs -0 du -hc | tail -n1
630M	total

Look at that beautiful sequence. Just look at it! The find command looks in the asset_storage directory for files smaller than 10M. The list it creates gets passed into the disk usage command via the super-useful xargs command. xargs takes the list output by one command and uses it as the input parameters to another command. To be safe with weird characters in file names (spaces, quotes, and the like, which xargs would otherwise treat as delimiters), we use the -print0 flag on find (which terminates each result with a null character instead of a newline) and the -0 flag on xargs, which tells it to expect those null terminators. The pipeline thus takes the list of small files and passes it to the disk usage command with the -h (human-readable) and -c (cumulative) flags. du gives output for each file and for the sum total, but we only want the sum, so we pipe it into tail to give us just that last line.
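If you’ve never been bitten by the whitespace problem, here’s a tiny illustration of why those two flags matter (a contrived example with a made-up file name):

$ touch "file one.mp4"
$ find . -name '*.mp4' -print | xargs du -h     # without -0, xargs splits on the space: du is asked about "./file" and "one.mp4"
$ find . -name '*.mp4' -print0 | xargs -0 du -h # with null terminators, the whole name arrives as one argument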

So if we only include files under 10M, we can transfer them without getting close to our data cap. But what percentage of the files will be included?

$ find asset_storage -size -10M -print | wc -l
7708

Again, the find command looks in the asset_storage directory for files smaller than 10M and each line is passed into the word count as before. So if we include only files smaller than 10M, we get 7,708 of the 10,384 files, or just under 75% of them! Hooray!

But when I started to create the tar file to transfer the files, something was wrong! The tar file was 2G and growing! Control C! Control C! What’s going on here?

What was wrong? Well, this is where it gets into the weeds a bit. It took me longer than I’d like to admit to track down. The operating system limits how long a single command line can be, and xargs has its own, smaller limit. If the list it receives exceeds those limits, xargs splits the input and invokes the destination command multiple times, each time with a chunk of the list. So in my example above, the find command was overwhelming the xargs buffer, and the du command was called multiple times:

$ find asset_storage -size -10M -print0 | xargs -0 du -hc | grep -i total
6.1G	total
630M	total

My tail command was seeing that second total, and missing the first one! To make the computation work the way I’d wanted, I had to allocate more command line length to xargs (the size you can set is system dependent, and can be found with xargs --show-limits):

$ find asset_storage -size -10M -print0 | xargs -0 -s2000000 du -hc | grep -i total
6.6G	total
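As an aside, you can watch xargs do this chunking in isolation. Here’s a toy example that forces tiny batches with the -n flag rather than waiting to hit a size limit; echo ends up being invoked three times:

$ printf '%s\n' one two three four five | xargs -n 2 echo
one two
three four
five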

Playing with the file size threshold, I was finally able to determine that my ideal target was files under 5M, which still gave me 68% of the files and kept the final transfer down to about 3G.
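If you’d rather not rerun those commands by hand for each candidate threshold, a little loop does the experimentation for you. This is just a sketch (pick whatever thresholds make sense for your data); each output line is the threshold followed by the cumulative size of the files under it:

$ for size in 1M 2M 5M 10M; do
>   printf '%s\t' "$size"
>   find asset_storage -size -"$size" -print0 | xargs -0 -s2000000 du -hc | tail -n1
> done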

In summary, do it this way:

$ find asset_storage -size -5M -print0 | xargs -0 -s2000000 du -hc | tail -n1
2.9G	total

$ find asset_storage -size -5M -print | wc -l
7094

$ find asset_storage -size -5M -print0 | xargs -0 -s2000000 tar cf dam_image_backup.tar
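One last note: for the archive step you can sidestep the xargs length limits entirely, because GNU tar can read a null-delimited file list from standard input. Assuming GNU tar, the equivalent single invocation would be something like:

$ find asset_storage -size -5M -print0 | tar -cf dam_image_backup.tar --null -T -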


Sun, 13 Mar 2022

The Programming Curse

— SjG @ 10:28 am

Programming is fun. You can be off doing some chore and get this idea … “hey, wouldn’t it be cool if I could just have the computer help me with this …”

So, you come up with an idea, and you think through the first few steps, and throw together a script. Then you play with it, and you get excited. It works, sort of, but you can see ways to make it better. You make changes and discover a better approach to the problem — so you implement that, and before you know it, you’ve spent an evening or an afternoon. It’s exciting to watch your ideas turn into something.

But computers get ever more complex, and the interfaces for programming them grow more complex along with them. Keeping pace with that complexity are more and more powerful development tools. This is a double-edged sword: you can easily do amazing things that would once have been very difficult, but getting set up can be more challenging, and when things go wrong, it’s harder to figure out why.

For some ideas, getting to the point of coding is still as easy as entering php -a or python and starting to type. For other ideas, though, there is the dreaded setup problem. I call this phenomenon “The Programming Curse.”

For example, I had an idea for a phone app that I wanted to prototype. In the old days, I’d have had to break out XCode and learn Swift and all of the iOS libraries. Today, however, I can use more familiar (to me) web technologies, and build an app using the Ionic Framework. Now I have a toolchain that includes at least nodejs, the Ionic framework, Ruby Gems, and XCode. I know very little about any of these things’ internals, and I really don’t want to know a lot about them. I just want to explore my code idea!

Sadly, I have to learn something about the internals. My first attempt to install the toolchain failed deep inside a nodejs package setup. After extensive googling, I found that it’s because one of the components is not the latest version (but there’s a reason for that [1]).

Maybe I’ve just gotten old, or maybe I’m just lazy. I’m certainly not the first to gripe about this phenomenon [2]. It just dampens the fun when, during that excited “wouldn’t it be cool” phase, I have to spend hours getting a functional development environment together instead of actually getting to write code.

[1] The problem is that I support a phone app that was written in an earlier version of the Ionic framework, and it depends on a Cordova plug-in that’s no longer supported. The plug-in still works, but I can’t update my development environment for my new project, because the dependencies would clobber my ability to produce new builds of my old project. Could that be resolved by selectively holding back some packages to previous versions? Maybe. Three or four hours’ worth of effort in that direction didn’t get me anywhere, other than into dependency hell.
For my web-only projects, I use products like Docker to keep a fully isolated development environment per project. Since Ionic depends on nodejs, which installs globally (and since I need XCode to perform the final build), I haven’t found a way to do that. I guess if I made some Mac OS virtual machines I could, but it seems like a lot of overhead.

[2] Fifteen years ago, David Brin wrote an article, “Why Johnny Can’t Code,” extolling the virtues of BASIC. I find myself grudgingly agreeing — not about his specific language objections (I don’t know why he felt Perl or Python are any further from the metal than BASIC), but about how and why it should be easier to write small programs.


Fri, 11 Mar 2022

Of House Mountains and AR

— SjG @ 12:01 pm

Many, many years ago, a Swiss exchange student introduced us to the concept of a “house mountain.” It’s sort of the landscape view equivalent of home base: the mountain that you see from wherever “home” is.

Separately, I just came across a discussion of augmented reality applications, which reminded me of the outstanding PeakFinder web site and mobile app. I first encountered PeakFinder in 2013 when I was loading up my first iPhone. It was one of the two applications that showed me the enormous promise of augmented reality (the other being the original Star Walk). I was able to install PeakFinder on my phone and identify peaks while hiking in the Sierra Nevada, on a trip to the Atacama desert in Chile, from a ferry crossing Horseshoe Bay in British Columbia, and in many other places.

In general, I find I use PeakFinder without the AR mode. I just point my phone around the horizon and see the peaks labeled, recognizing them by their basic shapes. But if you want to know what that peak is in a picture you took last year, PeakFinder has a neat feature where you can import your photo and then overlay the data on it. It requires that GPS coordinates were embedded in the picture, or that you can find the spot where you took the picture on a map. Tilting the camera off level and lens distortion make the overlay approximate, but it’s almost always good enough!

Shot from the train in Alberta, Canada as we approached Jasper
Volcanos and Laguna Miscanti, Chile

So, marrying the concepts of AR and house mountains, PeakFinder lets you generate the view from any arbitrary point and even keep it as a “favorite.” So even if you aren’t in a place or don’t have a picture from that location, you can see your house mountain, like this view of my childhood house mountain.

AR House Mountain