We’re all sick of switching the clocks. We’re all tired of the argument over whether it’s better to wake up or go home in the dark during the winter months. I propose we chuck it all. Instead, we can each declare our own time zone.
For me, 6:45AM will be defined as the moment the sun rises. All my clocks will be calibrated on that basis.
You can declare your own time zone. You could choose to be UTC minus 7 hours, say. So how would we ever agree on a time to do anything?
Naturally, technology comes to the rescue, as it does in all things. Each personal time zone will have a 64-bit number associated with it; the first 32 bits will identify the algorithm in use (sunrise/sunset, offset from UTC, etc.), and the second 32 bits will be data (e.g., coordinates and minutes after sunrise, or hours of offset). When scheduling with another person, one will merely say something like “let’s meet at 9AM 62c8179014da91e0” to which they’ll reply “oh, ok, that’s around 7:45AM 06d7c6a02a43c7d4 for me.”
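The split described above (32 bits of algorithm ID, 32 bits of data) can be sketched in a few lines of Python. The specific algorithm IDs and the data layout here are my own invention for illustration; nothing in the scheme pins them down.

```python
import struct

# Hypothetical algorithm IDs -- invented for illustration.
ALG_UTC_OFFSET = 1   # data: signed offset from UTC, in minutes
ALG_SUNRISE    = 2   # data: minutes after local sunrise

def encode_tz(algorithm: int, data: int) -> int:
    """Pack an algorithm ID and its 32-bit data word into one 64-bit value."""
    return (algorithm << 32) | (data & 0xFFFFFFFF)

def decode_tz(value: int) -> tuple[int, int]:
    """Split a 64-bit personal time zone back into (algorithm, data)."""
    return value >> 32, value & 0xFFFFFFFF

# A UTC-7 person: -7 hours = -420 minutes, stored as an unsigned 32-bit word.
me = encode_tz(ALG_UTC_OFFSET, -420 & 0xFFFFFFFF)
alg, data = decode_tz(me)

# Reinterpret the unsigned data word as a signed offset.
offset_minutes = struct.unpack(">i", struct.pack(">I", data))[0]
print(f"{me:016x}", alg, offset_minutes)  # 00000001fffffe5c 1 -420
```

In practice, of course, the hard part wouldn’t be the packing but getting everyone’s devices to agree on the algorithm registry, which is where the proprietary extensions come in.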
Naturally, all of our phones, virtual assistants, and other devices will implement one version or another of this system. I can already foresee that the NIST standard will be implemented with proprietary extensions by Apple, and Android devices will support NIST standard 1.5 in a non-backwards-compatible way. Microsoft, of course, will license the Apple extensions, but only support about 30% of them.
Because of the beautiful and elegant simplicity of this system, programmers will no longer be driven to drink by time zones or daylight saving time. Instead, they will rejoice, download the libraries from GitHub, and everything will fall apart splendidly.
Here’s video of Phidippus adumbrata jumping spiders mating.
The male has two primary concerns: he wants to mate, but he doesn’t want to get eaten. His elaborate dance is not only gauging interest, but possibly also determining his risk level in approaching the female.
We’ve been watching many episodes of Pete Beard’s YouTube channel on the unsung heroes of illustration. Beard does a nice job of giving you the life details of these people, along with showing you representative samples of their work.
For a period of roughly 150 years, a talented illustrator could become rich and famous. Rising literacy rates and growing leisure time created an expanding market for books in the late 18th century. There were also technological breakthroughs in printing that made adding illustrations to books more affordable, and publishers found that illustrated books sold better. And while the concept of the periodical magazine goes back to the 17th century, it was in the latter part of the 19th century that illustrated magazines exploded in popularity.
Magazines were the YouTube or Twitter of their day, the place where culture was discussed and developed, and where art movements gained momentum. They commissioned illustrations for their covers and their content, and once an illustrator had a reputation, they could make a good living. A well-known artist became sought after for advertising and poster work. Some did merch, too.
It’s fascinating to see the enormous quantity and quality of work. As someone who has dabbled in art, it’s a little overwhelming. You can view a lot of the work online. Beard’s channel is a good introduction, but then you can go down the rabbit hole and look at scans of Jugend magazine, the outstanding Illustration History site from the Norman Rockwell Museum, commercial sites like Granger, and so on. More than you can possibly absorb is just a web search away.
The growth of photography and of techniques for photographic printing changed the equation in the 1950s. Illustration took a back seat, punctuated by localized renaissances like 60s music posters and 80s ‘zine culture.
Today, another revolution is happening. Over the past year, “Artificial Intelligence” art has gone from a fairly obscure discipline to the point where it is producing stunning results. I’ve been playing with Stable Diffusion, and the results are impressive, unsettling, and fascinating. The system works on a mathematical model that was fed a huge volume of images and text from the internet. Given a text prompt or a guiding image, it uses that model to create a set of mathematically weighted values representing how it “understands” your request. It then starts from random noise and applies repeated filtering steps to create an image that satisfies those mathematical weights.
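The noise-to-image loop can be illustrated with a toy sketch. This is nothing like the real model, which uses a trained neural network as its denoiser over millions of pixel values; here the “image” is eight numbers, the “prompt embedding” is a hand-picked target, and the denoiser is faked, all invented for illustration. Only the shape of the loop is the point: start from pure noise and repeatedly strip away a fraction of the predicted noise.

```python
import numpy as np

# Toy illustration of iterative denoising (NOT the real Stable Diffusion
# algorithm; the real system uses a trained neural network as the denoiser).
rng = np.random.default_rng(0)

target = np.linspace(0.0, 1.0, 8)  # stand-in for the "understood" prompt
x = rng.normal(size=8)             # begin with pure random noise

for step in range(100):
    predicted_noise = x - target   # a real denoiser *predicts* this
    x = x - 0.1 * predicted_noise  # remove a fraction of the noise each step

# After enough steps, the noise has been sculpted into the target.
print(np.abs(x - target).max())
```

Each pass shrinks the remaining noise by a fixed fraction, which is why the result converges on the target rather than jumping straight to it, much as the real system refines a noisy latent image over a few dozen steps.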
The creation of the initial model is extremely computationally expensive and involved. The creation of images using that model, however, is something that can be done on any reasonably modern home computer.
There’s debate about “who creates the art” when using a system like this. After all, it only “knows” the art it was fed in the initial model creation. So, in that sense, it can’t create anything “new” — and yet, the way it combines things is at a low enough level that it doesn’t reproduce any of the original work. That said, it’s still biased by the work it was trained on. It “understands” beauty and ugliness based on the descriptions of the images it was created with. This bakes in a lot of other subtle prejudices, ranging from abstract ideas like what counts as futuristic to more fraught ones like what traits define racial terms. It’s telling that the initial release of the software put a “not safe for work” filter in place, because such a high percentage of the images of women in the initial model were nude.
People are already using this software to generate printed images and even movies. There’s definitely an art to creating the source prompts to get the results you want, but this is getting easier and easier. I think it will be a matter of weeks or months before the clip-art business is just a front-end to this kind of software. It will be fascinating to see where this ends up.