Blog Post
Paint to Pixels to AI
Ain't what it used to be!
When I started producing artwork for NASA and other clients, the only way I could work was with traditional painting techniques. In fact, at that time I was also using an airbrush quite a bit, as I hadn’t gained enough confidence with brushwork; my art was about 80% airbrush and 20% brushwork. Gradually, over a few years, the percentages shifted and my art became more brush and less airbrush. Because I was working for commercial clients, I mainly used acrylic paint, which dried quickly. Also, the one time I tried to spray diluted oil paint through my airbrush, I nearly asphyxiated!
As my artwork matured, my images became much more realistic. My clients wanted to use my art to persuade decision makers that the projects and programs depicted were achievable by making it look as if they had already happened; these were my “photos of the future”. In another blog post I will explore the learning path that enabled my art’s content to pass muster under the close scrutiny of engineers and scientists.
In 1987, I started using a crude 3D wireframe program on an Apple IIe computer to create very rough skeletons of the hardware in my art. It saved me from having to build as many physical models for photographic reference. I was aware, through popular films like TRON, that more realistic 3D computer imagery was on the near horizon. The problem was that I couldn’t afford a drop in realism from my paintings to the more “plastic” look of computer graphics (CG) at the time. I was, however, very attracted to the smoothness and precision that digital 3D gave mechanical objects. Eventually, in 1995, the software became sophisticated and affordable enough that I could begin the transition to computer-generated art. I still did paintings, but the CG art became a viable alternative.
Now, almost 30 years later, artificially generated imagery is starting to dominate the commercial art scene. It can generate complex, photographically realistic pieces in just a few moments. At first I was quite concerned about how this could impact the art business in general, and my career specifically. So I subscribed to Midjourney and ran some experiments. I quickly discovered that the current state of AI art is to generate exquisitely detailed pictures that are 95% style and 5% substance. For example, I asked the program for a “backlit, 6-wheeled, pressurized rover that had no airlock with 2 astronauts that had just left the rover and were examining a robotic rover that had stopped in a Mars canyon”. The program generated 4 attractive images, two of which had 8 wheels instead of the requested 6. The detailing on the rover was elaborate and decorative, but bore no resemblance to any functional elements I could recognize, and I’ve seen a lot of space hardware.

I guess the lesson here is that artists who have a depth of experience in particular subject matter, whether it be spaceships, birds, racecars, et cetera, will still be needed to meet the needs of an educated, demanding audience. Unfortunately, the folks who will have a hard time are the very talented generalists with no particular specialization. I’m guessing that eventually the AI programs, if left to harvest visual content from billions of copyrighted and public-domain online images, might get closer to generating more insightful designs. I’ll be keeping my eyes open on this…