The memory of the first time I faced mortality has resurfaced as I explore the world of AI-generated images.
I have this distinct memory of being 7 or 8 years old, playing in my backyard. We had a bramble of berries behind our shed that barely had enough room for me to explore. One day I found a hummingbird nest with two tiny eggs. By this age I had already heard that if you touch a bird's nest, the mother might never return.
Deciding it was "for science," I very gently plucked one tiny egg from the nest and held it up to the sun to see if I could see inside, with no success. Pondering the possibilities, I rolled the tiny egg in my palm; it was so light, and so fragile.
My curiosity continued to build as I held the egg between my thumb and pointer finger. I justified it to myself: the mother may have already written off this egg since I had touched it. Without much more thought, I let the pressure build until the egg began to crack.
I expected the egg to open with some clear liquid and a tiny yellow yolk, like a chicken egg from the store. Instead, it popped, oozing life-giving blood into my hand.
Seeing the blood hit me with the reality that I had taken a life. Tears poured down my face as I confronted the emotions that came from my actions. I was no longer innocent. It was no accident but premeditated: I took a life.
It was my first true regret.
The images in this post were created by giving a word prompt to an artificial intelligence called DALL·E 2. Within several seconds, the AI creates four similar images that I can download and/or share.
I also have the option to refine my result by asking the AI to create another set of variants from any single image, as shown below.
This technology matters to me because I am one of those people who think in pictures. When I close my eyes, I don't see black. I see an endless stream of images, as if I were plopped in front of a screen playing my lucid dreams. When my wife can't sleep, she'll ask me what I see. I'll ramble off what I am seeing until one of us falls asleep (usually me).
In traditional learning environments, my way of thinking could be a disability. I have found ways to cope, but for a long time I had a lot of trouble describing the thoughts and ideas in my head. A story, an idea, an answer could look so clear in my mind, but when I opened my mouth, the words failed to describe the vision. I would focus on the end result without describing what I had seen to get there. I would get frustrated when I could see the picture perfectly and others could not. In my adult life I have learned to adapt and become a more functional member of society, but it is no surprise that I became a professional visual artist.
AI technology has opened a path to better communication. It has made it possible for me to tell stories that were locked in my head. I had wanted to make a photo project about this story in the past, maybe even a painting series, but I could never compile the reference material for such a specific story until now, using an AI interface called DALL·E 2. I had tried similar prompts in other AI tools with really bizarre results. Below is one of my first attempts, using Disco Diffusion to draw a "hummingbird nest."
The render time for that image was several minutes; DALL·E 2's results, by comparison, come up like a slow search result. One negative of DALL·E 2 is its usage guidelines, which prevent explicit content from being created. When I wanted to display the bloody egg from the story, I got a warning telling me that it was against their Content Policy: "Shocking: bodily fluids, obscene gestures, or other profane subjects that may shock or disgust."
ARTIFICIAL INTELLIGENCE IS A TOOL
I am very happy with the speed at which this technology is improving. It will be a tool I use to help me communicate ideas and tell stories. I can see it being very helpful when I construct storyboards for my photo shoots and video productions. It continues to empower me with ways to express myself in a time-efficient, yet still personal, way.