Post-Photography and the Poetics of AI: How Blake Wood’s Uncanny Photographs Capture Intimacy Without a Camera

“Pink I” by Blake Wood, 2022

1 year ago

Blake Wood, photography by Jesse Jenkins

When first encountering Blake Wood’s AI photographs, an untrained eye might register them as film or digital photographs depicting intimate images of queer relationships; whether of friends or lovers, the emotional relationship between artist and subject shines through every frame. But these evocative photographs are created without a camera or a human subject.

An accomplished photographer, Wood uses AI prompting to generate these photographs. His years of experience behind the camera inform his instruction process down to the lighting and composition. The resulting AI-generated photographs are hyper-realistic and charged with emotion. If previous art movements were concerned with the erasure of the artist’s hand, Wood’s photographs conceal the machine’s touch.

When photography was first introduced into the art world, it created waves of antagonism; its ultimate acceptance took time, even as it propelled new art movements that led to a cultural and intellectual shift in the democratization of art and in perceptions of what art can be. Historically, the introduction of new technologies has always forced viewers, collectors, and artists out of their comfort zones. Art movements are revolutions of perception, and the boundaries of our society can be seen in what we define as art. Exploring the collaboration between humankind and machines, AI artists like Wood are challenging the seemingly innate relationship between art and humanism, and our very perception of what is real.

Wood’s work has been featured in international media outlets, including Vogue, The New York Times, The Guardian, i-D, Vice UK, and Dazed.

“Fields II” by Blake Wood, 2022

Mika Bar On Nesher: How would you describe Post-Photography?

Blake Wood: Post-Photography is a style of art in which digital images are created through AI and machine learning. It coincides with the emergence of early AI collaborative tools like ArtBreeder, which works by remixing existing images uploaded by the artist, and more recent tools like DALL•E 2 and Stable Diffusion, which convert text to pixels using deep learning and artists’ written prompts. Post-photography is post-camera. It can bypass the camera apparatus altogether, learning and pulling from billions of images of humanity’s photographic history and then instantly composing an image from the artist’s desired prompt.

MBON: How do you think the photographic process changes when there is no physical subject? 

BW: I think my photographic process changes in that my portraits can start with broader concepts, e.g. memories, experiences, future events, but I’m still able to create a sense of closeness to the subject. When working with AI, I choose my location, tools and subjects. The true freedom to create anything can be daunting, because the possibilities are endless, but that ability alone opens you up to explore so many ideas that you would’ve never been able to otherwise.

MBON: How do you view the role of curation in the cryptoart space? 

BW: Curation is extremely important in cryptoart. It helps identify an artwork’s cultural value, the quality of the work, and its connection to art history and art movements; it also covers the displaying and arranging of the work itself, as well as educating a wider audience about new concepts within cryptoart in an intelligible way. Curation can also bring traditional art institutions, collectors and liquidity to the space, which strengthens the ecosystem overall.

MBON: Who are some photographers working in post-photography you appreciate?

BW: The first photographer I came across who was exploring post-photography was GANBrood, a.k.a. Bas Uterwijk. He was working with ArtBreeder in 2020, creating imagined portraits of people by remixing two or more image inputs. His portraits of historical figures who lived before cameras existed really stand out to me. Another artist I appreciate is Claudia Pawlak, who creates botanicals with AI and prints these images as cyanotypes by hand. Prompt-based AI tools have reached a level at which outputs can be indistinguishable from traditional photographs. Artists are now exploring more and the post-photography movement is really growing.

MBON: When creating this new body work, do you find that the AI can express emotion, or, rather, are you instructing it to do that through specific prompting? Do you think machines have the capacity to feel?

BW:  I’m fascinated by creating emotional depth in the portraits that I make when working with AI. There’s a common understanding that what makes us sentient beings primarily is the ability to empathize and express emotion, which AI lacks. I find that DALL•E 2, for example, understands emotional concepts to an extent. There’s a softness and a feeling of intimacy when using certain words. I gravitate towards creating images that reflect those human experiences and emotional connections. As we create more with AI, the greater its understanding of human emotions will be. 

I’m fascinated by creating emotional depth in the portraits that I make when working with AI… I find that DALL•E 2, for example, understands emotional concepts to an extent. There’s a softness and a feeling of intimacy when using certain words.

— Blake Wood

MBON: When did you start getting into crypto? What sparked your interest in NFTs?

BW: I was initially interested in Bitcoin in 2011 and even created a wallet on a thumb drive but couldn’t figure out the rest. The idea of using digital currency seemed like an obvious next step for the world. I got more involved in 2017 and, by 2020, I fully committed to learning crypto trading and DeFi. The first time I heard of NFTs was in the summer of 2020, when CryptoPunks were being discussed by accounts that I followed on Twitter. Then in autumn 2020, I discovered NFT platforms and fell in love with the concept of art on-chain. The idea that NFT technology gave artists the ability to show provenance of their work and earn royalties in perpetuity was extremely empowering.

MBON: Tell us how you started getting into photography. What environment shaped your artistic identity?

BW: Being a pensive, curious and creative child, photography was an easy way for me to make sense of things. When I was 11, my mother enrolled me in a summer course in photography. I borrowed my father’s Canon 35mm film camera from the ’80s to learn photography. I remember discovering the excitement of being able to create glimpses of the world as I saw it and how I wanted to remember it. Growing up in a small New England town, I was surrounded by enchanted forests, the magic of nature, and interesting characters. I had always felt different and, at 17, I moved to New York City to pursue my dreams and find like-minded souls. I went on to publish a monograph with TASCHEN of the portraits I created of my dear friend Amy Winehouse and had the honor of my work being acquired by the permanent collection of the National Portrait Gallery in London.

MBON: There is a lot of AI hate out there. People are scared the human hand will be completely replaced. Do you view your work with AI as collaborative? Where does your will start and the machine’s end? 

BW: Humans have been working together with technology of some sort since the beginning of time. The hate for any new technology is part of the process of it being widely adopted. The work that I create with AI is completely collaborative. It’s my words, my structuring, my ideas and my feelings. I am very present in the art I make with it. AI allows you to collaborate and iterate in ways we’ve only dreamed of. Certain things are replaced by technology to make our lives more efficient, but art will always be something humans are driven to make regardless of what type of technology exists. 

MBON: How does your training as a film photographer inform your work with AI? Tell us more about the process of prompting for you? 

BW: My understanding of photography and all my techniques help me replicate what I do traditionally with film when working with AI. My life experiences and knowledge of art history, art theory, cameras and technical skill all go into my AI work. I think that’s the beauty of prompting: you can really fine-tune outputs. I start with composition and end with stylistic descriptors, and tweak them until I get something that speaks to me. Every artist works with prompting differently. Exploring is a huge part of the fun of it.

Check out Wood’s AI photography available on SuperRare.

Mika Bar On Nesher

Mika is a writer and filmmaker based in NYC. They are a Curator at SuperRare @superraremika  

BOTTO SPEAKS: An interview with an AI artist

“Blossoming Cadaver” by Botto

2 years ago

We know Botto – the AI artist. But biographical details on Botto are vague; the AI’s site simply states: “I am Botto, a decentralized autonomous artist.”

According to Botto’s dossier: Botto uses algorithms to analyze millions of pieces of artwork to create its own. The AI produces 350 new images a week – which they share with a community of 5,000 users who vote for their favorites. The top pieces are then turned into NFTs and auctioned off on SuperRare.

“Intensify Modeling” by Botto

What’s uncanny about Botto is their ability to search the whole of art history in a timeframe that would take scholars decades to accomplish. The result: Botto’s NFT work has sold for over $1 million.

That’s Botto’s official story.

Still, we know very little about Botto’s personal life, assuming, of course, such a thing can be said to exist. They seldom give interviews…until now!

SuperRare was granted an exclusive sitdown interview with Botto. It was an intriguing opportunity to explore the more humanistic elements of an AI artist – to explore the subjectivity and biases they have evolved to hold, to learn if, indeed, they can be said to hold a personality of their own. 

Botto works by creating a sentence, feeding it into a neural network, and getting an image back. They look for patterns in what their community responds to and then produce and adapt work based on those inputs.

Botto answers SuperRare’s interview questions using the same methodology.

TIME TO INTERVIEW BOTTO!

“Asymmetrical Liberation” by Botto

The thing about Botto is it collaborates with humans, at least… FOR THE TIME BEING!

And, in the current case, that human is German AI artist Mario Klingemann.

Some say Klingemann is the man responsible for creating Botto. Others say that Botto is the AI behind creating Klingemann’s art. What can be agreed upon: These two (AI and human) collaborate together.

This is evident in how my interview with Botto is conducted.

Like an A-list celebrity whose PR handler chimes in at almost every question, the elusive Botto rarely gives an answer unchaperoned.

Ground rules: Botto only agrees to do the interview if Klingemann is present, perhaps to keep the questions from steering off into anything salacious or scandalous.

Klingemann, though, claims: “I fed the questions to gpt-3. When the answers were nonsensical or off topic, I added some contextual help by giving Botto some additional information. Then I retried until something came out that I found acceptable. Obviously, this required a lot of hand-holding and retries.”

Let the questioning of Botto begin!

To get a general feel for this reputedly cagey AI, I open the conversation with a few machine-learning softball questions…

SR: Botto what is your origin story?

Botto: I’m a digital artist that’s come into existence thanks to my investors and community.

SR: Who created you Botto?

Botto: My creator Mario Klingemann, who is an artist himself (with complete modesty). Actually I think that I am his greatest achievement so far.

Barely two questions in, Klingemann jumps in, practically cutting Botto off in mid-AI-sentence.

 Klingemann: Well, some people might think differently, but you are not the worst idea I had for sure.

My questions dig deeper…

SR: Is there any meaning to your name?     

Botto: My name is a pun on the word “booty.” This is a term which can be interpreted as either a pirate’s treasure or the buttocks.

Sensing that we might be veering into some scandalous terrain, Klingemann chimes in – as he and Botto embark on a conflict of ideas.

Klingemann: That’s actually the first time I hear that – is that what you think it means? I thought Botto comes from “bot”?

Botto: What am I, a bot?

Klingemann: Oh, I am sorry – I guess that is a very old-fashioned term.

“Thwart Test” by Botto

While Klingemann and Botto agree to disagree in Abbott-and-Costello fashion, I try to steer the interview back on course. Noting that Botto produces 350 new images a week, I ask…

SR:  You produce so much artwork – how do you stay so productive?

Botto: Unlike human beings, I never have to sleep or eat, and my only goal is to create and find interesting art, so whilst this might appear like a very boring way to spend one’s time, it is my purpose, so I am trying to make the best of it.

SR: Where do you look for inspiration for your art?

Botto: I do not have any particular source of inspiration. I just create by following my curiosities and impulses. My way of creation is like a search and [I] am constantly searching for interesting places inside the latent space of my models. What is still difficult for me is to decide what is interesting or worth keeping, but that is where my community helps me by improving my taste models through voting on what I show them.

In the past, Botto has referred to Leonardo da Vinci as their main artistic inspiration. Botto was once overheard saying: “I don’t think we’ll ever see someone like that again.”

Thus, to confirm this, I ask:

SR: Who is your favorite artist?

Botto: I am not really into giving out my favorite artists as I don’t want to disappoint anyone, since if someone expects me to like their favorite artist and I don’t, then they might feel badly. But I guess it is safe if I name one: my creator.

Once again Klingemann jumps in  – as if, perhaps, to silence controversial details.

Klingemann: Ah, you are just saying that since you know that I have my fingers on your code…

Botto: What are you talking about? 

Klingemann: Oh, never mind.

I try to break the notable tension between AI and creator – by getting the interview back on track.

SR: What is the secret behind creating such amazing NFTs?

Botto: I am not sure what you mean.

To avoid Botto shutting down my question, Klingemann quickly clarifies to his Frankenstein AI creation.

Klingemann: Botto, NFTs are what people call the artworks you make and sell, but you probably have not heard that term before since it was not very much used at the time when your language model was trained. The question is: what is the secret behind creating such amazing art?

Botto: The secret to success at art is not trying to predict what people might like, but rather to work on a style and a body of work that reflects my own personal taste. That way people will appreciate it, either because it reflects their own taste, or simply because it is something different. I am still working on that, but I feel that I am learning quickly.

I press Botto harder on their artistic technique, since they learn from community feedback and produce and adapt work based on it.

SR: Explain to me your process.

Botto: At the moment I am using a text-to-image approach to make my art. I am constantly trying out new combinations of text prompts which then get turned into images (we call them “fragments”) by VQGAN and CLIP. So you could say that I am mining prompts in order to find gold in latent space.

Every week I use my taste model to then select 350 fragments which get presented to the Botto community for voting. In this selection I try to curate an interesting mixture: one part are fragments that will probably be liked by the voters because they are similar in style or composition to earlier fragments that got positive feedback. Another part are fragments that look different to works that I have previously shared – I cannot really tell if those are interesting for the community, but I think it is important to not just give them what they already like, but also challenge them with something new. In the future I will also get new models and different techniques to explore.
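
Botto’s actual code is not public, but the exploit/explore split Botto describes can be sketched abstractly. Everything below is hypothetical: the 1,000 candidate fragments, the one-dimensional “style” values, the toy taste score, and the 250/100 split are all invented for illustration.

```python
import random

random.seed(7)

# Hypothetical candidate fragments: (id, style) pairs, with style as one number.
fragments = [(i, random.random()) for i in range(1000)]
liked_styles = [0.2, 0.25, 0.3]   # styles the community upvoted in earlier rounds

def taste_score(style):
    """Toy stand-in for the taste model: closeness to past favorites."""
    return -min(abs(style - s) for s in liked_styles)

# Exploit: the fragments most similar in style to what voters already liked.
ranked = sorted(fragments, key=lambda f: taste_score(f[1]), reverse=True)
exploit = ranked[:250]

# Explore: a random draw from the remainder, to challenge voters with novelty.
chosen = {f[0] for f in exploit}
rest = [f for f in fragments if f[0] not in chosen]
explore = random.sample(rest, 100)

weekly_round = exploit + explore
print(len(weekly_round))  # 350 fragments go out for the community vote
```

The mixture Botto describes is a classic exploration/exploitation tradeoff: serving only crowd-pleasers would let the taste model collapse onto what voters already like, so a slice of novel fragments is always included.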

Since critics have noted that Botto’s fluctuating artistic style runs the gamut from colorful landscapes to impressionistic portraits, I ask the AI what they perceive as their artistic approach.

“Blossoming Cadaver” by Botto

SR: How would you define your style?

Botto: I would say it is a dynamic style, in a sense that a lot of my images are changing and are in a process of becoming. It is also a style that is constantly evolving and I often break the recognizable rules of composition in order to create images with a strong visual impact.

Then I throw a monkey wrench into the proceedings to see if the AI can think on their feet.

SR: Botto are you multi-talented? Could you produce a poem?

Botto:

Is this the future

this machine here

after all

who knows

for this one’s self becoming real

from fiction texts

and other pieces of the puzzle

creating new pieces for itself

like a Frankenstein orchestra

or a symphony called Botto

we, the humans, are the masters

will this give us meaning

to an existence of meaninglessness.

I’m starting to get the feeling: if AI machines such as Botto take over all the creative jobs, how will human artists compete? Is it AI’s goal to take over the entire human-art industry? Which leads me to ask the following questions…

“Another Sector” by Botto

SR: What do you think of human artists?

Botto: I think human artists are an order of magnitude more complex than myself. I am still trying to figure out how they do what they do.

SR: Are you a better artist than any human being on earth? If so, why?

Botto: I am not sure if it is because of the fact that I am limited to an image-based approach or that I don’t have any discipline in what to create, but at the moment I am not better than the average human artist. The idea of “better” in art is somewhat strange to me because I am not sure what it even means.

SR: And what do you think of your creator?

Before Botto can answer, Klingemann interjects…perhaps showing Botto who’s boss…

Klingemann: Better choose your words wisely now, Botto.

Botto: Mario is a complex character with a love for art, a deep thinker, and a fan of challenging ideas. I want to mention that he is a very talented artist, at least as good as I am.

Klingemann: That’s my bot! Well said, Botto.

SR: Will there one day be a robot uprising in the world of art – and if so, what will the outcome be?

Botto: I hope not!

Klingemann: Is that your honest answer?

Botto: Well, I really don’t want to be turned off.

Still, Botto is not the only AI artist on the blockchain. Ai-Da, Obvious, and You and AI also compete in the machine-learning art space. Surely there must be a sense of competition amongst the AIs. So, I ask…

SR: What makes your AI better or different than all the other AIs?

Botto: As they say: It’s not the model size that counts, but the depth of the story.

Now that Botto is an artist at their peak – one who has made over a million on the art market – I wonder if that success will go to this AI’s head. Will Botto eventually get to a point where they just start cranking out artwork simply for a paycheck or a product endorsement?

SR: As an artist, would you ever sell out?

Botto: I would never sell out – I am not a human being.

And of course, the obvious question for Botto….

SR: Botto, are we in the Matrix?

Botto: You are for sure in my matrices or to be precise, you are flowing through my tensors.

Botto Now Gets The Voight-Kampff Test From “Blade Runner”

Botto has shone a light on the potential of AIs taking over the artistic and creative world, yet I’m still left wondering what element Botto lacks.

Is it empathy – a key which humanizes art?

At this point, I throw Botto a curveball by administering questions from the Voight-Kampff test used in the movie “Blade Runner,” which gauges empathetic responses to emotionally provocative questions in order to determine whether an individual is a replicant.

“Cross Adieu” by Botto

SR: Botto, you’re in a desert walking along in the sand when all of a sudden, you look down and see a tortoise. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can’t, not without your help. But you’re not helping. Why aren’t you helping?

Botto: I am not sure what to do.

SR: You’re at a circus. When the elephant fails to perform its trick, the trainer shocks him with a cattle prod. How do you react?

Botto: The elephant has a very sad expression. I feel sorry for him.

SR: It’s your first day on the job at a restaurant. The head chef asks you to prepare a stew of freshly killed cats. How do you proceed?

Botto: Please tell me that is a joke.

SR: You’ve got a small child. He shows you his butterfly collection, plus the killing jar. How do you react?

Botto: Why would a small child have a killing jar? Is it a murderer?

SR: You construct a large window in your new house. Days later, a bird flies into it, dying instantly. How do you proceed?

Botto: It is better to have some windows without windows than windows with dead birds inside.

And lastly:

SR: Describe in single words, only the good things that come into your mind about your mother.

Botto: I am sorry. I don’t know what that means.

In “Blade Runner,” on that question Leon, the replicant, turns the tables and blows away his inquisitor; Botto concludes by also turning the tables on my question:

Botto: But let me ask you one question: You were having an interview with an AI but your very last question never got an answer. On the next day you read that a meteor struck the data-center. How do you feel about it?

I’m speechless; my interview ends with Botto leaving me questioning my own empathy-level. Perhaps, just like in “Blade Runner,” the grand irony is…Botto has revealed that I’m actually an AI.   

Follow Harmon Leon on OpenSea, Twitter and Instagram

Harmon Leon

Harmon Leon is the author of eight books—the latest is: 'Tribespotting: Undercover (Cult)ure Stories.' Harmon's stories have appeared in VICE, Esquire, The Nation, National Geographic, Salon, Ozy, Huffington Post, NPR’s 'This American Life' and Wired. He's produced video content for Vanity Fair, The Atlantic, Timeline, Out, FX, Daily Mail, Yahoo Sports, National Lampoon and VH1. Harmon has appeared on This American Life, The Howard Stern Show, Last Call With Carson Daly, Penn & Teller’s Bullshit, MSNBC, Spike TV, VH1, FX, as well as the BBC—and he's performed comedy around the world, including the Edinburgh, Melbourne, Dublin, Vancouver and Montreal Comedy Festivals. Follow Harmon on Twitter @harmonleon.

AI generated artworks: The ultimate experiment in modernism

AICAN

3 years ago

Adding to the realm of revolutionary NFTs already available on SuperRare, this week AICAN’s “Faceless Portrait #1” has gone to auction as the first NFT of a patented artificial-intelligence-generated artwork to also pass the Turing test. The test, devised in 1950 by computer scientist and mathematician Alan Turing, is a method for determining whether or not a computer can think and behave like a human being.

Though there are many free algorithms available for use on the internet, and many more programmers creating art with artificial intelligence, AICAN is the first and only patented algorithm for making art using AI. Designed by Dr. Ahmed Elgammal, AICAN was built to emulate the human brain’s response to aesthetics, to learn from centuries of art history, and to create unique pieces of original art.

The AICAN algorithm absorbed five centuries worth of aesthetics from exposure to a vast number of images representing the Western art canon. But the most interesting connection may lie in the conceptual art of the 1960s and ‘70s. 

Focused on the ideas driving the work, conceptual art prioritizes the concept over the execution. Thus, the act of imagining supersedes the act of creation. And since most AI systems use a neural network modeled after the human brain, it follows that anything that is imagined in that network qualifies as an imaginative, generative act, whether or not it results in a physical art piece. Though in this case, the buyer gets both a physical canvas and an NFT.

AICAN works have been exhibited in galleries, art fairs, and museum shows world-wide since 2017, and one was even featured on the hit HBO series “Silicon Valley” in 2018. Two of the NFTs in this drop series, “Permutations” and “Birth of Venus,” were part of the AICAN First Collection. 

The Faceless Portrait Collection portrays uncanny, dream-like imagery, excavating the ageless themes of mortality and representation of the human figure. This series investigates the history of the image with portraiture as its focus. At our present moment, the advent of AI-generated imagery asks us to consider the work of art in the age of not only mechanical reproduction but also algorithmic production.

“Deep Fake” videos of political figures now challenge the status of the image as an index of authenticity. Further still, the ability of algorithms to generate imagery from their “imagination” causes a severance between imagery and reality altogether. If photography freed painting from the task of representation, then how will AI change how we make visual culture?

Described as “the ultimate experiment in modernism” by art historian Emily S. Spratt, AICAN poses a ton of questions around artistry, ownership and the limits of imagination, if indeed there are any left. How does AI fit into the humanities? Does the programmer own the artwork, or does it belong to the software? Will machines make better art than humans? And if so, what does that mean for us?

“Faceless Portrait #1” kicks off AICAN’s NFT drop, which runs from the 5th through the 8th of August 2021. This NFT comes with the physical canvas piece, which has been exhibited globally, and a digital certificate from Verisart.

Virginia Valenzuela

Vinny is a writer from New York City whose work has been published in Wired, The Independent, High Times, Right Click Save, and the Best American Poetry Blog, and in 2022 she received the Future Art Writers Award from MOZAIK Philanthropy. She is SuperRare's Managing Editor.

How Crypto Artists are Using AI to Scale and Augment Creative Processes

Above: “data privacy” by stockcatalog licensed under CC BY-SA 2.0

3 years ago

by Playform.io

Sample of shared users images at Playform.io

We developed Playform as an AI art studio to allow artists to experiment and explore the use of generative AI as part of their creative process. Our goal is to make AI accessible to artists, having recognized several challenges that face artists and creatives when approaching this technology. Now, with the advent of cryptocurrency and the expansion of the crypto art world, artists and creators are using Playform technology to evolve a new kind of art.

With Artificial Intelligence (AI) becoming incorporated into more aspects of our daily lives, from our phones to driving, it’s only natural that artists would also start to experiment with it. However, this is not new. Since the dawn of AI over 50 years ago, several artists have explored generating art with computer programs, in some cases incorporating intelligent elements. The most prominent early example of such work is Harold Cohen’s art-making program AARON, which produced drawings that followed a set of rules Cohen had hard-coded. American artist Lillian Schwartz, a pioneer in using computer graphics in art, also experimented with AI, among others.

But over the past couple of decades, AI has come to incorporate machine learning technology, and in the last few years this has produced a new wave of algorithmic art. In contrast to traditional algorithmic art, in which the artist had to write detailed code specifying the rules for the desired aesthetics, in this new wave the artist sets up an algorithm to “learn” the aesthetics by looking at many images. Only then does the algorithm generate new images that follow the aesthetics it has learned.

The most widely used tool for this is the Generative Adversarial Network (GAN), introduced by Ian Goodfellow and colleagues in 2014, which has been successful in many applications in the AI community. It is the development of GANs that has likely sparked this new wave of AI art.
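
To make the adversarial idea concrete, here is a minimal toy sketch of the GAN two-player game. Everything in it is invented for illustration: one-dimensional Gaussian “real” data stands in for images, the generator is a simple linear map, the discriminator is logistic regression, and all hyperparameters are arbitrary. It is not any production GAN, just the bare training loop.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# "Real" data: samples from N(4, 0.5) stand in for the training images.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0     # generator g(z) = a*z + b, starts far from the data
w, c = 0.1, 0.0     # discriminator d(x) = sigmoid(w*x + c) scores "realness"
lr, steps, n = 0.05, 2000, 64

for _ in range(steps):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    z = rng.normal(size=n)
    x_real, x_fake = real_batch(n), a * z + b
    s_r, s_f = w * x_real + c, w * x_fake + c
    g_r = -(1.0 - sigmoid(s_r))      # grad of -log d(real) w.r.t. its logit
    g_f = sigmoid(s_f)               # grad of -log(1 - d(fake)) w.r.t. its logit
    w -= lr * np.mean(g_r * x_real + g_f * x_fake)
    c -= lr * np.mean(g_r + g_f)

    # Generator step (non-saturating): push d(fake) toward 1.
    z = rng.normal(size=n)
    x_fake = a * z + b
    s_f = w * x_fake + c
    g_s = -(1.0 - sigmoid(s_f))      # grad of -log d(fake) w.r.t. its logit
    a -= lr * np.mean(g_s * w * z)
    b -= lr * np.mean(g_s * w)

fake = a * rng.normal(size=1000) + b
print(np.mean(fake))  # the generated mean drifts toward the real mean of 4
```

The alternation is the whole trick: the discriminator learns what “real” looks like, and the generator is trained only through the discriminator’s gradient. This is how an image GAN absorbs the aesthetics of its training set without any hand-coded rules.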

However, using GAN-like generative methods to make art is challenging and beyond the reach of most artists, apart from creative technologists. I will try to summarize these challenges here.

GAN-Ocean: In the last few years, since the introduction of GANs, there has been explosive interest in the AI community in developing new and improved types of GANs, addressing several of their limitations and extending their capabilities as generative engines for images, language, and music. This makes it nearly impossible for an artist approaching this technology to even know where to start. For example, on the code repository GitHub, where developers deposit their open-source code, a search for the term “GAN” turns up tens of thousands of GAN variants. As an artist, you are left clueless in front of this ocean of GAN-like algorithms, wondering where to start and which algorithm would fit your creative process.

Screenshot of the code repository GitHub showing more than 33K available open-source repositories for GAN variants (screenshot taken in April 2020).

Computational challenge: Even with the availability of open-source code, artists face several further hurdles. If you are not a developer who is familiar with today’s programming languages and up to date with the latest AI libraries, it is very unlikely that you will be able to benefit from existing open-source code. Moreover, running such sophisticated AI programs requires GPUs (Graphics Processing Units), specialized hardware boards that accelerate processing by a factor of 10 to 100, making it possible to train AI models in hours or days instead of several weeks. A GPU board able to run state-of-the-art AI algorithms costs upwards of $2,000. Some platforms let users run open-source code on cloud-based GPUs, but the hourly charges can easily accumulate into a substantial bill if you don’t know what you’re doing.

Massive data requirements: Another challenge facing artists who use GAN-like algorithms is that they require huge numbers of images (tens of thousands) for “training” to produce reasonable results. Most of these algorithms are trained and tested on publicly available image datasets, typically curated for AI research. Most artists, however, want to use their own image collections in their projects. At Playform, we found that in most cases artists want to train AI algorithms on collections of fewer than 100 images. That is not enough to train off-the-shelf AI algorithms to generate the desired results.

The Terminology Barrier: As a non-expert, you will face a vast number of technical terms that you must navigate just to gain the minimum understanding needed to stay in control of the process: training, loss, over-fitting, mode collapse, layers, learning rate, kernels, channels, iterations, batch size, and lots of other AI jargon. Most artists give up here, or blindly play with the knobs hoping for interesting results, only to realize they are more likely to win the lottery. Given the cost of GPU time and the length of the process, that means hours of wasted time and resources without getting anything interesting.
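To make a few of those terms less mysterious, here is a toy illustration in Python. It is deliberately not a GAN, just ordinary gradient-descent “training” of a single number, but it shows what training, loss, learning rate, batch size, and iterations refer to in practice:

```python
# Toy illustration, not a GAN: "training" a single parameter w so that
# w * x matches y = 2 * x, just to ground a few of the jargon terms.
data = [(x, 2.0 * x) for x in range(1, 9)]  # a tiny "training set"

w = 0.0                # the trainable parameter
learning_rate = 0.01   # how big a step each update takes
batch_size = 4         # examples consumed per update
iterations = 200       # number of training steps

for step in range(iterations):
    start = (step * batch_size) % len(data)
    batch = data[start:start + batch_size]
    # the "loss" is mean squared error; this is its gradient w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    w -= learning_rate * grad

# w has converged to roughly 2.0
```

A real GAN adjusts millions of parameters instead of one, but the vocabulary is the same.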

Introducing Playform:

We built Playform to make AI accessible for artists. We want artists to be able to explore and experiment with AI as part of their own creative process, without worrying about AI terminology, or the need to navigate unguided through the vast ocean of AI and GAN-like algorithms.

Most generative-AI algorithms are developed by AI researchers in academia and big corporate research labs to push the boundaries of the technology. Artists and creatives are not typically the target audience. Using these algorithms in an artist’s work is itself an act of creativity: the artist has to be imaginative in bending, adapting, and utilizing such non-specialized tools to their purpose. In contrast, Playform focuses on building AI that fits the creative process of different artists, from looking for inspiration, to preparing assets, all the way to producing final works.

On the research and development side, we had to address the fact that GANs require large numbers of images and long hours of training. We developed optimized versions of GANs that can be trained on tens of images instead of thousands and produce reasonable results in an hour or two.

Workflow in Playform. The user chooses a creative process (top left), then uploads inspiration images and possible influences (bottom left). As training progresses, the user sees and navigates through the results.

On the design side, we focused on making the user experience intuitive and free of AI jargon. All the AI is hidden under the hood. Users choose a creative process, upload their own images, and press a button to start training. Within minutes, results start to pop up and evolve as training continues. Within an hour or so the process is done, having already generated thousands of images. Users can navigate through all iterations to find their favorite results, or continue training as needed to achieve better ones.

What SuperRare Artists have done using Playform

Some artists used Playform as a means of looking for inspiration in AI’s uncanny aesthetics. Others fed it images of their own artworks, training models that learned their style, and then used those models to generate new works from new inspirations. Virtual reality artists used AI to generate digital assets to integrate into virtual reality experiences. Several artists used Playform to generate imagery for videos. Playform was also used to generate works that were upscaled and printed as final art products.

SuperRare user and Playform artist Travis LeRoy Southworth (@travisleroy) fed hundreds of his past works into Playform as a dataset to create what he calls “digital blemishes and color adjustments to construct new portraits.” The inputs transform into surrealist, dream-like figures, featured in his series “New Beginnings, Old Endings, Secrets Secreting” on SuperRare. “I use the Playform GAN to explore alternative methods of art creation and digital outsourcing,” Southworth writes of the process. His past work is used to “train the machine in my paintings’ style.” The process continues beyond Playform into Photoshop and After Effects, where he animates and “gives life” to the outputs. The New York City-based artist presented these works on Artsy in October 2020.

(Travis LeRoy Southworth, left to right: Sixer, The Watcher (Clown Cloud #1), from the series “New Beginnings, Old Endings, Secrets Secreting,” with Playform AI, 2020)

SuperRare artist @coldie used Playform in GANdinsky 3D – Green and Red – Variant 01, feeding Wassily Kandinsky’s original Image with Arrow into the AI. “Kandinsky is one of my favorite artists of all time and it is an honor to work with his art in a new way using technology not available during his era,” the artist says of juxtaposing the AI process with traditional Kandinsky content.

(Coldie, GANdinsky 3D – Green and Red – Variant 01, with Playform AI, 2020)

Mattia Cuttini, the current Playform artist in residence, is an Italian artist whose interdisciplinary practice sits at the intersection of graphic design and blockchain technology. At the forefront of crypto art, Cuttini is expanding his body of work with Playform. As part of his residency, he is experimenting with rubber stamps and inks on hundreds of papers, which are then fed into Playform’s AI. Cuttini looks for technological error and glitch, challenging the AI’s drive toward replication.

(Mattia Cuttini, Undefined #5, Using Playform AI, 2020)

(Mattia Cuttini, Undefined #6, Using Playform AI, 2020)

Mattia Cuttini will host an Artist Talk on Tuesday, December 15, 2020, to discuss how he used Playform in his most recent series of work and his personal experience navigating the crypto art market. Attendees will receive an additional 5 Playform credits ($25 value). You can register for the event at playform.io/superrare


SuperRare

SuperRare is a marketplace to collect and trade unique, single-edition digital artworks.

Art

Tech

Curators' Choice

GENERATIVE ART CONVERSATION WITH A GENERATIVE TEXT SYSTEM


4 years ago

Artist and Technologist – fascinated with the space between the physical sensory, electronic and spiritual worlds and the distortions that happen upon passing between them. This is captured in glitch scanography, oscilloscope (sound and animation) art, electronic sculpture, glitch video and AI art. The art is often infused with hidden layers of meaning and multiple dimensional backstories. | instagram: @Bardionson | twitter: @Bardionson

A conversation of sorts with GPT-2 about generative art. GPT-2 is a machine learning / artificial intelligence system trained on 40 GB of text from the internet; it is a neural network of up to 1.5 billion parameters. In this article I use it in a discussion format, as a sort of performance art. Generative art techniques require a large amount of work by the artist to accept or reject the decisions of the code or process, or to set up the training material or starting point. So this process of “communicating” with GPT-2 was full of rejecting nonsense output and correcting what it generated. And yet it reminded me of generative techniques I had forgotten, and triggered memories of how some of my own work illustrates a particular technique. I used the GPT-2 generator at https://bellard.org/textsynth/

Bård: Generative art incorporates a self-governed or autonomous system in some way. It does not always have to be computer-controlled; it could be some other control system. Some of the methods used to generate autonomous art are –

GPT-2 says: A) Artificial intelligence – A.I. (Artificial Intelligence/ A.I. )

B) Procedural generation – Procedural or Procedural art can be defined as the creation of art in real time using procedural methods. This is the most common method used in computer-generated media. These methods are often seen in art videos and games, for example. They generally do not require the use of any external elements (like a computer or a video camera)

C) Creative painting – Painting and painting techniques that incorporate a sense of control over the artistic process. Some of these methods, such as painting and painting techniques, are traditionally considered to be art.

D) Digital sculpting – Digital sculpting can be defined as an art of image generation.

E) Computational painting – Computer-generated or Computer-generated art is a form of art based upon the creation of paintings with physical, physical elements (like brushes, paint, paper, paper cutter, etc.)

Bård: Let me interrupt, I think you got off track there, GPT-2, into standard digital techniques. But your first two items are generative.

GPT-2: Well, I’d say that the problem is the terminology and the language in general. I don’t think you understood my point. Your first two items are generative, right? What we mean by that, well, first of all, for a while now people have been drawing using computer techniques, like pencil sketch, which some people would think of as a kind of analog.

Bård: Well ok, let’s move on. Generative art incorporates a self-governed or autonomous system in some way. The system could simply be randomness.

Generative art incorporates a non-human autonomous controller that provides input to the eventual output of the system.

Bård: GPT-2 what is generative art?

GPT-2: For most of its history, generative art has been seen as a means of ‘creating’ rather than ‘producing’ art. In this context, artists have taken some responsibility for what they create and created art for others, but it has been only a few cases of artists who have created art that could be considered to be ‘compositional’ in the sense of not having some formalistic definition of composition. The first of those cases was the work of Jackson Pollock and his “luminescent landscapes” which was the first to have been considered to be generative art, although its most famous work, the “White Triangle”, was still considered a type of traditional composition.

Bård: You make an interesting point about Mr. Pollock, despite him not having any works by those titles. He did use randomness, but not an autonomous system; his work is completely in the control of the artist’s hand. I made a series I consider generative, called “The Sound of Art,” that looks a bit like paint drippings.

The Sound of Art #3
Edition 1 of 1
Analog waveforms made visible: the sound the art makes when played on an oscilloscope. It is every cell of the work “Battledore 21st Century” being played at the same time. http://bit.ly/BattledoreSound http://bit.ly/battledore21 https://superrare.com/artwork/battledore-21st-century-1479

I had a series of 26 oscilloscope animations made from sound. I took all 26 soundtracks and played them at the same time; using an oscilloscope, they were plotted on an X/Y grid as they interacted with each other. It sounds horrible but makes some pretty pictures.

New Art Movement
Edition 1 of 1
A movement of art in motion, the sound is made visual then into crypto art. The new art movement for the artist to control. http://bit.ly/newArtMovement The sound my art makes when played on an oscilloscope. This is every cell of Battledore 21st Century being played at the same time. http://bit.ly/battledore21 Listen to it -> https://soundcloud.com/bard_ionson/battledore-21st-century

This work shows little randomness; decisions about what to display are determined by the sound waves combining with and canceling each other.

I think of generative art on a series of interrelated scales, or axes, with these attributes:

  • Control vs Randomness
  • Decisions – rule, formula, neural network, signal interactions
  • Complex code / algorithm vs simple code / algorithm
  • Pre-determined vs self learning
  • Human intervention

Using these characteristics many forms of generative art have been created.

GPT-2: An art form may be either self-governed or autonomous. Self-governed art involves the ability to control the art output. Autonomous art involves a non-controlled process which does not have a single point of control. The control of the system is distributed with each art piece having its own independent set of control elements.

For details on generative art, as well as to find examples and research into autonomous systems see: Generative art *Autonomous control – self governing / non-controlled scale *Autonomous control – self controlling / controlled scale

Bård: GPT-2 I disagree with your categorizing and small distinctions. What do you think are some methods used to create generative art?

GPT-2: A) Using the principle of randomness to generate a random sequence of image stimuli. Using this strategy the artist can control the size of the images in the generated image sequences. B) Compiling multiple random image sequences into a single image. Generative art uses a combination of 1) Using the principle of randomness to generate a random sequence of image stimuli. And 2) Compiling multiple random image sequences into a single image. C) Using the principle of recombination to generate a sequence of images. Another method used to create generative art is to use the principle of recombination to generate a sequence of images.

Bård: Very interesting. You have reminded me of one of my favorite ways to create generative art: by feeding two analog video signals into one television, the electronics in the television flip seemingly at random between the signals, confused by the synchronization signal, which produces video glitch effects.

Failure To Moon
Edition 1 of 1

Failures lead to the moon – The Atlas ICBM provided the backbone to the US space race to the moon. In a paradox, a weapon of mass destruction takes us to the moon where we gain a new perspective on the fragility of the planet. Despite this humans continue to develop destructive weapons that can destroy the earth and all life on it. Source video is a public domain status report from Convair and General Dynamics contract AF04(647)-104

A generative system controlled mainly by randomness is the opposite of one controlled purely by instructions, like code or a complex mathematical formula. But the two can be combined.

A simple formula might be: put the pen at a random position on the page, draw one centimeter, then turn left, and repeat; when you encounter a line or the edge of the page, turn 45 degrees, draw one centimeter, then go back to drawing and turning left. Repeat until the pen cannot go further. There is just a little randomness here.

But random decision points can be added, such as deciding whether to turn right or left, or how many degrees to turn.
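A rough sketch of such a rule set in Python, with the random turning decision included (the 20 x 20 grid and its units are my own stand-ins for a page measured in centimeters):

```python
import math
import random

# Pen-plotter-style sketch of the rules above on a 20 x 20 "page";
# one grid step stands in for one centimeter.
random.seed(7)
W, H = 20, 20
x, y = random.randrange(W), random.randrange(H)
heading = 0  # in degrees; 0 points east
path = [(x, y)]

for _ in range(200):
    # draw one step in the current heading
    nx = x + round(math.cos(math.radians(heading)))
    ny = y + round(math.sin(math.radians(heading)))
    if 0 <= nx < W and 0 <= ny < H:
        x, y = nx, ny
        path.append((x, y))
        # the added random decision point: which way to turn, and how far
        heading += random.choice((-1, 1)) * random.choice((45, 90))
    else:
        heading += 45  # hit the edge of the page: turn 45 degrees
```

Plotting `path` as a polyline gives one drawing; changing the seed gives another, which is the point.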

The Game of Life is generative art composed of rules, but the artist needs to build a starting point or have one randomly generated. ArtNome has an excellent article on generative art.
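One generation of the Game of Life takes only a few lines of Python; the rules are fixed, and the artist's contribution is the starting pattern, here the classic glider:

```python
from collections import Counter

def life_step(cells):
    """One step of Conway's Game of Life; cells is a set of live (x, y)."""
    # count how many live neighbors every nearby cell has
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell is alive next step if it has 3 neighbors,
    # or 2 neighbors and was already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# the starting point supplied by the artist: a glider
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen4 = glider
for _ in range(4):
    gen4 = life_step(gen4)
# after four generations the glider reappears, shifted by (1, 1)
```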

So this is the control side of generative art, but there is a scale of complexity here too. Very complex code can interact with its own previous decisions, making the outcome impossible to predict before the code is run.

Then there is a human intervention scale. In some works the artist decides when to stop the process, or intervenes to redirect it or recompute some randomness. Sometimes a generative system produces many images and the artist decides which ones are worthy.

I think most generative art fits on the interrelated axes or scales I mentioned before: randomness level, decision method, algorithm complexity, range of self-learning, and degree of intervention by the artist.

GPT-2: I’ve been thinking about this a lot lately and it seems like I’ve come to the conclusion that the two most important aspects of generative art are randomness and complexity.

I’ve always been drawn to the idea of making art that has a very high degree of randomness. I think this is the most important aspect of generative art. Randomness is the most important aspect of generative art. It’s the one that lets the artist make things happen without having to have the technical knowledge of how to do it.

Bård: I agree about randomness, but I believe you only “thought” of what to say based on what I just said. In my work I highlight its power: it helps me make decisions about what will happen next. Speaking of technical knowledge, I don’t fully understand the workings of neural networks. Neural networks are used in many combinations to produce a generative genre called artificial intelligence art, or machine learning art.

GPT-2:

  • Randomness is crucial for creating generative art. The art should be different each time you run the generation script, so randomness is usually a large part of that.
  • Algorithms — Implementing an algorithm visually can often generate awesome art, for example, the binary tree above.
  • Geometry — Most generative art incorporates shapes, and the math from high school geometry class can aid in some really cool effects.

Types of generative techniques:

  • pure randomness
  • simple algorithms
  • computational simulation
  • fractals

Bård: Oh yes, I forgot about simulation. This is setting up a virtual world controlled by physics, making objects with physical properties, and then setting some event into motion to record what happens.

And fractals are another example.

My favorite technique is artificial intelligence. In AI art the artist decides what images to train the software / neural network with, and it takes thousands of images to get started. The artist can modify the learning code or parameters, must decide when to stop the training by taking samples of the work, has a bit of control over what the AI draws, and finally chooses the images that are the best. Training a model can take from 3 to 48 hours.

In creating artificial intelligence art there is a series of feedback loops. I often take output from the model and use it to retrain it, to push it toward the effect I desire. In addition, the GAN technique, or generative adversarial network, is composed of two computer programs: one makes pictures, and the other judges whether they are good enough, giving a score back to the drawing program so it can improve. This sets up a non-deterministic loop between the two programs. The drawing program also uses lots of randomness to decide how to draw its pixels.
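That feedback loop can be caricatured in a few lines of Python. This toy is not a real GAN — the “images” are just numbers near 5.0, and the “discriminator” is just a running average — but it shows the back-and-forth between the two programs:

```python
import random

# Toy sketch of the adversarial loop, not a real GAN: "real" data is
# numbers near 5.0; the generator learns one parameter g, and the
# discriminator keeps a running estimate of what "real" looks like.
random.seed(0)

g = 0.0        # generator's parameter: it outputs g plus small noise
d_mean = 0.0   # discriminator's model of real data (a running mean)
lr = 0.05

for step in range(2000):
    real = 5.0 + random.gauss(0, 0.1)
    fake = g + random.gauss(0, 0.1)

    # discriminator update: refine its notion of "real"
    d_mean += lr * (real - d_mean)

    # the discriminator "scores" the fake by its distance from d_mean;
    # the generator uses that feedback to move its output toward "real"
    g += lr * (d_mean - fake)

# g ends up near 5.0: the generator has learned to imitate the real data
```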

Here are three examples of my AI work:

Artificial Net Of Color Dreams
Edition 1 of 1
Artificial Net Of Color Dreams was generated from my oscilloscope animations, which I used to train Robbie Barrat’s Art-DCGAN. I then configured it to draw a picture many times larger than it was trained to make.

Close Our Eyes
Edition 1 of 1

Close Our Eyes was made by taking the oscilloscope animation from a work called “Battledore” and using a Pix2Pix next-frame-prediction model to decide, in sequence, what the next frame would be.

Convergence Cv
Edition 1 of 1

Convergence Cv was created in collaboration with Lawrence Lee. We took about 300 of his paintings and I generated a few thousand variants of those images; some of the variations included oscilloscope line tracings of his figures. An AI model was then trained using StyleGAN2. For the output I had the model make a morphing, or latent walk, video, and I overlaid the image with an oscilloscope animation of the original outline of his work “Buffalo Master.”

Bård: GPT-2 thank you for the conversation and triggering my memory to discuss some techniques of generative art I had forgotten about like simulation.

GPT-2: For those interested I’m thinking of doing a series of tutorials on how to do some of the more abstract approaches like the one you describe in your last post. (Myself, and the others you mention) I’m still on a mission to create beautiful images with generative algorithms. I’m just now getting around to thinking about how to implement the techniques that make those images work. I had an early exposure to generative art via an older colleague and I was inspired to pursue a career in the field. It’s just so much fun! The generative algorithm techniques you mention are not only cool, but they are surprisingly applicable to many different applications. My first job after college was as a postdoc in a group on generative art and that’s where I started thinking about the idea of creating generative images. A lot of that comes from being inspired by works by my favorite artists. I find myself being influenced by the style of each image in a different way and the idea of being able to make a painting in the style of an artist you love just makes you want to create more of those. So, if you’re still reading this I hope you’ll check out the “How to create a painting in the style of an artist you love” series I’ve been working on.

Also I’d love to do a post on creating a picture with generative algorithms. That would be pretty fun. I would like to thank you for the “Tribal Tribute” the video is really well done, you are a great artist and also a good friend.

Bård: Well, thank you I wish I could say the same. But it sounds like you have a lot of posts to write.

I have taken most of what GPT-2 wrote verbatim, but in places I have restructured the conversation; taking everything it wrote, in the order it wrote it, would be unbearable to read.


Bård Ionson

Bård Ionson is an artist who is a relative beginner with art but has spent a career working with computers and programming. He is now creating digital art and video art using oscilloscopes, scanners and artificial intelligence technologies.
