AI art promises innovation, but does it reflect human bias too?

Can technology exist without the influence of human prejudice?

Oct 18, 2022

In 2015, Twitter user Jacky Alciné tweeted that he and a friend had been mistakenly labeled as “gorillas” by the image-recognition algorithms in Google Photos. The solution? Google opted to censor the term “gorillas” in Google Photos entirely, with a spokesperson conceding that the technology is “nowhere near perfect.”

Such incidents are not uncommon in the otherwise revolutionary arena of Natural Language Processing (NLP), the subset of Artificial Intelligence (AI) that allows computers to understand human language. NLP is responsible for tools like Siri and Google Translate, and now, in combination with deep learning (another subset of AI that enables algorithms to learn from data), it powers platforms like DALL-E 2 and Midjourney, which turn text prompts into stunning works of art.

c l a i r e (Claire Silver, 2022) on SuperRare, an example of AI-generated art

Requiring little more than a deft command of the English language and a good imagination, AI has birthed an unprecedented medium for artists and artists-to-be. Anyone who lacked the technical skills to paint on a canvas or use a camera can now sculpt their own vision algorithmically. That is not to say the scene is riddled with amateurs; many big names, like Mario Klingemann, have played a part in shaping the movement as it continues to evolve.

Looking at Klingemann’s work, one might conclude that AI is the next natural step in the evolution of the art world, with its own Dalís or Warhols waiting to be made. With art now generated algorithmically, the possibilities on this newfound digital canvas seem endless.

Beneath the dazzle of an algorithmic Renaissance, however, lie lines of hard code which, while seemingly neutral, have been the center of much controversy. Some critics argue that the algorithms powering AI art perpetuate harmful biases and stereotypes found in humans. More cynically, these algorithms have the ability to shape the way we see the world, coloring the visions of AI artists and their audience. AI art might promise to leave its mark, but its potential may be tainted by the very beings who designed these algorithms in the first place: us.

AI doesn’t enact bias, people do

While most of us do not actively think about it, algorithms govern many parts of our lives, from social media to online shopping. Even the choices we make in our daily commute can be decided algorithmically, with apps like Waze and Uber sifting through live data to give users the fastest routes or the price of a ride home.

Algorithms have played a part in improving the services we use over the years, but that is not always the case. In parts of America, police departments have used algorithms as part of their police work. Until April 2020, the Los Angeles Police Department (LAPD) worked with PredPol (now known as Geolitica) to algorithmically predict where crimes in a district were most likely to occur on a given day. Activists have criticized PredPol for perpetuating systemic racism because its algorithms are trained on arrest-rate data, a model that disproportionately targets people of color, who face higher arrest rates per capita than white people. Hamid Khan of the Stop LAPD Spying Coalition calls algorithmic policing “a proxy for racism,” arguing that he does not believe that “even mathematically, there could be an unbiased algorithm for policing at all.”

Even though PredPol might be an extreme example, it demonstrates that algorithms and machine-learning systems are not above human bias, which can easily bleed into AI-powered tools if left unchecked. PredPol, along with the earlier case of Google Photos, illustrates the consequences of AI inheriting the biases of the datasets fed to it, a phenomenon the tech community has dubbed “Garbage In, Garbage Out” (or GIGO for short).
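To see how such a loop sustains itself, consider a minimal sketch in Python (a deliberately simplified toy model, not PredPol’s actual algorithm): two districts share the same true crime rate, but patrols are allocated according to past arrest counts.

import random

# Toy model of "Garbage In, Garbage Out" in predictive policing.
# Both districts have the SAME underlying crime rate, but District A
# starts with more recorded arrests due to historically heavier patrols.
TRUE_CRIME_RATE = 0.05
arrests = {"A": 60, "B": 40}  # the biased historical record, not reality

random.seed(42)
for year in range(10):
    total = sum(arrests.values())
    # The "algorithm": allocate 1,000 patrols in proportion to past arrests.
    patrols = {d: int(1000 * arrests[d] / total) for d in arrests}
    # More patrols mean more crimes observed and more arrests recorded,
    # even though the true crime rate never differed between districts.
    for d in arrests:
        arrests[d] += sum(random.random() < TRUE_CRIME_RATE
                          for _ in range(patrols[d]))

print(arrests)  # the initial 60/40 skew persists and grows in absolute terms

Because the model only ever sees its own arrest records, the initial disparity never corrects itself; the data confirms the bias that produced it.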

GIGO in AI art

PredPol may be an example of bias in a policing algorithm, but the same biases can exist in the deep-learning algorithms used to generate AI art. This is an issue that OpenAI, the developer of DALL-E 2, has pointed out itself. For instance, the prompt “a flight attendant” generated photos mainly of East Asian women, and the prompt “a restaurant” defaulted to depictions of a typical Western restaurant setting.

DALL-E 2’s generations for the prompt “a restaurant,” depicting Western restaurant settings and tableware. Source: Elliot Wong

An example of an Asian restaurant with a vastly different-looking interior compared to DALL-E’s generated images. Source: Elliot Wong

The examples raised by OpenAI highlight that DALL-E 2 tends to default to Western interpretations of prompts. Though these stereotypes can be mitigated to a degree by writing more specific prompts, OpenAI rightfully points out that this makes for an unequal experience between users of different backgrounds. While some have to customize their prompts to suit their lived experiences, others are free to use DALL-E 2 in a way that feels tailored to them.

OpenAI has also worked to reduce the generation of offensive or potentially harmful images, such as unprompted sexualized depictions of women, by filtering certain content out of the model’s training data. This, however, raises its own set of problems: OpenAI found that filtering sexual content from the training data led the model to generate fewer images of women altogether.
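A crude sketch shows why blunt filtering backfires (a hypothetical keyword filter and made-up captions for illustration only; OpenAI’s actual mitigations are far more sophisticated): because flagged content disproportionately co-occurs with images of women, scrubbing it also scrubs women from the dataset.

# A deliberately crude dataset filter (hypothetical, for illustration only).
FLAGGED = {"sexy", "nude", "lingerie"}

dataset = [
    {"caption": "a woman hiking a mountain trail"},
    {"caption": "sexy woman posing"},             # should be removed
    {"caption": "a man hiking a mountain trail"},
    {"caption": "portrait of a nude woman"},      # should be removed
    {"caption": "a woman giving a conference talk"},
]

def keep(example):
    # Drop any image whose caption contains a flagged word.
    return not FLAGGED & set(example["caption"].lower().split())

filtered = [ex for ex in dataset if keep(ex)]

def share_of_women(data):
    return sum("woman" in ex["caption"] for ex in data) / len(data)

print(f"women before filtering: {share_of_women(dataset):.0%}")   # 80%
print(f"women after filtering:  {share_of_women(filtered):.0%}")  # 67%
# The flagged images are gone, but so is a chunk of the benign
# representation of women, skewing what the model learns from.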

The representation of Western concepts seems fairly natural given that OpenAI was founded in San Francisco, with most of its operations based in the US. But alternative options seem to be lacking. Other established research labs with their own AI generator programs, such as Midjourney and Stability AI, are also based in the West, with these two hailing from the US and the UK respectively. Another layer of bias centers on language: with most of the research and development of these programs done in English, the images generated adopt an English-speaking perspective that may not capture the nuances of cultural and linguistic differences in other parts of the world.

Examples of how AI processes the concept of race, generating images of the Mona Lisa as specific ethnicities.

These factors play a part in creating datasets that are bound to be biased in one way or another, no matter the good intentions of developers. This is where “Garbage In, Garbage Out” puts things into perspective: if the generation of AI art depends on biased data, and that biased output feeds back into future datasets, then the programs behind AI art could end up in a feedback loop that inevitably perpetuates the biases of the Western world.

Bias might hold innovation back

Beyond being an issue of representation, the algorithms behind AI art may stifle innovation rather than foster it.

Even as developers like OpenAI try to make algorithms optimized to create the “best” possible image, “best” is ultimately subject to the trends and tastes of the moment. Datasets sample these trends, producing art that in turn spawns new trends mirroring the old ones, ultimately homogenizing the AI art scene as a whole.

The homogeneity of art as a result of trends is nothing new. Each era of art throughout history developed its own sense of style and form, from the realistic depictions of the Renaissance to the abstractions of postmodern art, and within each era many works looked similar in style and form. With AI art, however, homogeneity becomes far more likely; with more creative control ceded to the algorithms and datasets behind art-generating programs, the artist has to strive harder to break away from existing trends and diverge from the norm.

Outside of AI art, social media provides evidence that homogeneity in algorithms is already a problem. Researchers at Princeton University found that recommender systems, the algorithmic models responsible for recommending content to users, tend to be caught in feedback loops, a phenomenon the researchers have dubbed “algorithmic confounding.” As users make decisions online, such as liking or clicking on content, recommendation systems are trained on such user behavior, further recommending similar content for users to consume. These feedback loops increase homogeneity without increasing utility; in other words, users may not necessarily be getting the content they desire despite an increase in similar recommendations.

An illustration of the feedback loops in social media. Source: Chaney et al.
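This dynamic is easy to reproduce in miniature. The following Python sketch (a toy rich-get-richer model in the spirit of Chaney et al., far simpler than their actual simulations, with made-up genre names) recommends genres in proportion to past engagement, and engagement in turn follows the recommendations:

import random

random.seed(0)
GENRES = ["photography", "painting", "video", "music", "writing"]
engagement = {g: 1 for g in GENRES}  # start from a near-blank history

for step in range(5000):
    # Recommend in proportion to past engagement...
    shown = random.choices(GENRES, weights=[engagement[g] for g in GENRES])[0]
    # ...and users mostly engage with whatever they are shown.
    if random.random() < 0.9:
        engagement[shown] += 1

total = sum(engagement.values())
for genre, count in sorted(engagement.items(), key=lambda kv: -kv[1]):
    print(f"{genre:12} {count / total:.1%}")
# Early random fluctuations compound: the final shares typically drift
# far from uniform, with no change in what users actually prefer.

The loop never learns what users want; it learns what it has already shown them, which is precisely the confounding the Princeton researchers describe.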

In the art and creative industries, such feedback loops have proven harmful. Consider the backlash against Instagram. Many creators and celebrities have voiced criticism of Instagram’s decision to favor short-form video content in its algorithms in a bid to rival TikTok. The petition “Make Instagram Instagram Again” gripes that Instagram is full of recycled TikTok content as a result of its algorithm; at present, roughly 300,000 people have signed it. Instagram’s head Adam Mosseri does not inspire confidence in a more dynamic and inclusive digital future. Responding to requests for more content from friends (as opposed to brand accounts and influencers) in the feed, Mosseri tweeted that Stories and DMs already serve this purpose; rather than listen to Instagram’s user base, Mosseri simply asserted the company’s overall strategy.

If social media algorithms can result in “old stale content” (as the petition phrases it), the algorithms responsible for AI art are susceptible to the same feedback loops, especially if the datasets behind them are not actively managed. And as Mosseri has shown, the people responsible for what algorithms show us may not necessarily care about what people want, leaving improvement and change in the hands of a select few. GIGO could become a reality in every sense of the phrase, with AI art eventually bearing little to no originality over time.

A more representative future

While a more vibrant and inclusive AI art scene might be the end goal, the road towards it still stretches far ahead. Many of the platforms that generate AI art are still in beta, and even the most widely available of them, Midjourney, is only accessible through Discord with limited features.

As OpenAI and Midjourney release their betas to more users, uncertainty may arise over possible abuse of these programs for malicious ends, such as deepfake pornography or controversial political imagery. However, the alternative of keeping these programs in the hands of an elite minority (as OpenAI previously did) would only serve to entrench the bias present in AI art, so a larger pool of beta testers seems to be a step in the right direction.

More importantly, the datasets that algorithms sample need to account for a wider variety of lived experiences around the world and across different languages. Ultimately, while bias in AI may be difficult to eliminate completely (as it is in humans), sampling from more diverse data may help mitigate some of that bias and create more innovative generations of art.

AI art has the potential to shake up the world of digital art as we know it, especially as it sees a growing community within Web3. Artists like Claire Silver are making huge waves in the AI art scene, and galleries solely dedicated to AI art are being formed. As with Web3, there is hope that AI art will give everyone a shot at creating a work of art on their own, especially given that art is usually an endeavor reserved for those with time and money. But creating that reality requires an extensive effort to include different voices in the development of these new-age tools. And just as art is an expression of our personal voice, to steer AI in a more inclusive direction, we need to shout into the void and hope it echoes back.


Elliot Wong

Elliot, aka squarerootfive, is a visual artist who seeks to bring clarity to the cultural issues surrounding Web3. He hopes to see the maturation of the scene as time goes on and guide conversations in the space for the better. He can be found on Twitter at @squarerootfive.
