Why Artists Are Fed Up with AI Art
Education
Introduction
In recent times, discussions surrounding AI-generated art have intensified, drawing attention to significant ethical concerns. Understanding how these systems work is crucial to grasping the severity of the issues. Unlike an artist creating work from scratch, an AI model requires input to produce output. Companies like Stability AI profit from high-quality media by training their models on vast data sets of images paired with text descriptions; the model learns the correlations between images and text and uses those correlations to generate new images. This description is simplified, but it captures the basic mechanics of these systems. Essentially, images can only be generated if there is a data set, and this is where the problems arise.
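To make "training a model on a data set" concrete, here is a minimal, purely illustrative sketch in Python (using PyTorch). It is not Stability AI's code or any real system's: the tiny network, the tensor shapes, and the random stand-in "images" and "captions" are all hypothetical placeholders. It only demonstrates the general pattern described above: a model is repeatedly shown image-text pairs and learns to reconstruct images from noisy versions, conditioned on their captions, so its output can only ever reflect whatever data it was fed.

# Illustrative sketch only: a toy "denoising" network trained on image-text pairs.
# Real systems use far larger models and billions of scraped images.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, image_dim=64 * 64 * 3, text_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, image_dim),
        )

    def forward(self, noisy_image, caption_embedding):
        # The caption embedding conditions the reconstruction on the text.
        return self.net(torch.cat([noisy_image, caption_embedding], dim=-1))

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in "data set": random placeholders where a real pipeline would load
# billions of scraped image/caption pairs.
images = torch.rand(16, 64 * 64 * 3)   # flattened 64x64 RGB images
captions = torch.rand(16, 32)          # pretend caption embeddings

for step in range(100):
    noisy = images + 0.5 * torch.randn_like(images)   # corrupt the images
    reconstructed = model(noisy, captions)             # try to recover them
    loss = nn.functional.mse_loss(reconstructed, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The key point for this discussion is the training data: swap the random placeholders for billions of scraped images and you have, in essence, the pipeline behind modern image generators. Nothing is created from nowhere; every output reflects what went in.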
Many of these data sets include billions of copyrighted images—artworks, photographs, and more—all collected from the internet without the consent of the original creators. My own artwork, shared on social media platforms like Instagram, has been used to train AI models without my consent. If you've shared your work online, or even images of yourself or your surroundings, there's a good chance those images are embedded in these data sets as well.
For a deeper dive into this topic, I suggest watching Steven Zapata's video, linked below. The core issue is that when artwork is shared online, it is effectively opted into an exploitative system without the artist's knowledge. Pieces artists have poured hours of work into are being used without permission. This is an industrial-scale violation of our rights as creators.
Stability AI has come forward with a troubling admission. They stated that "because diffusion models are prone to memorization and overfitting, releasing a model trained on copyrighted data could potentially result in legal issues," meaning these models directly draw on copyrighted content. Many AI-generated images closely resemble the originals they were trained on. Data mining allows these corporations to amass inconceivable amounts of data while exploiting legal loopholes.
One prominent data set, LAION-5B, claims to consist of 5.8 billion image-text pairs, which include copyrighted materials. This practice of using copyrighted data without consent in commercial products is not just unethical; it is also illegal. Yet the companies involved continue to monetize these systems. The creation of such databases has enabled applications like DreamStudio and Stable Diffusion, funded primarily to serve commercial interests.
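To picture what "image-text pairs" look like in practice, a data set like LAION-5B is essentially an enormous table of image URLs and their captions scraped from the web. The short Python sketch below uses made-up records and hypothetical field names; it simply shows the shape of such data and how an artist might check whether images hosted on their own site appear in it. (Search tools like "Have I Been Trained" offer this kind of lookup against the real index.)

# Illustrative only: two made-up records in a LAION-style layout.
records = [
    {"url": "https://example.com/holiday_photo.jpg",
     "caption": "a mountain lake at dawn"},
    {"url": "https://portfolio.example/artist_painting.png",
     "caption": "fantasy castle, detailed digital painting"},
]

def find_my_work(records, my_domains):
    """Return records whose image URL points at one of the given sites."""
    return [r for r in records
            if any(domain in r["url"] for domain in my_domains)]

# An artist checking whether images from their portfolio were scraped:
matches = find_my_work(records, ["portfolio.example"])
print(len(matches), "of my images appear in this toy data set")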
Additionally, AI tools like Lensa AI have begun charging users for avatar packs, further cementing the fact that these companies are making large profits at the expense of artists. The practice has aptly been named "data laundering," describing how these organizations sidestep legal liability.
AI platforms are also using the names of well-known artists, such as Greg Rutkowski, to generate works that mimic their styles. By invoking these names, AI systems do more than seek inspiration; they expose the original creators to reputational damage, fraud, and identity theft. And the models trained on these data sets cannot forget the copyrighted material they have processed, so the compromised data remains baked into them.
The implications are glaring. Artists pour their passion and soul into their creations, but these works are pulled from the internet without their consent. The ideal scenario for large corporations seems to be a world devoid of the need for individual artists, allowing them to monopolize the entire industry. Tragically, creatives lack the option to opt out, and AI systems lack the capacity to forget the copyrighted content gleaned from their training.
Some supporters of AI dismiss these concerns outright, arguing that artists should simply accept that AI is here to stay. Yet this perspective overlooks the fact that artists are pushing back against unethical practices, not against the technology itself. For instance, AI-generated audio has generally been developed with copyright in mind, while visual artists are given no such consideration. The double standard is glaring, given that both art forms have been pivotal to human culture throughout history.
As artists, we are fighting for our rights, not to eliminate AI technology. We want a system that respects our work and gives artists a say in how their creations are used. What confronts us on a massive scale is exploitation: work commercialized without consent and the artist's rights disregarded.
I've witnessed firsthand how toxic the AI community can be. In one recent incident, a person created an AI model based solely on 300 images of my work, taken without my knowledge or consent. They stated outright that asking for permission was unnecessary, despite the legal risk of doing so. When I called out this practice on social media, it caused an uproar. Some individuals became hostile and retaliated by taking more of my work and creating additional models.
Comments directed my way ranged from rude dismissals to outright accusations. These reactions reflect a broader misunderstanding among AI proponents of the very real fears artists have about their work being appropriated. Talented artists like Greg Rutkowski have also encountered hostility when voicing their concerns.
Yet we stand together as a community to educate and push back against these exploitative practices. We can still influence how AI interacts with artists moving forward. One recent success is DeviantArt reversing its decision to automatically opt artists into its AI system, proof that collective voices can make a difference.
If you’re an artist or have any awareness of these issues—speak out. It is important not only for yourself but for the younger generation of artists currently struggling with feelings of hopelessness. We need to create a future where art remains a viable career choice, where young creators can thrive without fear of exploitation.
This is an urgent moment in history where big changes need to be made. Thank you for listening and for your engagement with this pressing issue. I encourage you to share this information, educate others, and advocate for more ethical practices within AI art. Together, we can strive for a balanced future where artists and technology coexist in a fair and respectful landscape.
Keywords
AI art, copyright infringement, Stability AI, data mining, intellectual property, ethical practices, visual artists, consent, exploitation, legal loopholes.
FAQ
Q: Why are artists upset about AI-generated art?
A: Artists are upset because their copyrighted work is being used to train AI models without their consent, leading to exploitation and a lack of protection for their rights.
Q: What is data laundering?
A: Data laundering refers to the exploitation of copyrighted material through legal loopholes, allowing companies to profit from using artists’ work without consent.
Q: Can artists opt out of having their work used by AI?
A: Currently, artists do not have the option to opt out, which raises significant ethical concerns about how their work is appropriated.
Q: How does AI-generated art differ from human-generated art?
A: AI generates content by directly using data from training sets, often replicating specific styles or works, while human artists draw inspiration and create original pieces through their unique life experiences.
Q: What can artists do to protect their rights?
A: Artists can raise awareness, educate others about the issue, and join together to demand more ethical practices surrounding AI usage of creative content.