A new data poisoning tool called Nightshade is giving artists and creators a way to fight back against misuse of their work by AI image generators. Developed by researchers at the University of Chicago, Nightshade subtly alters images so that AI models trained on them produce distorted outputs.
Nightshade works by making tiny changes to image pixels that are imperceptible to people but that confuse AI models. For example, it might tweak a few pixel values or subtly shift colors in ways no viewer would notice. For a model trained on those images, however, the changes are enough to distort what it learns and corrupt the images it later generates.
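Nightshade's actual algorithm is detailed in the Chicago team's research; purely as a hypothetical illustration of the general idea of a small, bounded pixel perturbation (not Nightshade's method), the sketch below applies a change of at most a few intensity levels per pixel. In a real poisoning tool the perturbation direction would be computed from a model's internals; here a random direction stands in for that step.

```python
import numpy as np

def perturb_image(pixels: np.ndarray, epsilon: float = 4.0) -> np.ndarray:
    """Apply a small, bounded perturbation to an H x W x 3 uint8 image array.

    Illustrative only: in an actual poisoning attack the perturbation
    direction would be derived from a model (pushing the image's features
    toward a different concept). A random direction stands in for that here.
    """
    # Stand-in for a model-derived perturbation direction.
    direction = np.sign(np.random.randn(*pixels.shape))

    # Keep each pixel change within +/- epsilon so the edit stays visually
    # imperceptible, then clip back to the valid 0-255 range.
    perturbed = pixels.astype(np.float32) + epsilon * direction
    return np.clip(perturbed, 0, 255).astype(np.uint8)

# Example: perturb a random 256x256 test image.
original = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
poisoned = perturb_image(original)
assert np.abs(poisoned.astype(int) - original.astype(int)).max() <= 4
```

The key property the sketch captures is the bound: every pixel moves by only a few intensity levels, which is far below what a human eye registers but still enough, at scale, to shift what a model extracts from the image.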
“We developed Nightshade to shift the balance of power back towards creators and help them control how their work is used,” said Ben Zhao, professor of computer science at the University of Chicago and lead researcher on the Nightshade project.
The need for a tool like Nightshade has grown as AI image generators like DALL-E and Stable Diffusion have exploded in popularity. These systems are trained on huge datasets of images scraped from the web, often without permission from the creators. The AI then learns to generate new images in the style of the training data.
This has raised concerns about copyright infringement, plagiarism, and misuse of artists’ creations. Nightshade poisons this training data: a model that ingests enough altered images learns distorted associations and can no longer accurately reproduce the styles and subjects of the original artworks.
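Part of what makes this kind of poisoning effective is that a scraper cannot tell altered images from clean ones, so poisoned samples flow straight into the training set. The toy pipeline below, with invented names and file paths, sketches that assumption; the `poisoned` flag exists only for bookkeeping in the example and is invisible to any real crawler.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    caption: str
    image_path: str
    poisoned: bool  # unknown to the scraper; kept here only for bookkeeping

def build_training_set(scraped: list[Sample]) -> list[tuple[str, str]]:
    """A scraper cannot distinguish poisoned images from clean ones,
    so every scraped caption/image pair goes straight into training."""
    return [(s.caption, s.image_path) for s in scraped]

# Hypothetical scraped pairs; paths and captions are illustrative.
scraped = [
    Sample("a watercolor dog", "art/dog_001.png", poisoned=False),
    Sample("a watercolor dog", "art/dog_002.png", poisoned=True),
    Sample("a watercolor dog", "art/dog_003.png", poisoned=True),
]

training_set = build_training_set(scraped)
poison_rate = sum(s.poisoned for s in scraped) / len(scraped)
print(f"{len(training_set)} pairs collected, {poison_rate:.0%} poisoned")
```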
Since being announced in October, Nightshade has received praise from artists and photographers who want to protect their creations. “This gives us a way to fight back against the frightening power of these AI systems,” said one digital artist.
It remains to be seen how widely Nightshade will be adopted and whether AI companies will find ways to counter its effects. But for now, it represents an important development in the battle between human creativity and artificial intelligence.