From the Hollywood strikes to digital portraits, AI's potential to steal creatives' work, and how to stop it, have dominated the tech conversation in 2023. The latest effort to protect artists and their creations is Nightshade, a tool that lets artists add invisible changes to the pixels of their work that can corrupt an AI model's training data, the MIT Technology Review reports. Nightshade arrives as major companies like OpenAI and Meta face lawsuits alleging copyright infringement and the use of personal works without compensation.
University of Chicago professor Ben Zhao and his team created Nightshade, which is currently under peer review, in an effort to put some of the power back in artists' hands. They tested it against recent Stable Diffusion models and against a model they trained themselves from scratch.
Nightshade essentially works as a poison: when companies scrape poisoned images into training data, the corrupted samples alter how the resulting machine-learning model interprets prompts and what its output looks like. For example, it could make a model read a prompt for a handbag as a toaster, or produce an image of a cat instead of the requested dog (and the effect bleeds into related prompts like "puppy" or "wolf").
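Nightshade's exact pipeline isn't public, but the general flavor of this kind of poisoning can be sketched with off-the-shelf tools: nudge a "dog" image, within a small pixel budget that's hard to notice, until a feature extractor embeds it like a "cat." Everything below (the ResNet stand-in encoder, the random placeholder tensors, the hyperparameters) is an illustrative assumption, not the researchers' actual method.

```python
import torch
import torchvision.models as models

# Sketch of feature-space poisoning: perturb a "dog" image so a generic
# encoder embeds it near a "cat" anchor, while keeping per-pixel changes
# within a small L-infinity budget so the edit stays hard to see.
# This is a hypothetical stand-in, not Nightshade's published pipeline.

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # use penultimate features as the embedding
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)       # only the perturbation gets gradients

# Placeholder tensors; in practice these would be real, preprocessed images.
dog = torch.rand(1, 3, 224, 224)         # the artwork to "poison"
cat_anchor = torch.rand(1, 3, 224, 224)  # an image of the target concept

eps = 8 / 255    # max per-pixel change (perceptibility budget)
step = 1 / 255   # optimization step size
delta = torch.zeros_like(dog, requires_grad=True)

target = encoder(cat_anchor)  # frozen "cat" embedding to steer toward

for _ in range(100):
    loss = torch.nn.functional.mse_loss(encoder(dog + delta), target)
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()  # step toward "cat" features
        delta.clamp_(-eps, eps)            # keep the edit imperceptible
        delta.grad.zero_()

poisoned = (dog + delta).clamp(0, 1)  # looks like a dog; embeds like a cat
```

A model trained on enough such mislabeled-in-feature-space samples would, in principle, start associating "dog" prompts with cat-like features, which matches the prompt-swapping behavior the researchers describe.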
Nightshade follows Glaze, a tool Zhao's team released in August that also subtly alters a work of art's pixels, but with a different aim: Glaze tricks AI systems into perceiving the image as entirely different from what it actually is. An artist who wants to protect their work can upload it to Glaze and opt in to using Nightshade as well.
Damaging technology like Nightshade could go a long way toward pushing AI's major players to request artists' work and compensate them properly (presumably a better alternative than having your model scrambled). Companies looking to remove the poison would likely need to locate every piece of corrupted data, a challenging task. Zhao cautions that some individuals might attempt to use the tool for malicious purposes, but notes that inflicting any real damage would require thousands of corrupted works.