As discussed in Can a Machine be an Author? from the Open Access Module, there is an ongoing debate about whether AI can be considered the author of a work. Since AI is a relatively new technology and there are still many grey areas, the guidelines are continuously evolving. Below is how various communities, including Creative Commons, educational institutions, and copyright organizations, are addressing AI-related challenges:
Creative Commons and AI
GenAI Considerations
“We recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To potentially overcome, or at least better understand the effect of generative AI on the commons, we believe that finding a way for creators to indicate “no, unless…” would be positive for the commons.”
-Anna Tumadóttir, Questions for Consideration on AI & the Commons

As we covered in the module What is Creative Commons?, Creative Commons operates on top of copyright law. In the United States, there are strong arguments that using copyrighted works to train generative AI models could qualify as fair use, although this depends on the specific use case. However, the use of openly available content in GenAI models may not always align with the original creator’s intention for sharing it. This is especially relevant because much of this content was likely shared before the development of GenAI, meaning creators may not have anticipated its use in such a context (Ross, 2024). As of August 2024, Creative Commons is exploring the development of preference signals to enable creators to indicate their preferences regarding the use of their works in AI training. This initiative aims to give creators more nuanced control over how their content is used in the context of generative AI.
Mitigating the Use of AI
Aside from Creative Commons, multiple initiatives are developing licenses and software tools to mitigate unwanted uses of AI or machine learning. Below are a few examples:
RAIL license
RAIL (Responsible AI License) is a license that allows software developers to restrict the use of their AI technology in order to prevent irresponsible and harmful applications, such as using the software for surveillance or other malicious purposes.
Nightshade
Nightshade is a tool developed by researchers at the University of Chicago to protect artists’ work from unauthorized use in training AI models. It uses a data poisoning approach by subtly altering digital images in ways that are imperceptible to humans but cause AI systems to misinterpret the content, thereby disrupting attempts to replicate the artist’s style.
Have I Been Trained?
Have I Been Trained? is a search engine developed by Spawning that allows users to check whether their images have been used in AI training datasets. Spawning also created a tool called ai.txt, which enables website owners to create a text file specifying rules that prevent AI systems from scraping their data.
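An ai.txt file is modeled on the familiar robots.txt convention: a plain-text file placed at the root of a website that lists rules about which content may be used for AI training. The snippet below is an illustrative sketch only — the domain is hypothetical, and the exact directives and file-type rules a site should use are best produced with Spawning’s own ai.txt generator.

```text
# ai.txt — a hypothetical example, placed at https://example.com/ai.txt
# Rules telling AI crawlers which content may be used for training

User-Agent: *
# Opt image files out of AI training datasets
Disallow: *.jpg
Disallow: *.png
# Allow plain-text content to be used
Allow: *.txt
```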

Dig Deeper
To learn more about Creative Commons and AI:
Adapted from Ross, R. (2024, August 23). Six Insights on Preference Signals for AI Training. Creative Commons.