While discussions of AI in academia have largely centered on its ethical use in teaching, learning, and research, deeper questions about the fundamental nature of AI remain unresolved. Generative AI, in particular, creates seemingly novel content, raising critical questions about the copyright status of these outputs.
Broadly speaking, generative AI outputs raise two important questions related to “copyrightability”. Firstly, they upset traditional notions of authorship. While the Copyright Act does not explicitly define the term “author”, basic assumptions about the status of an author are infused throughout the Act. For example, copyright term lengths are explicitly tied to the life of the author. This conception of authorship is predicated on a human author, as there would be no way to make a copyright term determination based on the lifespan of an AI tool.

Case Study – The Monkey Selfie
While Canada has little jurisprudence directly addressing the question of human authorship, the US offers several instructive examples we can look to.
One such dispute involves the copyright status of a photograph of a macaque that was claimed to have been taken by the monkey itself. The image was originally published alongside a Daily Mail article, to which wildlife photographer David Slater had licensed it along with quotes indicating that the photograph was a self-portrait because the monkey had tripped the remote shutter button. A user later uploaded the photograph to Wikimedia Commons, arguing that because it was created by a non-human author, it was not protected by copyright. Although the dispute was never decided in a formal lawsuit, it ended up at the United States Copyright Office, where the claim for registration was refused, confirming Wikimedia’s position that the work was not eligible for copyright protection.
A subsequent lawsuit, filed by People for the Ethical Treatment of Animals (PETA), asked that the monkey be assigned copyright in the photograph. This suit was ultimately unsuccessful, with the court finding that only human-authored works are eligible for copyright protection.
If we accept that only humans can be authors, we are led to look for elements of human authorship within generative AI processes. For example, can someone claim authorship over the set of prompts they input into an AI generator, and if so, would this give them rights over the resulting outputs? Questions such as these have yet to be resolved, whether through legislation, policy, or jurisprudence. Currently, those with an interest in AI are looking to lawsuits, many of them coming out of the US, to provide guidance. George Washington University maintains an AI litigation database, which highlights the ongoing debate over the legality of AI-related outputs and processes.
AI Outputs and Originality
The second question raised by generative AI outputs has to do with originality, a requirement for any work seeking to be eligible for copyright protection. This may be an even more complicated question than the one surrounding authorship, as answering it requires a sophisticated understanding of how generative AI models work. Here, an example may suffice to highlight the potential issue:

This image, created by Stable Diffusion, appears to reproduce a version of the Getty Images watermark, suggesting that photographs from the visual media company’s repertoire may have ended up as part of Stable Diffusion’s training data. To what degree one can consider this generated image “original” is difficult to parse, especially when end users typically have little knowledge of the corpus of material on which the AI was trained.
Questions of originality almost always lead to questions of infringement, and certainly copyright holders have been vocal about their concerns over the use of their works as inputs for AI training purposes. As this example illustrates, the relationship between AI inputs (training data, prompts) and outputs is still difficult for the average user of generative AI to understand.
Complexities Abound
This module does not aim to provide a comprehensive analysis of all copyright considerations related to AI. However, it is worth noting that legal scholars and governments continue to grapple with the challenges of originality and authorship in AI-generated content. Recently, the Canadian government launched a consultation on copyright in the age of generative artificial intelligence, asking stakeholders to weigh in on a variety of AI-related issues. In early 2025, the government issued a “What We Heard Report” summarizing the feedback it received from interested parties. A review of the stakeholder feedback and the government report makes clear that there is no general consensus on how the government should address issues arising from advances in AI technology. Over the next several years, we should expect to see governments and courts grapple with policy and legislative reforms in response to AI.

Dig Deeper
To learn more about AI governance in Canada, review:
What We Heard Report: Consultation on Copyright in the Age of Generative Artificial Intelligence
Image Credits: Macaca nigra self-portrait. Public Domain; No Title. The Verge / Stable Diffusion. Copied under fair dealing.