IPOsgoode /osgoode/iposgoode/ An Authoritative Leader in IP

When AI Hurts: Stress Testing the Heart of the Law /osgoode/iposgoode/2026/03/11/when-ai-hurts-stress-testing-the-heart-of-the-law/ Thu, 12 Mar 2026

By Shadi Nasseri

In cardiac medicine, the condition of the heart cannot be assessed in a state of rest, nor is it wise to wait for moments of acute failure. Clinicians instead employ stress testing to measure cardiac performance under conditions of controlled exertion, typically by monitoring patients as they walk or run on a treadmill. This diagnostic technique often reveals vulnerabilities undetectable at rest. Under strain, hidden weaknesses surface—irregular rhythms, constricted pathways, structural fragilities that calm conditions conceal.

At a recent panel hosted at Osgoode, one theme pulsed beneath every doctrinal discussion: artificial intelligence is placing the law under sustained pressure. And if law is the heart of a democratic society—circulating norms, distributing responsibility, sustaining trust—then AI functions as the stressor that tests its endurance in the face of emerging and increasingly complex legal challenges.

AI systems now influence hiring, lending, securities trading, education, communications, and health care. They make or shape decisions once exclusively human; and when harm occurs (economic loss, discrimination, psychological injury, market manipulation), the causal chains are longer, more opaque, and more distributed across multiple actors and systems. In this context, the question becomes whether the law can preserve its steady cadence as the pace of technological change pushes it to run faster and faster.

For centuries, the common law has evolved incrementally, adapting to industrialization, mass production, financialization, and digitization. Negligence, fraud, public nuisance, product liability: these doctrines were built to regulate human actors making human decisions. Duties of care are established through proximity. Fault turns on knowledge and intent. Causation links conduct to harm. Liability is allocated. At the same time, the legal system has embedded adaptive principles that enable it to evolve—doctrines such as technological neutrality, incrementalism through analogy, the “living tree” doctrine, and the open-textured character of standards such as reasonableness and foreseeability. These principles have allowed the law to endure and respond to social and technological change. Artificial intelligence, however, introduces pressures of a different magnitude and speed. It was precisely this tension between doctrinal continuity and technological acceleration that animated the panel’s discussion, which examined how negligence, fraud, and the allocation of responsibility are each being placed under strain in distinct but interconnected ways.

Negligence Under Strain

One of the most vivid examples discussed by the panel was the litigation brought by Ontario school boards (including the Toronto District School Board) against Meta Platforms, TikTok, and Snap Inc. The claim alleges negligent design and public nuisance: that these companies knowingly engineered their platforms in ways that caused widespread disruption of the education system, diverting resources, exacerbating behavioural issues, and contributing to mental health crises.

The legal framework invoked in the litigation is not novel. The plaintiffs ground their claim in established negligence principles: duty of care, breach of the applicable standard, causation, and damages. But here is where the incline steepens: the doctrinal structure is familiar; the difficulty arises in its application. The harm alleged is primarily relational and economic in nature, and Canadian law has historically been cautious about pure economic loss. Establishing a duty therefore requires demonstrating sufficient proximity, and any extension of liability must be justified as an incremental development grounded in considerations of justice and fairness. The central issue is whether the relationship between social media platform designers and educational institutions is sufficiently close to ground such a duty. The negligence doctrine is thus being tested in a new context.

The negligence claims against social media companies illustrate how familiar doctrinal elements encounter novel factual configurations. The plaintiffs do not allege that defendants erected physical barriers to education. Rather, they contend that the platforms were deliberately and knowingly engineered, through geolocation, engagement optimization, and targeted notifications, in ways that foreseeably disrupted the educational system. The defence contests both proximity and foreseeability, arguing that the relationship between platform designers and educational institutions is too attenuated to ground a duty of care. Under these pressures, the courts must decide whether established negligence principles can coherently accommodate systemic harms arising from algorithmic design.

Fraud, Intent, and the Autonomy Problem

Where the social media litigation tests the elasticity of proximity and foreseeability within negligence doctrine, the emergence of autonomous and learning AI systems places even more profound pressure on the mental-state architecture that underpins fraud. Traditional fraud requires scienter: knowledge or recklessness, and an intent to deceive. A plaintiff must establish a material misstatement, reasonable reliance, and causation. These elements presuppose a human decision-maker capable of forming a culpable state of mind. The difficulty arises when the operative “actor” is an algorithm that develops strategies through machine learning rather than explicit human instruction.

Panelists considered scenarios involving high-frequency trading systems that might manipulate markets while pursuing a generic instruction such as “maximize profit.” If no individual human formed a fraudulent intent in the classical sense, can we still allocate liability? Here, the stress test becomes acute. Fraud doctrine was built around human mental states. AI systems destabilize that foundation by generating outcomes that may be strategically sophisticated yet not traceable to a single, conscious mental state. In response, the panel considered several adaptive pathways: imputing an AI system’s “knowledge” to its developers or deployers; analogizing AI to an employee under agency principles; adopting burden-shifting or quasi-strict liability approaches; or relaxing scienter requirements, as has occurred in certain areas of U.S. securities regulation.

These proposals do not abandon the doctrinal core. Rather, they attempt to recalibrate its rhythm. The deeper inquiry is whether intent and knowledge must be reconceptualized in functional terms—perhaps by asking whether the AI system would have behaved differently had a consequence been removed (a counterfactual test for “intent”), or whether it used specific information for a defined objective (a scope notion of “knowledge”). The law has historically adjusted its doctrines under technological pressure. The open question is whether the adaptation required here will remain incremental, or whether the strain risks distorting foundational concepts beyond recognition.  

Control, Foreseeability, and the Circulation of Responsibility

Building on the panel’s examination of negligent design claims and the challenges of proving scienter in algorithmic fraud, a further axis of strain emerged in the discussion: how to allocate legal responsibility in AI systems characterized by distributed and fragmented control. The allocation of legal responsibility has long been structured around the concept of control; agentic AI disrupts this conventional alignment. Large language model-based agents can iteratively plan, access external tools and databases, refine strategies, and execute tasks with minimal ongoing human oversight. A user provides a broad instruction, and the system determines the operational pathway. When harms result, the attribution of control becomes contestable. Is responsibility properly assigned to the user who articulated the goal, the developer who designed the model architecture, the corporation that deployed the system, or some distributed network of actors who collectively shaped its training data, parameters, and deployment context? The difficulty is not merely practical but conceptual.

As the incline increases with complexity, foreseeability becomes increasingly blurred. Developers may possess systemic knowledge of general risks while lacking foresight into specific outputs. Users control initiation, but not the path taken. Corporations manage deployment but may not anticipate emergent behaviours. This diffusion of control does not mean responsibility disappears. Rather, it strains the arteries through which liability traditionally flows.

Litigation as Emergency Response

As one panelist observed, litigation is inherently reactive. By the time a case reaches court, the harm has already materialized. A stress test, by contrast, is diagnostic: it identifies vulnerabilities before catastrophic failure. When the heart falters under exertion, clinicians intervene (perhaps with medication, with surgery, or with lifestyle change). Similarly, the legal responses to AI cannot be singular. The panelists outlined a range of options. Ex post liability regimes (negligence, nuisance, fraud) will continue to evolve incrementally through judicial interpretation. Ex ante regulatory frameworks may articulate risk tiers or sector-specific obligations. Insurers may function as soft regulators, pricing AI-related risk and influencing corporate behaviour through coverage conditions. Gatekeepers (e.g., auditors, platforms, financial institutions) may assume greater responsibility in monitoring systemic risk.

Evidence of strain does not necessitate doctrinal abandonment. The panel’s discussions suggest that the legal system has not reached doctrinal failure. Negligence law remains operational, even as it confronts novel factual matrices. Fraud doctrine exhibits strain but retains conceptual elasticity. Public nuisance and product liability continue to be extended incrementally, grounded in established principles of proximity, knowledge, and control. Nevertheless, the pressures on these doctrines are intensifying, and their continued adaptive resilience cannot be taken for granted.

AI systems scale rapidly. The harms they generate may be diffuse, cumulative, and distributed across institutional and geographic boundaries. Their evidentiary opacity complicates discovery and proof. At the same time, powerful economic incentives drive their deployment and expansion. The heart of the law continues to beat—but at a much faster pace. The real risk is not that AI requires immediate, sweeping abandonment of existing doctrine. Rather, it is that we fail to attend to the diagnostic signals now emerging.  Where the law shows early signs of strain (evidentiary bottlenecks, liability gaps, doctrinal contortions), we must respond with thoughtful regulatory design—not complacency.

AI does not introduce entirely new moral dilemmas. Rather, it accelerates, amplifies, and exacerbates existing ones. The discomfort of the present moment reveals where our legal assumptions about agency, causation, and responsibility may require attention and refinement. If law functions as the heart of a democratic society, its vitality depends upon its capacity to respond under pressure. For now, the heart continues to pump—but the stress test is just getting started.  


A lawyer and graduate of Osgoode Professional Development, Shadi Nasseri pursues doctoral research addressing the profound legal and ethical concerns arising from neurotechnologies, including issues related to mental integrity, human dignity, personal identification, freedom of thought, accessibility, autonomy, and privacy.

Copyright’s Edges and the Ethics of Expression in the Digital Age /osgoode/iposgoode/2026/02/25/copyrights-edges-and-the-ethics-of-expression-in-the-digital-age/ Thu, 26 Feb 2026

Reflections on Jessica Silbey's 2026 Grafstein Lecture in Communications

By Xiang Zhang

“To write poetry after Auschwitz is barbaric.” With this stark provocation, Theodor Adorno articulated not a prohibition on artistic expression, but a profound ethical challenge to culture itself: how can expression remain meaningful in the aftermath of catastrophe, and what conditions must exist for culture to serve as a site of witness, critique, and hope? It was this philosophical tension—between the fragility and necessity of expression—that framed Professor Jessica Silbey’s 2026 Grafstein Lecture in Communications at the University of Toronto Faculty of Law. Like Adorno, Silbey was animated by a concern with the conditions under which expression remains possible, and by the recognition that cultural and legal structures can either sustain or foreclose the emancipatory potential of art, knowledge, and communication.

Silbey’s central thesis is that contemporary copyright law is undergoing a gradual but profound transformation. Doctrines historically designed to facilitate the circulation of ideas and preserve the public domain are increasingly weakened, enabling the commodification and enclosure of knowledge, facts, and expressive fragments. Where copyright once functioned as a structural safeguard for expressive freedom, it now frequently operates as a mechanism for proprietary control, allowing private actors to assert ownership over the fundamental materials of communication. In this respect, Silbey’s critique closely parallels Adorno’s broader diagnosis of modern culture: both identify systemic forces that transform culture from a domain of human freedom into an instrument of economic rationality, commodification, and control. In both accounts, culture’s emancipatory function is displaced by the logic of ownership and monetization.

Silbey situated this doctrinal and cultural transformation within the broader political context of the contemporary United States, emphasizing that artistic and expressive practices are essential forms of democratic witness, particularly during periods of political instability and crisis. Expression, she argued, is not merely aesthetic but ethical; it enables individuals and communities to bear witness to injustice, preserve memory, and imagine alternative futures. The fundamental purpose of copyright law, therefore, must be to facilitate, rather than frustrate, this expressive function. Yet, as Silbey observed, copyright law, once a “public domain protecting doctrine,” is now suffering from what she described as a “death by a thousand cuts”: a gradual erosion driven by expanding proprietary claims, restrictive licensing regimes, and doctrinal narrowing. Echoing Marx’s metaphor of capital as a vampire sustained by extraction, Silbey suggested that contemporary copyright increasingly survives through the extraction of control over culture itself.

To illustrate this erosion, Silbey anchored her lecture in three foundational doctrines of copyright law: the exclusion of facts from copyrightability, the de minimis defense, and personal use liberties. These doctrines, deeply rooted in both common law and transnational copyright traditions, have historically functioned as structural limits on copyright’s reach, preserving the public domain and ensuring the continued circulation of knowledge.

The first doctrine, the exclusion of facts from copyright protection, reflects the foundational principle that facts belong to the public domain. Although this principle was affirmed by the United States Supreme Court in Feist Publications, Inc. v. Rural Telephone Service Co., Silbey argued that the decision oversimplified the nature of facts by characterizing them as merely “discovered” rather than socially produced. Drawing on the history of journalism and scientific inquiry, she emphasized that facts are constructed through institutional processes of verification, expertise, and consensus. This insight carries profound contemporary implications, as private entities increasingly assert proprietary claims over data, technical standards, and functional information. Silbey pointed to examples such as insurance companies asserting copyright over climate risk assessments, private organizations licensing building codes incorporated into law, and manufacturers restricting access to repair manuals for essential technologies. These practices represent not merely legal disputes but a broader cultural shift toward the privatization of knowledge, threatening public safety, scientific progress, and democratic accountability.

The second doctrine, the de minimis defense, historically permitted courts to dismiss claims involving trivial or insignificant copying, recognizing that expressive communication necessarily involves reference, quotation, and reuse. Silbey argued that this doctrine has been significantly weakened in the digital age. Courts increasingly subject even minor uses to complex fair use analyses, raising the costs and risks of expression. At the same time, industry practices have normalized pervasive licensing, transforming even incidental uses into licensable events. The result is a cultural environment in which expressive practices once understood as ordinary and permissible are now subject to proprietary surveillance and control.

The erosion of personal use liberties represents perhaps the most profound transformation identified by Silbey. Historically, ownership of physical copies conferred broad autonomy, enabling individuals to lend, resell, preserve, and adapt cultural works. The transition to digital distribution has replaced ownership with conditional licensing, fundamentally altering the relationship between individuals and culture. Through a series of cases, courts have narrowed the scope of personal use, treating activities once understood as legitimate cultural participation as acts of infringement. As Silbey observed, practices central to cultural life (reading, sharing, preserving, and referencing) are increasingly framed as morally and legally suspect.

Silbey concluded her lecture by invoking Jessica Litman’s observation that personal use is not incidental but central to copyright’s constitutional purpose. Copyright was never intended to prohibit engagement with culture, but to facilitate it. The freedom to read, share, and build upon existing works is essential to cultural vitality and democratic life. The erosion of these freedoms reflects not merely doctrinal change, but a broader transformation in how culture itself is governed.

In this light, Silbey’s lecture returns us to the ethical question posed by Adorno’s opening provocation. Adorno’s warning that “to write poetry after Auschwitz is barbaric” was ultimately a reflection on the fragility of expression in the face of systemic forces that threaten to render culture hollow, commodified, and incapable of bearing witness. Silbey’s critique suggests that contemporary copyright law risks producing a parallel condition, not through violence, but through enclosure. When facts become property, fragments become licensable, and personal engagement becomes suspect, the structural conditions necessary for expression itself begin to erode. In defending these doctrinal edges, Silbey ultimately affirms that artistic and expressive practices remain indispensable forms of democratic witness, and that copyright law itself bears a profound responsibility to preserve the structural conditions that enable such witnessing: safeguarding access to knowledge, protecting the public domain, and ensuring that the legal framework governing culture facilitates, rather than forecloses, expression during periods of political instability and crisis.


Xiang Zhang is a doctoral student at Osgoode Hall Law School and an IP Osgoode Research Fellow, with a strong interest in advancing open-source and open access to knowledge.

Currents, Waves, and Ripple Effects – CCH’s Legacy at Home and Abroad /osgoode/iposgoode/2025/10/16/currents-waves-and-ripple-effects-cchs-legacy-at-home-and-abroad/ Fri, 17 Oct 2025

On September 19-20, 2025, IP Osgoode co-hosted an important international conference on The Legacy of CCH Canadian Ltd. v. LSUC and the Future of Copyright Law. In this post, Shadi Nasseri (Osgoode PhD student, IP Osgoode Research Fellow, and Connected Minds Trainee) reflects on the conference and the lasting legacy of the decision that it explored.


[Image: a winding river in which a copyright symbol appears, with “SCC” and “CCH” written on the river banks.]

The development of copyright law in Canada has never been quick to move but rather advances like a river carving its course, slow, persistent, and shaped by centuries of cultural and legal history. From the imperial statutes imported in the nineteenth century to the quiet but profound pronouncements of today’s Supreme Court, its progress has been less a leap than a measured accumulation of meaning across generations. Each judgment is a stone laid carefully in the stream, sometimes uneven, sometimes contested, yet together forming a path that reflects Canada’s patient effort to balance the rights of creators with the needs of users, tradition with innovation, and private reward with the public’s access to knowledge.

In March 2004, the Supreme Court of Canada released CCH Canadian Ltd. v. Law Society of Upper Canada (“CCH”), a case that began as a dispute over library photocopying but grew into one of the most influential copyright rulings in Canadian history. In a single unanimous judgment, the Court redefined the purpose of copyright, reshaped its doctrinal foundations, and projected Canada’s legal voice onto the international stage. Twenty-one years later, on a bright, sunny weekend, scholars, practitioners, professionals, and observers gathered in Toronto to reflect on the enduring legacy of CCH at home and abroad, asking: what did this ruling truly accomplish, and what did it set in motion?

The CCH ruling addressed four critical questions. First, the Court adopted the “skill and judgment” test for originality, rejecting the idea that mere industrious effort, what had been called the “sweat of the brow”, was enough to qualify for copyright protection. Originality required more: an intellectual contribution that reflected thought and decision.

Second, the Court narrowed intermediary liability. Simply providing the means for infringement, such as photocopiers in a library, would not make an institution liable unless it authorized the infringing use.

Third, it clarified “communication to the public,” a concept increasingly relevant in the digital age, limiting how far publishers could stretch their rights against libraries sharing works with their patrons.

And fourth, and most famously, the Court recognized fair dealing and other exceptions as “users’ rights.” With this declaration, the Court placed access and fairness at the heart of copyright law, ensuring that copyright was not simply a monopoly for rightsholders but a balanced framework serving creators, users, and the public interest.

As with any turning point in law, CCH’s legacy is complex. Supporters celebrate it as the moment Canada broke from overly restrictive copyright models and embraced a fairer balance between access and control. Critics, however, argue that the decision distorted the legislation and accelerated the decline of Canadian educational publishing. While Quebec largely charted its own cultural path, much of English Canada embraced the Court’s expansive vision of user rights, leaving local publishers crying foul as they struggled to adapt and compete in the digital era.

Even within institutions, the embrace of user rights has been uneven. While fair dealing has flourished through subsequent cases in the Supreme Court’s copyright “pentalogy” and amendments to the Copyright Act, other exceptions, such as those addressing perceptual disabilities, remain under-utilized. Libraries and universities, wary of litigation, often adopt risk-averse policies that fail to reflect the spirit of CCH. It is a reminder that judicial doctrine alone cannot change practice; institutions (and the people who work for them) must also carry the torch.

Though born of a Canadian library, the CCH decision quickly echoed abroad. In India, the Supreme Court adopted Canada’s “skill and judgment” test in Eastern Book Company v. D.B. Modak (2008), and today Indian courts continue to revisit CCH as they grapple with generative AI disputes and the role of user rights in text and data mining. In South Africa, reform efforts to decolonize and modernize copyright law have built upon CCH, with the proposed Copyright Amendment Bill seeking to expand exceptions and incorporate fair use principles that mirror Canada’s emphasis on balance. Across Africa’s music economy, the narrowing of intermediary liability established in CCH resonates strongly: while limiting liability can promote innovation, in regions with weak enforcement institutions, it risks enabling exploitation—highlighting the danger of transplanting doctrines from well-resourced systems into fragile infrastructures. Meanwhile, in Europe and Latin America, Canada’s approach has sparked reflection of another kind. European scholars contrast Canada’s robust recognition of user rights with the EU’s narrower framework, while in Brazil, cultural policy debates under Gilberto Gil in the early 2000s similarly sought to reframe copyright as more than just a market commodity. In each of these contexts, CCH has functioned as both compass and caution—proof that a single Canadian decision can shape global debates, but also a reminder that law must always be adapted to the realities of place and culture.

CCH Canadian Ltd. v. Law Society of Upper Canada stands as a milestone not just in Canadian copyright, but in the global story of how law adapts to new technologies and shifting cultural priorities. Its vision of user rights has shaped debates from Ottawa to Delhi, Cape Town to SĂŁo Paulo.

Looking back, CCH reminds us of the slow dance of law in Canada. It did not arrive with fanfare but unfolded through a quiet dispute about photocopiers and fax machines, carried by careful words and judicial reflection. Yet over time, its influence spread like ripples on water—shaping institutions, and practices, inspiring courts and policymakers abroad, and offering copyright law a compass for navigating entirely new technological challenges.

Law evolves slowly, but its slowness is part of its strength. In a world of disruption, it anchors us to principles that endure: fairness, balance, and the recognition that the rights of users and the public are not afterthoughts but part of the very purpose of copyright. As Canada reflects on the case twenty-one years later, it is worth remembering the lesson woven through its legacy: law does not race to keep up with every innovation, but moves like water in a stream, guided by the memory of where we have been and the hope of where we might yet go.


Links to the recorded panel presentations, speakers' bios and paper abstracts are now available.

A lawyer and graduate of Osgoode Professional Development, Shadi Nasseri pursues doctoral research addressing the profound legal and ethical concerns arising from neurotechnologies, including issues related to mental integrity, human dignity, personal identification, freedom of thought, accessibility, autonomy, and privacy.

Can We Develop ‘IncoIPterms’ for Intellectual Property? /osgoode/iposgoode/2025/06/04/can-we-develop-incoipterms-for-intellectual-property/ Wed, 04 Jun 2025 Despite the complex nature of IP law, the potential benefits of creating standardized international contractual terms are clear.

A Legal Feasibility Study of Standardized Terms in International IP Contracts

By Mohsen Hasheminasab

[Image: a globe with IP symbols behind a picture of an IP contract being signed.]

In an era of increasing cross-border trade and digital innovation, the lack of standardised contractual frameworks for international intellectual property (IP) transactions creates legal uncertainty and commercial risk. In a recent research project, I explored the feasibility of developing IncoIPterms: a set of globally accepted contractual terms for IP agreements modelled after the Incoterms (or International Commercial Terms) widely used in the international trade of goods. By analyzing the key legal challenges, including the territorial nature of IP and its intangible subject matter, and offering practical solutions, I hoped to plant the seed for the future development of IncoIPterms that could similarly enhance legal predictability, reduce disputes, and so promote more efficient international commerce in intangible assets.

Why Look to Incoterms?

The Incoterms, developed by the International Chamber of Commerce (ICC), are a globally recognized set of rules for the international sale of goods. These terms have become an essential part of international trade law. They clearly define the obligations, risks, and costs borne by buyers and sellers, helping to minimize disputes and standardize practices across jurisdictions.

This success prompts the question: Can we replicate this model for intellectual property? Enter the concept of IncoIPterms: uniform international commercial terms specifically designed for IP transactions.

The TRIPS Gap and the Need for IncoIPterms

While the TRIPS Agreement (Trade-Related Aspects of Intellectual Property Rights) aimed to harmonize IP law globally, its requirement for minimum standards has often led to the maximization of IP rights instead. The result? A growing imbalance between IP protection and the need for legal consistency in cross-border commerce.

IncoIPterms could fill this gap by offering flexible, standardized contract language for IP transactions, in the same manner as Incoterms operate in the trade of goods.

Key Challenges

Of course, developing international IP terms is not a straightforward proposition. Some of the main obstacles include:

  • The territorial principle in IP: Whereas Incoterms govern private commercial agreements, IP rights are created and enforced through national legal systems. The territorial nature of IP rights complicates any attempt at standardization.
  • The intangibility of IP: Incoterms apply to physical goods, whereas IP involves intangible assets—requiring entirely different legal considerations.
  • The diversity of IP subjects: The subject matter of IP deals varies widely. This diversity makes universal standardization a real challenge.
  • A lack of global consensus: The establishment of standardized IP contractual terms is essential, but such terms must be widely accepted by all relevant jurisdictions to have practical utility. Without international consensus, such terms would lack the legal legitimacy required to resolve disputes or provide certainty in commercial transactions.

Viable Solutions

While there are hurdles to overcome in the development of IncoIPterms, none is insurmountable, and solutions do exist:

  • Clarify the scope: IncoIPterms wouldn’t regulate the granting of IP rights (which is the domain of states) but rather the commercial use of those rights after they’re granted. Like Incoterms, they could function without interfering with state sovereignty. In fact, one of the main reasons for the creation of Incoterms was the territorial principle and variations in national law, so this challenge is not new.
  • Embrace IP’s intangibility: Though intangible, IP assets still benefit from contractual clarity. Indeed, given the complexities and flexibilities inherent in establishing the subject and scope of IP rights, standardized terms are all the more necessary to reduce confusion and foster greater legal certainty in IP transactions.
  • Focus on core agreement types: While the subject matter of IP contracts may differ, they generally fall into a few main categories: assignment, permission for use or sale, sale of IP-based goods, joint ventures, and confidentiality agreements. By establishing standardized terms for these core categories, a framework can be built that accommodates diversity while remaining aligned with the territorial nature of IP, except in the case of assignment. As with Incoterms, exclusions and updates can be applied regularly.
  • Leverage existing practices: Customary international IP practices already exist. By building on these foundations, IncoIPterms could gain traction and acceptance across legal systems.

A Path Forward

Despite the complex nature of IP law, the potential benefits of creating standardized international contractual terms are clear. The criteria, standards, and categories under IncoIPterms must differ from those of Incoterms, of course, to suit the unique nature of IP. But just as Incoterms have helped streamline trade in tangible goods, so too could IncoIPterms promote smoother, more predictable global commerce in intangible assets.

The vision is not to override national IP systems, it should be stressed, but to offer a contractual toolkit that can be used across jurisdictions, thereby reducing ambiguity, enhancing efficiency, and ultimately supporting innovation and fair trade. Recognizing the distinction between the creation and commercial exploitation of IP rights, along with existing international practices, reveals that such international IP terms are both feasible and beneficial.

Developing IncoIPterms will require thoughtful legal design, international cooperation, and a strong understanding of both trade and IP law. But the reward—a predictable, transparent, and more dispute-resistant framework for global IP transactions—makes this a concept well worth pursuing.

Mohsen Hasheminasab is an International Visiting Research Trainee with IP Osgoode at York University, an LLM student of International Commercial and Economic Law at the Department of Law, University of Tehran, and an Attorney at Law with the Iranian Central Bar Association. He is currently based in Toronto.

Have thoughts or feedback? Join the conversation by leaving a comment below or contacting the author at: hasheminasabmohsen@yahoo.com

The post Can We Develop ‘IncoIPterms’ for Intellectual Property? appeared first on IPOsgoode.

]]>
Identifying the implications of Big Tech and digital personal data for competition policy /osgoode/iposgoode/2025/03/17/identifying-the-implications-of-big-tech-and-digital-personal-data-for-competition-policy/ Mon, 17 Mar 2025 05:09:43 +0000 /osgoode/iposgoode/?p=41068 Our paper demonstrates the growing awareness among policymakers of the important effects of Big Tech and personal data collection on competition and market power.

The post Identifying the implications of Big Tech and digital personal data for competition policy appeared first on IPOsgoode.

]]>

By 'Damola Adediji

Image of author 'Damola Adediji

Policymakers in Canada and worldwide have continued to express deep concerns about Big Tech firms and their extensive collection of personal digital data, which affects how markets operate and compete. In a recent paper I coauthored with Professor Kean Birch of York University, we dove into these policy materials to explore recurring themes across various regions. Our work also sheds light on how the collection of personal data is portrayed in the latest reviews of competition laws, policies, and regulations, and the implications for evolving competition policy.

Why Competition Policy Matters

Big Tech firms are powerful political-economic actors within the economy, especially when it comes to the mass collection and use of digital personal data. In a data-driven digital economy, they can shape and dominate markets by structurally and strategically undermining competition through their constructed platforms—data-driven ecosystems that appear separate from the market. This capacity gives Big Tech firms structural and techno-economic power over their competitors, making it more important than ever for competition law to step up its game. Through a thematic policy analysis, our research reveals a series of key issues that policymakers around the world are identifying as important structural and techno-economic implications of Big Tech for competition.

Structural and Techno-economic Dimensions of Big Tech’s Market Power

A significant part of Big Tech firms’ market power lies in economies of scale, which can create tough barriers for new competitors to break through. For example, the high costs needed to start a business can be a genuine hurdle for newcomers, while established companies can handle regulatory costs much more comfortably. Additionally, the costs involved in switching from one provider to another can make users hesitant to change. As policymakers have highlighted, the digital economy has sped up the impact of these economies of scale, in part because personal data complicates how we understand market definitions in competition policy. The basic assumptions that guide competition policy often rely on price theory to define markets and identify anti-competitive behaviour. These competition frameworks therefore struggle to address situations involving seemingly ‘free’ goods (like search engines) or the trade of these free goods and services for personal data.

Meanwhile, the techno-economic side of these firms’ power includes both the strategic and responsive growth of relationships between technology and political economy. This growth is aimed at connecting a range of stakeholders, including governments, businesses, users, and academia, with the infrastructures and platforms created by Big Tech.

Structural Implications of Big Tech for Competition

Scholars have highlighted the significance of the network effect as a key structural implication of Big Tech for competition policy. These companies have established themselves as intermediaries in building multi-sided market platforms. Network effects result from the way the number of users in a network (e.g., social media platforms, search engines) increases the usefulness of the network to its users, thereby raising its attractiveness to new users. Consequently, as one policy report noted in 2020, network effects lead to a self-reinforcing cycle in which users migrate to the fastest-growing network. With this network effect, Big Tech companies are amassing a startling amount of data, providing them with an enormous competitive advantage, creating barriers to rivals entering or thriving in relevant markets, and allowing the incumbent digital platform providers to expand into adjacent markets.
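The self-reinforcing dynamic described above can be illustrated with a toy calculation (not drawn from the policy reports themselves): if every pair of users adds a unit of value, aggregate network value grows roughly with the square of network size, a common though contested formalization known as Metcalfe's law.

```python
def network_value(users: int) -> int:
    # Each pair of users contributes one unit of value, so aggregate
    # value grows roughly with the square of the network's size
    return users * (users - 1) // 2

# Doubling the user base far more than doubles network value,
# which is the self-reinforcing cycle that attracts new users
small, large = network_value(100), network_value(200)
print(small, large)  # 4950 19900
```

On this toy model, a network twice the size is roughly four times as valuable, which helps explain why users migrate toward the fastest-growing platform.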

The second structural effect is connected to but distinct from the first: investments made by Big Tech firms mean they can scale up with lower-than-usual costs. As one UK report put it in 2019, ‘Both the scale and the data that the platforms possess on consumers make it hard for other players, including publishers, to compete.’ Economies of scale have provided significant benefits for Big Tech firms as they have grown quickly to dominate their markets. This is clearly becoming a cause for concern amongst policymakers worldwide (as seen in, e.g., OECD 2022). The main negative effect of such economies of scale is the loss of market contestability: there are significant barriers to entry into digital markets because Big Tech incumbents benefit from first-mover technology advantages; there are also significant disparities in market information; and there are disparities in the capacity to adjust prices because incumbents benefit from greater information (e.g., data collection) and higher processing capacity (e.g., computing infrastructure).

The third structural issue identified in our paper is the gatekeeping role of these Big Tech companies in our societies and economies. Policymakers have noted that a few digital gatekeepers hold the keys to the crucial digital infrastructure that impacts our everyday lives—whether it's staying in touch with friends, finding job opportunities, or accessing information. Gatekeepers can control access to users and their data, which can hold significant value for other firms wishing to connect with consumers. The fact that this vital digital infrastructure, including personal data, is largely provided by Big Tech makes it tough for startups and competitors to enter the market.

Techno-Economic Implications of Big Tech for Competition

The first techno-economic issue we identify is the capacity of Big Tech to enter adjacent markets through data collection. As one regulator pointed out in 2019, ‘The extensive amount of data available to Google and Facebook provide these platforms with a competitive advantage and assist with entry into related markets.’ Data-driven business models enable Big Tech to enter adjacent markets through the modular extension of technical standards and terms and conditions (e.g., APIs, SDKs, plugins).

The second techno-economic issue concerns the spread of market power through the creation of digital ecosystems as ‘walled gardens.’ An ecosystem is more than a platform: it is the configuration of technical devices, applications and software, platforms, users and developers, payment systems, terms and conditions, and other legal rights, claims, and standards (see: Autoriteit Consument & Markt, 2019). Through this ecosystem, end-users get locked in, reducing the opportunity for competition, even when products and services (e.g., Gmail, Facebook) are notionally ‘free.’

The third techno-economic issue follows the second: Big Tech reinforces its market power by creating ‘enclaves’ in which it governs economic activities. These enclaves are distinct from markets; they sit inside wider markets, but gatekeepers can establish the internal ‘rules of the game’ and control market information. Policymakers have highlighted various relevant business strategies and practices—including the setting of defaults, cross-selling, and self-preferencing—that reduce competition within these techno-economic enclaves.

Challenges of Digital Personal Data for Competition and Competition Policy

The mass collection and use of personal data by Big Tech therefore has structural and techno-economic implications for competition policy—implications with which policymakers around the world are now grappling.

A key consideration in these policy materials is the techno-economic dimension of data-driven leverage. Policymakers repeatedly observe that Big Tech enjoys a competitive edge, primarily because of its vast personal data reserves and its ability to limit other companies' access to this valuable information. Although any digital firm can gather personal data, having substantial data holdings boosts innovation potential and offers a notable business advantage. This concern has been underscored across the policy materials we reviewed.

Already concentrated digital markets are likely to concentrate further without concerted action to change competition policy. Our paper demonstrates the growing awareness among policymakers of the important effects of Big Tech and personal data collection on competition and market power. Of course, there's also a looming concern that the winner-takes-all dynamics fuelled by data control could influence the future development of important technologies like artificial intelligence, which significantly depend on large training datasets.

'Damola Adediji is a Visiting Researcher with IP Osgoode and a Doctoral Candidate with the Centre for Law, Technology & Society at the University of Ottawa.

The post Identifying the implications of Big Tech and digital personal data for competition policy appeared first on IPOsgoode.

]]>
Announcing the Winners of Canada's IP Writing Challenge 2024 /osgoode/iposgoode/2024/11/11/announcing-the-winners-of-canadas-ip-writing-challenge-2024/ Mon, 11 Nov 2024 13:28:34 +0000 /osgoode/iposgoode/?p=41047 IP Osgoode and the Intellectual Property Institute of Canada (IPIC) are thrilled to announce the winners of the 2024 edition of Canada’s IP Writing Challenge. In the Law Student category, Pasha Kulinich won for his entry, “Shortcomings of the Trademarks Act in the Frontline against Counterfeit Goods”. Pasha is a 3L student at Queen's University's Faculty of Law. […]

The post Announcing the Winners of Canada's IP Writing Challenge 2024 appeared first on IPOsgoode.

]]>

IP Osgoode and the Intellectual Property Institute of Canada (IPIC) are thrilled to announce the winners of the 2024 edition of Canada’s IP Writing Challenge.

In the Law Student category, Pasha Kulinich won for his entry, “Shortcomings of the Trademarks Act in the Frontline against Counterfeit Goods”.

Pasha is a 3L student at Queen's University's Faculty of Law.

In the Graduate Student category, Nick won for his entry, “AI & IP – Anticipating Obvious Issues for the Pharmaceutical Drug Industry”.

Nick recently graduated from Osgoode's Professional LLM program and is now in practice as an Associate.

The winners will each receive a $1,000 prize and, in addition to having their winning articles showcased here on the IPilogue, will have them considered for publication in the Canadian Intellectual Property Review (CIPR) or the Intellectual Property Journal (IPJ).

We would like to thank the esteemed intellectual property experts who served as judges for this year’s Writing Challenge.

But above all, on behalf of the judges and IPIC, we thank all of the authors who submitted their excellent papers for consideration. We are grateful for the opportunity to support a vibrant public policy discussion on all facets of intellectual property law and technology in Canada.

Stay tuned for more information about these award-winning papers!


The post Announcing the Winners of Canada's IP Writing Challenge 2024 appeared first on IPOsgoode.

]]>
Dr. Tesh Dagne Shines a Light on the Unseen Hands and Invisible (Copy)Rights Behind AI Systems /osgoode/iposgoode/2024/10/04/dr-tesh-dagne-shines-a-light-on-the-unseen-hands-and-invisible-copyrights-behind-ai-systems/ Fri, 04 Oct 2024 17:45:43 +0000 /osgoode/iposgoode/?p=40924 By bringing to the fore the roles of digital workers, Dagne hopes to unearth the collaborative creation that goes into the AI production chain and feeds into the AI output.

The post Dr. Tesh Dagne Shines a Light on the Unseen Hands and Invisible (Copy)Rights Behind AI Systems appeared first on IPOsgoode.

]]>
By ‘Damola Adediji
A professional headshot of Tesh Dagne
Teshager Dagne, Ontario Research Chair
and IP Osgoode Affiliated Researcher

Artificial intelligence systems often “give the vibe” of complete automated processing without human involvement. However, as Dr. Tesh Dagne reminds us, upon a closer “vibe check” there are layers of unseen and under-appreciated human inputs, efforts, and labour involved. The efforts of those unseen human hands are, in fact, the engine of AI innovation.

Dr. Dagne is the Ontario Research Chair in Governing Artificial Intelligence and an Associate Professor at York University’s new Markham campus in the School of Public Policy & Administration. He also teaches Property Law at Osgoode Hall Law School, where he is an Affiliated Researcher with IP Osgoode. His current project, which he recently presented at the University of Cape Town, highlights how copyright enables the proactive exploitation of digital workers’ contributions as inputs to AI training or, in some cases, AI-assisted outputs.

By bringing to the fore the roles of digital workers, Dagne hopes to unearth the collaborative creation that goes into the AI production chain and feeds into the AI output. His paper, “Unseen Hands, Invisible Rights: Unmasking Digital Workers in the Shadows of AI Innovation and Implications for the Future of Copyright Law”, is soon to be published in a forthcoming volume on IP’s Futures: Exploring the Global Landscape of Intellectual Property Law and Policy (Ottawa UP, 2025), which Dagne is co-editing. His chapter probes the future of copyright law, attempting to turn the focus of copyright to collaborative authorship. This move, Dagne argues, could respond to demands for the fair allocation of rights between digital workers, as authors or joint authors in some cases, and AI designers as exploiters of digital works.

Digital Workers are the Lifeblood of AI Development

As one commentator has put it, “[AI] doesn’t run on magic pixie dust… [AI training] is a job that actually takes quite a bit of creativity, insight, and judgment.” Such ingenuity involves the preparation of data works for the datasets used to train and build AI technologies, which consists of a number of decisions as to the kind of data to collect, curate, clean, label, abstract, index, etc. The process of dataset development starts with formulating the problem, which is the conceptualization of the machine learning task by making the problems “into questions that data science can answer”. Task conceptualization is typically the responsibility of the AI designer, which may be an AI company like OpenAI or Anthropic, for example, or a platform company like Microsoft, Meta, or Amazon. After the conceptualization process comes the data collection, refining, and measuring stage. Dagne’s focus is on the “digital workers” who enter the picture at this stage in the AI production process.

According to this research, digital workers contribute to the training process of AI systems in three steps: generating and annotating data (AI preparation), verifying model output (AI verification), and directly mimicking model behaviour to produce a service (AI impersonation). They range “from higher-skilled, ‘macro-task’ […] workers [who] offer their services as graphic designers, computer programmers, statisticians, translators, and other professional services, to [those engaged in] ‘micro-task’ [work] which typically involve clerical tasks that can be completed quickly and require less specialized skills.” As described in this literature, “complex projects are broken down into smaller, easily accomplished tasks, which can then be distributed to a large number of workers.” Micro-task activities mainly involve the AI preparation aspect of AI training processes but can also include the AI verification and AI impersonation steps in AI training.
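The micro-task pattern described above can be sketched in a few lines of Python; the function and file names below are purely illustrative and not drawn from any real crowdsourcing platform.

```python
def split_into_microtasks(items, batch_size):
    # Break one large annotation project into small, independent
    # tasks that can each be completed quickly by a different worker
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

# A hypothetical labelling job over ten images, three per micro-task
images = [f"img_{n}.png" for n in range(10)]
tasks = split_into_microtasks(images, batch_size=3)
print(len(tasks))  # 4 micro-tasks, ready to distribute to workers
```

Each small batch can then be assigned to a different worker, which is what makes the clerical "micro-task" work both quick to complete and easy to distribute at scale.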

The Copyright Question

Much of the debate around copyright and AI has focused on whether using the underlying works of which inputs are constituted (the images, texts, musical works, and other subject matter) for unauthorized learning constitutes copyright infringement. However, Dagne’s focus is on the copyright that can subsist over collected data, as we see in some cases, and whether digital workers’ activities in the preparation of training data sets in the AI pipeline could themselves give rise to a copyright interest. This question can be answered by examining the nature of digital workers’ contributions to the tasks assigned to them and the ownership of copyright under the contractual agreements that digital workers sign with platforms.

Digital workers in the AI production value chain collect raw data and help add extra meaning by associating each piece of data with relevant attributive tags. Although some have argued that this attributive task is a mundane exercise that could ultimately be automated, others have contended that tasks such as attribution will always be assigned to humans because of their capacity to recognize and classify data. Indeed, human intervention is now in demand to recognize the nuances and sophisticated details of specific data. One example of such demand is in the medical field, where an understanding of scientific vocabulary is required.

From a doctrinal perspective, the copyright question is whether the contribution of digital workers described above meets the threshold of originality—which, as defined by the Supreme Court of Canada, requires more than trivial skill and judgment in the selection or arrangement of data. If so, we might ask whether recognizing the copyright status of such contributions could address these workers' invisibility. Even if, on account of originality, the tasks executed by digital workers amount to authorship, such authorship does not automatically translate into ownership. The ownership of the creative tasks conducted by digital workers as part of the collaborative venture is determined either by the workers’ status as employees or otherwise by contract—which means that it is determined in the context of significant power asymmetries and the routine exploitation of digital workers.

If copyright entrenches the inequities of an asymmetrical situation—by ensuring that the collective effort of digital workers in compiling essential datasets for AI training and AI development remains unseen and undervalued—Dagne thinks the time has come to confront its complicity. He suggests that, spurred by the arrival of AI, the copyright system needs to restructure the relationship between authors-as-(data)workers and corporate proprietors in pursuit of greater fairness.

‘Damola Adediji is a Visiting Researcher with IP Osgoode and Doctoral Candidate with the Centre for Law, Technology & Society at the University of Ottawa.

The post Dr. Tesh Dagne Shines a Light on the Unseen Hands and Invisible (Copy)Rights Behind AI Systems appeared first on IPOsgoode.

]]>
Osgoode PhD Amanda Turnbull Investigates How Algorithms Do Things with Words /osgoode/iposgoode/2024/10/01/osgoode-phd-amanda-turnbull-investigates-how-algorithms-do-things-with-words/ Tue, 01 Oct 2024 20:21:21 +0000 /osgoode/iposgoode/?p=40900 Throughout her doctoral studies, Amanda Turnbull has grappled with the legal consequences of “machines doing things with words.” Her timely dissertation, Law, Language, and Authority: The Algorithmic Turn, completed in August 2024, offers a measured yet unflinching reflection on how artificial intelligence is transforming society and the law.

The post Osgoode PhD Amanda Turnbull Investigates How Algorithms Do Things with Words appeared first on IPOsgoode.

]]>
By John Nyman
Dr. Amanda Turnbull, Osgoode PhD (2024). (Giselle B Photography)

Throughout her doctoral studies, Amanda Turnbull has grappled with the legal consequences of “machines doing things with words.” Her timely dissertation, Law, Language, and Authority: The Algorithmic Turn, completed in August 2024, offers a measured yet unflinching reflection on how artificial intelligence is transforming society and the law. Speaking over Zoom from her home in New Zealand, where she has now joined the University of Waikato’s Te Piringa Faculty of Law, Turnbull shared some insights from her research.

“With AI, there’s an algorithm at the end of the hammer.”

At the heart of Turnbull’s thesis is her contention that AI is “more than just a tool.” When we think of a tool, Turnbull suggests, we usually think of something like a hammer. There’s always a person at the end of the hammer, and they’re responsible for what the hammer does. In the context of algorithmic systems, commentators have proposed different alternatives for who that responsible party might be, including the programmer, the end user, and the company that owns the technology. But these approaches obscure the true novelty—and danger—of AI. With AI, Turnbull explained, “there’s an algorithm at the end of the hammer.”

Turnbull’s focus on algorithmically generated language reflects her thesis’s remarkable origins at the University of Ottawa’s Department of English. Although her original supervisor, the late Professor who held the Canada Research Chair in Ethics, Law & Technology, soon recognized that it belonged in a faculty of law, Turnbull’s dissertation maintains its indebtedness to the mid-century philosopher of language JL Austin—who, Turnbull was surprised to learn, was a close friend of the legal theorist HLA Hart. Austin emphasized what words do in addition to what they mean. Adapting this framework to contemporary technology, Turnbull is less interested in what a generative AI like ChatGPT says than in the difference it makes that a non-human actor says it.

The first of three “pillars” of Turnbull’s dissertation thus explores the consequences of AI’s participation in writing literary works. To be clear, AI is not an author, according to Turnbull—but that doesn’t mean AI should have no legally cognizable role at all. Drawing on her early career as a classical flautist, Turnbull recognized that generative AI’s imitative reproduction of the human-authored texts in its training data isn’t so different from the work of human artists. In her words, “there’s an amount of imitation that necessarily occurs when you’re being creative.”

Unexpectedly, Turnbull found inspiration in the “spectrum of authoring” developed by Bonaventure in the 13th century, long before the modern notion of authorship emerged. Generative AI, she asserts, resembles Bonaventure’s “commentator,” a mid-point between an author and a mere scribe, who clarifies and expands on pre-existing texts. By referring to generative AI as a commentator or “expositor,” lawmakers can reserve copyright for human authors without turning a blind eye to the authority embodied in algorithmically generated language.

That authority is at the centre of the second and third “pillars” of Turnbull’s research, which examine the legal implications of algorithmic contracting. An algorithmic contract, as the term has been coined, is a contract in which the main terms and conditions are drafted not by human actors, but by computer systems.

Key for Turnbull is how the systems behind algorithmic contracts exercise “derivative” authority without legal intent. For this reason, algorithmic contracting is “in no way” similar to earlier technologies such as click-wrap agreements, standard form contracts, or the archetypal pen and paper. In other words, there is no “functional equivalence” between algorithmic contracting and other platforms. Courts should therefore reconsider both the notion of technological neutrality and the application of intent-based contract doctrines, including the doctrine of unconscionability recently revived by the Supreme Court of Canada.

In the third “pillar” of her thesis, Turnbull discusses how unconstrained algorithmic contracting creates the conditions for technology-facilitated sexual violence. She focused on Uber and the instances of sexual violence involving drivers and passengers documented in its 2019 safety report. Sadly, Turnbull described this chapter as “the easiest to write,” since it quickly “became obvious that this is a new way of exerting harm.” Yet the solutions to these problems are far from straightforward. In Uber’s case, she contends, the issue permeates the firm’s corporate culture and overall attitude toward innovation, which has failed to truly consider “the whole web of entanglements” impacting algorithmic language.

Ultimately, dealing fairly with AI will require “extraordinary ways of thinking” on the part of courts and regulators. But Turnbull is confident the law can adapt. The law of contracts and the law of copyright, for example, can be seen as fields that have constantly adapted to new technologies. By approaching the algorithmic turn with both bravery and nuance, courts can learn to recognize AI as something that’s more than a tool, but no substitute for genuine human authority and intent.

Going forward, Turnbull is keen to use her dissertation, which was supervised by the Director of IP Osgoode, as a basis for further explorations of technology-facilitated gender-based violence such as platform violence and “onlife” harm—a term describing the intersection between experiences online and in ‘real life.’ At the same time, Turnbull is interested in how algorithms have played a positive role in certain legal contexts. Although, as she says, “we’re hot to jump on technology and focus on the negatives,” in a forthcoming article on the 1999 Canada-US Pacific Salmon Agreement, she and co-author Donald McRae will explore how, “in this case, the algorithm solved the dispute.” Turnbull also plans to publish her dissertation as a book and to return to another book she began writing even before her PhD—a novel that is, aptly, about Austin, Hart, and the father of computer science, Alan Turing.


John Nyman is a student at Osgoode Hall Law School (JD '26) and an IP Osgoode JD Research Fellow.

The post Osgoode PhD Amanda Turnbull Investigates How Algorithms Do Things with Words appeared first on IPOsgoode.

]]>
A.I. Paintings: Registrable Copyright? Lessons from Ankit Sahni /osgoode/iposgoode/2023/03/31/a-i-paintings-registrable-copyright-lessons-from-ankit-sahni/ Fri, 31 Mar 2023 16:00:00 +0000 https://www.iposgoode.ca/?p=40719 Govind Kumar Chaturvedi is an IPilogue Writer and an LLM graduate from Osgoode Hall Law School. We sat down to chat about how he registered Suryast in Canada. Mr. Sahni told me that he had been inspired by Ryan Abbott’s DABUS, to take on this intellectual property legal experiment. I wanted to learn more about […]

The post A.I. Paintings: Registrable Copyright? Lessons from Ankit Sahni appeared first on IPOsgoode.

]]>

Govind Kumar Chaturvedi is an IPilogue Writer and an LLM graduate from Osgoode Hall Law School.

We sat down with Ankit Sahni to chat about how he registered Suryast in Canada. Mr. Sahni told me that he had been inspired by Ryan Abbott’s DABUS to take on this intellectual property legal experiment. I wanted to learn more about his A.I. and his legal reasoning.

RAGHAV: The A.I.

Ankit shared that his A.I. tool was named “Raghav”. A team of software developers had built the tool, and the A.I. was assigned to him. Raghav’s unique way of working is based on a technique called neural artistic style transfer, which is inspired by the biological neurons of the nervous system. Just as a biological neuron takes in several incoming signals and creates a resulting signal from those inputs, an artificial neuron combines its inputs into a single output, and many artificial neurons together form the layers of a neural network. The input can be text, descriptive values, images, etc., and the output layer can be a label predicting a category like a ‘dog’ or ‘house.’ The user then sees two fields, one for the image supplying the style and one for the image supplying the content. In this case, Sahni chose Van Gogh’s The Starry Night as the style for Suryast. The A.I. had already been trained on data sets of different painters’ works; it used that training to make the new image, and it was advanced enough to know where to place colours and structures in the painting to mimic Van Gogh’s original work.
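The neuron analogy in Sahni's description can be sketched in a few lines of Python. This is a generic toy, assuming a standard weighted-sum neuron with a sigmoid activation; it is not RAGHAV's actual architecture, and all names and weights here are hypothetical.

```python
import math

def neuron(inputs, weights, bias):
    # Combine several incoming signals into one resulting signal:
    # a weighted sum of the inputs, squashed by a sigmoid activation
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # Many artificial neurons reading the same inputs form a layer;
    # stacked layers make up the neural network described above
    return [neuron(inputs, row, b) for row, b in zip(weight_rows, biases)]

# Two toy neurons reading the same two input signals
outputs = layer([0.2, 0.8], [[0.5, -0.3], [1.0, 1.0]], [0.0, -0.5])
print(outputs)  # two firing strengths, each between 0 and 1
```

In a real style-transfer system, layers like these are trained so that one set of activations captures an image's content and another its style, which are then recombined into the final picture.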

Legal Reasoning for Co-Authorship

According to Sahni, Raghav chooses and creates the brush strokes and colour palette, blurring the line between its contributions and his own. Sahni contributed the style and the inputs, so the final product is a mixture of both his and Raghav’s work.

I was intrigued by whether an A.I. could be considered an author under the laws of Canada. Currently, the Copyright Act is silent on the issue. Canadian jurisprudence has stated that non-juristic persons cannot be authors, as copyright endures for the author’s lifetime and authors must therefore be human. However, by co-authoring Suryast with the A.I., Sahni framed the registration as an AI-assisted work that satisfied the requirements for authorship. His creativity and skill were present in the final work and, as he said, no line could be drawn between his contribution and that of the A.I., so the work qualified for copyright protection. I recalled that the Copyright Act recognises works of joint authorship under section 2, defined as works produced by the collaboration of two or more authors in which the contribution of one author is not distinct from the contribution of the other author or authors. As Raghav contributed its own creativity, the work fulfilled the definition of joint authorship under section 2.

A.I. is More Than Just a Tool

When asked whether A.I. is just a tool, Sahni re-affirmed that the A.I. chose how to apply the data set fed to it, suggesting that it is more than a tool. Sahni believed this contribution met the minimum threshold of creativity required and cited an American case to support the point. In that case, the defendant’s selection and creative co-ordination of images was found to meet the threshold of minimal creativity because artistic judgment was exercised. Further, the decision states at para 44 that “As discussed earlier, however, the originality requirement is not particularly stringent. A compiler may settle upon a selection or arrangement that others have used; novelty is not required,” and continues at para 53: “It is equally true, however, that the selection and arrangement of facts cannot be so mechanical or routine as to require no creativity whatsoever. The standard of originality is low, but it does exist.” Sahni therefore believes that his human inputs exceed the minimum degree of originality recognized by the Supreme Court of the United States. However, while Sahni was able to register Raghav as an author, his ownership of Raghav is also an important factor, and authors who do not own their A.I. co-author may not be as successful.

IPIC and National Research Council Collaborates to Create the IP Assist Program for SMEs /osgoode/iposgoode/2023/03/30/ipic-and-national-research-council-collaborates-to-create-the-ip-assist-program-for-smes/ Thu, 30 Mar 2023 16:00:00 +0000 https://www.iposgoode.ca/?p=40722 Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School. The National Research Council of Canada (NRC) Industrial Research Assistance Program (IRAP) and the Intellectual Property Institute of Canada (IPIC) have partnered to offer the IP Assist program for Canadian small and medium-sized enterprises (“SMEs”). IPilogue readers may have seen Serena Nath’s recent coverage of another CIC program, ElevateIP, […]

Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.

The National Research Council of Canada (NRC) Industrial Research Assistance Program (IRAP) and the Intellectual Property Institute of Canada (IPIC) have partnered to offer the IP Assist program for Canadian small and medium-sized enterprises (“SMEs”). IPilogue readers may have seen Serena Nath’s recent coverage of another CIC program, ElevateIP, which provides funding for a similar purpose through a different government channel. That article outlined the motivation behind these types of programs: Canadian SMEs often lack access to the means to protect intellectual property (IP), and there is a clear economic need for innovative Canadian businesses to improve their IP commercialization.

NRC IRAP, CIC, and IPIC

The NRC IRAP provides a range of innovation support services for Canadian SMEs. The program offers funding, advisory services, and networking opportunities to help SMEs undertake research and development (“R&D”), commercialize their innovations, and improve their competitiveness in domestic and global markets. IRAP also provides support for technology adoption, productivity improvement, and business expansion. On February 16, 2023, the Government of Canada announced that NRC IRAP will be integrated into the Canada Innovation Corporation (CIC).

The CIC will be a new, operationally independent organization solely dedicated to supporting business R&D across all regions and all sectors of the economy. It is a federal initiative that aims to “play an important role in building a stronger and more innovative Canadian economy for generations to come.” The CIC will include an umbrella of programs, including both IP Assist and ElevateIP, to support the development and exploitation of IP.

IPIC is Canada’s professional association of patent agents, trademark agents, and lawyers practising in all areas of IP law, and comprises over 1,700 members. Its role in IP Assist is to match SMEs with IPIC members who practise in their specific industry. The matched IP professional helps SMEs better understand the key aspects of IP and how it can support their business goals.

The IP Assist Program

There are three levels to the IP Assist Program — levels 1, 2, and 3 (L1, L2, and L3, respectively). Each level brings increasing funding (L1 – up to $1k; L2 – up to $20k; L3 – $20k+), as well as increasing engagement with an IP professional matched to the SME:

The L1 IP Awareness session is a one-to-one IP awareness session during which an IP professional provides industry-specific IP information and guidance to an SME. Engagement at L1 gives IP professionals an opportunity to connect with, support, and guide innovative Canadian SMEs to help them achieve their business goals. Engagements with SMEs will take, on average, up to three hours and include an IP awareness presentation followed by a Q&A.

The L2 IP Strategy relates to the IRAP SME’s specific technology space, aligns with the SME’s business objectives, and provides the SME with specific, prioritized IP actions. The IP Strategy must be informed by key information about the technology and competitor landscapes relevant to the SME.

The L3 IP Implementation relates to detailed IP asset assessments, such as IP audits, trademark clearance searches, prior art searches and analysis, advice on branding strategy, legal analysis of IP landscaping, patentability analysis, licensing strategy formulation, and other activities. However, some patent and trademark preparation services and filing fees may not be covered.

Conclusion

Canada’s investment in the CIC signals an increased focus on innovation as a driver of economic growth. Programs like IP Assist and ElevateIP also have a clear aim: to ensure that IP generated by innovative Canadian SMEs is carefully strategized for and well protected. Hopefully, this will increase Canada’s presence in innovation and bring greater investment in R&D into Canada.
