Privacy Archives - IPOsgoode

FTC Punishes BetterHelp for Sharing Mental Health Information with Advertisers (Wed, 22 Mar 2023)

Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


BetterHelp is a platform that provides online mental health services and describes itself as “the largest therapy platform in the world.” Its mission statement reads: “Making professional therapy accessible, affordable, and convenient — so anyone who struggles with life's challenges can get help, anytime and anywhere.” Its primary business is online counseling and therapy delivered through web-based interaction and phone/text communication with professional counselors.

Privacy Misrepresentation

According to the FTC's complaint, BetterHelp requires users to complete a questionnaire that asks for sensitive mental health information – “such as whether they have experienced depression or suicidal thoughts and are on any medications” – along with personal information. The complaint details BetterHelp's dubious privacy practices, many of which display an egregious lack of concern for privacy interests. It also details the privacy representations made by BetterHelp, some of which were altered over time. An example of these changes was seen in the intake questionnaire, where a question asking “Are you currently taking any medication?” included a privacy statement that went through a few iterations (emphasis on alteration added in the complaint):

Up to Dec 2020: “Rest assured—any information provided in this questionnaire will stay private between you and your counselor.”

Dec 2020: “Rest assured—this information will stay private between you and your counselor”

Jan 2021: “Rest assured—your health information will stay private between you and your counselor”

Oct 2021: The statement was removed altogether

Revealing Private Information to Advertisers

The FTC release indicates that BetterHelp “did not obtain consumers’ affirmative express consent before disclosing their health data” and “failed to place any limits on how third parties could use consumers’ health information—allowing Facebook and other third parties to use that information for their own internal purposes, including for research and development or to improve advertising”. According to the complaint, BetterHelp used and revealed consumers’ email addresses, IP addresses, and health questionnaire information to Facebook, Snapchat, Criteo, and Pinterest for advertising purposes, including “identify[ing] similar consumers and target[ing] them with advertisements for BetterHelp’s counseling service.”

The Punishment

The FTC has issued a proposed order (a legal document that outlines the terms and conditions for resolving a complaint or an investigation related to unfair or deceptive business practices) requiring that BetterHelp return funds – amounting to $7.8 million – to customers whose health data was compromised. The proposed order also bans BetterHelp from disclosing health information for advertising, prohibits misrepresenting its sharing practices, and requires several changes to company practices regarding health and personal data. BetterHelp has responded that the settlement involves “no admission of wrongdoing” and that its “industry-standard practice is routinely used by some of the largest health providers, health systems, and healthcare brands”. Commentators note that this enforcement action is not the first of its kind and that “the FTC has made it clear of its intent to crack down on the trafficking in sensitive health data by businesses not strictly classified as health care providers and thus not covered by HIPAA, the federal privacy rules that govern the health care industry”. Hopefully, this sets a precedent for more stringent enforcement of good privacy practices, particularly regarding the sale of personal and health information.

How Much is Your Personal Information Worth? And What Will It Be Worth in the Future? (Mon, 13 Mar 2023)


Nikita Munjal is a 3L JD/MBA Candidate at Osgoode Hall Law School. This article was written as a requirement for Prof. Pina D’Agostino’s IP Intensive Program.


Using the Internet inevitably requires consenting to have your personal information used, collected, and disclosed by the websites you visit. A common reason for individuals, corporations, and non-profit organizations to collect your personal information is to influence your behaviour online. One of the most effective ways to influence consumer behaviour online is through targeted advertising.

Value for Advertisers

Access to personal information has become necessary for advertisers to convert potential leads into customers. Think back to 2012, for example, when a widely reported story suggested that a statistician working at Target predicted a teenage girl’s pregnancy based on her shopping habits. What did Target do with this information? It mailed her coupons for baby clothes and cribs.

Reports suggest that the value of your personal information to advertisers depends on various factors, including your gender, your race, and the sensitivity of the information (more sensitive data commands a higher price). If, for example, the target audience for a new sneaker launch is young males of Middle Eastern origin, the money spent to acquire your personal information is a minor investment to influence you to purchase $180 sneakers.

Value for Users

Traditionally, users have been willing to share their personal information while using online services, like search engines or social media platforms, in exchange for the value those services provide.

However, users are increasingly privacy-conscious. This trend has mobilized startups in Silicon Valley to appeal to such users by providing them an incentive to share their personal information. Under so-called paid-to-surf models, companies require their users to install browser extensions that track their browsing.

What monetary value do some privacy-conscious users demand to share their personal information? Some services pay around $20 a month. While these paid-to-surf models have the potential to be disruptive, they are not yet a viable alternative, as users must surf a certain amount before they can cash out.

Value Going Forward

The tech industry has built empires based on collecting, using, and selling its users’ personal information to third-party advertisers. Surprisingly, some factions of the tech industry are modifying their business models to limit the tracking of personal information. Apple, for example, introduced App Tracking Transparency in a 2021 iOS update, which requires apps to obtain users’ permission before tracking their activity across other companies’ apps and websites. Similarly, Google’s plan to phase out third-party cookies on its Chrome browser is estimated to impact millions of advertisers.

Apple and Google argue that these changes are necessary to respond to increasing regulation and customer sensitivity to sharing personal information (the IPilogue has documented increased regulation in several jurisdictions). However, critics, including advertisers who depend on tracking, lament that the changes are veiled anti-competitive practices.

Interestingly, increasing barriers to the online advertising ecosystem may benefit users. If access to personal information becomes impeded, interested parties may need to incentivize users to share their personal information, increasing users’ bargaining power. Although it is unclear what effect Apple and Google’s changes will have on the ecosystem, I am hopeful that users can leverage more control over their personal information for fair compensation by technology companies or advertisers for their valuable commodity.

Legal Tug-Of-War: Protecting Privilege in Privacy Breach Disputes (Wed, 08 Mar 2023)


Sally Yoon is an IPilogue Writer and a 3L JD Candidate at Osgoode Hall Law School. M. Imtiaz Karamat is an IP Osgoode Alumnus and an Associate at Deeth Williams Wall LLP. This article was originally published on the OBA’s Information Technology and Intellectual Property Law Section’s website.


Privacy breaches are becoming commonplace in today’s business landscape, and cybersecurity is top of mind for many organizations, and for good reason. A recent survey found that the number of breaches involving customer and employee information nearly doubled after the pandemic, and more businesses are reporting loss of customers from cyberattacks. This situation is exacerbated by the risk of litigation, as lawsuits are a legitimate consequence of a privacy breach. Ongoing activity in the privacy breach litigation space calls for organizations to re-examine their privilege strategies and prepare for potential scrutiny that may occur in the event of a dispute.

The Ongoing Litigation Risk

In 2022, Canadian courts continued to see litigation resulting from privacy breaches, with class actions being certified on the basis of a broad range of claims. There have also been significant developments in the jurisprudence for privacy breaches, such as the landmark release of three Ontario Court of Appeal decisions (Owsianik v Equifax Co.; Obodo v Trans Union of Canada, Inc.; and Winder v Marriott International, Inc.) in late 2022 that clarified the scope of liability in data breach class actions for the tort of intrusion upon seclusion.

The continued litigation reminds organizations and lawyers to ensure their privacy breach response plans conform with best practices. This is not only limited to having a robust IT framework, but includes adopting legal procedures to provide adequate protection and support. Privilege is an essential component of privacy breach litigation and should be a priority in a response strategy. In a privacy breach, legal privilege permits an organization to obtain legal advice about the incident without having to worry that such communications and related documents will be disclosed to others. This is crucial for breach response efforts, when the fast-paced environment requires candid conversations between counsel and client. Privilege is also an essential aspect for litigation preparation, by allowing lawyers to create necessary resources without fear that these materials may be disclosed and potentially used against their clients.

A Brief Review of Legal Privilege

Solicitor-client privilege and litigation privilege are two types of privilege that are involved in privacy breach litigation.

  • Solicitor-client privilege protects communications between a lawyer and client that entail the seeking or giving of legal advice and are intended to be confidential. It does not depend on ongoing or anticipated litigation, and it is permanent once applied, unless waived by the client.
  • Litigation privilege protects documents and communications that were created or collected for the dominant purpose of litigation that is ongoing or reasonably anticipated. The privilege terminates once the respective litigation ends.

Recent Canadian Privilege Disputes

Although not as extensive as in other jurisdictions, Canada has seen privilege disputes in the context of privacy breaches. The outcomes of these disputes are important teaching points for organizations intending to develop their own privilege strategy.

Kaplan v Casino Rama Services Inc.

In Kaplan v Casino Rama Services Inc., a class action lawsuit was brought against the owners and operators of Casino Rama Resort (Casino Rama) following Casino Rama’s announcement of a large-scale cyberattack. During the certification stage of the lawsuit, Casino Rama relied on an affidavit that included information from reports of a cybersecurity company hired to investigate the incident. The plaintiffs requested production of the company’s reports, but Casino Rama declined on the basis of legal privilege.

The Ontario Superior Court of Justice (ONSC) found that if privilege was present, it would have been waived when the defendants disclosed and relied on information from the reports as evidence towards the size and scope of the class of persons affected by the breach. In its reasons, the ONSC said that “a party cannot disclose and rely on certain information obtained from a privileged source and then seek to prevent disclosure of the privileged information relevant to that issue...” Therefore, the ONSC ordered production of the parts of the reports that related to the size and scope of the class of affected individuals.

LifeLabs Dispute

More recently, the privilege debate is being examined in the context of information provided to provincial privacy commissioners. In November of 2019, LifeLabs LP (LifeLabs) notified the Information and Privacy Commissioner of Ontario (IPC) and the British Columbia Office of the Information and Privacy Commissioner (OIPC) that it fell victim to a cyberattack, which resulted in personal health data of approximately 15 million customers being extracted from their systems. The IPC and OIPC commenced a coordinated investigation into the incident and demanded that LifeLabs produce certain documents relevant to the investigation. LifeLabs provided some of the documents but asserted litigation or solicitor-client privilege over others.

On March 30, 2020, in PHIPA Decision 114, the IPC rejected LifeLabs’ claim of litigation privilege over the documents on the basis that the dominant purpose for the creation of the documents was not litigation. The IPC also disagreed with LifeLabs’ claim for solicitor-client privilege because LifeLabs failed to provide adequate support that it met the requirements for solicitor-client privilege (i.e., that the information in issue was communicated in confidence between lawyer and client; for the purpose of seeking legal advice; and the parties intended it to be confidential). The IPC stated that the mere fact of communication between a lawyer and their client or the transfer of reports to in-house or external counsel does not support a claim of solicitor-client privilege. The IPC further noted that “…while underlying facts given to counsel could be part of the ‘continuum of communication’ protected by solicitor-client privilege…unless disclosure of the underlying facts would reveal or allow for inference of confidential solicitor-client communications, the underlying facts themselves do not attract the privilege”.

Following PHIPA Decision 114, LifeLabs provided the documents in issue to the IPC and OIPC, but maintained that it did not waive privilege by doing so. In May 2020, the Commissioners advised LifeLabs of the information from the documents that they were contemplating using in their final report, which led LifeLabs to submit additional evidence and arguments to the IPC and OIPC in support of its privilege claim over the documents. However, in June 2020, the IPC and OIPC issued a joint decision (the Privilege Decision) that rejected LifeLabs’ claims.

In response, LifeLabs commenced applications for judicial review of the Privilege Decision in both Ontario and British Columbia. In the applications, LifeLabs argues that the Privilege Decision was wrong in law in rejecting its privilege claims and challenges the IPC’s power to compel production of privileged documents. This matter is still ongoing in the courts, with related proceedings being heard as recently as late January 2023.

Developing a Privilege Strategy

With the above disputes in mind, it is important for organizations to develop a privilege strategy for responding to privacy breaches and preparing for potential litigation. These are some general best practices to keep in mind:

  1. Preparation: Prior to a privacy breach, businesses can ensure that they have a comprehensive breach response strategy, which addresses retaining legal counsel and considerations for protecting legal privilege. This strategy should be regularly updated to remain current.
  2. Consulting Legal Counsel: Contacting external legal counsel is a top priority upon learning of a potential breach. This allows the organization to begin obtaining the necessary legal advice to immediately respond to the matter and reinforces claims of privilege from the start. If the organization already has internal legal counsel that has been notified of the incident, it may still be prudent to retain external counsel. This is because in-house counsel often provide both business and legal advice, which may result in heavy scrutiny when claiming privilege in a dispute. Retaining external counsel in a breach response would reinforce that the advice being given is legal, as opposed to business-related.
  3. Control Communication Flow: In addition to ensuring that counsel is included in privileged communications, the distribution of such communications can be controlled and limited to only the necessary parties (including the necessary members of the organization), with the intention to limit distribution and preserve confidentiality. As part of the organization’s preparation, it can work with counsel to establish how information is to be communicated, the recipients of such information, and proper labeling practices (e.g., marking documents as “Privileged and Confidential”).
  4. Consider Privilege with Third-Party Service Providers: Communications with third-party service providers may be considered privileged when made for the purpose of helping counsel provide legal advice to the affected organization. This includes the use of cyber forensic experts to investigate a privacy incident and generate reports at the request of legal counsel. Where possible, third parties may be jointly retained by external counsel and the organization, and the terms of the retainer and supporting documents should reflect the legal nature of the engagement. The third party can also seek instructions and report to external counsel.
  5. Caution When Divulging Privileged Information: Organizations intending to maintain privilege should be cautious when disclosing privileged information to external parties. This includes being on the alert for inadvertent disclosure of privileged information in legal proceedings. It may also include stating that the organization does not intend to waive privilege by responding to disclosure demands from regulators.

Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.

Synthetic Data: The Next Solution for Data Privacy? (Thu, 23 Feb 2023)


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


One contentious point from the Bracing for Impact session was synthetic data’s potential to solve the privacy concerns surrounding the datasets needed to train AI algorithms. In light of its increasing popularity, I will explore the benefits and dangers of this potential solution.

Concept

The data privacy concern that synthetic data aims to address is very similar to the one addressed by traditional anonymization: protecting anonymized data from being de-identified without reducing data utility. This is distinct from data augmentation, which is the process of adding new data to an existing real-world dataset in order to provide more training data, and could include rotating images or combining two images to create a new one. Data augmentation is typically not useful in the privacy context.

In a blog post, the Office of the Privacy Commissioner of Canada (“OPC”) describes synthetic data as “fake data produced by an algorithm whose goal is to retain the same statistical properties as some real data, but with no one-to-one mapping between records in the synthetic data and the real data.” Synthetic data is generated by putting real-world source data through a generative statistical model, whose output is evaluated for statistical similarity to the source alongside privacy metrics. Critically, there is no need to remove quasi-identifying data, that is, data vulnerable to de-anonymization. This results in more complete datasets.

Benefits

Synthetic data generation is a highly automated process that provides protection from de-identification. This results in datasets that can be readily shared between AI developers without raising privacy concerns. There are also substantial cost savings: one synthetic data service company founder estimated that “a single image that could cost $6 from a labeling service can be artificially generated for six cents.” Synthetic data can also be manufactured to reduce bias by deliberately including a wide variety of rare but crucial edge-cases. Nvidia uses machine vision for autonomous vehicles as its example, but I think this concept should translate to improving representation of marginalized and under-represented groups in large datasets in healthcare or facial recognition. Many of the Bracing for Impact panelists shared this concern.

Dangers

The OPC notes in their blog many issues and concerns, particularly regarding de-identification. This risk is especially acute if the synthetic data is not generated with sufficient care and the “generative model learns the statistical properties of the source data too closely or too exactly”. In other words, if it “overfits” the data, the synthetic data will simply replicate the source data, making re-identification easy. There is also concern with membership inference, where the mere fact that an individual’s data exists in the source dataset can be inferred. A recent study also demonstrated that “synthetic data does not provide a better tradeoff between privacy and utility than traditional anonymization techniques” and “the privacy-utility tradeoff of synthetic data publishing is hard to predict.” This indicates that the characterization of synthetic data as a “silver bullet” is likely overselling its capabilities.
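The overfitting danger can be made concrete with a small hypothetical check of my own (not from the OPC or the study cited above): if the "generative model" has overfit to the point of memorizing its training rows, every synthetic record exactly replicates a real one, and re-identification is trivial. Auditing the exact-match overlap between synthetic and source records is one crude screen:

```python
import random

random.seed(1)
# Hypothetical source records: (age, systolic blood pressure).
real = [(random.randrange(18, 90), random.randrange(90, 180)) for _ in range(500)]

def overfit_generator(rows, n):
    """A degenerate 'generative model' that has overfit completely:
    it merely replays records from the source data."""
    return [random.choice(rows) for _ in range(n)]

def perturbing_generator(rows, n):
    """A toy generator that perturbs sampled records, so outputs are not
    exact copies of source rows (this alone is NOT a privacy guarantee)."""
    return [(a + random.gauss(0, 3), b + random.gauss(0, 3))
            for a, b in (random.choice(rows) for _ in range(n))]

def leaked_fraction(source, synthetic):
    """Fraction of 'synthetic' records that exactly replicate a source record."""
    src = set(source)
    return sum(1 for row in synthetic if row in src) / len(synthetic)

print(leaked_fraction(real, overfit_generator(real, 200)))     # 1.0
print(leaked_fraction(real, perturbing_generator(real, 200)))  # 0.0
```

Note that a zero exact-match rate does not rule out membership inference — near-copies can still leak who was in the source data, which is why the study's caution about unpredictable privacy-utility tradeoffs matters.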

Implementations

Nvidia is using synthetic data in computer vision, where its primary purpose is not privacy, a reminder that the technology serves other important functions as well. Synthetic data platforms are also emerging in healthcare, where the privacy stakes are highest. And this is only the beginning: the synthetic data market is predicted to grow substantially.

Conclusion

Synthetic data has the potential to be highly beneficial, as it may be the answer to the many challenges AI developers face in sharing sensitive data. However, like many developments in AI technology, it requires caution and careful implementation to be effective and is potentially dangerous if relied upon haphazardly.

NIST Releases their AI Risk Management Framework 1.0 (Fri, 10 Feb 2023)


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The National Institute of Standards and Technology (NIST) has been tasked with promoting “U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology.” On January 26, 2023, NIST released their AI Risk Management Framework (AI RMF) 1.0 alongside a companion playbook suggesting ways to use the AI RMF to “incorporate trustworthiness considerations in the design, development, deployment, and use of AI systems”. Both the framework and playbook are intended to help organizations understand and manage the potential risks and benefits of AI, and to ensure that AI systems are developed, deployed, and used in a responsible and trustworthy manner. The framework is intended to be a flexible and adaptable tool that can be applied to a wide range of AI systems, including those used in industries such as healthcare, finance, and transportation.

NIST describes trustworthy AI as having a set of characteristics: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

Valid and reliable: Produces accurate and consistent results. Its performance should be evaluated and validated through ongoing testing and experimentation, with risk management prioritizing the minimization of potential negative impacts.

Safe: Does not cause harm to people or the environment, and should be designed, developed, and deployed responsibly, with clear information for responsible use of the system.

Secure and resilient: Maintains confidentiality, integrity, and availability through protection against common security threats, such as data poisoning and the exfiltration of models, data, or other intellectual property through AI system endpoints.

Accountable and transparent: Provides appropriate levels of information to AI actors to allow for transparency and accountability of its decisions and actions.

Explainable and interpretable: Represents the underlying AI system’s operation and the meaning of its output in the context of its designed functional purposes. Explainable and interpretable AI systems offer information that helps end users understand their purposes and potential impact.

Privacy-enhanced: Protects the privacy of individuals and organizations in compliance with relevant laws and regulations.

Fair – with harmful bias managed: NIST has identified three major categories of AI bias to be considered and managed: systemic (broad and ever-present societal bias), computational and statistical (typically due to non-representative samples), and human-cognitive (perceptions of AI system information in deciding or filling in missing information).

AI RMF’s core is organized around four specific functions to help organizations address the risks of AI systems in practice: Govern, Map, Measure, and Manage.

Govern: This includes establishing policies, procedures, and standards for AI systems, key decision-makers, developers, and end-users.

Map: AI RMF is intended to contextualize and frame risks by identifying the system's components, data sources, and external dependencies, as well as to understand how the system is used and by whom.

Measure: AI RMF evaluates the potential risks and benefits of the AI system by assessing the system's vulnerabilities and potential social impacts.

Manage: AI RMF allocates risk resources to mitigate identified risks and continuously monitor the system and its environment by establishing monitoring processes and procedures to detect and respond to incidents, as well as updating controls as needed.

NIST’s AI risk management framework is a voluntary but very important prompt for organizations and teams who design, develop, and deploy AI to think more critically about their responsibilities to the public. Understanding and managing the risks of AI systems will help to enhance trustworthiness, and in turn, cultivate public trust in AI – a critical part in AI adoption and advancement.

The Digital Age of Journalism: My Placement at “The Globe and Mail” (Wed, 11 Jan 2023)


Ivana Peloza is a 3L JD Candidate at Osgoode Hall Law School. This article was written as a requirement for Prof. Pina D’Agostino’s IP Intensive Program.


The Globe and Mail is Canada’s foremost news media company, a nationally distributed newspaper with one of the largest circulations in Canada. The newspaper’s print and digital formats reach over 6 million readers every week, and its Report on Business magazine reaches over 2.5 million readers every issue in print and digital. When I was placed with The Globe and Mail as part of Osgoode’s IP Intensive program, however, I certainly did not expect the extent to which I would be immersed in the world of tech. Publishing is, of course, one of the core copyright industries – if not the core industry historically associated with copyright. But IP law in publishing, especially at The Globe – which is known for being an early provider of digital media and device-agnostic content delivery – goes far beyond copyright infringement and litigation. There are significant overlapping considerations involving the roll-out of a privacy policy, consumer protection laws, and a range of different agreements, including those related to advertising, purchase and sale, events, and freelancer content rights.

Over the course of my time at The Globe, I gained broad, multidisciplinary experience, but three major themes emerged within my practical and research work: privacy, contracts, and data protection. On my very first day, my supervisor (thankfully) lent me a copy of The Tech Contracts Handbook: Cloud Computing Agreements, Software Licenses, and Other IT Contracts for Lawyers and Businesspeople by David Tollen to start familiarizing myself with these themes. Complying with privacy regulations, especially in IT contracts, is as important as it is commonly misunderstood. In an era of rapidly developing privacy regulation and technology, corporate organizations have a strict duty to continually follow developments in Canadian privacy and data protection law as they play out across different jurisdictions.

My internship also allowed me to reflect on, and speak with my supervisor about, the differences between working in-house and in private practice. For instance, a private practice firm may have an entire staff dedicated to accomplishing just one specific aspect of a privacy or contracts matter, whereas in-house lawyers might deal collaboratively with the whole breadth of a legal process. In-house practice can therefore offer a much greater variety and scope of work and expertise. If my experience at The Globe has taught me anything, it’s that this variety makes the days more interesting!

An in-house legal department is also intimately intertwined with the organization’s commercial decision-making. Learning how to navigate the specific challenges of interdisciplinary brainstorming, drafting, and decision-making was a significant takeaway as well. Often, legal professionals or a corporation’s legal team are brought in late in the business process and left out of major contractual decisions. Sometimes, however, as was the case with the incredibly accomplished lawyers I was lucky enough to learn from at The Globe, the legal professionals have beneficial insight into the commercial deal process simply by virtue of experience. Sometimes this is helpful, and sometimes it leads to “spinning of wheels”, but the point is that there is deal structure expertise that isn’t always brought in until after the deal is “set.” One of the jobs of in-house counsel is to try to get further upstream: even if you are not involved in the day-to-day happenings, you need to find a way to gain perspective and plan more effectively.

To this point, I often reflected on a piece of advice I was given on the very first day of the IP Intensive Seminars. When I asked the alumni speakers their advice for someone who has never had a summer legal placement before, Denver Bandstra, Associate at Bereskin & Parr LLP, reminded me that I would get used to it “just like any other job.” Like any job, there is always workplace procedure and workflow that requires orientation and practice. Learning the workflow of a contract renewal and negotiation, or the day-to-day contrast between an in-house lawyer and a private practice lawyer, only comes from experience. For that reason, the experience offered by the IP Intensive program is the most worthwhile part of my legal education so far. In particular, as all things in IP and technology law are proving, developing knowledge of data and privacy, the Internet, and disruptive technology is worthwhile, not just for a career in IP law, but for anyone using social media in the digital age.

The post The Digital Age of Journalism: My Placement at “The Globe and Mail” appeared first on IPOsgoode.

]]>
Differential Privacy: The Big Tech Solution to Big Data Privacy /osgoode/iposgoode/2022/12/16/differential-privacy-the-big-tech-solution-to-big-data-privacy/ Fri, 16 Dec 2022 17:00:00 +0000 https://www.iposgoode.ca/?p=40382 The post Differential Privacy: The Big Tech Solution to Big Data Privacy appeared first on IPOsgoode.

]]>

Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The AI revolution has brought about significant concerns about the privacy of big data. Thankfully, over the past decade, big tech has found a solution to this problem: differential privacy. The technology is not limited to big tech anymore either; the U.S. Census Bureau, for instance, applied it to the 2020 Census. Furthermore, the European Union is moving in the same direction, indicating that policymakers are on board with differential privacy as a standard means of protecting large, tabulated datasets.

What problem does differential privacy aim to solve?

Differential privacy was created to combat the Fundamental Law of Information Recovery, which states that “overly accurate answers to too many questions will destroy privacy in a spectacular way.” In a striking example, Latanya Sweeney showed that gender, date of birth, and zip code are sufficient to uniquely identify the vast majority of Americans. By linking these attributes in a supposedly anonymized healthcare database to public voter records, she was able to identify the individual health record of the Governor of Massachusetts.

A similar attack was carried out on the Netflix Prize dataset, which at the time contained anonymous movie ratings of 500,000 Netflix subscribers. The attackers compared this dataset to the Internet Movie Database (IMDb) and successfully identified the Netflix records of known users, uncovering information such as their apparent political preferences.
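The mechanics of such a linkage attack can be sketched in a few lines of Python. All records and names below are invented for illustration: two tables share quasi-identifiers (zip code, date of birth, sex), so a record in the “anonymized” table can be tied back to a named individual in the public one.

```python
# Toy "anonymized" health records: no names, but quasi-identifiers remain.
anonymized_health = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1971-02-14", "sex": "F", "diagnosis": "asthma"},
]

# Toy public voter roll: names alongside the same quasi-identifiers.
public_voter_roll = [
    {"name": "A. Smith", "zip": "02138", "dob": "1945-07-31", "sex": "M"},
    {"name": "B. Jones", "zip": "02139", "dob": "1990-01-01", "sex": "F"},
]

QUASI_IDS = ("zip", "dob", "sex")

def link(health_records, voter_records):
    """Join the two tables on the shared quasi-identifiers,
    re-attaching names to supposedly anonymous records."""
    matches = []
    for h in health_records:
        for v in voter_records:
            if all(h[k] == v[k] for k in QUASI_IDS):
                matches.append({"name": v["name"], "diagnosis": h["diagnosis"]})
    return matches

print(link(anonymized_health, public_voter_roll))
# → [{'name': 'A. Smith', 'diagnosis': 'hypertension'}]
```

No single field here identifies anyone, yet the combination does, which is precisely why removing names alone is not anonymization.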

How does one defend against such an attack?

De-anonymization attacks exploit the principle that overly accurate answers to too many questions will destroy privacy. Defending a database by refusing to answer too many questions is impractical, so there must be a method of making answers slightly inaccurate without destroying the data’s utility. This is achieved by introducing “statistical noise”: the noise is significant enough to protect each individual’s privacy, but small enough that it will not meaningfully impact the accuracy of the extracted answers.
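As a concrete sketch of this idea (a minimal illustration under the standard Laplace mechanism, not any particular vendor’s implementation; the dataset, function names, and parameters are invented), a counting query can be answered with calibrated noise:

```python
import numpy as np

def dp_count(records, predicate, epsilon):
    """Answer a counting query with Laplace noise.

    A counting query has sensitivity 1: adding or removing one person
    changes the true count by at most 1. Drawing noise from
    Laplace(0, 1/epsilon) therefore yields epsilon-differential privacy,
    with smaller epsilon meaning more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical survey: respondents' ages.
ages = [23, 35, 41, 29, 52, 67, 33, 48]

# One noisy answer to "how many respondents are 40 or older?" (true answer: 4).
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Each published answer is slightly off, so no single person’s presence or absence can be inferred, yet over many records the noisy counts remain statistically accurate for analysis.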

Why is this relevant to law?

Differential privacy protects an individual’s information by giving the impression that their information was not used in the analysis at all, which makes it more likely to comply with legal requirements for privacy protection. Differential privacy also masks individual contributions to ensure that using an individual’s data will not reveal any personally identifiable information, making it practically impossible to infer information specific to an individual.

A recent lawsuit raised (and voluntarily dismissed) legal arguments against differential privacy, alleging that “the defendants’ decision to produce ‘manipulated’ census data to the states for redistricting would result in the delivery of inaccurate data for geographic regions beyond the state's total population in violation of the Census Act”. As the plaintiff voluntarily dismissed the case, we will need to wait to see whether this argument succeeds in the future. However, if the courts were to find that the addition of statistical noise violates the data’s integrity, it would be a serious problem for differential privacy.

The post Differential Privacy: The Big Tech Solution to Big Data Privacy appeared first on IPOsgoode.

]]>
44th Global Privacy Assembly Leads To Resolutions On Facial Recognition Technology And Cybersecurity /osgoode/iposgoode/2022/11/21/44th-global-privacy-assembly-leads-to-resolutions-on-facial-recognition-technology-and-cybersecurity/ Mon, 21 Nov 2022 17:00:35 +0000 https://www.iposgoode.ca/?p=40273 The post 44th Global Privacy Assembly Leads To Resolutions On Facial Recognition Technology And Cybersecurity appeared first on IPOsgoode.

]]>

M. Imtiaz Karamat is an IP Osgoode Alumnus and Associate Lawyer at Deeth Williams Wall LLP. This article was originally posted to the E-TIPS® Newsletter on November 16, 2022.


On October 28, 2022, the Office of the Privacy Commissioner of Canada (the OPC) announced that data protection authorities around the world endorsed resolutions on facial recognition technology (FRT) and cybersecurity at the 44th Global Privacy Assembly (GPA) in Istanbul, Türkiye.

The GPA is an international forum where data protection and privacy authorities from more than 130 countries meet to discuss privacy matters of interest and coordinate efforts on an international scale. The theme of the public portion of the event was, “A matter of balance – Privacy in the era of rapid technological advancement”.

During the conference, the GPA members adopted a resolution on the use of FRT, which outlined a series of principles and expectations that members would promote to external stakeholders, assess in real-world application, and report back on. These principles require an organization to do the following:

  1. Lawful basis: have a lawful basis for collecting and using biometrics;
  2. Reasonableness, necessity and proportionality: demonstrate the reasonableness, necessity, and proportionality of their use of FRT;
  3. Protection of human rights: assess and protect against unlawful interference with privacy and other human rights;
  4. Transparency: ensure that the use of FRT is transparent to affected individuals and groups;
  5. Accountability: include clear and effective accountability mechanisms for the use of FRT; and
  6. Data protection principles: ensure that FRT is used in a way that respects all data protection principles.

The GPA also saw the adoption of a resolution for international cooperation in improving cybersecurity regulation and understanding the harms that result from cyber incidents. As part of this resolution, the endorsing GPA members would take steps to understand the responsibilities of data protection authorities regarding cybersecurity, and explore possibilities for international cooperation amongst members to avoid duplication in investigations and other regulatory activities.

The post 44th Global Privacy Assembly Leads To Resolutions On Facial Recognition Technology And Cybersecurity appeared first on IPOsgoode.

]]>
Open-Source AI-Generated Art Raises Concerns Amongst Artists /osgoode/iposgoode/2022/11/02/open-source-ai-generated-art-raises-concerns-amongst-artists/ Wed, 02 Nov 2022 16:00:07 +0000 https://www.iposgoode.ca/?p=40171 The post Open-Source AI-Generated Art Raises Concerns Amongst Artists appeared first on IPOsgoode.

]]>

Sally Yoon is an IPilogue Writer, IP Innovation Clinic Fellow, and a 3L JD Candidate at Osgoode Hall Law School.


A high-tech solarpunk utopia in the Amazon rainforest, a Pikachu fine dining with a view to the Eiffel Tower, a mecha robot in a favela in expressionist style – if you are struggling to visualize any of these descriptions, an AI art generator could most likely help you out. All of these prompts are suggestions from Stable Diffusion, an open-source AI art generator launched in 2022 by the startup Stability AI.

As its name suggests, AI-generated art refers to art generated with the help of artificial intelligence. I like to use AI art generators to help visualize environments, such as where I would rather be writing this blog as the weather gets chillier in Toronto.

An AI-generated image of the prompt “a laptop and bubble tea on a table under a parasol at a Hawaiian beach during sunset, photorealistic” by Stable Diffusion.

Aside from being a fun tool for curious users to play around with, AI art generators serve as a practical tool for visualizing concept art and automating repetitive tasks. Furthermore, in recent years, AI art has enabled artists to explore previously uncharted territory. For example, Lynn Hershman Leeson’s work “uses algorithms, performance, and projections to draw attention to the inherent biases in private systems like predictive policing, which are increasingly used by law enforcement”.

Understanding “Open-source” AI-Generated Art

Similar to previous models, Stable Diffusion is a text-to-image generator. It differs from those models in that it is open-source: its underlying code and model are publicly available, and the model has been trained on publicly available data. The motive stems from the belief of Emad Mostaque, Founder of Stability AI, that we will only realize AI’s potential to solve humanity’s biggest challenges “if the technology is open and accessible to all”. Stable Diffusion’s open model equips anybody with a web browser to generate images (including violent and pornographic ones) according to their prompts, including for commercial use.

Why Visual Artists are Concerned

Open-source AI-generated art can be seen as a threat to commercial artists in practically every industry. Greg Rutkowski, a Polish digital artist, has spoken about the difficulties that have come with his artwork’s popularity in the world of text-to-image AI generators. Known for his distinctive ethereal style, Rutkowski found his name becoming one of the most commonly used prompts in Stable Diffusion. Initially, the artist thought this was an effective way to gain publicity, until he realized through some Google searches that his name was becoming associated with work that was not his.

Rutkowski is not alone: more artists are beginning to see their artworks gain popularity with similar models and have pushed back. Others have raised concerns about data protection and privacy because their artwork is either personal or linked closely to an existing person. These concerns have consequently prompted discussion about the potential for artists to opt out of the data-training process. However, some say this would be impossible, as it would involve throwing out the whole model “built around nonconsensual data usage”. Moreover, with the source code out in public, some are under the impression that restricting it now would be like “putting toothpaste back in the tube”.

While some companies and artists have been optimistic in their beliefs that AI will ultimately benefit humanity and generate new ideas for their careers, other artists are finding it necessary to build a coalition to fight back with proper regulations and protect the future of their professions.

The post Open-Source AI-Generated Art Raises Concerns Amongst Artists appeared first on IPOsgoode.

]]>
Office Of The Privacy Commissioner Of Canada Publishes Results Of Investigation Into Marriott Data Breach Of 2018 /osgoode/iposgoode/2022/10/27/office-of-the-privacy-commissioner-of-canada-publishes-results-of-investigation-into-marriott-data-breach-of-2018/ Thu, 27 Oct 2022 16:00:39 +0000 https://www.iposgoode.ca/?p=40152 The post Office Of The Privacy Commissioner Of Canada Publishes Results Of Investigation Into Marriott Data Breach Of 2018 appeared first on IPOsgoode.

]]>

M. Imtiaz Karamat is an IP Osgoode Alumnus and Associate Lawyer at Deeth Williams Wall LLP. This article was originally posted to the E-TIPS® Newsletter on October 19, 2022.


On September 29, 2022, the Office of the Privacy Commissioner of Canada (the OPC) published the results of its investigation into the 2018 data breach involving Marriott International, Inc. (Marriott), finding many of the hotel giant’s privacy controls inadequate and recommending remedial steps to prevent future breaches.

On November 30, 2018, Marriott announced that it had experienced a data breach involving the unauthorized access of a Starwood Hotels (Starwood) database, as previously reported by the E-TIPS® Newsletter. Starwood is a separate hospitality company that was acquired by Marriott in 2016, with the unauthorized access reportedly starting before the acquisition (i.e., spanning from 2014 to 2018). The threat actor reportedly obtained access to personal information contained in up to 12.8 million records where the country-of-residence information was listed as Canada. These records included information on guest profiles and contact details, guest reservations, passport details, and encrypted payment card information.

The incident prompted the OPC to launch an investigation into Marriott’s primary operating company for Canadian hotels, Luxury Hotels International of Canada, ULC. During the investigation, the OPC considered the following key issues:

  1. Safeguards. The OPC reviewed whether proper information security safeguards were in place to protect personal information. It found several deficiencies in its investigation, including with respect to access controls, anti-virus software, logging and monitoring, and information storage. The OPC found that these deficiencies represented a failure to implement proper protection measures, in contravention of Principle 4.7 of the Personal Information Protection and Electronic Documents Act (PIPEDA).
  2. Accountability. Following the acquisition of Starwood, Marriott was accountable for implementing policies to properly protect personal information. The OPC found that despite undergoing a post-acquisition assessment of Starwood’s systems and making certain improvements, Marriott failed to adequately perform ongoing security assessments, in contravention of Principle 4.1.4 of PIPEDA.
  3. Information Retention. The OPC determined whether the compromised information was held for an appropriate period of time and found that certain personal information was retained for longer than necessary, in violation of Principle 4.5 of PIPEDA.
  4. Notification and Mitigation. Given that the OPC considered the compromised information as presenting an ongoing risk of harm to those affected, it reviewed whether appropriate notification and mitigation measures were used in response to the breach. Marriott conducted both direct notification for those individuals for whom it had a valid email address and indirect notification for the remaining individuals (e.g., issuing press releases and providing breach information on a dedicated website). Additionally, Marriott implemented various mitigation measures, such as offering one year of free web monitoring to affected individuals, establishing a dedicated call centre, implementing a process for individuals to verify whether a passport number was involved in the breach, and notifying credit card networks of the incident. Although the OPC would have preferred the web monitoring protection to last longer, it ultimately found the above notification and mitigation measures to be adequate.

In concluding its report, the OPC acknowledged the remedial steps carried out by Marriott, such as the decommissioning of the Starwood database in December 2018. It also recommended implementing further action to ensure compliance, including having Marriott (i) retain an independent assessor to review any enhancements it has made to its systems; and (ii) review its organizational and governance measures as it relates to selected privacy practices. With both recommendations, the OPC requested that Marriott submit reports detailing their findings and proposed timelines for addressing any action items arising from the reviews.

The post Office Of The Privacy Commissioner Of Canada Publishes Results Of Investigation Into Marriott Data Breach Of 2018 appeared first on IPOsgoode.

]]>