Electronic Databases Archives - IPOsgoode
/osgoode/iposgoode/category/electronic-databases/
An Authoritative Leader in IP

FTC Punishes BetterHelp for Sharing Mental Health Information with Advertisers
/osgoode/iposgoode/2023/03/22/ftc-punishes-betterhelp-for-sharing-mental-health-information-with-advertisers/
Wed, 22 Mar 2023

Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


BetterHelp is a mental health platform that provides online mental health services. The company describes itself as “the largest therapy platform in the world. We change the way people approach their mental health and help them tackle life’s challenges by providing accessible and affordable care. With BetterHelp, you can message a professional therapist anytime, anywhere.” Its stated mission reads: “Making professional therapy accessible, affordable, and convenient — so anyone who struggles with life's challenges can get help, anytime and anywhere”. Its primary business is online counseling and therapy delivered through web-based interaction and phone/text communication with professional counselors.

Privacy Misrepresentation

According to the FTC’s complaint, BetterHelp requires users to complete a questionnaire that asks for sensitive mental health information – “such as whether they have experienced depression or suicidal thoughts and are on any medications” – along with personal information. The complaint details BetterHelp’s dubious privacy practices, many of which display an egregious lack of concern for privacy interests. It also details the privacy representations made by BetterHelp, some of which were altered over time. An example of these changes was seen in the intake questionnaire, where a question asking “Are you currently taking any medication?” included a privacy statement that went through a few iterations (emphasis on the alterations added in the complaint):

Up to Dec 2020: “Rest assured—any information provided in this questionnaire will stay private between you and your counselor.”

Dec 2020: “Rest assured—this information will stay private between you and your counselor”

Jan 2021: “Rest assured—your health information will stay private between you and your counselor”

Oct 2021: The statement was removed altogether

Revealing Private Information to Advertisers

The FTC release indicates that BetterHelp “did not obtain consumers’ affirmative express consent before disclosing their health data” and “failed to place any limits on how third parties could use consumers’ health information—allowing Facebook and other third parties to use that information for their own internal purposes, including for research and development or to improve advertising”. According to the complaint, BetterHelp used and revealed consumers’ email addresses, IP addresses, and health questionnaire information to Facebook, Snapchat, Criteo, and Pinterest “for advertising purposes”, including “identify[ing] similar consumers and target[ing] them with advertisements for BetterHelp’s counseling service.”

The Punishment

The FTC has issued a proposed order (a legal document that outlines the terms and conditions for resolving a complaint or an investigation related to unfair or deceptive business practices) requiring that BetterHelp return funds – amounting to $7.8 million – to customers whose health data was compromised. The proposed order also bans BetterHelp from disclosing health information for advertising, prohibits the company from misrepresenting its sharing practices, and requires several changes to company practices regarding health and personal data. BetterHelp writes in a statement that the settlement is “no admission of wrongdoing” and that its “industry-standard practice is routinely used by some of the largest health providers, health systems, and healthcare brands”. Commentators have noted that this enforcement action is not the first of its kind, as it follows an earlier, similar action, and that “the FTC has made it clear of its intent to crack down on the trafficking in sensitive health data by businesses not strictly classified as health care providers and thus not covered by HIPAA, the federal privacy rules that govern the health care industry”. Hopefully, this sets a precedent for more stringent enforcement of good privacy practices, particularly regarding the sale of personal and health information.

Regulations and Restrictions for AI Facial Recognition Tech in Canada
/osgoode/iposgoode/2021/10/12/regulations-and-restrictions-for-ai-facial-recognition-tech-in-canada/
Tue, 12 Oct 2021

Human shadow in front of lines of code

Shannon Flynn is a Guest Writer and the Managing Editor of Rehack Magazine.

Although facial recognition may have begun as a useful tool for the masses, as with many things, it has become something that can be used against them. When paired with artificial intelligence, facial recognition software can sort through millions of photos to identify a single face or even a fragment of one.

The problem lies in the sourcing of these photos. Why is AI-driven facial recognition problematic, and what regulations and restrictions are in place in Canada to prevent its abuse?

The Problem With Clearview

Clearview is probably one of the best-known facial recognition programs in the world. Its AI is designed to detect and prevent crimes. By itself, this doesn’t sound like a bad thing. Ideally, AI programs can sort through many times the data a human worker could manage, finding collections and identifying people easily even with partial images to work with.

The problem does not lie in the algorithm itself, but rather in where it sources the images it sorts through. Clearview’s AI crawls the internet and can access, download, and store any image uploaded to social media. That means Clearview considers anything posted on Facebook, Twitter, Instagram, or other sites to be fair game. The company has also faced accusations over the images it has used to train the AI’s algorithm.

Many social media companies, including Google, Facebook, and Twitter, have accused Clearview of utilizing user images without authorization. It is important to note that the issue isn’t user authorization, but rather the authorization of the social media platform. Instagram’s terms of service include a license to use anything individuals post on the site — but that does not allow AI programs like Clearview to swoop in and take what they need.

Even under the best circumstances, allowing a program like Clearview to sort through social media imagery — even in public posts — could be considered a violation of privacy. The average user should not have to worry that corporations or government entities are watching everything they post online. Indeed, there is a growing argument in favor of stronger consumer protections where data gathering is concerned.

Regulations and Restrictions

In June 2021, the Office of the Privacy Commissioner of Canada (OPC) submitted a special report to Parliament about the Royal Canadian Mounted Police (RCMP) and their use of facial recognition technology. Again, Clearview AI was in the crosshairs for improper use of private user data scraped from various social media sites across the internet.

Billions of people, both in Canada and around the world, have suddenly found themselves in what the report likens to a perpetual police lineup, without even the courtesy of due process.

As a result of this report, new policy guidelines have been drafted to clarify when and where the use of facial recognition technologies is appropriate. These guidelines focus on four key points: accuracy, data minimization, accountability, and transparency. Accuracy is one of the biggest concerns because AI-powered facial recognition technologies tend to be considerably less accurate than human detectives completing the same task. Law enforcement officials shouldn’t take any matches discovered by facial recognition at face value, and should always double-check the results before making an arrest or pursuing legal action.
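To make the point about double-checking concrete, here is a minimal Python sketch of how an investigative workflow might triage facial recognition hits so that no match is ever acted on automatically. The Candidate class, the similarity scores, the threshold value, and the triage_matches function are illustrative assumptions, not part of the OPC guidelines or of any real system.

```python
# Hypothetical illustration only: the OPC guidelines do not prescribe code, and
# no real vendor API is reproduced here. Names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    person_id: str
    similarity: float  # similarity score between face embeddings, 0.0 to 1.0

AUTO_REJECT_THRESHOLD = 0.60  # below this, discard the candidate outright

def triage_matches(candidates: list[Candidate]) -> dict[str, list[Candidate]]:
    """Sort candidate matches into buckets so that no match is acted on automatically."""
    buckets: dict[str, list[Candidate]] = {"needs_human_review": [], "rejected": []}
    for c in sorted(candidates, key=lambda c: c.similarity, reverse=True):
        if c.similarity < AUTO_REJECT_THRESHOLD:
            buckets["rejected"].append(c)
        else:
            # Even the strongest hits are only flagged for review; they are
            # never treated as identifications in their own right.
            buckets["needs_human_review"].append(c)
    return buckets

if __name__ == "__main__":
    hits = [Candidate("A-102", 0.97), Candidate("B-448", 0.71), Candidate("C-310", 0.42)]
    print(triage_matches(hits))
```

The design choice the sketch illustrates is that even a 0.97 match is only a lead for a human analyst, never a conclusion.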

Data minimization ensures large swaths of the population are not included in a search. It also helps reduce the impact of a data breach if one happens, which grows more common every year. Accountability is essential so everyone involved knows what data is being collected and how. This key component also includes information security.

Finally, transparency helps keep innocent people out of digital lineups simply for sharing a certain demographic with an assailant.

Looking Toward the Future

It may seem as if an individual’s information is fair game because it’s available on a public post, but this is not the case. Facial recognition technologies can be valuable for preventing and detecting crime, but only if those in power are not allowed to abuse it.

The new policy guidelines being embraced in Canada are just one piece of the puzzle. Every government that utilizes facial recognition should follow suit by embracing key tactics like accountability, transparency, accuracy, and data minimization to ensure the technologies are used properly.

The fine line between tyranny and law enforcement should not be crossed, regardless of how easily one could click a button and find the “bad guy.”

Ontario Court Of Appeal Finds Insurance Coverage Does Not Apply To Cyber Hack
/osgoode/iposgoode/2021/04/23/ontario-court-of-appeal-finds-insurance-coverage-does-not-apply-to-cyber-hack/
Fri, 23 Apr 2021

This article was originally published on April 14, 2021.

On March 15, 2021, the Ontario Court of Appeal (the Court), in Family and Children’s Services of Lanark, Leeds and Grenville v Co-operators General Insurance Company, reversed the lower court’s decision that found that Co-operators General Insurance Company (Co-operators) had a duty to defend Family and Children’s Services of Lanark, Leeds and Grenville (FCS) and Laridae Communications Inc. (Laridae) against two claims in relation to a cyber hack.

Laridae was retained by FCS to perform communication and marketing services, including working on FCS’ website. FCS subsequently discovered that its website had been hacked and that a report containing personal information of 285 clients and subjects of FCS’ investigations was disclosed on Facebook without authorization. Both companies were insured by Co-operators and claimed that Co-operators had a duty to defend against the following two claims that arose out of the event:

  1. a $75 million class action brought against FCS alleging that FCS was negligent in securing its website; and
  2. a third-party claim in that proceeding brought by FCS against Laridae for negligence and breach of contract.

Co-operators denied that it had a duty to defend because its policies excluded claims arising from the distribution of data by means of an internet website. All three parties brought applications to determine their rights, which depended on the interpretation of the policies.

The Court disagreed with the lower court’s finding that the matter could not be addressed by way of application, stating that there were no material facts in dispute requiring a trial and that the policy provisions in issue were clear and unambiguous. Upon assessing the issue, the Court found that the substance and true nature of both claims arose from the wrongful appropriation and distribution of confidential personal information on the internet. The Court held that all claims asserted were covered by the clear and unambiguous language of the exclusion clauses, and therefore Co-operators had no duty to defend either claim.

The Court did not waver when faced with FCS and Laridae’s argument that applying the data exclusions would nullify meaningful coverage under the policy. The Court held that the policies clearly stated that Co-operators would not insure against all risks and that, therefore, holding the parties to the terms of the agreement aligned with their reasonable expectations.

Written by M. Imtiaz Karamat, Osgoode Alumnus and Student-at-Law at Deeth Williams Wall LLP.

Facebook Addresses Resurgence Of Information From 2019 Data Breach
/osgoode/iposgoode/2021/04/16/facebook-addresses-resurgence-of-information-from-2019-data-breach/
Fri, 16 Apr 2021

The following article was originally published on April 13, 2021.

On April 3, 2021, Business Insider reported that information relating to over 530 million Facebook accounts had been made publicly available online. It is reported that 3.49 million of those accounts belong to Canadians, and the leaked data included names, locations, birthdates, email addresses, and other identifying information.

In response, Facebook issued a news release stating that the information was not leaked through a recent hack, but was the resurgence of data taken from the platform in 2019. Facebook claimed that the information was obtained via data scraping, where automated software is used to obtain public information from the internet and distribute it to online forums. The company believes that malicious actors took advantage of a vulnerability in Facebook’s contact importer feature, which is designed to help users easily find and connect with friends through their contact lists. By exploiting the feature, the malicious actors were able to obtain information from users’ public profiles. Facebook has assured the public that the malicious actors had limited access to users’ information and that the leaked data did not include financial information, health information, or passwords.

The news release also stated that Facebook made changes to its contact importer feature in 2019 to address the issue. More specifically, it modified the feature to prevent malicious actors from imitating the Facebook app and uploading a large set of phone numbers to find matching Facebook users. Facebook stated that it will work to get the data set taken down and that it will continue to combat the misuse of its platform’s features.
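The kind of change described here can be pictured as rate limiting on contact-matching requests. The following Python sketch is a hypothetical illustration of that idea only; the hourly limit, the function name, and the in-memory storage are assumptions and do not describe Facebook's actual implementation.

```python
# Illustrative sketch only: Facebook's actual fix is not public, and the limit,
# function name, and storage below are assumptions made for explanation.
import time
from collections import defaultdict, deque

MAX_LOOKUPS_PER_HOUR = 200  # hypothetical cap on contact-matching requests
WINDOW_SECONDS = 3600

_recent_lookups: dict[str, deque] = defaultdict(deque)

def allow_contact_lookup(client_id: str, phone_numbers: list[str]) -> bool:
    """Allow a contact-matching request only if the client stays under its hourly budget."""
    now = time.time()
    history = _recent_lookups[client_id]
    # Drop lookups that fall outside the sliding one-hour window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) + len(phone_numbers) > MAX_LOOKUPS_PER_HOUR:
        return False  # a bulk upload of numbers is rejected as likely scraping
    history.extend(now for _ in phone_numbers)
    return True
```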

Written by M. Imtiaz Karamat, Osgoode Alumnus and Student-at-Law at Deeth Williams Wall LLP.

Risks in AI Over the Collection and Transmission of Data
/osgoode/iposgoode/2018/07/12/risks-in-ai-over-the-collection-and-transmission-of-data/
Thu, 12 Jul 2018

While our daily lives are made more convenient and more pleasant by the application of various Artificial Intelligence (AI) tools – ranging from widely known consumer products such as the home assistant Siri and personal medical devices to business applications of natural language processing and deep learning – we should start to think about the emerging risks associated with these AI-enabled technologies. In particular, it is important to recognize the risks associated with the collection and transmission of data between consumer applications and the users themselves.

The first risk is data quality, arising from the technical side of AI algorithms. No matter how well-coded the AI algorithms are, the results still depend heavily on the quality of the data entered as inputs. Volunteer sampling may produce bad data that is not representative of the subject attributes or that introduces unwanted bias. Duplicate, incorrect, incomplete, or improperly formatted data is bad data and can be removed by a data scrubbing tool more cost-efficiently than by fixing errors manually. Bad data is a big issue for employing AI and, as businesses increasingly embrace AI, the stakes will only get higher. For example, KenSci, a start-up company based in Seattle, is using an AI platform to make health care recommendations to doctors and insurance companies based on medical datasets it has collected, classified, and labelled. If there are errors in the medical records, or in the training sets used to create predictive models, the consequences could potentially be fatal, as real patients’ lives are at stake.
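As a rough illustration of what automated data scrubbing can look like, here is a minimal Python sketch using the pandas library. The column names, the plausibility rule for ages, and the normalization step are hypothetical assumptions for the example; they are not KenSci's pipeline or any particular commercial tool.

```python
# A minimal data-scrubbing sketch using pandas. The column names and rules are
# hypothetical; a real medical dataset would need far more careful validation.
import pandas as pd

def scrub(records: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicate, incomplete, and improperly formatted rows before training."""
    cleaned = records.drop_duplicates(subset=["patient_id", "visit_date"])
    # Remove rows that are missing required fields.
    cleaned = cleaned.dropna(subset=["patient_id", "diagnosis_code", "age"])
    # Remove improperly formatted values, e.g. ages outside a plausible range.
    cleaned = cleaned[(cleaned["age"] >= 0) & (cleaned["age"] <= 120)].copy()
    # Normalise inconsistent formatting instead of discarding it where possible.
    cleaned["diagnosis_code"] = cleaned["diagnosis_code"].str.strip().str.upper()
    return cleaned

if __name__ == "__main__":
    df = pd.DataFrame({
        "patient_id": ["p1", "p1", "p2", None],
        "visit_date": ["2018-01-02", "2018-01-02", "2018-02-10", "2018-03-01"],
        "diagnosis_code": [" j45 ", " j45 ", "E11", "I10"],
        "age": [34, 34, 150, 61],
    })
    # Keeps only the first p1 row: the duplicate, the implausible age, and the
    # row with a missing patient_id are all dropped.
    print(scrub(df))
```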

However, some companies (unlike large and established ones) may not realize the importance of good data until they have already started their projects. It is critical to have a well-curated collection of data to train an AI system, and companies might not be aware of the potential business risks arising from biases hidden within their AI training datasets. For example, a widely reported incident back in 2015 showed how biased training data can produce offensive and embarrassing results. Thus, companies must be cautious about what data they use and how they use it to avoid public relations nightmares and reduce associated business risks.

The second risk arises from a legal perspective: consumers are becoming more concerned with whether their privacy is being infringed by service providers, for example through the use of their data for unpermitted purposes, unauthorized transfers to third parties, or insufficient protection from potential hackers. Consumers want to know how their personal data is used, where it is used, and what it is used for. Various known and unknown gaps exist in the legal risks and liabilities governing AI, and the recent implementation of the European Union General Data Protection Regulation (GDPR) has started to fill those gaps. For example, some types of big data analytics, such as profiling, can have intrusive effects on individuals. The GDPR does not prevent automated decision making or profiling, but it does give individuals a qualified right not to be subject to purely automated decision making.

The GDPR provides that the data controller should use “appropriate mathematical or statistical procedures for the profiling” and take measures to prevent discrimination on the basis of race, ethnic origin, political opinions, religion or beliefs, trade union membership, genetics, health condition, or sexual orientation. Previously, users did not have the leverage to negotiate with companies over what data is collected or transmitted, by whom, what their personal data is used for, and whether they consent to data sharing. Although such legal rights are now explicitly regulated by the GDPR (i.e., Articles 15, 16, 17, and 21), there is arguably a strong inequality of bargaining power between consumers and powerful companies, which makes consumers vulnerable targets and leaves them unable to effectively defend their privacy rights. A 2017 decision of the Supreme Court of Canada is a good illustration.

Today, technology is changing so rapidly that new questions regarding AI risks arise from time to time, posing legal challenges to lawyers, regulators, manufacturers, service providers, and consumers. Even back in 2013, the United Kingdom Information Commissioner’s Office issued a detailed report with suggestions for companies, addressing the incoming GDPR reforms. Various commentators have discussed the suggestions in an attempt to cope with the new challenges. In short, it is more cost-effective to build “privacy by design” into processes or systems from the beginning than to be caught out by new rules at a later adaptation stage. With “privacy by design” in mind, companies first need to determine the purpose of their data analytics, what data is required, and how to legally and effectively collect, transmit, store, and use that data. Second, companies need to avoid over-collection of personal data where such data is not required for the legitimate purpose. Third, they should be transparent about their collection and transmission of personal data by providing privacy notices that are comprehensible to data subjects, without accessibility barriers caused by legal jargon, hidden notifications, or poor telecom infrastructure. Lastly, companies should ensure that data subjects can exercise their legal rights to give consent, withdraw consent, request a copy of their data, and make changes to it.
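As a loose illustration of how those steps might translate into software, here is a minimal Python sketch of a data store that refuses over-collection and supports the basic data subject rights mentioned above (consent, withdrawal, a copy of the data, and rectification). Every class, purpose, and field name is a hypothetical assumption made for the example; this is not a compliance implementation of the GDPR.

```python
# A hypothetical "privacy by design" sketch: collect only what the declared
# purpose needs, record consent, and support access, rectification, and
# withdrawal. All names, purposes, and fields are illustrative assumptions.
from dataclasses import dataclass, field

PURPOSE_FIELDS = {
    # The minimal set of personal data needed for each declared purpose.
    "appointment_reminders": {"name", "phone"},
    "newsletter": {"email"},
}

@dataclass
class SubjectRecord:
    subject_id: str
    data: dict = field(default_factory=dict)
    consents: set = field(default_factory=set)

class PrivacyByDesignStore:
    def __init__(self) -> None:
        self._records: dict[str, SubjectRecord] = {}

    def collect(self, subject_id: str, purpose: str, data: dict) -> None:
        """Refuse over-collection: only fields needed for the purpose are accepted."""
        allowed = PURPOSE_FIELDS[purpose]
        extra = set(data) - allowed
        if extra:
            raise ValueError(f"Fields {extra} are not needed for purpose {purpose!r}")
        record = self._records.setdefault(subject_id, SubjectRecord(subject_id))
        record.data.update(data)
        record.consents.add(purpose)

    def export_copy(self, subject_id: str) -> dict:
        """Right of access: return a copy of everything held about the subject."""
        return dict(self._records[subject_id].data)

    def rectify(self, subject_id: str, field_name: str, value) -> None:
        """Right to rectification: let the subject correct their data."""
        self._records[subject_id].data[field_name] = value

    def withdraw_consent(self, subject_id: str, purpose: str) -> None:
        """Withdrawal: stop the purpose and delete fields held only for it."""
        record = self._records[subject_id]
        record.consents.discard(purpose)
        still_needed = set().union(*(PURPOSE_FIELDS[p] for p in record.consents)) if record.consents else set()
        for f in PURPOSE_FIELDS[purpose] - still_needed:
            record.data.pop(f, None)
```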

Artificial Intelligence is a double-edged sword. If used well, it will continue to benefit the general public. If not properly managed, it might be leveraged by those with ill intent to cause harm, and companies should bear that in mind when deploying AI-enabled technologies.

Grace Wang is an IPilogue Editor and a JD candidate at Osgoode Hall Law School.

Breaking Up With Big Tech?
/osgoode/iposgoode/2018/04/09/breaking-up-with-big-tech/
Mon, 09 Apr 2018

This week, Facebook co-founder Mark Zuckerberg will make a long-awaited appearance on Capitol Hill. With Facebook under new and increased scrutiny in the United States (US) and United Kingdom (UK) following the Cambridge Analytica data breach, Facebook’s Chairman and Chief Executive Officer is set to be grilled by representatives of both the Senate and the House. The fallout from the Cambridge Analytica affair has spooked users as well as investors, igniting a #deleteFacebook campaign and sending the company’s stock price tumbling. Questions from US lawmakers are likely to focus on fundamental issues surrounding how Facebook collects, protects, and commercializes user data on its platform. These matters strike at the heart of Facebook’s advertising revenue model, meaning that Zuckerberg’s congressional moment may pose a serious challenge to his company’s operations as well as the data-driven operations of his peers in the technology industry.

Companies like Facebook, Google (Alphabet), Amazon, and Uber have long presented themselves as creative pioneers who collect and analyze massive amounts of user data to improve the human condition. Savvy marketing and personal acts of altruism have combined to create a calculated image of these companies as rebels and outsiders, doing no evil, working to leverage data analytics to disrupt tired and unimaginative incumbents in order to connect and empower the world. The tech community’s first major crisis came with the unbridled economic hype and enthusiasm that presaged the dot-com crash, and current big tech companies may be similarly humbled by ongoing pricks to the veneer covering the structural deficiencies of their data-driven business practices. Recently, French President Emmanuel Macron has spoken about the need to “dismantle […] these big giants” as a competition issue, and, here in Canada, there is a growing call for a policy approach that prioritizes domestic interests.

Facebook’s current time in the spotlight is just the most recent instance of big tech’s proclivity for moving fast and, unintentionally, breaking the wrong things. Zuckerberg may have inadvertently said as much himself in the immediate wake of the Cambridge Analytica revelations. In an interview with the New York Times, he said, “If you had asked me, when I got started with Facebook, if one of the central things I’d need to work on now is preventing governments from interfering in each other’s elections, there’s no way I thought that’s what I’d be doing, if we talked in 2004 in my dorm room.”

Such a revelation may be an instructive lesson for a fresh-faced undergraduate student thinking through the implications of disruptive technologies for the first time. It is far more worrisome, however, when uttered by the head of a global technology behemoth who has run the company for over a decade.

But they’re not terribly shocking. Since the early 1990s, lawmakers and technologists have advanced the idea of increased connectivity through information and communication technologies (ICTs), an idea that then-Secretary of State Hillary Clinton would champion some 20 years later. In an interview with the New York Times, Zuckerberg echoed a similar sentiment to defend Facebook’s revenue model: “The thing about the ad model that is really important that aligns with our mission is that — our mission is to build a community for everyone in the world and to bring the world closer together. And a really important part of that is making a service that people can afford. […] Therefore, having it be free and have a business model that is ad-supported ends up being really important and aligned.” However, a memo from Facebook Vice President Andrew Bosworth that seemingly downplays “the ugly” side of Facebook’s activities effectively punctures this grandiose narrative. Today’s big tech firms have come of age under light-touch regulation from lawmakers and have responded by giving normative and ethical concerns a back seat to connectivity and disruption.

More recently, though, legislators on both sides of the Atlantic have begun to rethink this arrangement. In the European Union (EU), next month’s enforcement date for the new General Data Protection Regulation (GDPR) will introduce heavy fines for companies that do not comply with harmonized data privacy regulations. And at a hearing into Russian online disinformation activities during the 2016 Presidential election campaign, Senator Dianne Feinstein warned representatives from Facebook, Twitter, and Google that “You created these platforms, and now they’re being misused. And you have to be the ones who do something about it—or we will.” Depending on the outcome of Zuckerberg’s appearances this week, the US Congress may begin to make good on Sen. Feinstein’s threat.

Regulating or, in the words of Macron, dismantling big tech will be no easy task. These companies have amassed large stores of data about our innermost feelings and have developed technologies that reach deep into our daily lives. They have also entranced governments with the promise of jobs and economic prosperity. It is imperative that any attempts to harness big tech for the public good are not made in a knee-jerk or reactionary fashion. The challenges these companies and new emerging technologies pose require long-term and strategic thinking about the social, economic, ethical, and democratic impacts of our increasingly data-driven society.

 

Joseph F. Turcotte is a Senior Editor with the IPilogue. He holds a PhD from the Joint Graduate Program in Communication & Culture (Politics & Policy) at York University and Ryerson University (Toronto, Canada).

Facebook and Whatsapp Fined for Breaching EU Law and Deceiving Consumers
/osgoode/iposgoode/2017/06/02/facebook-and-whatsapp-fined-for-breaching-eu-law-and-deceiving-consumers/
Fri, 02 Jun 2017

The re-posting of this comment is part of a cross-posting collaboration with MediaLaws: Law and Policy of the Media in a Comparative Perspective.

On 18 May 2017, the European Commission fined Facebook €110 million for providing misleading information during the 2014 takeover of WhatsApp in case COMP/M.7217. Calling it a “proportionate and deterrent fine”, the Commission established that Facebook infringed the procedural obligations laid down by the EU Merger Regulation.

Most notably, this decision follows the 2016 WhatsApp terms of service and privacy update, which included the automatic linking of WhatsApp users’ data with Facebook users’ identities for advertising and marketing purposes. When Facebook notified the Commission of the acquisition of WhatsApp in 2014 under the EU Merger Regulation, which requires undertakings to provide correct information to allow a timely and effective review of the merger process, it assured the Commission that an automated matching between Facebook and WhatsApp users could not be established.

However, the Commission’s scrutiny revealed that the technical possibility of matching users’ profiles between the two platforms, which was made effective in 2016 after the terms of use update, already existed in 2014 but had not been communicated to the Commission at the time of the merger.

Although it could have imposed a fine of up to 1% of the company’s aggregated turnover (which could have amounted to more than €250 million), the European Commission’s assessment was mitigated by Facebook’s cooperation during the investigation proceedings, during which the company acknowledged its infringement and convinced the authority to reduce the amount of the penalty. The EU’s competition watchdog concluded that Facebook negligently provided incorrect information, but that the gravity of these infringements would not affect the Commission’s clearance decision regarding the WhatsApp acquisition of 2014.

The 2016 WhatsApp terms of use update has also drawn the attention of the Italian Competition Authority (ICA), which on 11 May 2017 imposed a penalty of €3 million on WhatsApp for infringing consumers’ rights (see the ICA decision).

First, the company was fined for breaching Article 20 of the Italian Consumer Code, most notably for infringing the ban on unfair business practices. According to the ICA, WhatsApp led users to believe they could use WhatsApp Messenger only if they accepted the new terms of use in full, including the provision on sharing users’ data with its parent company Facebook.

However, those who were already users at the time of the update could partially accept the new terms of use and still be able to use the application, but – according to the ICA – the existence of such an option had not been sufficiently represented.

On 11 May 2017, the ICA concluded a second investigation concerning the unfair nature of some contractual clauses in the WhatsApp terms of use, which were assessed as illicit since they caused a significant imbalance in the consumers’ rights and obligations arising from the contract, in breach of Article 33 of the Italian Consumer Code (see the ICA decision).

These clauses included, inter alia, a general limitation of WhatsApp’s liability, as well as the possibility for the company to unilaterally interrupt the service without notice, the right to introduce changes of an economic nature to the terms of use without reason, and the application of the law of California.

WhatsApp now has 60 days to file an appeal against the two ICA decisions before the Administrative Court of Lazio.

 

The Partnership on AI: A Modern Manhattan Project?
/osgoode/iposgoode/2016/10/26/the-partnership-on-ai-a-modern-manhattan-project/
Wed, 26 Oct 2016

On June 29, Sam Harris delivered a TED Talk in which he posed the question: “can we build artificial intelligence without losing control of it?” He proposed the founding of “something like a Manhattan project on the topic of artificial intelligence” to answer his question. On September 28, leading Silicon Valley AI developers entered into a “Partnership on AI”. Is this the answer Harris hoped for?

What is the "Partnership on AI", and who are the Partners?

The “Partnership on AI” is a not-for-profit platform to support best practices in the development of Artificial Intelligence. Amazon, DeepMind/Google, Facebook, IBM, and Microsoft are the founding partners. These companies are industry leaders in the development of artificial intelligence, drones, and enterprise technologies.

IBM’s Watson AI has drawn attention in recent years for its ability to research and compile relevant information at super-human speeds. Watson has the potential to fundamentally change the nature of industries reliant on intelligent research. DeepMind, Google’s AI development office, made headlines when its “learning” AI was able to beat world champions at the ancient logic game Go. The scale of processing needed to calculate moves in Go is astronomically greater than that in chess, marking a distinct shift in the capabilities of computing since IBM’s Deep Blue.

Why should we be concerned about AI?

These computers are examples of how computing is already capable of information processing exceeding that of humans, in some areas. Sam Harris' TED Talk argued “if intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.” At the same time, he argued, we have so little understanding of how to constrain such an intelligence and “we have no idea how long it will take us” to determine that.

We should be afraid of this paradigm. Artificial intelligence, if incorrectly implemented, could have serious consequences. The extreme example Harris offered was that “a few trillionaires”, benefitting from the exponentially improved productivity of AI, “could grace the covers of our business magazines while the rest of the world would be free to starve”, as the result of AI eroding jobs and networks of economic exchange. The fear in this example is not that artificial intelligence would become malevolent—as some have proposed it may—but, instead, that it would be so much more intelligent and capable than humans that, by relative measure, we would be to it intellectually what ants are to us.

What does the Partnership propose to do about this?

The mission and tenets of the Partnership on AI respond to some of Harris’ concerns. The organization states its mission is to ensure the maintenance of “ethics, fairness, inclusivity, transparency and interoperability, and privacy” in the development of artificial intelligence.

The organization intends to bring together experts from a broad range of fields to respond to the implications of AI in relation to economics, social science, finance, public policy, and law.

The organization’s tenets include: “to ensure that AI technologies benefit and empower as many people as possible”; “maximize the benefits and address the potential challenges of AI technologies”; and, “working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society”—these suggest that this organization understands and empathizes with the concerns of Harris and others, related to AI.

What does this mean?

It remains to be seen whether this organization and the oversight it vows to provide will prove sufficient to mitigate the potential threats and issues raised by Harris. Concerns have already been raised about the absence of both Apple and Elon Musk (of Tesla, among other ventures) from the agreement.

Apple’s Siri personal assistant and Tesla Motors’ cars are two of the highest-profile artificial intelligence applications on the market. Both companies stand poised to play a major role in the development of AI. It remains possible that these companies could join the “Partnership”; however, both Apple and Musk are known for their history of independence in the tech market. If these developers choose to remain independent, it could seriously undermine the authority of the “Partnership” and affect the ability of the AI development ‘industry’ to self-regulate.

It is also worth considering that the “Partnership” is rooted only in American businesses, which presents problems insofar as it does not adequately account for the emergence of new AI developers in countries outside of the United States – China or India, for example. In an extreme case, the centralization of AI development in the United States could also contribute to the kind of Cold War-esque tensions that Harris warned his audience about during his talk.

The Manhattan Project for AI?

Harris' Manhattan Project analogy is significant. The Manhattan Project brought together many of the world's greatest scientists and mathematicians to construct the atomic bomb, all with the purpose of ensuring that such power did not fall into the wrong hands – Nazi Germany – during the Second World War. For its intents and purposes, the project succeeded. The bomb was built and it was used to end the war. However, as history proved, despite the positive intentions of the project, it ultimately contributed to further evils as the impetus for the beginning of the Cold War. Albert Einstein, whose letter helped set the project in motion, later regretted the creation of the device.

If AI were to go the way of the atomic bomb, that is, result in disastrous consequences despite our best efforts to regulate it, this author believes that fact should be cause for concern. While the functionality of AI remains in question as developers continue to seek greater and greater cognition from their machines, this may be, as Harris argued, a critical point in our history.

 

Christopher McGoey is an IPilogue Editor and a JD Candidate at Osgoode Hall Law School.

Compliance with EU Data Protection Regulation
/osgoode/iposgoode/2016/05/04/compliance-with-eu-data-protection-regulation/
Wed, 04 May 2016

The re-posting of this analysis is part of a cross-posting collaboration with MediaLaws: Law and Policy of the Media in a Comparative Perspective.

Introduction

By means of an innovative and modern directive (Directive 95/46/EC – the “Data Protection Directive”), in 1995, the European Community adopted its first data protection legislation aimed at providing common legal principles (to be implemented by European Union (“EU”) Member States by means of national legislation) to protect personal data and to align the bases of Member States’ provisions in respect to privacy and data protection.

However, the Data Protection Directive was adopted when the Internet was not widely used. Internet technology has advanced in recent years and has posed new challenges to the protection of individuals’ data. The accelerating take-up of social networking, user-generated content platforms, mobile apps, cloud computing, location-based services, the “Internet of Things” (i.e. the ability of everyday objects to connect to the Internet and to send and receive data, e.g. wearable devices, home automation, etc.) and the growing globalization of data flows have significantly increased the risk that individuals will lose control over their own personal data.

Further, one of the main recurrent complaints about the Data Protection Directive is the lack of actual harmonization, which led to a certain fragmentation in the way personal data protection has been implemented across EU Member States. This resulted in additional costs and administrative burdens for operators as well as widespread uncertainty. This is particularly true for data controllers established in several Member States, who should comply with the requirements and practices in each of the countries where they are established. Guidance provided by the Article 29 Data Protection Working Party, an independent advisory body to the EU Commission set up under Article 29 of the Data Protection Directive (the “Working Party 29”), on several data protection issues certainly contributed to harmonization of data protection principles at EU level, although the Working Party 29’s opinions are not binding.

A uniform and coherent application of the data protection rules among the European countries is fundamental, in light of the proposed creation of the Digital Single Market.

Seventeen years later, on January 25, 2012, the EU Commission proposed new uniform legislation on privacy and data protection in Europe, by means of a General Data Protection Regulation (the “Regulation”) which, once adopted, would be directly applicable in all Member States without the need for national legislation. The Regulation comes together with a proposed directive 5833/12 on the processing of personal data for the purpose of preventing, investigating or prosecuting crimes or adopting criminal sanctions, intended to replace the 2008 Data Protection Framework Decision (see the Article 29 Data Protection Working Party’s Opinion no. 1/2013 of February 26, 2013, providing further input into the discussions on the draft Police and Criminal Justice Data Protection Directive).

Since then, the European legislators have been discussing the new proposals, and on March 12, 2014, the European Parliament adopted its position on the Regulation, proposing amendments aimed at enhancing the guarantees on data protection relative to the text approved by the EU Commission.

On June 11, 2015, the EU Council (the “Council”) approved its general approach, and the discussion among the three institutions (the so-called ‘trilogue’) has officially begun, with the purpose of reaching an agreement and finalizing the approval of the Regulation and the attached directive before the end of 2015.

This article focuses on some of the most groundbreaking provisions of the proposed Regulation, which are expected to be a major concern for in-house counsel, in particular those advising businesses with multi-jurisdictional operations. The Regulation also introduces new provisions that, amongst others, would: (i) make international data transfers easier; (ii) decrease the requirements and the costs of dealing with more than one Privacy Authority with differing rules (the so-called “one-stop shop”); (iii) implement specific provisions on the so-called “right to be forgotten,” as interpreted by the European Court of Justice in the Google Spain case (European Court of Justice, decision of May 13, 2014, case C-131/12); and (iv) provide for more effective sanctions and penalties for data controllers and data processors.

 

Territorial Scope of the Regulation

One of the major changes to be brought by the Regulation concerns the territorial scope of the EU data protection laws.

Today, Article 4 of the Data Protection Directive contains the rules governing its territorial scope and jurisdictional reach. According to this provision, the EU rules apply to personal data processing:

  • where the processing is carried out in the context of the activities of an “establishment” of the data controller in the territory of the Member State. If the same controller is established in more than one Member State (e.g., by means of subsidiaries), the controller must take the necessary steps to ensure that each of these establishments complies with the obligations laid out by the applicable national law. Security measures depend on the location of a possible processor, as provided in Article 17, paragraph 3 of the Directive; and
  • where a controller not established in the EU, for purposes of processing personal data, makes use of “equipment,” automated or otherwise, located on the territory of that Member State, unless such equipment is used only for purposes of transit through the territory of the EU.

Article 3, paragraph 1, of the Regulation, as recently amended by the Council based on the Parliament’s position, would still keep the “establishment criterion” mentioned above for the applicability of its provisions to controllers or processors established in the European Union. In addition to that, however, the Regulation would expand the “use of equipment” criterion currently provided by the European data protection law by making data controllers established outside the EU, but “targeting” EU residents, subject to EU data protection obligations.

Indeed, the Regulation would be applicable where the processing of personal data concerns:

  • the offer of goods or the provision of services to residents in the EU, even where no payment is required (e.g. “free” services, where individuals in fact pay for the service by providing their personal data);
  • the monitoring of data subjects’ behavior within the EU. In order to determine whether a processing activity can be considered to ‘monitor the behavior’ of data subjects, it should be ascertained whether individuals are tracked on the Internet with data processing techniques which consist of profiling an individual, particularly in order to take decisions concerning her or him or for analyzing or predicting her or his personal preferences, behaviors and attitudes (see Recital 21 of the Regulation, in the text approved by the Council on June 11, 2015).

Because of its potential broad reach, the new criterion poses challenges for businesses directing their activity to the EU and also gives rise to questions on how the Regulation’s requirements can be readily enforced outside the EU.

It is worth mentioning that the Council uses different wording from the position adopted by the Parliament: in fact, the latter proposed that controllers, and even processors, not residing in the EU would be subject to the provisions of the Regulation. In its opinion regarding the proposed regulation, the Working Party 29 stressed that the Regulation should also cover non-EU processors, in order to provide for legal liability for these subjects.

 

Automated Data Processing and Profiling

Generally speaking, “profiling” enables an individual’s personality or aspects of his or her personality – especially behavior, interests and habits – to be determined, analyzed and predicted. “Profiling” of individuals is increasingly used by companies to offer personalized and targeted services (e.g., discounts, special offers and targeted advertisements based on the customer’s profile).

The Data Protection Directive does not contain any specific provision on “profiling”, but it includes a general provision concerning “automated individual decisions” in Article 15, which grants to data subjects the right not to be subject to a decision which “produces legal effects” concerning him or “significantly affects” him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc. An automated decision by a bank not to grant credit may fall within the aforementioned provision.
Automated decisions can, however, be made in certain cases, notably in the course of entering into or performing a contract, provided that the data subject’s legitimate interests are protected, e.g. by arrangements allowing him to express his point of view, or as otherwise provided by law.

This provision has sometimes been implemented across EU Member States in different ways. It is worth mentioning Italy, where the prohibition on making decisions involving the assessment of a person’s conduct based solely on the automated processing of personal data aimed at defining the data subject’s profile or personality is limited to measures or acts taken by judicial or administrative authorities (see Article 14 of Legislative Decree of June 30, 2003, no. 196 – the Italian Data Protection Code).

The Regulation builds on Article 15 of the Data Protection Directive and on the Council of Europe’s Recommendation on profiling of November 23, 2010 and it specifically addresses “profiling” of data subjects.

Article 4 of the Regulation defines “profiling” as “any form of automated processing of personal data evaluating personal aspects relating to a natural person, in particular to analyze or predict aspects concerning performance at work, economic situation, health, personal preferences, or interests, reliability or behavior, location or movements”.

The main provision on profiling is Article 20 of the Regulation (“Automated individual decision making”), which, similar to the Data Protection Directive, grants the data subject the right not to be subject to a decision based solely on automated processing (such as automatic refusal of an online credit application or e-recruiting practices without any human intervention – see Recital 58 of the Regulation), including profiling, which produces legal effects concerning him or her or significantly affects him or her. The Regulation expands the cases in which decision-making based on such processing, including profiling, is allowed, introducing the possibility of carrying it out with the data subject’s explicit consent.

Unlike the various national provisions adopted in each Member State, profiling would be treated by the new EU rules as a processing activity in its own right and, as a consequence, would require, amongst other things, that controllers do the following (a short illustrative sketch appears after the list):

  • inform data subjects about the existence of profiling, and the consequences of such profiling;
  • obtain a specific and explicit consent for it (unless one of the exceptions provided by the Regulation applies).
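As a rough sketch of what such a consent gate might look like in practice, consider the following Python example of an online credit decision. The Applicant fields, the scoring rule, and the threshold are hypothetical assumptions made for the example; it simply illustrates the logic of deciding automatically only where an exception such as explicit consent or contractual necessity applies, and otherwise referring the case to a human.

```python
# Hypothetical sketch of a consent gate for a solely automated credit decision.
# The fields, scoring rule, and threshold are assumptions for illustration and
# do not reflect any real lender's practice.
from dataclasses import dataclass

@dataclass
class Applicant:
    applicant_id: str
    explicit_profiling_consent: bool
    monthly_income: float
    existing_debt: float

def automated_score(a: Applicant) -> float:
    # Placeholder scoring logic standing in for a real profiling model.
    return a.monthly_income - 0.5 * a.existing_debt

def decide_credit(a: Applicant, contract_necessity: bool = False) -> str:
    """Decide automatically only when an exception applies; otherwise escalate."""
    if not (a.explicit_profiling_consent or contract_necessity):
        # No basis for a solely automated decision: route the file to a human.
        return "refer_to_human_review"
    decision = "approve" if automated_score(a) > 1000 else "decline"
    # Even where automation is allowed, the data subject retains safeguards such
    # as the ability to obtain human intervention and to contest the decision.
    return decision
```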

This course of action would not be new for Italy, where, for example, profiling is traditionally considered an autonomous processing activity, which requires a specific consent, separate from the consent for other purposes (such as marketing purposes). In other European countries, profiling is usually treated as a modality of processing personal data and not as an autonomous processing activity; therefore, it is generally deemed that no specific consent is required for profiling once the controller has obtained consent for marketing purposes.

 

Conclusion

To conclude this brief overview of the most groundbreaking provisions of the proposed Regulation, it is worth noting that the Regulation is currently subject to discussions between the Parliament and the Council. Even though it is likely that the proposal will be amended before enactment, the general structure will probably remain the same, especially in the parts described above, which represent momentous innovations and will surely help ensure effectiveness and confidence in the processing of people’s personal data.

The Italian Data Protection Authority’s Annual Report 2013 – Big Data, Transparency and Surveillance
/osgoode/iposgoode/2014/08/11/the-italian-data-protection-authoritys-annual-report-2013-big-data-transparency-and-surveillance/
Mon, 11 Aug 2014

The re-posting of this analysis is part of a cross-posting collaboration with MediaLaws: Law and Policy of the Media in a Comparative Perspective.

On June 10, 2014, the Italian Data Protection Authority (Garante per la protezione dei dati personali – “DPA”) presented its Annual Report for 2013. In this 17th annual edition of the Report, the Italian watchdog sets out the status of the implementation of privacy laws and indicates the operational prospects required to move towards genuine and effective personal data protection.

 

1. Highlights of the Annual Report 2013

The DPA’s main activities in 2013 concerned the following topics.

 

Internet and the role of large providers. Particular importance attaches to the work done by the DPA, also in cooperation with other European authorities, to ensure greater transparency for users in connection with the processing of their personal data via the internet. In this respect, the DPA issued guidelines to protect privacy on smartphones and tablets and, more recently, a resolution on consent for the use of cookies.

 

Global supervision in connection with the Datagate. Datagate refers to the revealed collection of citizens’ personal data by the United States’ National Security Agency (NSA). The DPA raised concerns about espionage performed by the NSA and therefore sent a letter to the Italian Prime Minister, requesting him to support the adoption of the draft reform of the EU legal framework for data protection.

 

Transparency of the online public administration and safeguards for citizens. The DPA issued guidelines to make sure that transparency would not conflict with the right to privacy and data protection. For example, the dissemination of information on health and on economically or socially disadvantaged beneficiaries of public allowances was prevented.

 

Problems caused by cyberbullying on social networks. On the occasion of the 2013 European Privacy Day, the DPA published a video on its website containing tips for the knowledgeable use of social networks. A letter was also sent to the Italian Ministry of Education to bring the growing problem of cyberbullying to its attention.

 

Confidentiality of taxpayers. In-depth prior checks were performed on the processing of data carried out by the Italian Revenue Agency for purposes of the so-called “Redditometro” (i.e., an income-meter tool). The DPA set forth various measures to be implemented in order to address the many critical issues that were found. These related to, among other things, the quality and accuracy of the data used by the Italian Revenue Agency, the estimated expenses incurred by each taxpayer depending on multifarious lifestyle components, as well as the information to be provided to taxpayers.

 

Mobile payments. The DPA launched a public consultation on the processing of personal data performed in connection with payments made through smartphones and tablets and, more broadly, through remote mobile payment services (the DPA has recently adopted a resolution on this matter which takes into account the outcome of the public consultation).

 

Use of biometric data. Significant actions were taken to regulate the use of the biometric signature in banks and the use of fingerprints in the workplace. The DPA found that the use of biometrics to check the attendance of teachers and administrative staff in several schools was disproportionate, also in accordance with the principles set out by the Article 29 Data Protection Working Party’s opinion on developments in biometric technologies.

 

Protection of minors in the media and on the internet. The use of webcams in a nursery school was banned in order to protect children’s privacy, the unfettered development of their personality, unrestrained relationships with their teachers and freedom of teaching.

 

Protection of data used for justice purposes. Measures and arrangements were made to strengthen the security of any personal data collected and used as part of interception activities carried out by the Telecommunications Interception Centres (“Centri Intercettazioni Telecomunicazioni”), which are attached to every prosecuting office in Italy, as well as by police offices tasked with performing interceptions for judicial authorities.

 

Video surveillance. Based on spot checks, the DPA discovered several instances of unlawful processing of employees’ and customers’ data by department stores using video surveillance. However, a longer retention period for video surveillance images collected in some building yards and storage areas set up in Pompeii was approved with the objective of preventing mafia-related activities. Furthermore, the DPA required health care districts that had installed video surveillance equipment in the restrooms of their facilities, for the purpose of ruling out drug addiction cases, to take measures and precautions to protect the privacy of any individual whose urine sample was being taken.

 

Unsolicited promotional calls. Inspections and injunctions against IT companies specialized in database services were carried out to counteract unregulated telemarketing and unsolicited marketing. Hefty fines were imposed where these companies had failed to comply with previous orders. Moreover, automated pre-recorded calls to customers for debt collection purposes were banned. Other developments related to telemarketing (or customer care) activities concerned call centers located in third countries without data protection levels adequate to EU standards. Measures such as the obligation to provide information and to notify the DPA in advance about the call centers relied upon enable the DPA to assess the transfer of personal data outside the EU.

 

Marketing and spam. Guidelines were adopted on marketing and on countering spam, with special emphasis on the new frontiers of spamming such as social spam (via social network sites) or spam based on viral (or targeted) marketing. A video tutorial was also made available on the DPA’s website (named “Spam: how you can defend yourself”).

 

Consent for direct marketing. The DPA adopted a general resolution providing clarifications on the consent requirement in case of processing of personal data for direct marketing purposes. In particular, the DPA made clear that a data controller obtaining a data subject’s consent for direct marketing purposes through automated mechanisms may also process this data according to traditional/non-automated mechanisms (e.g., by post or operator-assisted calls), unless the data subject objects, also in part, to this processing, provided that other requirements set forth by the resolution are met.

 

Consumer rights. Two banks were allowed to equip their financial promoters with tablets that could perform an analysis of the signature of any customer entering into financial agreements in electronic format. However, the companies involved in enabling and managing both systems were required to take special measures to protect the data they collected. Additionally, measures were put in place to give bank customers the option of signing such agreements through conventional mechanisms as well.

 

Data retention of telephone traffic data. With the help of the tax police, the DPA performed inspections on telephone companies and internet service providers to verify compliance with the law provisions on internet and telephone traffic data retention. Sanctions were imposed in cases of non-compliance with previous orders by the DPA.

 

Data breach notification. The DPA adopted a resolution on the notification of personal data breaches, providing guidance on who is required to fulfill the relevant obligations, what measures could ensure minimum common security standards, and the timeline and content of the notification.

 

2. A few Figures

The DPA adopted 606 decisions in 2013 (almost 38% more than in 2012).

 

The number of on-the-spot inspections increased by 4% compared to 2012, to a total of 411. The inspections concerned, in particular, call centers and unsolicited telemarketing; mobile payment services; profiling; data breaches; the tax revenue database; consumer credit; credit bureaus; and the information system of Italy’s social security agency (INPS).

 

Interestingly, the number of breaches of the Italian data protection law also increased, with 850 breaches found by the DPA compared to 580 in 2012 (i.e., 47% more). 56% of the breaches concerned the failure to provide adequate information to data subjects. Other breaches involved processing without data subjects’ consent (179 cases); failure to adopt security measures (24 cases); breach of telemarketing rules (19 cases); failure to notify processing operations to the DPA (12 cases); etc.

 

The fines levied by way of administrative sanctions amounted to over €4 million.

 

In 71 cases, the DPA referred matters to the criminal authorities, in particular in relation to failures to adopt security measures to protect personal data.
