Synthetic Data: The Next Solution for Data Privacy?

Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


One contentious point from the Bracing for Impact session was synthetic data’s potential to solve the privacy concerns surrounding the datasets needed to train AI algorithms. In light of its increasing popularity, I will explore the benefits and dangers of this potential solution.

Concept

The data privacy concern that synthetic data aims to address is the same one that traditional anonymization techniques target: preventing anonymized data from being re-identified, without reducing data utility. This is distinct from data augmentation, the process of adding new data to an existing real-world dataset in order to provide more training data, which could include rotating images or combining two images to create a new one. Data augmentation is typically not useful in the privacy context.

In a blog post, the Office of the Privacy Commissioner of Canada (“OPC”) describes synthetic data as “fake data produced by an algorithm whose goal is to retain the same statistical properties as some real data, but with no one-to-one mapping between records in the synthetic data and the real data.” Synthetic data is produced by feeding real-world source data through a generative statistical model, whose output is evaluated for statistical similarity to the source alongside privacy metrics. Critically, there is no need to remove quasi-identifying data, that is, data vulnerable to de-anonymization. This results in more complete datasets.
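To make the pipeline concrete, here is a minimal sketch in Python, assuming purely numeric records and using a Gaussian mixture as the generative model; real synthetic data systems use far more sophisticated generators and formal privacy evaluations, and every number below is made up.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy "real" dataset (all numbers made up): age, income, clinic visits.
rng = np.random.default_rng(0)
real = np.column_stack([
    rng.normal(45, 12, 1000),        # age
    rng.lognormal(10.5, 0.4, 1000),  # income
    rng.poisson(3, 1000),            # clinic visits
])

# Fit a generative model to the joint distribution of the source data.
model = GaussianMixture(n_components=5, random_state=0).fit(real)

# Sample synthetic records: similar statistical shape, but no
# one-to-one mapping back to any real individual.
synthetic, _ = model.sample(1000)

# Sanity check: the marginal statistics should roughly match.
print(real.mean(axis=0))
print(synthetic.mean(axis=0))
```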

Benefits

Synthetic data generation is a highly automated process that provides protection from re-identification. This results in datasets that can be readily shared between AI developers without the dangers of privacy concerns. There are also substantial cost savings: one synthetic data company founder estimated that “a single image that could cost $6 from a labeling service can be artificially generated for six cents.” Synthetic data can also be manufactured to reduce bias by deliberately including a wide variety of rare but crucial edge-cases. Nvidia uses machine vision for autonomous vehicles as its example, but I think this concept should translate to improving representation of marginalized and under-represented groups in large healthcare or facial recognition datasets. Many of the Bracing for Impact panelists shared this concern.

Dangers

The OPC notes in their blog many issues and concerns, particularly regarding re-identification. This is especially true if the synthetic data is not generated with sufficient care and the “generative model learns the statistical properties of the source data too closely or too exactly”. In other words, if it “overfits” the data, then the synthetic data will simply replicate the source data, making re-identification easy. There is also concern with membership inference, where the mere fact that an individual’s data exists in the source dataset is an inherent risk. One study also demonstrated that “synthetic data does not provide a better tradeoff between privacy and utility than traditional anonymization techniques” and that “the privacy-utility tradeoff of synthetic data publishing is hard to predict.” This indicates that the characterization of synthetic data as a “silver bullet” is likely overselling its capabilities.
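One rough diagnostic for the overfitting risk described above is to measure how close synthetic records sit to real ones. A minimal sketch, assuming numeric records as in the earlier example; production privacy audits, including membership inference testing, are considerably more involved.

```python
import numpy as np
from scipy.spatial import cKDTree

def memorization_rate(real, synthetic, tol=1e-6):
    """Fraction of synthetic records that (near-)duplicate a real record.

    A high rate suggests the generator overfit its source data,
    which makes re-identification easy.
    """
    tree = cKDTree(real)                   # index the real records
    dist, _ = tree.query(synthetic, k=1)   # distance to nearest real record
    return float(np.mean(dist < tol))
```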

Implementations

Nvidia is using synthetic data in computer vision, but its primary purpose there is not privacy, which shows that the technology has other important functions. Synthetic data platforms are also emerging in healthcare, and this is only the beginning, with much wider adoption predicted.

Conclusion

Synthetic data has the potential to be highly beneficial, as it may be the answer to the many challenges AI developers face in sharing sensitive data. However, like many developments in AI technology, it requires caution and careful implementation to be effective and is potentially dangerous if relied upon haphazardly.

AI in Healthcare: Application in Medical Imaging


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


This past summer, I had the privilege, as my final act as a graduate student, to attend a major magnetic resonance imaging (MRI) conference in London, UK (ISMRM). At this conference, GE Healthcare used its plenary session to showcase its AI offerings. The other major MRI manufacturers also have AI suites, and computed tomography (CT) had joined the AI party even earlier than MRI, with multiple vendors offering AI-based products. The widespread adoption of AI in medical imaging products is significant because it is one of the first commercial applications of AI in healthcare.

What are MRI and CT?

MRI and CT are the workhorses of most hospitals’ radiology departments. CT and MRI both allow for a 3D image to be taken of internal anatomy, making them invaluable for diagnosing many diseases. Unfortunately, they both have at least one critical downside. CT is an extension of x-ray and thus exposes patients to ionizing radiation, with a CT image often depositing more than 10x the effective radiation dose of an x-ray image. MRI is lauded for, among other benefits, avoiding this radiation; however, MRI is both expensive to run and comparatively very time-consuming.

How does AI come into play?

The primary goal of AI in MRI and CT applications is mitigating the downsides: radiation dose in CT, and scan time in MRI. In both cases, this goal is achieved by “training” an AI through machine learning – or, more specifically, deep learning algorithms – by feeding it an enormous amount of data consisting of previously acquired images. A trained AI allows MRI and CT to acquire less data, with the AI filling in the data shortfall – almost analogous to the Hollywood idea of zooming in on a pixelated picture and seeing a clear image. Acquiring less data means fewer views in CT, leading to a lower radiation dose, and shorter scan times in MRI. The resulting AI-enhanced images are used for diagnostic purposes in the same way that conventionally acquired images are.
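To illustrate the “fill in the shortfall” idea, here is a toy sketch assuming PyTorch and synthetic images: a small network learns to restore images in which half the rows were never acquired. Commercial reconstruction products work on raw k-space or sinogram data with far more elaborate models; nothing here reflects any vendor’s actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy data: "fully sampled" images are smooth random fields; the
# undersampled version is missing every other row, mimicking a scan
# that acquires less data.
def make_batch(n=16, size=64):
    full = F.avg_pool2d(torch.randn(n, 1, size, size), 5, stride=1, padding=2)
    under = full.clone()
    under[:, :, ::2, :] = 0.0          # rows that were never acquired
    return under, full

# A small CNN learns to fill in the shortfall.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    under, full = make_batch()
    loss = F.mse_loss(net(under), full)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final reconstruction loss: {loss.item():.4f}")
```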

Why does it matter?

Directly related to healthcare, Canada faces a substantial backlog of medical imaging, and any improvements to MRI and CT will aid in alleviating that pileup to some extent. It is also significant that radiologists and medical physicists approve of AI in diagnostic imaging; there may not be any group in the medical field more qualified to have at least some grasp of the underlying mathematics (disclaimer: I do not fully understand even the title of the thesis I have in mind). It also represents one of the first applications of AI that directly affects medical decisions, which may open the door for other AI applications in healthcare. Lastly, using AI in a commercially available product is interesting on its own – the pathway toward deploying AI in such a high-stakes application may be a useful example for future AI-based products.

Osgoode Emerging Technology Association Panel with Professors Allan Hutchinson and Jon Penney


Source: Screenshot of the Zoom Panel


Natalie Bravo is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.

On November 24, 2021, the Osgoode Emerging Technology Association (OETA) hosted an interactive panel discussion with Professors Allan Hutchinson and Jon Penney, led by OETA president and co-founder Murad Wancho.

Growing out of the Osgoode Fintech & Blockchain Association, OETA was founded in Spring 2020 and has quickly grown in popularity. As an OETA executive, I am honoured to share details of this informative event delivered by my dedicated colleagues.

Despite the fast-approaching exam season, the virtual event had an excellent turn-out of students and legal community members. The panel garnered traction preceding the event, with participants eagerly sending in questions on topics ranging from regulatory concerns to the future of non-fungible tokens (“NFTs”). Wancho began by thanking participants and snapping a lovely photo of everyone in the call (as seen above). Everyone rushed to turn on their cameras in time; I regrettably was too slow (or maybe Wancho was too fast!). This spontaneous moment of collaboration and engagement served as a fun icebreaker before introducing the esteemed guests.

Professor Hutchinson is an internationally renowned legal theorist and has been an Osgoode faculty member since 1982. His research interests include politics, constitutional law, and torts, and he teaches a wide range of courses. Hutchinson has also authored a book on the intersection between cryptocurrencies and the law.

Professor Penney has been at Osgoode since 2020. He is a research affiliate at the Berkman Klein Center for Internet and Society and a Research Fellow at the Citizen Lab, based at the University of Toronto. His research lies at the intersection of law, technology, and human rights. Penney also recently designed, and is currently instructing, a law and technology course at Osgoode.

Cryptocurrency was the main topic of interest, along with the ever-prevailing questions surrounding its future. This form of decentralized digital currency has been around for over a decade but is still growing in mainstream popularity. With a show of hands, over half of the participants expressed owning or wanting to own some cryptocurrency.

Hutchinson shared details on his upcoming book and his thoughts on regulation. While no one can accurately predict the future of cryptocurrency, Hutchinson discussed the merit in theorizing unique regulatory approaches to the decentralized system(s); self-regulation was of notable interest. Many participants asked whether further external regulations would detrimentally affect the appeal and use of cryptocurrencies. The implications of overarching regulatory actions, such as securities or tax measures, are looming realities for NFTs and cryptocurrency, as we are now witnessing in multiple regions. Penney shared the sentiment that cryptocurrency is a speculative asset that likely cannot succeed without further mainstream support and usage. He also explored the environmental impacts of cryptocurrency, as crypto-mining utilizes large amounts of energy. Remarking on China’s recent crackdown on mining, Penney expressed that some major cryptocurrency players have simply migrated their mining practices elsewhere.

The conversation then shifted to career guidance within the legal technology field. This discussion was particularly interesting for 1L students developing their legal paths. Both professors offered pertinent advice on professional development, emphasizing networking. Penney highlighted the importance of reaching out to tech companies for any legal work available. Companies are increasingly incorporating emerging technologies within their operations, such as machine learning algorithms, which may require legal expertise to ensure compliance. As innovative technologies emerge, so will the demand for technology lawyers.

Following the event, Professor Penney added, “In the coming years, emerging technologies like cryptocurrency and NFTs will pose a range of complex challenges for law, policy, and broader society. This was an excellent panel discussion, and OETA is showing great leadership in bringing students and faculty together to discuss and debate.”

While no one can ever fully predict the future of cryptocurrency and NFTs, both Penney and Hutchinson provided insightful perspectives, and both have extensive work related to technology that can help us theorize when looking forward. The panel elicited strong engagement and interactive feedback from participants. It was refreshing to learn more about technology law outside of the classroom setting and see different perspectives and interests within the field. I encourage everyone to explore the work of both professors and follow OETA’s social media channels for more information about our next event!


Privacy Plight: Apple’s Proposed Changes & Consumer Pushback


Photo by Jimmy Jin

Natalie Bravo is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.

In August, Apple made headlines by announcing a set of new child safety features. These new features are purported to expand protections for children through modified communication tools, on-device algorithm learning within Messages, iCloud Photos, and Siri, and updated Search guidance. Although protecting children as a vulnerable group should be of utmost importance to all, many security experts find some of these proposed changes troubling, as they may undermine the company’s longstanding reputation in privacy preservation and enable future security breaches.

Over the years, Apple has cultivated a strong reputation as a privacy-focused company. One of their core values and public commitments is that privacy is a fundamental human right. After all, their security and privacy designs are so powerful that Apple allegedly can’t access encrypted user data at all. In 2015, Apple CEO Tim Cook stated that while issues such as national security are important, Apple would not implement any technology which malicious actors could misuse as a backdoor to encrypted user data. Now, in 2021, Apple’s ironclad encrypted system has one exception.

As one of the changes, Apple intends to introduce photo-scanning technology for all users to identify any Child Sexual Abuse Material (CSAM). This well-intentioned technology is already widely used online to identify known illegal materials, including terrorist propaganda and other violent content. Some consumers worry that all their private images will be scanned in search of illegal content; however, Apple is not proposing that. The technology scans for the “hash” of a file and matches it against a database of known hashes. If a certain threshold of known CSAM is found, barring false positives, then law enforcement is contacted. Strangely enough, Apple has noted that users can opt to disable photo uploads to iCloud, expressing that CSAM is only identified within their servers, and not on users’ devices. Some experts find this distinction unconvincing.
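Apple’s actual NeuralHash algorithm is proprietary; as a rough illustration of hash-based matching generally, here is a minimal sketch using a simple “average hash,” where visually similar images produce similar bit patterns that can be compared against a database within a distance threshold.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Tiny perceptual hash: block-average the image down to an 8x8
    grid, then threshold at the mean. Unlike cryptographic hashes,
    visually similar images produce similar bit patterns."""
    h, w = img.shape
    img = img[:h - h % hash_size, :w - w % hash_size]  # make divisible
    bh, bw = img.shape[0] // hash_size, img.shape[1] // hash_size
    blocks = img.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def matches_known(img, known_hashes, max_hamming=5):
    """Flag an image whose hash falls within a Hamming-distance
    threshold of any entry in a database of known-material hashes."""
    h = average_hash(img)
    return any(int(np.sum(h != k)) <= max_hamming for k in known_hashes)

# A lightly altered copy still matches; unrelated images should not.
rng = np.random.default_rng(0)
original = rng.random((64, 64))
copy = original + rng.normal(0, 0.001, original.shape)   # slight noise
print(matches_known(copy, [average_hash(original)]))     # near-duplicates match
```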

Some security experts expressed strong reservations over the modified communication tools for children. Apple alleges that device software will detect any explicit content (not hashes) within a minor’s Messages conversations – a feature that can be turned on or off by a guardian. This will alert a parent if their minor has received any image that is flagged as explicit. This seems appropriate to allow some supervision to protect vulnerable children from online predators; however, the algorithms currently used to detect explicit images are notoriously unreliable. It is widely known that benevolent, non-sexual content is consistently misflagged by such systems. To add to this, child advocates worry about the possibility of minors in abusive households being monitored through such a faulty algorithm.

Though this kind of scanning is not a new concept, these changes will suddenly affect billions of consumers. It’s been reported that when a child, like any other user, experiences negative behaviour online, they should be able to report it; however, there is currently no way to report messages within Apple’s Messages application. After causing a tremendous stir in both the privacy and child advocacy communities, Apple clarified that Messages scanning would only apply to those under 13, not teenagers, and has attempted to offer limited clarity on the new technologies.

Despite the changes, concerns remain. Children need to be protected and prioritized in terms of technology experience, but their privacy matters too. It will be interesting to see the roll-out of Apple’s polarizing changes, particularly how they will affect Apple’s reputation and ecosystem security, and whether Apple will introduce any more changes moving forward as it responds to community concerns.

Copilot or Co-Conspirator? Is GitHub’s New Feature a Copyright Infringer?



Claire Wortsman is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.

Is GitHub Copilot a Copyright Infringer?

At the end of June, GitHub CEO Nat Friedman announced the launch of a technical preview of GitHub Copilot. Much like the predictive text and search features we see in messaging, email applications, and search engines, Copilot makes instant suggestions to users as they type. These suggestions can range from a line of code to an entire function.

The Guardian’s UK Technology Editor Alex Hern found a couple of simple tasks that programmers can now hand off to Copilot, including sending a valid request to Twitter’s API (application programming interface) and pulling the time in hours and minutes from a system clock. Although the big-eyed Copilot mascot may look innocent, Hern also identified some functions that are a little less helpful. These range from allegedly violating copyright (the subject of much discussion on forums) to leaking secrets (i.e., providing access to an app’s otherwise inaccessible databases).

On infringing copyright, GitHub’s staff machine-learning engineer Albert Ziegler published a paper assuring users that while “Copilot can quote a body of code verbatim … it rarely does so, and when it does, it mostly quotes code that everybody quotes, and mostly at the beginning of a file, as if to break the ice.”

While Ziegler’s use of the word “mostly” may not reassure those fearing copyright infringement, his paper highlights two details that might. First, verbatim code is suggested only rarely. Second, GitHub plans to integrate a duplication search into the user interface. A duplication search would identify overlap with Copilot’s training set to flag instances of duplicated snippets of code and identify where they originate from.
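GitHub has not published how its duplication search will work; here is a minimal sketch of one plausible approach, flagging shared runs of tokens between a suggestion and training files (all names below are hypothetical).

```python
def ngrams(tokens, n=10):
    """All length-n token windows in a sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def find_duplication(suggestion, training_files, n=10):
    """Return the training files that share a long token sequence with
    a suggestion -- a crude stand-in for a duplication search."""
    sugg = ngrams(suggestion.split(), n)
    return [path for path, text in training_files.items()
            if sugg & ngrams(text.split(), n)]

# Hypothetical usage: map file paths to their contents, then check a
# Copilot-style suggestion against them.
corpus = {"utils.py": "def add(a, b):\n    return a + b\n" * 20}
print(find_duplication("def add(a, b): return a + b", corpus, n=5))
```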

Intellectual property law professor Andres Guadamuz argues that Copilot, as it stands, does not infringe copyright. This is because Copilot would copy small snippets of commonly used code which are unlikely to amount to substantial reproduction or meet the threshold of originality necessary to be protected under copyright. Guadamuz explains that machine learning (ML) training is “increasingly considered to be fair use in the US and fair dealing under data mining exceptions in other countries.”

On the question of which country’s law governs GitHub’s activities, Internet, telecoms, and tech lawyer Neil Brown sees “a reasonable chance that GitHub will claim that its service is provided by GitHub, Inc., which is established in the USA, such that [any other country’s] law is irrelevant.”

What About Copyleft?

Some licensing agreements contain “copyleft” obligations. Copyleft allows for the use, modification, and distribution of a work, or a portion of it, on the condition that the resulting work is bound by the same license. Some developers disapprove of code licensed under GNU’s General Public License (GPL) being included in Copilot’s training set, given that Copilot is a commercial work and the GPL has copyleft obligations. However, Guadamuz explains that under GPL v3, this obligation only arises where the copying is substantial enough to warrant copyright permission. As previously mentioned, Copilot’s activities likely do not meet this standard.

What Comes Next?

Profiting off the work of others without remuneration or their consent goes against the spirit of copyright protection. But what if using the work of others to train a commercial product results in a tool like Copilot that lowers barriers to coding and permits a wider audience to engage in the creation process? After all, encouraging innovation should be one of the primary functions of any copyright regime. The opinions, and possible legal decisions, that follow in the wake of Copilot’s launch, and the launch of similar ML features, will reveal what we value about copyright law and the direction it takes as technological complications arise.

The buzz surrounding Copilot is not the first time an autocomplete feature has landed a company in hot water. In a 2018 Australian High Court case, the plaintiff argued that Google’s autocomplete predictive search suggestions were defamatory. Although no final conclusion was reached, I anticipate that we will see more definitive cases emerge as autocomplete and predictive text tools, whether suggesting text or code, continue to develop and more instances of potential defamation and IP infringement take place.

How Machine Learning Could Play a Key Role in the Diagnosis of Rare Genetic Diseases

Machine learning, as a subset of artificial intelligence (AI), has increasingly become a subject of interest for many industries, including healthcare. For instance, AI and machine learning can play a key role in the diagnosis of rare medical conditions. In the context of medicine and disease diagnosis, AI and machine learning use large sets of data to train algorithms to recognize patterns, which can then be applied to new input in order to make a prediction, such as a disease diagnosis. In the context of rare genetic diseases, machine learning would enable healthcare professionals to sift through large volumes of research and medical literature in order to draw conclusions that would have taken them years to reach had they gone through the research manually.
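As a toy illustration of that workflow, here is a minimal sketch assuming scikit-learn and entirely synthetic patient data; real diagnostic systems train on clinical records and are validated far more rigorously.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data: each row is a patient, columns are symptom/lab
# features, and the label marks a rare diagnosis.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 20))
y = ((X[:, 0] + X[:, 3]) > 2.5).astype(int)   # rare positive class (~4%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" matters because rare diseases are, by
# definition, heavily outnumbered in any training set.
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

# Predicted probabilities can flag patients for specialist review.
probs = clf.predict_proba(X_te)[:, 1]
print("patients flagged for review:", int((probs > 0.5).sum()))
```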

Different countries define a ‘rare’ disease differently. In some jurisdictions, a rare disease is defined as a “condition affecting fewer than 1 person within 2000 in their lifetime.” Thousands of rare diseases have been identified, and with the discovery of new genetic disorders every year, many people are projected to be affected by a rare disease in their lifetime. Although there are policy incentives, such as orphan drug legislation, for pharmaceutical companies to invest in research and development of treatments for rare diseases that affect small populations, such research and development may not be profitable. Many conditions therefore go undiagnosed, and many individuals end up having to live with the symptoms of their chronic diseases for years. Additionally, rare genetic conditions are extremely difficult to diagnose, since even the most experienced physicians may never come across a single patient with one during their years of practice.

Patients with rare genetic diseases can greatly benefit from the implementation of machine learning in healthcare. AI’s ability to process large amounts of data and extrapolate from them in a meaningful way, in order to categorize patients or reach new diagnoses, should give sufferers of rare diseases hope that, with the help of rapidly improving technology, their conditions may be better understood and treated in the near future. There are several initiatives worldwide, in Germany and the United States among other countries, aimed at gathering information about rare diseases and making it accessible to healthcare professionals for use in diagnosis.

Although AI in the context of healthcare certainly offers significant benefits, it is by no means a ‘cure-all’ for the immense challenges of disease diagnosis. After all, the quality of the output from a computer algorithm, whether coded by a programmer or ‘learned’ by the machine itself, is only as good as the input. A study published by the Journal of Rare Diseases in 2020 found that not all rare diseases are studied to an equal extent. Rare neurologic, rheumatologic, cardiac, and gastroenterological diseases were more broadly studied and hence appeared in the literature more frequently. Rare skin diseases, on the other hand, were highly understudied, and it was difficult for a computer to form meaningful algorithms from the limited data available in order to better understand the conditions and apply existing expertise to new cases. That is to say, unless more funding, effort, and incentives are invested in the study of rare genetic diseases, no amount of help from AI can save patients and improve their quality of life. As the President and CEO of Rady Children’s Institute for Genomic Medicine put it regarding artificial intelligence and medicine, “Patient care will always begin and end with the doctor.” Technology will only help professionals in connecting the ‘dots’ where there is existing data and research.

Written by Bonnie Hassanzadeh, IPilogue editor and Clinic Fellow at Osgoode Innovation Clinic.

Moral Ethics of Artificial Intelligence Decision-Making – Who Should be Harmed and Who is Held Responsible?
As autonomous vehicles begin their test runs and potential commercial debuts, new liability and ethical questions arise. Unlike other computer algorithms already available to the public, a fully automated car divorces the authority of the device from the driver, instead vesting all power and decision-making in the car and its software. Accidents may become less accidental and more preordained in their execution. While human negligence accounts for a substantial death toll from vehicular accidents in Canada alone, and automated cars will supposedly decrease this toll considerably, the approach to delineating liability fundamentally changes. If the hypothetical drop in vehicular mortality is to be believed, then the question is no longer ‘if’ these technological advancements should proceed, but ‘how’ they should be integrated.

While theoretically superior in terms of public safety, autonomous cars and advanced algorithms bring up a type of mechanical morality: decisions to be made between the life and death of pedestrians or occupants, or property damage. Determined by humans, these ethical ‘what-ifs’ translate into potentially tangible scenarios; extensive hypotheticals and their implications are not a question of existence, but of approach. According to Noah J Goodall, a University of Virginia research scientist in transportation, coded ethics will face “situations that the programmers often will not have considered [and] the public doesn’t expect superhuman wisdom but rather rational justification,” yet what counts as ‘ethical’ is subjective. Factors in ethics go beyond the death toll. Janet Fleetwood, from the Department of Community Health and Prevention, Dornsife School of Public Health, Drexel University, outlines how age, assigned societal value, injuries versus fatalities, future consequences on quality of life, personal risk, and the moral weight of killing against a ‘passive’ death all contribute to the problem.

The legal framework for autonomous vehicles does not yet exist, but the laws and policies that govern automated cars must not derive solely from their creators. Doing so would place considerable responsibility on the programmers of automated vehicles to ensure their control algorithms collectively produce actions that are legally and ethically acceptable to humans. The ethical decisions must be uniform for the sake of simplifying liability and establishing an optimal process for future programming. Negligence, the current standard governing liability for car accidents, will expand to the field of programming. Consequently, it is important to clearly delineate responsibility in cases of accidents.

Ethics prescribed solely by manufacturers may result in an arms race to best serve the consumer’s interests. The technique of marketing reveals itself in the ethical ambiguity of preferential treatment of the consumer; instead of being ethically neutral, an autonomous car may be marketed to show strict deference to the consumer, diluting ethics into a protectionist game. Conversely, programming an automated car to slavishly follow the law might also result in dangerous and unforeseen consequences. Extensive research and decision-making experiments explored in a non-profit model may simplify the scope of the problem; instead of imagining how to market the product, discussions over autonomous cars become purely about performance. Consideration from multiple stakeholders, such as consumers, corporations, government institutions, and organizations, may be necessary to construct the clear and stringent ethical guidelines needed to implement automated vehicles into society, and reaching a uniform conclusion will require extensive effort. However, ethical dilemmas regarding the control algorithms that determine the actions of automated vehicles will inevitably be subject to philosophical issues such as “the trolley problem”: a no-win hypothetical situation in which a person witnessing a runaway trolley could either do nothing and allow it to hit several people or, by pulling a lever, divert it, killing only one person. In such circumstances, there is simply no right answer, and this makes ethical guidelines difficult to construct.

While one may propose that a utilitarian approach be adopted for the sake of simplicity, questions would undoubtedly arise, such as whether humanity would be comfortable having a computer decide the fate of a life, and what happens if the machine’s philosophical understanding extends beyond dire incidents. Professor Azim Shariff of the University of California co-authored a study that found that respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible, regardless of whether they were passengers or people outside of the car. However, this raises the question: would a customer buy a car in which they and their family member(s) would be sacrificed for the benefit of the public?

To delineate the complexity of the situation, Fleetwood describes a study that posed multiple hypothetical situations regarding moral preferences to participants. The study concluded that 76% of participants favoured a utilitarian approach in which the maximum number of lives were saved, yet participants were reluctant to purchase a vehicle that would sacrifice its own passengers. Understandably, consumers maintain a bias towards their own life; few would desire a product that chooses to sacrifice its owner.

Perhaps the legal ethics of automated vehicles should rest not in human management but in a machine’s own ability to learn. This is the province of machine learning, where the program gives systems the ability to automatically learn and improve from experience. Machine learning focuses on developing computer programs that can access data and use it to learn for themselves, without human intervention. Static programming arrives at pre-determined ethical conclusions, while machine learning generates its own decisions, distinct from purely human-determined ethics. While introducing an objective or impartial arbiter to complex situations would be desirable, questions arise about how accurate its judgments would be. Some scholars propose modelling human behaviour to ensure that cars, rather than behaving better, behave exactly like us, and thus impulsively rather than rationally. One study by Leon R. Sütfeld et al. states that “simple models based on one-dimensional value-of-life scales are suited to describe human ethical behaviour” in these circumstances, and as such would be preferable to pre-programmed decision-making criteria, which might ultimately appear too complex, insufficiently transparent, and difficult to predict. This machine-learning approach of mimicking human decision-making appears oriented towards an essential aspect of social acceptance: the conformity of robots with human behaviour. Instead of being regulated through ambiguous ethical guidelines, automated vehicles would base their decisions on humanistic thinking while still lowering the accident rate compared to human drivers. Nevertheless, other issues remain, including whether this technology is achievable and who should be held responsible for automated incidents in the context of machine learning. Regardless of whether the guidelines for automated vehicles arise from policy regulators or machine learning, society needs to accept that autonomous cars will debut on the market in the coming years, and work towards addressing the floodgate of concerns for wider applications, including life-or-death accidents.
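As a rough illustration of the “one-dimensional value-of-life scale” idea from Sütfeld et al., here is a minimal sketch; the obstacle values and probabilities below are invented for the example and do not come from the study.

```python
# Hypothetical value-of-life model: each obstacle class gets a scalar
# that would, in principle, be learned from human decisions in
# simulated dilemmas (all numbers made up).
VALUE = {"pedestrian": 1.0, "cyclist": 0.9, "dog": 0.3, "empty_lane": 0.0}

def choose_trajectory(options):
    """Pick the trajectory whose expected harm (hit probability times
    value-of-life, summed over obstacles) is lowest."""
    def harm(traj):
        return sum(p * VALUE[obj] for obj, p in traj)
    return min(options, key=harm)

# Each trajectory lists (obstacle, probability of hitting it).
left = [("pedestrian", 0.1)]
right = [("dog", 0.8)]
print(choose_trajectory([left, right]))   # lower expected harm wins
```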

Rui Shen is an IPilogue Editor and a JD Candidate at Osgoode Hall Law School.

AI for Social Good: Becoming Aware of Different Interests
On February 2, 2018, IP Osgoode, along with its partners, the York Centre for Public Policy & Law and the Zvi Meitar Institute for Legal Implications of Emerging Technologies, hosted a conference entitled “Bracing for Impact – The Artificial Intelligence Challenge (A Road Map for AI Governance in Canada)”.

The conference brought together experts from a broad range of disciplines to discuss artificial intelligence (AI) innovation and the impact machine learning will have on our social, moral, and legal norms. Throughout the day, tough questions were asked and critical issues about commercialization, cybersecurity, and the application of AI for social good were discussed. In this blog post, I will share a piece of this journey with you and focus on the last panel, entitled “AI for Social Good.”

AI in the Public Sector & Biases

Our journey into AI started with Dr. Nonnecke’s presentation on the uses of AI in the public sector, the power of various AI applications to promote equity, and the biases we need to be aware of in designing algorithms. Dr. Nonnecke brought the audience’s attention to the rapid growth of metropolitan centers. According to UN projections, cities will see a huge influx of population over the next 30 years, with 66% of the world’s population living in cities by 2050, compared to 54% in 2014. This will disrupt the status quo and dramatically change how our cities function, explained Dr. Nonnecke. In anticipation of this rapid growth, the public sector is already looking at cognitive technologies that could eventually revolutionize every facet of public services and government operations, including oversight, law enforcement, labour, and human rights.

Dr. Nonnecke acknowledged AI’s promise to promote efficiency, effectiveness, and equity. AI can be used, for example, to locate human trafficking hotspots, mitigate biases in job application processes, and detect discrimination in law enforcement. Although AI has the power to promote equity, this power is not an inherent one. AI is as prone to bias as the humans who design its algorithms. Given that algorithms and machine learning (ML) are increasingly used to make decisions, developers need to be aware of their human fallacies that can easily make their way into ML in the form of bias in data and prediction.

Dr. Nonnecke also stressed the importance of ensuring inclusiveness and equity in all stages of AI development. She cautioned that, if we want a good design and an unbiased outcome, we need non-heterogeneous groups, not only in the purview of technical ability, but also in every interdisciplinary team involved in the development of AI from engineers to legal scholars.

Designing for the Average

Big Data inherits methods from quantitative research, where outliers (or “noise”) in data are eliminated to find dominant patterns and generalizable findings. In effect, this method “normalizes” the data that is used to recognize speech, faces, and illnesses, or to predict loan and credit worthiness, academic potential, and future employment performance.

As Prof. Treviranus pointed out, AI designs can easily fail to consider people who do not fit the “norm,” and when AI applications are offered to everyone but are designed with the average person in mind, this “normalization” of data becomes a large issue. She pointed out that we cannot rely on predictive models or be overconfident in statistical tests where the minority can eventually be discarded as “noise in the data.” Rather, we need to recognize diversity and rethink our methodologies with regard to the individuals at the margins. Although the audience was left with questions on how to tackle the potential biases of AI design towards the “average person,” the panelists drew everyone’s attention to the scary fact that as AI permeates our daily lives, the effect of serving the “average person” will lead to further marginalization and a widening disparity between those who fit the norm and those who, in one way or another, do not.
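A minimal sketch, with made-up numbers, of how a single model fit to pooled data serves the majority well and the “outlier” group poorly:

```python
import numpy as np

rng = np.random.default_rng(0)
# 95% of users come from group A, 5% from group B, and the two groups
# have opposite relationships between feature and outcome.
x_a, x_b = rng.normal(0, 1, 950), rng.normal(0, 1, 50)
y_a, y_b = 2 * x_a, -2 * x_b        # group B behaves "out of the norm"

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

# One model fit to the pooled data serves the "average person".
slope = (x * y).sum() / (x * x).sum()   # least squares through the origin

print("error for group A:", np.mean((slope * x_a - y_a) ** 2))  # small
print("error for group B:", np.mean((slope * x_b - y_b) ** 2))  # large
```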

Autonomous Cars for the Unreasonable Person

Traffic, congestion, and parking: situations that make any driver not want to drive. But what if you could sit back and read a newspaper on your way to work in the comfort of your own car and not have to deal with all that? Prof. Seidman, a proponent of autonomous cars, argued that we need to get rid of regular cars. He argued that despite our (over)confidence in our driving ability, it is difficult to find a “reasonable person” on the road; the anonymity drivers feel behind the wheel makes them feel less accountable for their risky behaviours. Citing the high number of fatalities from car accidents every day around the world, the economic cost of keeping a car that we only use for 10% of the day, and the amount of space wasted on parking (e.g., if we got rid of cars in the US, we would free up a territory the size of Sri Lanka currently used just for parking), Prof. Seidman argued that it does not make much economic sense to keep regular cars, and that if autonomous cars can alleviate even some of these burdens, we will see a huge economic improvement.

However, the promise of autonomous cars is tempered by a caution about some of the flaws of the technology as it currently stands. For example, Prof. Treviranus explained that in simulations involving autonomous cars, a pedestrian propelling backwards due to her disability is hit by the autonomous car because the technology failed to recognize the “out of the norm” movement.

Overcoming Algorithm Aversion

While some of the earlier panelists voiced their concern about an over-reliance on algorithms, Prof. Grossman argued that the problem is an under-reliance on algorithms. Prof. Grossman states that people distrust algorithms and hold them to a much higher standard. She voiced her concern that we will not reap the tremendous benefits of AI innovation because it is hard to get people to rely on algorithms, even though one of AI’s key attributes is its ability to learn. Given that we trust lawyers, doctors, and pilots with our lives, how can we justify our skepticism towards using algorithms that can be more accurate than humans? If there is even a chance to reduce hefty legal costs and improve access to justice, then why are we not relying on algorithms more often in the legal system? Prof. Grossman stated that in certain low-risk situations where using algorithms is the better and more logical alternative, we should be using them. So how do we alleviate this aversion to using algorithms? The research shows that to get people over this hump, we may need to sacrifice some of the efficacy of algorithms and give people back some level of control. Furthermore, it is critically important to have peer-reviewed research and scholarship on algorithms in order to give them credibility in the long run. In conclusion, Prof. Grossman suggested that we need to look at the psychological, social, and economic incentives, move away from the zero-sum game, and find ways to make this a win-win proposition for everyone in order to reap the benefits of AI.

After the closing remarks of the conference were delivered, attendees and panelists engaged in further discussions at a cocktail reception. By the end of the day-long conference, I believe we were all in agreement that algorithms make mistakes, just like humans. More importantly, the conference was a call for our nation to invest in AI research and uncover the key elements to sparking the next AI innovation wave and better understand the impact of human cognitive bias on AI.


Ekin Ober is an IPilogue Editor and a JD/MBA candidate at Osgoode Hall Law School and the Schulich School of Business.


Regulation by Machine: Prof. Benjamin Alarie on the Power of Machine Learning
Oliver Wendell Holmes, Jr. once described the law as nothing more than “prophecies of what the courts will do in fact.” If the practice of law is largely an exercise in fortune-telling, Benjamin Alarie believes that computers are very good at reading tea leaves.

Alarie is the Osler Chair in Business Law at the University of Toronto and the CEO of Blue J Legal, a company which develops software that uses machine learning to analyze and apply legal precedents. He believes that properly trained algorithms will make more timely, cost-effective, and accurate decisions than human regulators, lawyers, and judges.

Alarie’s recent presentation at Osgoode Hall Law School was based on a paper of his. In both the paper and the presentation, he used the example of classifying workers as either employees or independent contractors to argue not only for the possibility of computers applying existing legal precedents but also for allowing them to evolve the law. He asserts that computer algorithms could eliminate biases and resolve problems with the existing specification of the law.

Machine learning is a field of computer programming in which computers learn from data, rather than being explicitly programmed to produce a particular outcome. Instead of having each step in decision-making pre-determined by a programmer, the software analyzes existing data about similar decisions and uses statistical analysis to find patterns in that data. It then applies those patterns to unseen data to make predictions about that new data.

Alarie’s presentation addressed the classification of workers as either employees or contractors for income tax purposes. This is a common problem in both employment and tax law, for which the case law offers no single bright-line rule. Classification involves weighing a variety of factors and does not always have a single clear answer.

Alarie and his colleagues found hundreds of Tax Court of Canada decisions involving a worker’s classification. They divided the data into two sets. The larger subset was used to train their software, which determined what weight judges gave to each of the factors. The smaller subset was then used to test the software, by asking it to predict the result based on facts from cases it had not yet seen. It provided a prediction – either employee or contractor – and a percentage level of confidence in the prediction.

They then compared those predictions to the actual findings of the Tax Court in the tested cases. They reported that the software correctly predicted the Tax Court’s decision in over 90% of cases. The erroneous predictions came with low confidence ratings, indicating that the algorithm knows when it might not be correct.
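The actual system is proprietary, but the workflow described above (train on past cases, test on held-out ones, output a prediction with a confidence) can be sketched in a few lines; the factor encoding and data below are entirely invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical encoding of past Tax Court cases: each row holds factor
# scores (control, ownership of tools, chance of profit, risk of loss),
# and the label is the court's finding (1 = employee, 0 = contractor).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
weights = np.array([1.5, 0.8, -1.0, -0.6])            # made-up factor weights
y = ((X @ weights + rng.normal(0, 0.5, 400)) > 0).astype(int)

# Train on the larger subset, test on the held-out cases.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Each prediction comes with a confidence, mirroring the workflow above.
conf = model.predict_proba(X_te).max(axis=1)
acc = model.score(X_te, y_te)
print(f"accuracy: {acc:.2f}; least confident prediction: {conf.min():.2f}")
```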

Even more interesting than the legal prediction software itself were Alarie’s own predictions about how this kind of software will change the law itself.

The first step, which is already in progress, involves legal professionals using software to support their own research and the professional advice they offer clients. As algorithms become more reliable and engender greater trust from regulators and corporate users, their predictions will become the de facto standard for making a particular type of legal decision.

Furthermore, the predictions will be more consistent and cheaper to obtain than a trial decision. As well, predictions will be available ex ante, so users will be able to adjust their real world behaviour in order to align with the legal result they are seeking, rather than acting now and hoping that any future judgement will go their way.

Only in those fringe cases, where the algorithm cannot make a confident prediction, would an actual trial be necessary. The trial decisions in those cases can then be fed back into the algorithm to improve its accuracy in subsequent analyses.

Somewhere in the future, Alarie imagines a “legal singularity”: a time when statutes could be written to reference the results of particular machine learning algorithms as the de jure law of the land. The results might continue to evolve as a result of judicial review or legislative tweaking, but receiving a decision from an algorithm would have the same legal weight as receiving a decision from a trial judge.

A very active question period followed the presentation. Audience members questioned whether people would accept a decision-making process that has a known error rate. They also raised concerns about whether the algorithms would entrench systemic biases and legal errors that otherwise might be corrected in the case law.

Alarie envisions a technological future where computing power allows us to identify and overcome these problems. He also reminded the audience that we accept a significant level of error in a human-controlled legal system, and that we should not expect perfection from a computer-controlled system. It is enough that computer algorithms could substantially improve upon the performance of human decision makers.

All of which leaves only one outstanding question:

When computers take over the legal world, will any of us still have jobs?


Jacquilynne Schlesier is an IPilogue Editor and a JD Candidate at Osgoode Hall Law School.

