disruptive technology Archives - IPOsgoode

AI or Human Doctor?

This post is a response to the following question, initially posed to students in the Legal Values: Artificial Intelligence seminar:

“You are not feeling well and need to see a doctor. You have two options: (i) you can be treated by a human doctor, or (ii) you can be treated by an AI doctor, but not both. Which do you choose and why?”

I. Introduction

This question is so open-ended that a scope-narrowing discussion is needed. As such, I make three assumptions, all flowing from one overarching premise: that there exists a trade-off of capabilities between AI and humans, such that weighing their relative benefits is required to render a decision. The three assumptions are that AI doctors are technically superior, that humans possess some intangible positive factor, and that I have zero information regarding my affliction.

II. Setting the Stage with Necessary Assumptions

#1 – AI doctors have technical superiority

Current research suggests AI doctors are equal to or better than their human counterparts in diagnostic accuracy. For the purposes of this discussion, I will assume that AI doctors possess, across the board, technical superiority. Otherwise, what are they providing us that humans do not?

#2 – Humans possess some intangible factor of positive, albeit uncertain, value

For the question ‘AI or human?’ to be worth asking, there must be a trade-off. What advantage does the human possess? The intangible ‘human factor’, something that is undoubtedly beneficial in a patient-doctor relationship.

A common source of resistance to the use of AI in medical settings is “the belief that AI does not take into account one’s idiosyncratic characteristics and circumstances”. The more ‘unique’ one perceives herself to be, the stronger the preference will be for a human doctor. However, this reliance on ‘uniqueness’ is, at least in part, unjustified because there is a limit to how ‘unique’ medical conditions can be. To some extent (greater or lesser, depending on the specific ailment), sickness X afflicting person A is equivalent to sickness X in person B for diagnostic and treatment purposes. For example, a doctor does not need to know about a diabetic’s personality and emotions in order to create a treatment plan.

This trade-off then raises the question: what is the relative value of the ‘human factor’ versus the superior diagnostic ability of an AI in a doctor-patient setting? It depends on the patient’s specific needs.

#3 – I have zero information regarding my affliction (i.e., it can be benign or fatal)

The third assumption I must make involves the nature of the malady requiring medical intervention. In a scenario in which I have knowledge of my condition, my decision would be predicated on the following considerations. If there is a chance I could be dealing with a hard-to-detect, serious disease, the presence of the ‘human factor’ would be of limited value. In this case, what I need is the superior technical capabilities of the AI doctor.

On the other hand, I may be dealing with a condition that is mostly, or even exclusively, psychological in nature. I imagine that a human doctor would be desirable in dealing with a patient’s depression, anxiety, or any other condition in which the patient may derive some benefit from ‘being heard’. In particular, the need for human intervention (as opposed to a robot therapist) can be acute for trauma-related symptoms, such as flashbacks and nightmares from PTSD. This ties in with the above discussion on ‘uniqueness’ as a source of resistance to AI doctors. Psychological disorders, I believe, possess a greater degree of uniqueness than ones of a primarily physiological nature, which supports the assertion that human doctors are better equipped to deal with them.

The only context provided by the question is that ‘you are not feeling well’. I can only assume, then, that the mystery condition is equally likely to be physical or psychological in nature, and equally likely to be serious or non-serious. In simple language, ‘it could be anything’.

III. Assessing the Trade-off

I don’t feel well, and that’s as specific as I can be

The above considerations are summarized as follows: AI doctors provide superior diagnostic service that is preferred in situations involving hard-to-detect, potentially life-threatening conditions. Human doctors, while not as technically savvy, possess an intangible ‘human factor’ that is of increased utility in a psychological/psychiatric setting.

With reference to my contextualization above of the words ‘you are not feeling well’, I must assume the worst-case scenario. For me, that consists of a hard-to-detect, life-threatening condition. This is based on an implicit premise that a life with psychological unrest is preferable to no life at all. Therefore, in a situation of zero information, I would opt for the AI doctor.
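To make this worst-case reasoning concrete, here is a minimal sketch of the maximin logic in Python. The utility values are purely illustrative assumptions, not data; they only encode the ordering argued for above, namely that missing a hard-to-detect fatal disease is worse than receiving suboptimal care for a psychological condition.

```python
# A minimal sketch of the worst-case (maximin) reasoning above.
# The utility values are invented for illustration; they only encode the
# argument that missing a fatal disease is worse than receiving
# suboptimal care for a psychological condition.
utilities = {
    "AI doctor":    {"serious physical illness": 0.9, "psychological condition": 0.5},
    "human doctor": {"serious physical illness": 0.2, "psychological condition": 0.8},
}

def maximin_choice(options):
    """Pick the option whose worst possible outcome is least bad."""
    return max(options, key=lambda choice: min(options[choice].values()))

print(maximin_choice(utilities))  # -> "AI doctor" under these assumed values
```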

Additional considerations

Pragmatically speaking (discounting for a moment the ambiguity of the statement ‘you are not feeling well’), it would not be difficult to distinguish a psychological condition from a physiological one. Stomach pain is likely to be primarily physiological, in the same way that auditory hallucinations are probably psychological in origin. Given the trade-offs between diagnostic accuracy and the ‘human factor’ discussed above, even a small amount of information about my condition would assist in making the optimal choice between an AI and a human doctor.

Another consideration I set aside until the end is the coincidence of physiological and psychological trauma. I deliberately separated the two in the above discussion to give due consideration to each factor, but it is true that physical maladies are often accompanied by psychological distress. However, does this change the above calculus? In my mind, it does not. The mental toll exacted on a cancer patient notwithstanding, her primary focus remains detecting and combating the cancer itself. Benefits accrued from the ‘human factor’ are secondary to the AI doctor’s superior ability to protect the patient from the actual disease.

In summary, I prefer an AI to a human in the vast majority of medical scenarios.

Daniel Joseph is a second-year JD student at Osgoode Hall Law School and a Fellow at the IP Osgoode Innovation Clinic.

York University and IBM develop and launch AI-powered student support pilot

York University and IBM have launched an innovative student support solution that uses artificial intelligence (AI) to provide students with support services designed to improve their university experiences by delivering both academic and personal guidance covering a wide range of topics in real time.

Developed collaboratively by York and IBM, this virtual assistant demonstrates how technology – specifically AI – can be used in an educational setting to enhance the quality of the overall student experience. This is the first time that IBM AI technology has been used in this way at a Canadian university.

York University is continually introducing inventive new forms of experiential education and technology-enhanced learning that students want. The virtual assistant brings this same creativity to student services, so they are more tailored and responsive to individual student needs whenever they arise. More than 100 York students are directly involved in developing this solution. They are helping the virtual assistant to get better and better at guiding students to the right self-service or in-person contact for academic support or counselling in such areas as mental health, campus involvement and career services.

Since the pilot phase began in January, there have been more than 75,000 student interactions with the virtual assistant. These interactions and over 1,700 feedback comments have been used to train the solution, improving the way it understands student questions and the answers it provides.

“This is a transformative time for learning and York is proud to be collaborating with a global tech industry leader like IBM to connect our students immediately to the right network of people and supports to help them meet their goals,” said Lisa Philipps, York University’s provost and vice-president academic. “Together, with IBM’s powerful AI technology and York’s innovative student services professionals, we are learning how to combine high-quality, in-person services with real-time information that is delivered to students’ personal devices. The unique virtual assistant is a breakthrough for 24-7 student support services and York is leading the way.”

Leveraging IBM’s AI technology, the student support solution relies on augmented intelligence to interact with students, letting them communicate in their own words, in English or French. The virtual assistant uses information about a student’s program of study and year level to respond to questions submitted in a free-form chat window. The more the solution is used, the more it is trained to better understand students’ questions.
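The article does not describe the implementation, but the behaviour it reports (free-form questions answered using a student's program and year level) can be pictured with a short, purely hypothetical sketch. The intents, keywords, and answers below are invented for illustration and are not part of the York/IBM system.

```python
# Purely hypothetical sketch of a context-aware student-support assistant.
# The intents, keywords, and answers are invented; they are not taken from
# the York/IBM pilot and only illustrate how a student's program and year
# level could shape a reply to a free-form question.
STUDENT_CONTEXT = {"program": "Biology", "year": 2}

INTENT_KEYWORDS = {
    "course_enrolment": ["enrol", "register", "course"],
    "mental_health":    ["stress", "anxious", "counselling"],
    "career_services":  ["job", "resume", "career"],
}

ANSWERS = {
    "course_enrolment": "Year {year} {program} students enrol through the registrar's portal.",
    "mental_health":    "You can book same-day counselling or chat with a peer supporter.",
    "career_services":  "Career services runs drop-in resume clinics for {program} students.",
}

def answer(question: str, context: dict) -> str:
    """Route a free-form question to a canned, context-aware reply."""
    q = question.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return ANSWERS[intent].format(**context)
    return "I'm not sure yet; let me connect you with an advisor."

print(answer("How do I register for a course?", STUDENT_CONTEXT))
```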

“We are seeing a real appetite for transformative innovation in our post-secondary communities right across Canada, and York University is one of the pioneers,” said Colette Lacroix, the national leader for higher education at IBM Canada. “By harnessing the power of artificial intelligence to make interactions personalized and engaging, York and IBM are making significant strides in improving the student experience. York’s commitment to the success of its students is impressive and it’s been a great experience working together.”

The virtual guide offers immediate assistance to students seeking answers to their questions
“The virtual guide provides assistance in cool and engaging ways with the addition of emojis and interesting facts within the chat,” said Jasmin Itaychany, a student at York University. “It has the potential to solve many problems. One very helpful tool is the fact that it sometimes provides a list of questions a student may want answers to and all they have to do is click on one to reveal a very detailed answer. This guide is especially useful for students who may suffer from anxiety when speaking to someone in person, which will help increase inclusivity and accessibility on campus.”

York and IBM are committed to exploring innovative educational services that personalize learning both inside and outside of the classroom.

 

This article was originally posted on yFile.

Moral Ethics of Artificial Intelligence Decision-Making – Who Should be Harmed and Who is Held Responsible?

As autonomous vehicles begin their test runs and potential commercial debuts, new liability and ethical questions arise. Unlike other computer algorithms already available to the public, a fully automated car divorces the authority of the device from the driver, instead vesting all power and decision-making in the car and its software. Accidents may become less accidental and more preordained in their execution. While human negligence accounts for a substantial death toll from vehicular accidents in Canada alone, and automated cars will supposedly reduce that toll considerably, the approach to delineating liability fundamentally changes. If the projected drop in vehicular mortality is to be believed, the question is no longer whether these technological advancements should proceed, but how they should be integrated.

While theoretically superior in terms of public safety, autonomous cars and advanced algorithms raise a kind of mechanical morality: decisions to be made between the life and death of pedestrians or occupants, or property damage. Determined by humans, these ethical ‘what-ifs’ translate into potentially tangible scenarios; extensive hypotheticals and their implications are not a question of existence, but of approach. While coded ethics, according to Noah J Goodall, a University of Virginia research scientist in transportation, will encounter “situations that the programmers often will not have considered [and] the public doesn’t expect superhuman wisdom but rather rational justification,” what counts as ‘ethical’ remains subjective. Factors in ethics go beyond the death toll. Janet Fleetwood, of the Department of Community Health and Prevention, Dornsife School of Public Health, Drexel University, outlines how age, assigned societal value, injuries versus fatalities, future consequences on quality of life, personal risk, and the moral weight of killing against a ‘passive’ death all contribute to the problem.

The legal framework for autonomous vehicles does not yet exist, but the laws and policies that govern automated cars must not derive from their creators. Doing so would place considerable responsibility on the programmers of automated vehicles to ensure their control algorithms collectively produce actions that are legally and ethically acceptable to humans. The ethical decisions must be uniform for the sake of simplifying liability and establishing an optimal process for future programming. Negligence, the current standard “governing liability for car accidents,” will expand into the field of programming. Consequently, it is important to establish who is held responsible in cases of accidents.

Ethics prescribed solely by manufacturers may result in an arms race to best serve the consumer’s interests. Marketing reveals itself in the ethical ambiguity of preferential treatment of the consumer; instead of being ethically neutral, an autonomous car may be marketed to show strict deference to the consumer, diluting ethics into a protectionist game. Conversely, programming an automated car to slavishly follow the law might also produce dangerous and unforeseen consequences. Extensive research and decision-making experiments explored in a non-profit model may simplify the scope of the problem; instead of imagining how to market the product, discussions over autonomous cars could focus purely on how they should behave. Input from multiple stakeholders, such as consumers, corporations, government institutions, and organizations, may be necessary to construct the clear and stringent ethical guidelines required to implement automated vehicles into society, and reaching a uniform conclusion will take extensive effort. However, ethical dilemmas regarding the control algorithms that determine the actions of automated vehicles will inevitably run into philosophical puzzles such as “the trolley problem”: a no-win hypothetical situation in which a person witnessing a runaway trolley can either do nothing and allow it to hit several people or, by pulling a lever, divert it so that it kills only one person. In such circumstances there is simply no right answer, and this makes ethical guidelines difficult to construct.

While one may propose that a utilitarian approach be adopted for the sake of simplicity, questions would undoubtedly arise, such as whether humanity would be comfortable having a computer decide the fate of a life, and what happens if the machine’s philosophical understanding extends beyond dire incidents. Professor Azim Shariff of the University of California co-authored research that found respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest people possible, regardless of whether they were passengers or people outside of the car. However, this raises a further question: would a customer buy a car in which they and their family members could be sacrificed for the benefit of the public?

To delineate the complexity of the situation, Fleetwood points to research that presented multiple hypothetical situations regarding moral preferences to participants. The study concluded that 76% of participants favored a utilitarian approach in which the maximum number of lives was saved, yet participants were reluctant to purchase a vehicle that would sacrifice its own passengers. Understandably, consumers maintain a bias towards their own lives; few would desire a product that chooses to sacrifice its owner.

Perhaps the legal ethics of automated vehicles should rest not in human management but in a machine’s own ability to learn. This is known as machine learning, in which a program gives systems the ability to learn and improve automatically from experience. Machine learning focuses on developing computer programs that can access data and use it to learn for themselves without human intervention. Static programming arrives at pre-determined ethical conclusions, while machine learning generates its own decisions, distinct from purely human-determined ethics. While introducing an objective or impartial arbiter to complex situations would be desirable, questions arise about how accurate its judgments would be. Some scholars propose modelling human behaviour to ensure that cars, rather than behaving better, behave exactly like us, and thus impulsively rather than rationally. One study by Leon R. Sütfeld et al. states that “simple models based on one-dimensional value-of-life scales are suited to describe human ethical behaviour” in these circumstances, and as such would be preferable to pre-programmed decision-making criteria, which might ultimately appear too complex, insufficiently transparent, and difficult to predict. This machine-learning solution, mimicking human decision-making, appears oriented towards an essential aspect of social acceptance: the alignment of robots with human behaviour. Instead of being regulated through ambiguous ethical guidelines, automated vehicles would base their decisions on human-like thinking while still lowering the accident rate compared to human drivers. Nevertheless, other issues remain, including whether this technology is achievable and who should be held responsible for automated incidents in the context of machine learning. Regardless of whether the guidelines for automated vehicles arise from policy regulators or machine learning, society needs to accept that autonomous cars will debut on the market in the coming years, and work towards addressing the floodgate of concerns around wider applications, including life-or-death accidents.
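As a rough illustration of what a “one-dimensional value-of-life scale” model could look like in code, consider the sketch below. The category weights and trajectory options are invented for this example; they are not drawn from Sütfeld et al. or any real vehicle control system.

```python
# Illustrative sketch of a one-dimensional value-of-life decision model.
# The category weights and trajectory options are invented for this example;
# they are not drawn from Sütfeld et al. or any real vehicle system.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "animal": 0.3,
}

def trajectory_cost(obstacles):
    """Total value placed at risk if the vehicle takes this trajectory."""
    return sum(VALUE_OF_LIFE[o] for o in obstacles)

def choose_trajectory(options):
    """Pick the trajectory whose obstacles carry the lowest total value."""
    return min(options, key=lambda name: trajectory_cost(options[name]))

options = {
    "stay in lane": ["adult", "adult"],
    "swerve left":  ["child"],
    "swerve right": ["animal"],
}
print(choose_trajectory(options))  # -> "swerve right" under these assumed weights
```

Even this toy model makes the policy problem visible: every number in the table is an ethical judgment that someone, whether a programmer, a regulator, or a learning algorithm trained on human behaviour, has to supply.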

Rui Shen is an IPilogue Editor and a JD Candidate at Osgoode Hall Law School.

The Tech Law Ultimatum: Consent or Exile?

Living in the twenty-first century comes with the need to manage expectations. While we live in a modern age with a variety of technological advancements, we may not be as innovative as we previously imagined. After decades of television shows like The Jetsons, some may even be inclined to ask, “Where’s my jetpack?” Professor Daithí Mac Síthigh, during his visit to Osgoode Hall Law School this fall term, recently spoke about the challenging relationship between technological innovation and the law. Prof. Mac Síthigh addressed the technological advancements we have made and what is still on the inventive (and legal drafting) table in his talk, “Help! My Jetpack is an Algorithm: Smart Cities, Sharing Economies, and Law in the Face of Disruption”.

Professor Mac Síthigh drew on remarks made by Sadiq Khan, the Mayor of London, at SXSW this year and stressed the important role the law has in relation to technological and social development. Khan explained that the law plays a balancing role in mitigating the potentially negative impact of disruption while allowing society to evolve.

The concept of “smart cities” highlights how the law is performing in the face of twenty-first century “disruption”. Professor Mac Síthigh linked the smart city concept to the sharing economy, which he defined as transforming under-utilized assets in a manner that makes them more accessible to a community. This could lead to a reduced need for individual ownership of these resources.

Citing a recent development, Professor Mac Síthigh explored how the collection of data in these cities unveils new legal tensions. For example, Alphabet’s Sidewalk Labs is reimagining Toronto’s eastern waterfront area. This variation of a smart city will use sensors to measure garbage disposal, recycling, noise, and pollution. The increased presence of cameras can even collect data to help improve the flow of traffic. While the project promises some of the twenty-first century innovations many have been waiting for, it also reveals how some of the risks of such technologies are underexplored.

There is an inherent trade-off in collecting data to help cities become more efficient and green: residents will be giving up their privacy rights for the good of society. There is no way to live off the grid in this type of environment, which means that individuals who want to be excluded from data collection would likely have to reside outside of the community. Is full consent or exile the only choice in the age of smart cities?

Currently, different Canadian laws may apply depending on which entity is collecting the data, thus presenting different methods of action for residents.

  1. If a commercial technology company is collecting the data, the Personal Information Protection and Electronic Documents Act (PIPEDA) applies to these processes.
  2. When this data is collected, accessed, or used by federal government institutions, the Privacy Act applies.

Both of these acts regulate how personal information can be shared and this may be applicable to data collected through smart cities.

Research from the Canadian Internet Policy and Public Interest Clinic (CIPPIC) reveals one of the weaknesses of these laws in their current forms. Where information is not “personal”, it can be freely shared with third parties. In order for data to be non-personal, technology companies would be required to strip the data of personal identifiers. So, the data on garbage disposal, for example, cannot be linked to any addresses, names, photographs, and so on in order for the information to be sharable. Another caveat in sharing personal information is that individuals can choose to protect their information through confidentiality terms in a contract. This means that there could be a significant onus on residents of smart cities to find ways to protect their information if they truly wish for their data to remain private.
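A simple sketch of what “stripping the data of personal identifiers” might look like in practice is shown below. The field names and record are hypothetical, and real de-identification would also have to address quasi-identifiers (timestamps, locations, movement patterns) that could be re-linked to individuals.

```python
# Hypothetical sketch of stripping direct identifiers from a sensor record
# before sharing it with a third party. Field names are invented; real
# de-identification must also consider quasi-identifiers (timestamps,
# locations, movement patterns) that could be re-linked to a person.
DIRECT_IDENTIFIERS = {"name", "address", "photo_url", "device_owner"}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

garbage_reading = {
    "name": "Jane Doe",
    "address": "123 Queens Quay E",
    "bin_fill_percent": 78,
    "collected_at": "2018-11-16T08:30:00",
}

print(strip_identifiers(garbage_reading))
# -> {'bin_fill_percent': 78, 'collected_at': '2018-11-16T08:30:00'}
```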

As Professor Mac Síthigh’s talk makes clear, smart cities and the concept of a sharing economy are not new forms of technology; rather, they are new processes that rely on data in novel ways. In the same way that technology companies have rethought data collection, lawyers and policy makers must rethink how the law applies to this newest iteration of technology. It requires a careful balance of the existing laws that seem applicable to smart cities, such as privacy laws, together with new provisions that give consumers more opportunities to protect and take control of their data without completely excluding them from the innovation process.

 

Summer Lewis is an IPilogue Editor and a JD candidate at Osgoode Hall Law School.
