Engineers Launch Free Access to AI Ethics and Governance Standards

IPOsgoode | March 15, 2023


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The Institute of Electrical and Electronics Engineers (IEEE), a professional organization for engineers and technology experts, recently announced the launch of the IEEE GET Program, aimed at providing free access to AI ethics and governance standards. The program is part of IEEE's ongoing efforts to promote responsible AI practices and help organizations develop and implement ethical AI systems.

The program opened seven standards for public access:

  1. Age-Appropriate Digital Services
  2. Addressing Ethical Concerns during System Design
  3. Transparency of Autonomous Systems
  4. Data Privacy
  5. Transparent Employer Data Governance
  6. Ethically Driven Robotics and Automation Systems
  7. Assessing the Impact of Autonomous and Intelligent Systems (“A/IS”) on Human Well-Being

Assessing the Impact of A/IS

The most cited of the standards is the one on assessing the impact of A/IS on human well-being, which addresses the growing concern over how autonomous and intelligent systems may affect society. This standard provides a structured approach to evaluating the impact of A/IS on individuals, communities, and society, and helps organizations ensure that their systems are developed and deployed in a manner that supports human well-being. Its recommended practices aim to raise awareness of well-being concepts and indicators for A/IS and to increase organizations’ capacity to monitor, evaluate, and address the well-being impacts of A/IS. Successful application of the standard includes the ability to evaluate the ongoing well-being impact of A/IS on users and stakeholders while continuing to improve the system to safeguard human well-being, resulting in a greater ability to avoid unintentional harm.

The Standard also suggests numerous domains of well-being, with accompanying indicators, that system designers should consider. These domains pertain to individual well-being (satisfaction with life, affect/feelings, and psychological well-being), social well-being (community, culture, education, economy, health, and work), and regulatory domains (environment, government, and human settlements). The Standard notes that these suggestions are a starting point for selecting indicators and that “indicators should be adapted to fit the circumstances of measuring and gathering data about the well-being impacts for an A/IS on user(s).”

Ethics and Systems Design

The standard on addressing ethical concerns during system design is also frequently cited. This Standard provides a set of guidelines and best practices for organizations engaged in system and software engineering, helping them make value-based ethical system design and investment decisions.

The Standard’s model process includes a number of points worth considering. It guides organizations on establishing key roles in Ethical Value Engineering Project teams. These teams are then tasked with defining how a system is expected to operate from the users’ perspective (Concept of Operations) and with identifying stakeholders and determining the context of use and potential for ethical benefit or harm (Context Exploration). There is also an Ethical Values Elicitation and Prioritization Process, which aims to obtain and rank values and value demonstrators, followed by an Ethical Requirements Definition Process that guides the definition of value-based system requirements reflecting the prioritized core values and their value demonstrators. Finally, the Standard sets out an Ethical Risk-Based Design Process and a Transparency Management Process, which guide the realization of ethical values and required functionality in system design and explain how to inform stakeholders of the system’s implementation of ethics.

Impact on AI

TÜV SÜD has noted that these IEEE standards are already being incorporated into AI governance. For instance, the European Union's Artificial Intelligence Act (“EU AI Act”) references many of the components that the IEEE makes available in this package. This will likely continue to be relevant both for regulators and AI developers: “TÜV SÜD sees a strategic advantage for those looking to demonstrate eventual compliance to human-centric regulatory measures or market pressures to leverage these IEEE standards and certifications.” Developing ethical AI systems is a multifaceted problem that requires extensive deliberation by organizations involved in AI systems development. The release of free standards by an authoritative governing body will likely benefit everyone involved immensely.


NIST Releases their AI Risk Management Framework 1.0

IPOsgoode | February 10, 2023


Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.


The National Institute of Standards and Technology (NIST) has been tasked with promoting “U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology.” On January 26, 2023, NIST released their AI Risk Management Framework (AI RMF 1.0) alongside a companion playbook suggesting ways to use the AI RMF to “incorporate trustworthiness considerations in the design, development, deployment, and use of AI systems”. Both the framework and playbook are intended to help organizations understand and manage the potential risks and benefits of AI, and to ensure that AI systems are developed, deployed, and used in a responsible and trustworthy manner. The framework is designed to be flexible and adaptable, applicable to a wide range of AI systems across industries such as healthcare, finance, and transportation.

NIST describes trustworthy AI as having a set of characteristics: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

Valid and reliable: Produces accurate and consistent results. Its performance should be evaluated and validated through ongoing testing and experimentation, with risk management prioritizing the minimization of potential negative impacts.

Safe: Does not cause harm to people or the environment, and is designed, developed, and deployed responsibly, with clear information provided for responsible use of the system.

Secure and resilient: Maintains confidentiality, integrity, and availability through protection against common security threats such as data poisoning and the exfiltration of intellectual property through AI system endpoints.

Accountable and transparent: Provides appropriate levels of information to AI actors to allow for transparency and accountability of its decisions and actions.

Explainable and interpretable: Represents the underlying AI system’s operation and the meaning of its output in the context of its designed functional purposes. Explainable and interpretable AI systems offer information that helps end users understand their purposes and potential impact.

Privacy-enhanced: Protects the privacy of individuals and organizations in compliance with relevant laws and regulations.

Fair – with harmful bias managed: NIST has identified three major categories of AI bias to be considered and managed: systemic (broad and ever-present societal bias), computational and statistical (typically arising from non-representative samples), and human-cognitive (how people perceive AI system information when making decisions or filling in missing information).

The AI RMF’s core is organized around four functions that help organizations address the risks of AI systems in practice: Govern, Map, Measure, and Manage.

Govern: Establishes policies, procedures, and standards for AI systems, along with clear responsibilities for key decision-makers, developers, and end-users.

Map: Contextualizes and frames risks by identifying the system’s components, data sources, and external dependencies, and by understanding how the system is used and by whom.

Measure: Evaluates the potential risks and benefits of the AI system by assessing its vulnerabilities and potential social impacts.

Manage: Allocates resources to mitigate identified risks and continuously monitors the system and its environment, establishing processes and procedures to detect and respond to incidents and updating controls as needed.

NIST’s AI risk management framework is voluntary, but it is an important prompt for organizations and teams who design, develop, and deploy AI to think more critically about their responsibilities to the public. Understanding and managing the risks of AI systems will help enhance trustworthiness and, in turn, cultivate public trust in AI – a critical part of AI adoption and advancement.

