Precision Regulation for Artificial Intelligence
Jan 21, 2020

Among companies building and deploying artificial intelligence, and the consumers making use of this technology, trust is of paramount importance.

Companies want the comfort of knowing how their AI systems make determinations and that they comply with any relevant regulations, while consumers want to know when the technology is being used and how (or whether) it will affect their lives.

 

62% of Americans and 70% of Europeans prefer a precision regulation approach for technology, with fewer than 10% in either region supporting broad regulation of tech. 85% of Europeans and 81% of Americans support consumer data protection in some form, and 70% of Europeans and 60% of Americans support AI regulation.
Source: Morning Consult study conducted on behalf of the IBM Policy Lab, January 2020.

 

As outlined in our Principles for Trust and Transparency, IBM has long argued that AI systems need to be transparent and explainable. That’s one reason why we supported the EU and the OECD AI Principles, and in particular the focus on transparency and trustworthiness in both.

 

Principles are admirable and can help communicate a company’s commitments to citizens and consumers. But it’s past time to move from principles to policy. Requiring disclosure — as appropriate based on use-case and end-user — should be the default expectation for many companies creating, distributing, or commercializing AI systems. In an earlier Policy Lab essay, we articulated a disclosure requirement for law enforcement use-cases of facial recognition technology. Something similar should be required of AI more generally in order to provide the public with appropriate assurances that they are being treated fairly and equitably by AI-based determinations in sensitive use-cases.

 

That is why today we are calling for precision regulation of AI. We support targeted policies that would increase companies’ responsibilities to develop and operate trustworthy AI. Given the ubiquity of AI — it touches all of us in our daily lives and work — there will be no one-size-fits-all rules that can properly accommodate the many unique characteristics of every industry making use of this technology and its impact on individuals. But we can define an appropriate risk-based AI governance policy framework based on three pillars:

 

  • Accountability, proportionate to the risk profile of the application and to the role of the entity providing, developing, or operating an AI system, for controlling and mitigating unintended or harmful outcomes for consumers.
  • Transparency in where the technology is deployed, how it is used, and why it provides certain determinations.
  • Fairness and security validated by testing for bias before AI is deployed and re-tested as appropriate throughout its use, especially in automated determinations and high-risk applications.

 

Wisely, the OECD AI Principles suggest a solid accountability bedrock for this framework, arguing that “[g]overnments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems.” This implicit recognition of the fundamental difference in accountability between stages of AI development can help appropriately assign responsibility for providing transparency and ensuring fairness and security, based on who has better control over the protection of privacy, civil liberties, and harm-prevention activities in a given context.

 

In the lifecycle of AI capabilities in the marketplace, organizations may contribute research, tooling, and APIs; in later stages of operation, organizations will train, manage and control, operate, or own the AI models that are put to real-world commercial use. These different functions may allow for a distinction between “providers” and “owners,” with expectations of responsibilities based on how an organization’s role falls into one or both categories.

 

Differentiating accountability can help to better mitigate potential harm by directing resources and oversight to specific applications of AI based on the severity and likelihood of potential harms arising from the end-use and user of such systems. Risk-based regulatory approaches like this — which also allow for more manageable and incremental changes to existing rules — are ideal means to protect consumers, build public trust in AI, and provide innovators with needed flexibility and adaptability.

 

Building from these pillars, we propose a precision regulation framework that incorporates five policy imperatives for companies, based on whether they are a provider or owner (or both) of an AI system. These policies would vary in robustness according to the level of risk presented by a particular AI system, which would be determined by conducting an initial risk assessment based on the potential for harm associated with the intended use, the level of automation (and human involvement), and whether the end-user is substantially reliant on the AI system.
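
Purely for illustration, the sketch below (in Python) shows one way the three assessment inputs named above, potential for harm, degree of automation, and end-user reliance, might be combined into an initial risk tier; the rating scale, weights, and thresholds are hypothetical assumptions and not part of this proposal.

```python
from dataclasses import dataclass

@dataclass
class InitialRiskAssessment:
    # Each factor is rated 1 (low) to 5 (high) by the assessing team (a hypothetical scale).
    harm_potential: int      # potential for harm associated with the intended use
    automation_level: int    # degree of automation (5 = no human involvement in the determination)
    end_user_reliance: int   # how substantially the end-user relies on the system's output

    def tier(self) -> str:
        """Map the three factors to a coarse risk tier; thresholds are illustrative only."""
        score = self.harm_potential + self.automation_level + self.end_user_reliance
        if self.harm_potential == 5 or score >= 12:
            return "high"    # warrants the in-depth, documented, auditable assessment described below
        if score >= 7:
            return "medium"
        return "low"         # a more cursory appraisal would likely suffice

# Example: a fully automated lending decision that applicants rely on heavily.
print(InitialRiskAssessment(harm_potential=4, automation_level=5, end_user_reliance=5).tier())  # high
```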

 

  • Designate a lead AI ethics official.

    To ensure compliance with these expectations, providers and owners should designate a person responsible for trustworthy AI, such as a lead AI ethics official. This person would be accountable for internal guidance and compliance mechanisms, such as an AI Ethics Board, that oversee risk assessments and harm mitigation strategies. As the complexity and potential impact of AI systems increase, so too must the accountability embraced by the different organizations providing various functions in the AI lifecycle. A market environment that prioritizes the adoption of lead AI ethics officials, or other designated individuals, to oversee and manage this increasing complexity could help mitigate risks and improve public acceptance and trust of these systems, while also driving firms’ commitment to the responsible development, deployment, and overall stewardship of this important technology.

 

  • Different rules for different risks.

    All entities providing or owning an AI system should conduct an initial high-level assessment of the technology’s potential for harm. As noted previously, such assessments should be based on the intended use-case application(s), the end-user(s), how reliant the end-user would be on the technology, and the level of automation. Once initial risk is determined, a more in-depth and detailed assessment should be undertaken for higher-risk applications; in certain low-risk situations, a more cursory appraisal would likely suffice. For high-risk use-cases, the assessment process should be documented in detail, be auditable, and be retained for a minimum period of time.

 

  • Don’t hide your AI.

    Transparency breeds trust, and the best way to promote transparency is through disclosure. Unlike other transparency proposals, this approach does not entail companies revealing source code or other forms of trade secrets or IP. Instead, it focuses on making the purpose of an AI system clear to consumers and businesses. Such disclosures, like the other policy imperatives here, should be reasonably linked to the potential risk and harm to individuals. As such, low-risk and benign applications of AI may not require the type of disclosure that higher-risk use-cases might.

 

  • Explain your AI.

    Any AI system on the market that is making determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion. To achieve that, organizations need to maintain audit trails surrounding their input and training data. Owners and operators of these systems should also make available — as appropriate and in a context that the relevant end-user can understand — documentation that details essential information for consumers to be aware of, such as confidence measures, levels of procedural regularity, and error analysis.

 

  • Test your AI for bias.

    All organizations in the AI development lifecycle have some level of shared responsibility for ensuring the AI systems they design and deploy are fair and secure. This requires testing for fairness, bias, robustness, and security, and taking remedial actions as needed, both before sale or deployment and after the system is operationalized. Owners should also be responsible for ensuring that use of their AI systems is aligned with anti-discrimination laws, as well as statutes addressing safety, privacy, financial disclosure, consumer protection, employment, and other sensitive contexts. For many use-cases, owners should continually monitor, or retest, their AI models after the product is released to identify and mitigate any machine-learning behavior that results in unintended outcomes. Policies should create an environment that incentivizes both providers and owners to do such testing well. This can be done without creating new and potentially cumbersome AI-specific regulatory requirements, but rather by adhering to a set of agreed-upon definitions, best practices, and global standards.
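
As a minimal sketch of what such pre-deployment and post-release testing can look like in practice, the Python snippet below computes a disparate impact ratio (the ratio of group selection rates) over a set of automated decisions; the sample data and the 0.8 threshold (the familiar four-fifths rule of thumb) are illustrative assumptions, and real audits would use richer fairness and robustness metrics chosen for the use-case.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of favorable (1) decisions per group."""
    favorable, totals = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += decision
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 means parity)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative audit data: 1 = favorable outcome, 0 = unfavorable, one protected-group label per case.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the widely cited four-fifths rule of thumb
    print("Potential adverse impact: investigate, remediate, and retest before and after release.")
```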

 

74% of American and 85% of EU respondents agree that artificial intelligence systems should be transparent and explainable, and strong pluralities in both regions believe that disclosure should be required for companies creating or distributing AI systems. Nearly 3 in 4 Europeans and two-thirds of Americans support regulations such as conducting risk assessments, doing pre-deployment testing for bias and fairness, and reporting to consumers and businesses that an AI system is being used in decision-making.
Source: Morning Consult study conducted on behalf of the IBM Policy Lab, January 2020.

 

To achieve this, governments should:

 

  • Designate, or recognize, existing effective co-regulatory mechanisms (e.g., CENELEC in Europe or NIST in the U.S.) to convene stakeholders and identify, accelerate, and promote efforts to create definitions, benchmarks, frameworks, and standards for AI systems. Ideally, globally recognized standards would help create consistency and certainty for consumers, communicating to end-users that the AI is trustworthy;
  • Support the financing and creation of AI testbeds with a diverse array of multi-disciplinary stakeholders working together in controlled environments. In particular, minority-serving organizations and impacted communities should be supported in their efforts to engage with academia, government, and industry. Working together, these stakeholders can accelerate the development of evaluation criteria for AI accuracy, fairness, explainability, robustness, transparency, ethics, privacy, and security; and
  • Incentivize providers and owners to voluntarily embrace globally recognized standards, certification, and validation regimes. One such mechanism is providing various levels of liability safe harbor protection based on whether and how an organization adheres to, and certifies against, globally recognized best practices and standards.

 

Finally, any action or practice prohibited by anti-discrimination laws should continue to be prohibited when it involves an automated decision-making system. Whether a decision is fully rendered by a human or a determination is assisted by an automated AI system, impermissibly biased or discriminatory outcomes should never be considered acceptable. But whereas correcting the bias of humans is a daunting and difficult task, in AI systems it may be a matter of addressing historical bias in some training data by testing for, and correcting, statistical failures in the model. While this will take time, AI offers us the promise of a world where bias and discrimination may one day fade away. With precision regulation helping to promote trustworthy AI, that future could be sooner than we think.

 

 

-Ryan Hagemann, co-Director, IBM Policy Lab – Washington, DC

-Jean-Marc Leclerc, co-Director, IBM Policy Lab – Brussels

About IBM Policy Lab
The IBM Policy Lab is a new forum providing policymakers with a vision and actionable recommendations to harness the benefits of innovation while ensuring trust in a world being reshaped by data. As businesses and governments break new ground and deploy technologies that are positively transforming our world, we work collaboratively on public policies to meet the challenges of tomorrow.

 
