The Science of AI and the Art of Social Responsibility

I am a computer scientist and engineer, inspired by the art of the possible and driven by the practice of building computing applications. For decades, I’ve quantified my professional achievements using metrics like computability, performance, scalability, and usability.

But the transformational nature of artificial intelligence requires new metrics of success for our profession. It is no longer enough to advance the science of AI and the engineering of AI-based systems. We now shoulder the added burden of ensuring these technologies are developed, deployed and adopted in responsible, ethical and enduring ways.

I realize that many of us—perhaps most of us—lack the academic qualifications to pass judgment on the ethics of computer science. We did not study philosophy. We did not go to law school. But that does not excuse us from considering the social impact of the work we do.

That impact will be significant. This year alone, at least 1 billion people will be touched in some way by artificial intelligence, which is transforming everything from financial services to transportation, energy, education and retail. In healthcare, IBM Watson is engaged in serious efforts to help radiologists identify markers of disease; to help oncologists identify personalized treatments for cancer patients; and to help neuroscientists identify genetic links to diseases like ALS, paving the way for advanced drug discovery.

It is no exaggeration to say that in the years ahead, most aspects of work and life as we know it will be influenced by these technologies. And that makes us more than computer scientists. It makes us architects of social change.

This is a profound and daunting responsibility. And it would be easy for us to bury our heads in our work, to retreat to our areas of expertise, our comfort zones. But that is simply not an option. Because the work we do now lives at the intersection of science and society. Therefore, we must engage with the messiness of the real world.

This is not the first time that scientists have been asked to consider the consequences—intended and unintended—of their work. Thankfully, we are not alone in this obligation. It is a responsibility we share with business, government, and civil society. Everyone must do their part.

That is why we have engaged with a broad coalition of partners and ethics experts to inform our work. And why IBM is a founding member of the Partnership on AI, a collaboration with Google, Amazon, Facebook, Microsoft, Apple and many scientific and nonprofit organizations, charged with guiding the development of artificial intelligence to the benefit of society.

In addition to this work, IBM has developed three core principles that we believe will be useful to any organization involved in the development of AI systems and applications:

Purpose: Technology, products, services and policies should be designed to enhance and extend human capability, expertise and potential. They should be intended to augment human intelligence, not replace it.

Transparency: AI systems should make clear when and for what purpose they are deployed, and should identify the major sources of data that inform their solutions. (One way such a disclosure might be made machine-readable is sketched after these principles.)

Opportunity: Developers of AI applications should accept the responsibility of enabling students, workers and citizens to take advantage of every opportunity in the new economy powered by cognitive systems. They should help these groups acquire the skills and knowledge to engage safely, securely and effectively with cognitive systems, and to perform the new kinds of work and jobs that will emerge in a cognitive economy.
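
To make the transparency principle concrete, here is a minimal sketch of how a deployed AI capability might carry a machine-readable record of when and why it is used and what data informs it. This is an illustration of the principle only, not an IBM product API; every name in it (TransparencyRecord, its fields, and the radiology example) is a hypothetical assumption.

```python
# A minimal sketch of the transparency principle as data: each deployed
# AI capability carries a record of when and for what purpose it is used
# and the major data sources behind it. All names here are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class TransparencyRecord:
    system_name: str         # which AI capability is deployed
    purpose: str             # when and for what purpose it is used
    data_sources: List[str]  # major sources of data informing its solutions
    human_in_the_loop: bool  # whether an expert reviews its output

    def disclose(self) -> str:
        """Render the record as a plain-language disclosure for end users."""
        sources = ", ".join(self.data_sources)
        review = ("reviewed by a human expert" if self.human_in_the_loop
                  else "not routinely reviewed by a human expert")
        return (f"{self.system_name} is used for {self.purpose}. "
                f"It draws on {sources}. Its output is {review}.")

# Hypothetical example: a radiology-assist deployment.
record = TransparencyRecord(
    system_name="An imaging assistant",
    purpose="flagging possible markers of disease for radiologist review",
    data_sources=["de-identified imaging studies",
                  "peer-reviewed medical literature"],
    human_in_the_loop=True,
)
print(record.disclose())
```

Even a record this simple forces the questions the principle demands: what is this system for, and what evidence stands behind its answers?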

These principles have forced my team to ask some new and difficult questions of our work—questions I hope all of you will ask as well.

For example, we are challenging ourselves to not just consider the use cases of our work, but also the “misuse” cases. Not just how this technology will be used, but how it might be abused.

We are asking ourselves what requirements should be met to ensure transparency. What level of transparency—of evidence-based explanations—will lead to trust in cognitive systems? We remind ourselves that trust is a precursor to adoption, and that adoption is the only path to business success.

Finally, we are thinking about how we can empower all users with this technology, especially users who are not technically inclined. How can we ensure that these solutions augment their skills rather than obviate them? How can we build training into the solutions themselves, so that the user and AI can evolve together?

These are difficult questions. They force us to stretch our thinking and speculate. But we didn’t go into this field because it was the easy thing to do. I’m confident that if we take these matters seriously, and let these questions guide our actions, it will lead to stronger, better products. It will even lead to artificial intelligence that benefits all of humanity. And that is a metric for success on which we all can agree.

Live, February 20: Watch Dr. Banavar’s 2017 Turing Lecture, Beneficial AI for the Advancement of Humankind, beginning at 6:30 PM GMT (1:30 PM US Eastern).

__________________________________________

Guru Banavar has been selected to deliver the 2017 Turing Lecture, a prestigious annual lecture co-hosted by the British Computer Society (BCS) and the Institution of Engineering and Technology (IET). The Turing Lecture is not related to the Association for Computing Machinery’s A.M. Turing Award.

A version of this story first appeared on Huffington Post.
