
Building a brain trust for good AI?


Whether or not rapidly advancing artificial intelligence has doomsday potential is a debated topic, with luminaries such as Elon Musk and Mark Zuckerberg disagreeing about the degree of risk. The overall threat to employment through automation and digitization, along with the possibility of AI learning to act against human interest, is either grossly exaggerated or a practical consideration depending on whom you ask.

What most people agree on, however, is that AI is such a massively powerful technology that it behooves us to be thoughtful about its development and deployment. The plans we create today will determine the future. Thus the movement towards “responsible AI,” the democratization of research around AI, and an overall focus on ensuring that its benefits are not consolidated in the hands of just a few companies or countries.

Recent developments include the launch of OpenAI, a non-profit co-founded by Elon Musk and made up of 60 researchers and engineers, which states its goal as “discovering and enacting the path to safe artificial general intelligence.” In a similar vein, the first AI for Good Global Summit was held in Geneva in June 2017, a collaboration between the International Telecommunication Union (ITU; the leading United Nations agency for information and communication technologies) and the XPRIZE Foundation. “With large parts of our lives being influenced by AI,” stated the conference press materials, “it is critical that government, industry, academia and civil society work together to evaluate the opportunities presented by AI, ensuring that AI benefits all of humanity.” The intent of the Summit was to provide a neutral platform where a diverse range of international stakeholders could offer recommendations and guidance on AI innovation.

While many of the scenarios in which powerful AI massively disrupts society rest on technology that far exceeds what we see today, most researchers advocate a proactive approach. It is better to map out the future we desire than to react as technologies move from theoretical to real. Speaking to the New York Times, Shane Legg of Google’s DeepMind stressed the importance of understanding all the potential outcomes of advancing AI. DeepMind is the British company famous for developing the AI that beat the world’s top Go player and for creating a neural network that learns to play video games much as humans do. Legg oversees AI safety at DeepMind.

“There’s a lot of uncertainty around exactly how rapid progress in A.I. is going to be,” he said. “The responsible approach is to try to understand different ways in which these technologies can be misused, different ways they can fail and different ways of dealing with these issues.”

Part of understanding the various ways that these technologies can be misused is to ensure that diverse global voices are part of the conversation at large. As Sherif Elsayed-Ali, the head of technology and human rights at Amnesty Tech (part of Amnesty International), recently wrote in a piece for Medium: “We can’t assume that any AI system developed in Palo Alto will work as intended in Cairo.” I spoke with Elsayed-Ali, along with other AI experts, to better understand the range of ethical concerns on the horizon and how we can develop AI to maximize the benefits for society.

Elsayed-Ali attended the AI for Good Global Summit in Geneva and was encouraged by the organizations and companies involved. While there has been no official announcement making the AI for Good Global Summit an annual event, Elsayed-Ali says there was a strong desire among attendees to make it a regular one. He left inspired by the idea of a “global public AI research institution,” taking a cue from scientist and professor Gary Marcus, who has proposed a “CERN for AI.” The underlying argument is that even major initiatives like OpenAI are minuscule compared with a global pursuit connecting thousands of researchers and billions of dollars in research money. And unlike smaller initiatives that may be tied to advancing the specific interests of a company or country, a global public AI research collaborative would aim to advance the field of AI for the good of humanity.

To provide an example of how one’s origin may influence the underlying technology, Elsayed-Ali points to the AI being developed to fight toxic behavior online. The person creating a tool to flag toxic behavior may be encoding their own conception of appropriateness. As Elsayed-Ali notes, “your view of what’s abusive would be extremely different from the view of somebody in a completely different context.” The larger point is that much of the AI being developed may not transfer smoothly across countries and cultures. When technology is created for self-driving cars or deployed in political elections, developers may have blind spots about how the tool could be used or abused elsewhere. “There is something that is inherently different,” says Elsayed-Ali, “than if you are just designing an iPod in California that will more or less have the same use no matter where you take it.”
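To make the point concrete, consider a deliberately toy sketch of a keyword-based flagging tool. The word lists, comment, and locale names below are invented for illustration; the point is simply that the same comment is flagged or cleared depending entirely on whose notion of “abusive” supplied the list.

```python
# Toy sketch: a comment-flagging rule inherits its builders' judgments.
# The word lists, locales, and comment are invented for illustration.

ABUSIVE_TERMS_LOCALE_A = {"idiot", "trash"}    # one team's judgment call
ABUSIVE_TERMS_LOCALE_B = {"trash", "traitor"}  # a different context's judgment

def flag_toxic(comment: str, abusive_terms: set) -> bool:
    """Flag a comment if it contains any term the builders deemed abusive."""
    words = {word.strip(".,!?").lower() for word in comment.split()}
    return bool(words & abusive_terms)

comment = "Only a traitor would say that."
print(flag_toxic(comment, ABUSIVE_TERMS_LOCALE_A))  # False under list A
print(flag_toxic(comment, ABUSIVE_TERMS_LOCALE_B))  # True for the same comment
```

Real moderation systems are learned rather than hand-listed, but the same dependence on the builders’ labels carries through: a model trained on one context’s judgments exports those judgments wherever it is deployed.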

The biggest ethical concerns he foresees in his role with Amnesty International involve predictive policing and autonomous weapon systems that could be exploited by terrorist or criminal groups. Deep learning systems, he jokes, have quickly moved from recognizing cat videos to informing sentencing decisions. “One thing as a society that we need to decide on is what are acceptable systems that cannot be interrogated,” says Elsayed-Ali. “Interrogating” a system means probing it to find, and then presumably fix, its errors.
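One common way to interrogate an otherwise opaque model is to perturb its inputs one at a time and watch how the output shifts. The sketch below is a hypothetical illustration, not any real system: both the risk model and its feature names are invented.

```python
# Hypothetical sketch of "interrogating" an opaque scoring system by
# nudging one input at a time. Model and features are invented.

def opaque_risk_model(features: dict) -> float:
    # Stand-in for a system whose internals reviewers cannot inspect.
    return 0.6 * features["prior_incidents"] + 0.4 * features["neighborhood_code"]

def probe(model, baseline: dict) -> dict:
    """Nudge each input by one unit and record how far the score moves."""
    base_score = model(baseline)
    shifts = {}
    for name in baseline:
        nudged = dict(baseline)
        nudged[name] += 1
        shifts[name] = round(model(nudged) - base_score, 3)
    return shifts

baseline = {"prior_incidents": 2, "neighborhood_code": 5}
print(probe(opaque_risk_model, baseline))
# {'prior_incidents': 0.6, 'neighborhood_code': 0.4} -- a score that moves
# with a proxy for where someone lives is exactly the kind of finding an
# interrogation is meant to surface.
```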

Suchana Seth is a physicist-turned-data scientist who thinks about ways to operationalize ethical machine learning and AI in the industry. A 2016-2017 Ford-Mozilla Open Web Fellow, she is currently working with the New York-based research institute Data & Society.

“Even the best designed systems operate in this messy real world,” states Seth, “and can break in some fashion.” She mentions that every successful tech company likely has a customer service team dedicated to handholding their client through the process of product adoption. “[W]e need humans and processes and laws to deal with these failures.”

Seth points out that the questions around what “responsible AI” entails are messy, nuanced, and often without one right answer. It is important to communicate this transparently, she adds. “As data scientists and human beings we should think harder about the impact of our work. We should think carefully about design choices we make… We must make a better effort to articulate these decision points and design choices to our teams, our managers, our customers, our friends and family.”

Like Elsayed-Ali, Seth sees the difficulty of building consensus among stakeholders who hold not only different but often conflicting views. “The real question,” she adds, “is how will we put structures in place to make sure that data reflects the truth about all stakeholders, that all stakeholders get to have a say in how AI is impacting them, that we build grievance redressal mechanisms that actually work.”

When considering how to develop AI in a way that maximizes its benefits for society, Seth likes to think of social good in terms of empathy and voice. While voice is often structural in nature and hard to fix through technology alone, she says that the first part, empathy, is a function of our imagination, and something that can be grown through exposure to different points of view. AI research, according to Seth, could benefit from this expanded collective imagination.

“If we expand our benchmark datasets to include underrepresented groups, if we audit our datasets carefully for bias, then the AI we build has fewer unexpected ethical failures. If we have people with imaginations very different from ours playing around with AI, we get unexpectedly beautiful applications or elegant AI architectures.”

As an example, she cites the work of Angelica Lim, where insight from studying how toddlers learn emotions was applied to building AI that can better parse and reflect human emotions.
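Seth’s suggestion to audit datasets for bias can be made concrete with a small sketch. The toy audit below, using invented data and group labels, checks two of the basics she alludes to: how well each group is represented, and how a model’s errors distribute across groups.

```python
# Toy dataset audit: representation and per-group error rates.
# (group, true_label, predicted_label) -- all values invented for illustration.
from collections import Counter

samples = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1),
]

representation = Counter(group for group, _, _ in samples)
errors = Counter(group for group, truth, pred in samples if truth != pred)

for group, count in representation.items():
    print(f"{group}: {count} samples, error rate {errors[group] / count:.0%}")
# group_a: 4 samples, error rate 0%
# group_b: 2 samples, error rate 100%
```

An underrepresented group with a markedly higher error rate is the kind of “unexpected ethical failure” such an audit is meant to catch before deployment.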

The World Summit AI will draw attendees and speakers from over 40 countries to its global conference, held October 11-12 in Amsterdam. “The aim with the agenda at World Summit AI is to bring together the first international meeting of the most influential ‘AI brains’ from the entire ecosystem,” says Sarah Porter, its founder. Porter is also the CEO and founder of InspiredMinds, a team of technology and science strategists. The London-based entrepreneur, who has launched over 20 tech initiatives, serves as an ambassador to the Royal Marsden NHS Trust on the use of technology in oncological advancement.

The audience for the Summit will include not only industry, academia, and government but also members of the general public. One of the three themes for the upcoming World Summit AI is “Safe AI”: Are we discussing innovation in the context of ensuring AI develops in the right way? Are we taking into consideration how society can absorb this exponential change? “This gives us the unique opportunity to foster collaboration between policymakers, business leaders, financial institutions, academia and citizens on a global scale,” adds Porter. “We see it as a first step to the formation of a global think tank at a scale not seen so far.”

Echoing concerns raised by Elsayed-Ali, Porter notes the need to weigh the growth of AI technology driven by private institutions seeking commercial advantage against the technology’s potential to benefit all facets of society. To ease this underlying tension, Porter believes “responsible AI” would mean “the development of advancements that are regulated by policies that are a) government approved and b) adhered to as a universal ‘ten commandments’ of internationally accepted laws by commercial, private and public institutions alike.”

“We are still in the early days of AI’s development,” states Jonnie Penn, “so it will take a diverse group of minds to parse out how best to deploy new technologies as they arrive.” Penn is a Rausing, Williamson and Lipton Trust doctoral researcher at the University of Cambridge and a New York Times bestselling author. He is the co-founder of The Buried Life and serves as an advisor to InspiredMinds.

“My hope is that the public will be able to use AI as citizens and not just as consumers,” adds Penn. “This type of use would certainly shape the direction of future AI research initiatives, in terms of which subjects get funding.” He points to the early history of computing in the 1940s and ’50s, which included drastically different ideas about the accessibility of computers. While arcanely designed systems were geared to academics and the military, openly accessible systems drew students and industry to computing. A recurring theme in Penn’s research is how technology is shaped by its user base. “I compliment the folks at Mozilla, CFI, and OpenAI for taking up this line of work.”

In addition to hosting experts and policymakers at the upcoming World Summit AI, the conference will also include invited students. “One area I’m interested in is how young people can become more involved in the next wave of AI’s development,” states Penn. “I think it’d be fruitful to have their voices included in discussions about how to shape digital rights legislation, for instance.”

Echoing the sentiments expressed by Data & Society’s Suchana Seth, Penn mentioned how the organizers of the World Summit AI recognize the need for imagination with regard to the ethics of AI. “[T]his will be a fresh opportunity,” adds Penn, “to pursue consensus around which issues, in regards to the ethics of AI, deserve our attention most in the years to come.”
