A Q&A with Futurist Amy Webb
On December 6 and 7, academics, medical professionals and even professional humorists, among others, shared their expertise and vision for how technology is changing the world, and how we live in it, at the Future Today Summit. Amy Webb, founder and CEO of the Future Today Institute and an adjunct professor of futures forecasting at the New York University Stern School of Business, spoke to IBM (a sponsor of the Future Today Summit) about what it means to be a futurist, how futurists predicted fake news, and the skills we will all need in the future.
When and why did you decide to call yourself a futurist?
Amy Webb: Fifteen years ago, I was a journalist based in Tokyo, reporting and writing about the future of technology, the economy and digital culture. I’d grown restless, though – my reporting was inherently a reflection on the past. I wanted to research, in a more concrete way, what was happening at the fringes so that I might anticipate what was on the horizon.
Around that time, I came upon Alvin Toffler’s seminal book Future Shock in a used bookstore. That’s when I first heard the job title “futurist.” There isn’t a degree in futurism – you don’t need a certification to become a futurist. So I read everything I could find from those who’ve established this field over the past century, people like Bertrand de Jouvenel, Ossip Flechtheim, Herman Kahn, Olaf Helmer, Arthur Clarke, Margaret Atwood.
Like those futurists, I have an interdisciplinary background: in college and graduate school, I’d studied math, game theory, economics, political science, journalism and music. I also knew how to code. I researched the organizations pursuing futures work, and I mapped out what my own future would look like. Eventually, I started doing that work professionally. My company, the Future Today Institute (FTI), just entered its second decade of providing futures forecasting for Fortune 100 companies, the federal government and organizations all around the world.
Why start the Future Today Institute?
AW: I feel very strongly about democratizing the tools of a futurist, and empowering everyone to do what I can do. To that end, I have a strong vision for where the Institute will be a decade from now. You can think of us as a research and strategic advising organization that answers “what’s the future of X?” for all kinds of organizations.
I’m working towards a campus where FTI will both serve our clients and welcome those individuals and organizations who want to reflect, research and work collaboratively on the challenges that will confront humanity in the farther-future.
At your recent Future Today Summit, experts from many different fields and disciplines spoke. How did ‘New Yorker’ cartoon editor and cartoonist Bob Mankoff fit in?
AW: At our recent Future Today Summit, we invited IBM’s Chief Technology Officer and IBM Fellow Rob High, as well as IBM research scientist Francesca Rossi to talk about the more pragmatic challenges and opportunities of cognitive computing. I wanted to get us beyond those polarizing conversations – “AI is going to take our jobs!” or “AI will solve all of our problems!” – and to instead have a meaningful conversation about bias in algorithms and machine learning, the double-edged sword of training techniques like adversarial images, and the like.
Since our intention with the Summit was to bridge the divide between scientists, technologists, business leaders, the federal government and everyday people, I wanted to make sure that the conversation didn’t get so far into the weeds that folks got lost in the conversation. Bob Mankoff, in addition to being the New Yorker magazine’s longtime cartoon editor, is an extraordinarily gifted speaker and has a deft understanding of AI. He’s partnered with IBM and Microsoft to see if computers might be able to algorithmically generate humorous cartoon captions, in fact. I thought he’d be the perfect complement in a conversation about the frontiers of cognitive computing, and he was.
How does the (near) future deal with fake news?
AW: This is a bigger and more complicated problem than most of us realize. One of the challenges has to do with data: what’s fake to [one person] may seem very real to someone else. As every research scientist knows, even empirical data is still subject to outside interpretation once a project is reported in the media or talked about by non-scientists. And that’s compounded in this age of social media. We have machine learning algorithms that are just performing their prescribed functions – deliver us content that we’re likely to click on. The platforms have a financial incentive for us to click, because more clicks equal more income. The people who create those realistic looking URLs and who’ve copied popular news sites like CNN – they are also financially incentivized to create scandalous headlines and news stories that people will click on. And our attention spans are decreasing because there’s just so much competition for our eyes, ears and minds.
Let’s be clear – it’s not because of the recent [U.S.] election that people suddenly developed this idea of fake news. Humans have been spreading misinformation since we were first grunting at each other in caves. About six or seven years ago, we at FTI forecasted that this would become an emerging problem. We recommended to a consortium of newspapers that they develop a verification system – a simple line of code – that would travel digitally wherever the news story did. At the time, there wasn’t yet the critical mass of problematic stories we’re seeing today, and without an immediate need they didn’t feel a sense of urgency. I still believe this is the best way forward – to certify and verify news organizations. But now people are looking to platforms to make that value judgment, and that makes me uncomfortable. Clearly something needs to be done – but everyone is scrambling, and that includes the social platforms. Leaders should never make decisions under duress. Had they been using the tools of a futurist, they could have foreseen today’s fake news debacle as a likely scenario and planned accordingly before the crisis hit.
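One way to picture the verification system Webb describes – a piece of code that travels with a story and certifies its origin – is as a keyed digital signature over the story’s metadata. The sketch below is purely illustrative: the function names, fields and key handling are assumptions, not details of FTI’s actual proposal.

```python
import hashlib
import hmac
import json

def sign_story(secret: bytes, story: dict) -> str:
    """Produce a hex digest binding the story's outlet, URL and headline.

    Serializing with sorted keys makes the signature independent of
    dictionary ordering.
    """
    payload = json.dumps(story, sort_keys=True).encode("utf-8")
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_story(secret: bytes, story: dict, signature: str) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(sign_story(secret, story), signature)

# A publisher signs the story once; the signature travels with it.
secret = b"publisher-signing-key"  # hypothetical key held by the outlet
story = {
    "outlet": "Example News",
    "url": "https://example.com/articles/1",
    "headline": "City council approves budget",
}
sig = sign_story(secret, story)

assert verify_story(secret, story, sig)  # untampered story verifies

# A copycat site that alters the headline fails verification.
tampered = dict(story, headline="Aliens approve city budget")
assert not verify_story(secret, tampered, sig)
```

In practice a real scheme would use public-key signatures, so anyone could verify a story without holding the publisher’s secret, but the principle – the credential travels with the content rather than being judged by a platform – is the same.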
We hear conflicting messages on tech: it’s moving fast, yet full AI, quantum computers, and other technology won’t be here for years. What’s your point of view? How do you separate the “speed” from the “reality of the future”?
AW: At FTI, when we’re tracking the movement of emerging technologies like quantum and AI, we don’t use the standard S-curve. Hype cycles and S-curves describe product adoption on the marketplace. They aren’t helpful tools to project the development of a technology and its impact on an ecosystem, and that’s because they don’t take into account the external events – like the introduction of new legislation, or a sudden natural disaster, or the launch of a surprise app or company that suddenly fascinated everyone in Silicon Valley.
It’s more useful to think of timing the way your GPS does. As the crow flies, your trip might take 30 minutes. But with traffic, icy roads or a car crash, the trip might take longer; it could take less time if you speed, or you may never get there at all if you careen off a cliff. It’s the same with an emerging technology, which doesn’t and cannot develop in a linear way, as the crow flies. Let’s not forget that AI didn’t just arrive last week. After the Dartmouth workshop in 1956, lots of people in the federal government, not to mention universities and would-be computer science students, all thought the newly named field of AI would speed ahead… until it didn’t. Their speed calculation didn’t take into account the fact that we simply didn’t have the compute power back then to do what the non-scientists had imagined.
What are your students at the NYU Stern School of Business excited about (and worried about)?
AW: Like everyone else, they’re both excited by all the technological advancements and very concerned about how they will change the job market. There’s an underlying current of insecurity not just here in the U.S., but around the world. As we stand on the precipice of modern cognitive computing, a lot of people are worried about whether machines will take their jobs. As with every new transformative technology, some people will inevitably lose their jobs – but entirely new jobs will be created, too.
So I’m constantly reminding everyone that since, as MIT professor Edward Lorenz once said, “only one thing can happen next,” the best we can all do is to dedicate ourselves to using the tools of a futurist and keep vigilant watch on the horizon.
What are skills everyone, regardless of age, needs to learn in order to keep up with the future?
AW: This is the purpose of my new book, The Signals Are Talking: Why Today’s Fringe Is Tomorrow’s Mainstream. It describes the six-part methodology we use at FTI. In the book, you’ll learn how to start at the fringe – how to seek out unusual suspects who are researching new ideas, methods and concepts. Next, you need to spot patterns. That’s done using what we call our CIPHER model either by answering some simple questions or, for the computationally-inclined, regression analysis. The next steps, which can be done by anyone, involve proving out your hypothesis, calculating the timing, writing scenarios, building a strategy map and then finally pressure-testing all of your work. The book can be read in a weekend, and then it’s just a matter of practice.
What is one technological advancement you want to see in your lifetime?
AW: I lost my mother at a young age to a rare form of cancer. Even the doctors at the world’s top research hospital didn’t have a treatment protocol for her. Not a day goes by that I don’t think about her. I am looking to advancements in cognitive computing that will yield advancements in precision medicine. I’m not advocating that we extend the lifespans of humans indefinitely – that’s a philosophical conversation for another time. I do think that science and technology will soon help ease human suffering, however, and that gives me great hope.
Watch the Future Today Summit
Follow Amy Webb on Twitter: @amywebb