Think Leaders

Fuzzy or Techie?! Why AI needs more interdisciplinary thinkers


Does AI need more fuzzy thinkers? How can we increase interdisciplinary perspectives in emerging tech? Can an interdisciplinary lens help us better foresee unintended consequences? In this episode of thinkPod, we are joined by Scott Hartley (author of The Fuzzy and The Techie: Why Liberal Arts Will Rule the Digital World) and interdisciplinary artist Carla Gannis. We talk to Scott and Carla about the disciplines missing in the AI conversation, how we can bring greater ethical thinking into AI, and the dramatic influence of sci-fi writers on emerging tech. We also tackle whether a four-year degree is an antiquated idea, how the mundane uses of AI can often be more important, and the borderless nature of data.

Some of the questions we tackle include:

-Why is it important to have an interdisciplinary approach to AI?

-What skillsets are missing from the tech landscape right now?

-How can we bring in a more ethical conversation to AI?

-How will education change in the future to adjust to a more interdisciplinary approach?

 

Some quotes from our discussion:

 

“Humanity kind of thrives from storytelling and myths and we want to mythologize any of these technologies as well. We’re developing them as an outgrowth of human goals and needs and desires. And so it is really important that we are looking at it from many different perspectives. I like to call it the prismatic point of view.” –Carla Gannis

 

“[W]e talk about AI versus ethics, code versus context, STEM versus liberal arts. And it's not a versus; it's how do we bring these two sides together. So the sort of notion that the liberal arts will rule the digital world is really sort of a homage to the fact that we need holistic thinkers, because in the future, as technology solves more of the routine tasks, it actually forces us to become more human, more artistic, more creative, more empathetic, more communicative. And how do you train for that? I think you've got to have sort of a pluralistic approach to education.” –Scott Hartley

 

“We're so focused on the physicality of borders and yet data and technology are inherently borderless. And so I think what you're touching on is we really need to be thinking more intelligently about how that information is shared, but also how it's impacting people differently based on culture and social norms, et cetera, in different parts of the world.” –Amanda Thurston

 

“I think the presumption for a long time has been artists are just these fuzzy, you know, feely people who aren't really thinking about these implications or participating in them. And more and more artists are taking proactive approaches.” –Carla Gannis

 

“I was recently in Madrid at the Reina Sofía Museum standing in front of Guernica, which is this incredible piece of art. And if you're familiar with the piece, at the top of the middle of the painting is a light bulb that's sort of shining light on this terrible, grotesque scene of war and battle and death. And it's kind of this reflection that we're seeing the world and our own humanity through technology, and at the time, you know, the light bulb was the technology. But I think in the same way, we blame Facebook, we blame Snapchat, we blame the technology, and it's only as reflective, you know, as human nature. And so in the same way that I think Picasso put the light bulb at the top of that painting to sort of say technology is shining a light in some ways on just the fundamental nature of humans, the same is true today.” –Scott Hartley

 

Don’t forget to rate us on iTunes!

 

Connect with us & the guests:

thinkLeaders @IBMthinkLeaders

Scott Hartley @scottehartley

Carla Gannis @carlagannis

 

Scott Hartley is a venture capitalist and best-selling author of THE FUZZY AND THE TECHIE (Houghton Mifflin Harcourt, 2017), a Financial Times business book of the month and a finalist for the Financial Times and McKinsey & Company's Bracken Bower Prize for an author under 35. He is a global keynote speaker on the future of work and human skills in our technology age. He has served as a Presidential Innovation Fellow at the White House, a Partner at Mohr Davidow Ventures (MDV), and a Venture Partner at Metamorphic Ventures. Prior to venture capital, Scott worked at Google, Facebook, and Harvard's Berkman Center for Internet & Society. He has been a contributing author at MIT Press, has written for publications such as Quartz, The Financial Times, and Foreign Policy, and has been featured in USA Today, Harvard Business Review and The Wall Street Journal. He holds three degrees from Stanford and Columbia, and has finished six marathons and Ironman 70.3 triathlons. He is a Term Member at the Council on Foreign Relations, and has visited over 70 countries. HartleyGlobal.com

 

Carla Gannis is an interdisciplinary artist based in Brooklyn, New York. She produces virtual and physical works that are darkly comical in their contemplation of human, earthly and cosmological conditions. Fascinated by digital semiotics and the lineage of hybrid identity, Gannis takes a horror vacui approach to her artistic practice, culling inspiration from networked communication, art and literary history, emerging technologies and speculative fiction.

 

Gannis's work has appeared in exhibitions, screenings and internet projects across the globe. Recent projects include “Portraits in Landscape,” Midnight Moment, Times Square Arts, NY and “Sunrise/Sunset,” Whitney Museum of American Art, Artport. A regular lecturer on art, innovation and society, in March 2019 Gannis was a speaker at the SXSW Interactive Festival on the panel “Human Presence and Humor Make Us Better Storytellers.” Publications that have featured Gannis's work include The Creators Project, Wired, FastCo, Hyperallergic, The Wall Street Journal, The New York Times, El País and The LA Times, among others. In 2015 her speculative fiction was included in DEVOURING THE GREEN:: fear of a human planet: a cyborg / eco poetry anthology, published by Jaded Ibis Press.

 

Gannis received an MFA in painting from Boston University in the twentieth century. In the twenty-first century she is a faculty member and Assistant Chair of the Department of Digital Arts at Pratt Institute. CarlaGannis.com

 

INTRO

 

Amanda: Welcome to another edition of thinkPod. I'm Amanda Thurston from IBM, joined by the thinkLeaders team.

David: I’m David.

Jason: Jason here.

Amanda: We went deep on a few topics.

David: Yeah, we did.

Amanda: With Scott Hartley and Carla Gannis.

David: Oh yeah. We were talking about interdisciplinarity and kind of this need for a mashup between the fuzzy and the techie, and how education, the four-year degree, might need to change.

Jason: I was also happy to hear the arts adequately represented here.

Amanda: Very adequately represented.

Jason: You should bring a pen and paper to this. You’re going to have like a book list.

Amanda: I mean, I think that the education piece was really interesting. Not only thinking through whether or not a four-year degree is still relevant, but just how we bring along generations of people to think more critically about how technology interplays with humanity and what we want that to look like going forward. Because, and we talk about this a lot in this program, we have a lot of autonomy in how we engage with technology. Technology is not just happening to us as a population; we are actively engaged in the evolution of technology and how it's impacting our daily lives. And I think that became very clear in the course of this discussion.

David: Oh, that's what I loved about today's conversation. It was like reasserting agency over technology.

Jason: So a lot of great examples of how humans, with their different kinds of expertise, are driving the applications of AI, and how humanity is at either side of that process.

Amanda: Well, and how it's not new. I mean, I think the wonderful thing about the arts is that it's so historically based, and if we start to apply some of that art theory to technology, none of this is new. The implementation of new tech, and how that changes the way that humans engage with the world around them, is a tale as old as time.

Jason: And none of it is strictly objective. I think that's the other important piece. None of it's new, and it's also all subjective to the people that are putting in the data and interpreting the data. I think we too often look at this like math, that there are universal truths and AI is giving them to us. And it's not; it's art. It's people. It's machines helping us do what we're already doing.

Amanda: Yeah, and that there’s this huge responsibility for us to be aware of the mundane implementations of AI that maybe we’re not paying as close attention to because we’ve got some hang up on the weaponization of whatever. But the reality is we need to be truly focused on what data we’re putting out into the world and how different versions of ourselves and our geopolitical situation are being represented in those data sets that are feeding AI.

David: This is where I thought Carla and Scott gave us great historical kind of reference. They were naming some great kind of books and other artworks that we should check out and we thought they were hooked up to the Internet, but they’re just really smart guests that we had.

David: Very smart.

Jason: You’ll enjoy this one.

David: Happy listening.

START

Amanda: Hey everybody. Welcome to thinkLeaders. I am joined today by Scott Hartley, who is the author of The Fuzzy and the Techie: Why Liberal Arts Will Rule the Digital World and formerly of Facebook and Google. Thanks for joining us. As well as Carla Gannis, who is an interdisciplinary artist and a faculty member and Assistant Chair of the Department of Digital Arts at Pratt Institute. Welcome.

Scott & Carla: Thank you.

Amanda: So a couple of weeks ago we had a conversation about the Stanford Institute for Human-Centered Artificial Intelligence. That is a huge mouthful, but since then its founder, Fei-Fei Li, am I saying that right, David? Is that on par?

David: Yeah.

Amanda: She’s come out and said that they’re really focused on, let’s just summarize, this is always dangerous but on being interdisciplinary and that AI has historically been about computer science, but now it’s about so much more than that. And I guess given both of your backgrounds, that’s kind of a fun place to start, which is why is it, in your opinion, important to take that more interdisciplinary approach to thinking about artificial intelligence?

Scott: Fei-Fei Li has a great quote in the New York Times from a few months back, where she said, we have this word artificial intelligence, but in fact there's nothing artificial about artificial intelligence. It's made by humans, for humans, to solve human problems. And so it's very human-centric. And so I think it's not surprising that we need to have an interdisciplinary approach, in the sense that we are applying AI toward the ends of solving human problems. So we need broader context than just the technology.

David: But do you think there's part of us that wants it to be kind of mystical? Because they talk about, like, machine bias, that a lot of times, if you put something in the machine, then you just assume that whatever comes out is right and it's magical. Do you think there's any kind of resistance to kind of demystifying AI?

Carla: Yeah. Humanity kind of thrives from storytelling and myths and we want to mythologize any of these technologies as well. We're developing them as an outgrowth of human goals and needs and desires. And so it is really important that we are looking at it from many different perspectives. I like to call it the prismatic point of view. Right? And particularly as an artist and someone who's been working in art and technology for a long time, seeing how, outside of just AI, these different technologies have kind of evolved and grown. The ones that succeed, I think, are the ones that focus on what humans find important, and particularly I think AI will be more successful if it does. Right.

David: What do you think some of the biggest areas are right now that have been kind of missing, in your opinion? So Carla, you're coming at this from an artistic kind of lens, but here you are kind of immersed in the tech space. Curious to, one, learn about your journey, but also: what do you think is kind of missing right now from the AI field? What kinds of disciplines do you think are most in need of being injected?

Carla: Well, I would rally, you know, for the arts because…

David: You’re not biased at all. [Laughing]

Carla: I'm not biased at all. No bias there. There is an exhibition opening this week on Thursday night, and it is a speculative show including all of these different artists who are approaching or looking at AI, again from a perspective that maybe doesn't happen in the lab or in the think tank sessions. So for example, Zach Blas is looking at Tay, the AI chatbot. Remember Tay went on Twitter, right? He and another artist have revived Tay now and given her a new voice to reflect on what that first experience was like, because it was terrifying. She ended up coming out being, you know, completely misogynistic and Nazi, all of these things, from just a few minutes on Twitter, wasn't it?

Amanda: Yeah. A lot of times I think what happens in product and design is that we create something that seems algorithmically correct or that pushes forward technology for technology’s sake. And we don’t think about the fact that it’s only as good as it can be if it’s used by a mere human. So I actually, Scott, I think with all of your background and the research that you did for your book, I would love to hear how you came to the conclusion that liberal arts is so important.

Scott: It was partly autobiographical, being somebody from a political theory background coming into tech, and partly observational, looking around Silicon Valley and seeing people like Susan Wojcicki, who runs YouTube as a history and literature major, and great entrepreneurs like Stewart Butterfield at Slack, Reid Hoffman at LinkedIn, and Peter Thiel, a philosopher, at PayPal and Palantir. So you kind of look around, and you hear this sort of prevailing narrative that it's technologists that are ruling the digital world.

Amanda: And yet…

Scott: And yet the sort of counterfactual is that there are a lot of people from the humanities who are kind of the big thinkers, who have partnered with a technologist to really solve a human problem. It really goes back to 1959 and C.P. Snow, this gentleman who was a physicist and a novelist. He delivered this lecture at Cambridge where he lamented this sort of two cultures of the humanities versus the sciences, how there were these two sides of the chasm and we needed people to sort of straddle both worlds. And I think the names have all changed. We talk less about humanities versus sciences, but we talk about AI versus ethics, code versus context, STEM versus liberal arts. And it's not a versus; it's how do we bring these two sides together. So the sort of notion that the liberal arts will rule the digital world is really sort of a homage to the fact that we need holistic thinkers, because in the future, as technology solves more of the routine tasks, it actually forces us to become more human, more artistic, more creative, more empathetic, more communicative. And how do you train for that? I think you've got to have sort of a pluralistic approach to education.

David: It kind of begs the question, Carla, and I'd love to hear your opinion on it: how do we actually get over that hurdle, right? If I enter college today, I still have to pick a major. I still pick a bucket. I'm still going to be kind of siloed away on some level. So even from your own background teaching, I'd be curious to see how we can kind of break that.

Carla: I listened to one of your talks yesterday, Scott, and I mean, I love that, not being binary about this. So it's not fuzzy versus techie, you know, it's fuzzy and techie. But you're right, David, that generally students come in their first year. At Pratt, anyway, we have this kind of foundations program, and in the foundations program all of these students are together. They're not broken into cohort groups. And so they're learning basic kind of design skills and art skills. And by their second semester they are choosing a major.

Amanda: Yep.

Carla: So that can lend itself to being problematic. Because at Pratt, for example, we have a School of Art and a School of Design, and we already know, I mean in terms of my own life, I'm an artist, I'm a designer and I'm an educator, you know, and we all wear many hats today. There's a lot of cross-pollination. And so what happens is we have the School of Art, we have the School of Design, which has communications design, and then in art we have film, we have fine arts, we have digital arts, where I'm the assistant chairperson, and we have photography. So, you know, all of these different programs, in the way that they are siloed, don't necessarily reflect that the contemporary artist or the contemporary creative works across disciplines and across platforms. In my department, I've helped organize a couple of different symposia where we're including representatives from all of the different departments to talk about different issues, including AI and VR, MR, XR, you know, emerging technologies and our relationship to those emerging technologies, and how we can look at them and work together from different vantage points. These new technologies aren't arising from a vacuum. One of my favorite books is The Victorian Internet, which talks about the days of the telegraph system and how much that kind of relates to our internet today.

Scott: Actually, I have a joke about the telegraph because people often think of…

David: There's not enough jokes about the telegraph. [Laughing]

Scott: Well, Samuel Morse, you know, the inventor of Morse code and the telegraph, was actually a portrait painter, and so he was an artist who was able to visualize sound by thinking in terms of lines and dots. It's a fascinating kind of example of orthogonal thinking or metaphor thinking, which I think you get from exposure to completely different disciplines.

David: Wow. So let's imagine, Scott, you are this interdisciplinary thinker. How do you know that there is an avenue for you inside of tech? Do you feel like you're a black sheep? Do you feel like there is a point to get involved? And even from your research with The Fuzzy and the Techie, I know you talked to probably a lot of people who were saying, okay, I have a background in CS and philosophy. Well, how do I get involved with some of these tech companies?

Scott: One of my favorite stories is actually about a TED fellow who was here in New York, Catie Cuan. Catie was a professional ballerina at the Met, and she had this interest in robotics, and people said, well, you know, you're not a mechanical engineer, what are you going to do with robotics? And she started volunteering at a lab at the University of Illinois and found out that actually one of the biggest challenges in robotics is end-of-life care. You've got to generate trust between older people and robots. If a robot is helping you sit up in bed or feeding you food in a hospital, these are highly engaged interactions that require a lot of human trust. And so, interestingly, she's one of the very few people who has a background in choreography and ballet who can teach graceful maneuvers to a robot. So fast forward a year: she's now a PhD student in mechanical engineering at Stanford, and she's focused on choreography for robots. And so I think it's a perfect example of somebody who had a deep background in the humanities but also had this deep interest in technology. She didn't feel locked out; she felt like she could apply this completely formative, new way of thinking about technology. Which goes back to Fei-Fei Li's point of having sort of an interdisciplinary approach to AI or robotics. I think it's not about having everyone be sort of at the average of somewhat humanities and somewhat techie. In some ways, it's being a complete humanist or a complete technologist and being able to really collaborate in interesting new ways.

Amanda: Bring that expertise in.

Scott: There's always a conversation of, okay, well, I have a data science team. I would love to hire philosophers, but when the rubber hits the road, I've got to do data science. So how do I have a team of philosophers and do data science at the same time? At Google we had this concept of 20% time, where you could work on a side project, which may become Gmail, in your 20% time. The joke was the 20% is actually called Saturday. [Laughter.] Monday through Friday, right? You've got the weekend if you want to do a side project. But I think we could approach this 20% talent concept where maybe a team has 80% sort of core discipline around the function that's required, but we really have sort of a 20% outside perspective, an outside bias, that poses big questions and sort of contrarian viewpoints, which maybe can steer the ship a bit toward the middle and avoid some of the blind spots.

David: And what are some of the big questions that you think we should be posing with AI right now?

Scott: One interesting example is in predictive policing, sort of taking data sets: why would we not deploy police officers more effectively to areas where there's a high likelihood of crime? Right. But we forget that the context behind the code is that we don't have sort of omniscience around where crime happens. We have reported crime data, and reported crime data is…

Amanda: Inherently biased.

Scott: Reflecting urbanization, sociology, wealth distribution, race, class. So there are these fundamental sociological issues where, if we take just a data science, just a tech approach, and we forget that we need an urban studies person on the team, or a sociologist on the team, or an anthropologist on the team, we may sort of fall victim to this belief in the magic or the mythology of technology, where we pump it through ones and zeros and we think that it's objective on the back end. But it's only as objective as the inputs and the biases that created the code.

Amanda: Yeah, I think you see that a lot in business too, and I'd be interested in your take on this, but you have these data scientist teams that sit within, let's call it a chief analytics office, or as part of a quant hedge fund, and they're really focused on numbers. And without having either an MBA's perspective, or somebody who's really focused on customer engagement or marketing or how we're actually implementing that information in the long run, it becomes sort of a scientific exercise in quantification for quantification's sake. Rather than having that backbone, or the metronome, of why are we doing this, what are the business implications, and how are we affecting people in the long run, or even affecting the strategy of our business in the long run.

Scott: Yeah. One of the great lines is from Cathy O'Neil, author of the book Weapons of Math Destruction. Cathy talks about how the choice of what data to pay attention to, what to include as well as what to exclude, are fundamentally moral choices. And we think of the benefits of big data, right? The benefits of more and more data. But we forget that we still have to classify, provide taxonomies, provide nomenclature, put it into certain buckets. And those are fundamentally choices that are made by humans. So without the metronome, without the ethicists, without the sort of thinking of where does this go, what are the ultimate ends of what we're trying to solve for as a human problem, we can sort of fall victim to this belief that big data will contain all the answers.

Amanda: Right.

Scott: Information, in a sense, which is what big data is, is not the same as knowledge, which is not the same as wisdom.

Carla: Right. And it's a kind of monotheistic belief system that's emerging, where we're just kind of believing in this big data or this AI or those kinds of things. And that can be terrifying. I was at South by Southwest recently, and Kate Crawford and Trevor Paglen were there. Trevor Paglen is an artist, and Kate Crawford has developed this AI [Now] Institute here in New York. And they were talking about some of these issues in terms of the way data is being quantified, and how we're not even thinking about these biases, that it's not recognizing African American faces, for example. This is a huge problem, and how some of this even passes through, that there aren't the checks along the way. And why aren't we thinking about the different people who could be participating in that conversation and in the development of those technologies?

Amanda: You brought up earlier that we're often positioning ethics versus AI rather than ethics in AI or ethics with AI. I would love to hear from either of you on how we get better at that, and sort of what the opportunities are for bringing a more ethical conversation to AI.

Carla: So there are other artists and activists, like Caroline Sinders. She travels the country and the world and teaches these seminars and gets people involved in, you know, developing datasets. And I mean, it's interesting, because I had been part of these Wikipedia edit-a-thons, kind of evolving the Wikipedia landscape in terms of representation, particularly for women on Wikipedia. And now this is happening in AI, and again, artists are becoming active in this. Because I think the presumption for a long time has been artists are just these fuzzy, you know, feely people who aren't really thinking about these implications or participating in them. And more and more artists are taking proactive approaches.

David: Would you say that, let's say you're an artist, you need to have some cursory knowledge of AI? Because I think one of the issues that seems to happen is maybe there are people who want to get involved in this conversation, especially around ethics in AI, but they might feel intimidated by the space and say, all right, well, people are going to be talking a language that I don't understand.

Carla: Of course, and that's quite an issue, and it's something, you know, that I've contended with for a long time, because I actually studied painting. I have two degrees in oil painting. [Laughing] Fortunately, my father had taken me to computer graphics conferences when I was a kid, and so something stuck. And so when I finished my second degree and moved to New York, I threw away all of my oil paintings and embarked on a career working with emerging and newer technologies. But just the same, in doing that, quite a few of the relationships in communities I was a member of, those people really felt threatened by my making this decision to start working with these new technologies in an art world context. And they felt threatened because they don't understand the language and they don't feel like they can relate to that. So I think that's why education is key, and that's really important. Last semester we had these two technologists who had just finished at NYU's ITP, and they're basically creating this Photoshop for AI, Runway, an intuitive way to work with artificial intelligence, discover new machine learning models and try them in simple visual interfaces. That's really exciting, something that kind of levels the playing field in a way.

Amanda: A couple of years ago, we did a collaboration with Marchesa and used artificial intelligence to make a dress, which I think was an early example of how creatives can start to use those technologies. And now in the last six, eight months, we've seen Sotheby's and Christie's come out and actually sell art produced by artificial intelligence for a lot more money than they thought they were going to be able to. It was pretty exciting. I would love your perspective on how we get more artists excited about the possibility, and not even just artists in the, you know, surrealist art style, which is what they sold at Sotheby's recently, but, more tactically speaking, starting to use those technologies as a backbone, or like a palette, in how they're creating art over time.

Carla: What has happened over the past few decades, there was someone like Lillian Schwartz, who was an artist who was brought into Bell Labs in, what, the sixties, right? She was essential to the development of computer animation. You know, I'm working with that. And so we need more situations like that. A lot of our programs don't have professional practices, or we don't have curriculum that stimulates that kind of branching out. A lot of students in my department really do know how to code, but some of them feel like coding is a barrier, not something that they can approach. As we build more and more platforms where they can use other skills, and those skills can be employed in working with these AI technologies, I think that will get more and more artists motivated.

Amanda: When they're using that technology, is the data set historical pieces of artwork? What are the data sets that are feeding into the AI that those artists might be using?

Carla: Yes, there were historical data sets. There were some gamification datasets, you know, so there was definitely an element of play to it, and drag and drop and these kinds of things.

David: This is exactly, Carla, why it needs to be interdisciplinary, right? Because as artists adopt more AI into their artwork, there's probably a host of legal issues, especially around copyright, about who is creating something when you have an AI creating something. Similar to that chimp that took the selfie two years ago. Who owns it? Did the chimp take it? Can the chimp own a copyright, or is it the owner of the chimp? I think you're going to have the same issue with AI as well. Is that something that you've kind of thought about, or has been kind of brought up in that field?

Carla: IP? No, that isn't something... I mean, with AR and VR that's definitely something that we are discussing, but it hasn't come up yet with AI. But that's definitely a significant one.

David: Yeah, we need more conversations like this.

Amanda: Well, it's interesting, because art is so derivative. I mean, that's how you train. You understand the masters and you understand what's worked and what hasn't worked, and artists individually are drawn to muses that may cross all different types of disciplines in the art world. But inherently, to create art, you are drawing upon history, and so this idea of IP in something that is computer generated is sort of problematic, I would think.

Carla: Well, I mean, there's a whole lineage of appropriation in the arts, and Jeff Koons, who was in the lobby, for example, has been sued several times for appropriation.

David: His balloon dog is in the lobby. Just to clarify. Not Jeff Koons. [Laughing]

Carla: Although I can just imagine. I can see him there. You know, and I'm sure Marina Abramović and Jeff Koons are now making VR work, so I'm sure soon they might be embarking on working with AI technologies, who knows.

Scott: What's interesting too is we're talking about art being created by technology or influenced by technology. I was recently in Madrid at the Reina Sofía Museum standing in front of Guernica, which is this incredible piece of art. And if you're familiar with the piece, at the top of the middle of the painting is a light bulb that's sort of shining light on this terrible, grotesque scene of war and battle and death. And it's kind of this reflection that we're seeing the world and our own humanity through technology, and at the time, you know, the light bulb was the technology. But I think in the same way, we blame Facebook, we blame Snapchat, we blame the technology, and it's only as reflective, you know, as human nature. And so in the same way that I think Picasso put the light bulb at the top of that painting to sort of say technology is shining a light in some ways on just the fundamental nature of humans, the same is true today.

Carla: Within academia there are a couple of conversations going on. One is students saying, oh well, this is the first time artists are working with tech, and that's not true. And it's definitely also not true that this is the first time artists are affected by technology. So if we think of Picasso, for example, we're thinking of fourth dimensions and what happens with cubism and the invention of photography. Painters had to kind of take a new direction because a photograph can depict reality more efficiently and more accurately than a painting, and so then we see the emergence of abstraction. And then with Einstein and theories of relativity, we see all of these artists going into cubism and all of these different ways of kind of shattering space and thinking about time and space in an entirely different way. Also, another conversation comes up, though, and this is something where my opinion has actually radically shifted. I used to talk about technology as just a tool, like a paintbrush. But computing is ubiquitous, and so we do have a different relationship with our computers, and there's more of a collaboration going on. And so even when we think about or talk about authorship, I think about all the works I produce. Should I be giving credit to all of these programmers at Adobe, for example, or the people behind Processing, which I use, or when I'm using AR technologies? And so authorship is extended, and I think there's a lot more fluidity there, which is exciting to me because it's, you know, the hive mind.

David: We're almost going to have to decide: is the technology acting like the paint, merely the material, or is it more substantial?

Carla: Yeah, the metaphor breaks down because it's a postmodern tool. Paint, or even a camera, serves one function; the computer is totally postmodern.

Scott: I can't wait for the next Oscar speech that thanks Amazon Web Services. [Laughter]

Carla: Or Netflix.

David: Yeah, it's bound to happen. So here we are talking about, you know, what influenced you. There was an interesting kind of collection of sci-fi that was done by Luminary Labs, a New York City consultancy, last year, and they kind of looked at Black Mirror, Her, WALL-E and other kinds of sci-fi that can kind of help us think about unintended consequences and tech ethics. I'd love to hear from both of you what has been influential, whether it's a book or a movie. What's affected your thinking?

Scott: I think these tools are really effective and can sort of shine light on some of the potential dystopian futures that may exist. But I actually think the more dystopian and frightening stuff is the far more banal: AI determining the ability to receive a loan, a social credit score, things that are happening already in places like China that are far less theatrical or histrionic, but far more real in some ways. And I think that these shows are good in that they point a light at some of these more theatrical corners, but I think that we also have to pay attention to sort of the underlying, more banal places where AI and big data could potentially harbor bias.

David: Sure.

Carla: I have a list of books. [Laughter] In terms of things that have been particularly important to me: Donna Haraway's "A Cyborg Manifesto" from, like, the 80s; Marge Piercy, who is a poet and science fiction writer. William Gibson has given many shout-outs to her, but she's never been recognized as much as a Gibson or a Neal Stephenson, Snow Crash, some really, really important and future-forming books and texts. Right. And when I say future-forming: if we think of, you know, the whole idea of the cyborg, or if we think of William Gibson's Neuromancer, he kind of created a culture, and then the programmers, as they were developing, were developing with these words, these worlds he had built and had kind of envisioned. And that kind of creates a trajectory for tech. Now, getting back to Marge Piercy quickly: one thing that was really significant about some of her early works, a speculative fiction one from 1976, Woman on the Edge of Time. She's able to travel. She's a woman who is in New York City at the time. She's a Latina woman, and she is considered insane and put in an institution, like many amazing thinkers throughout history, right? [Laughter] And she somehow is able to travel into the future. But she has two different trajectories, and this really, really is important to me, in terms of our fascination with the apocalypse. And I really liked what you were saying, Scott, about how it's the more subtle, invasive intrusions that we're not even seeing because we're thinking more of the theatricality of the apocalypse. She imagines one future where women are in the shadows; it's like a mega 1%, where there are high rises and everything looks really glitzy, but only a few can actually enjoy that. In the other future, she first gets there and she's really disappointed, because it's so green and there aren't tall high rises, and technology, talk about the Internet of Things, technology has really become part of nature.

Carla: So it's no longer artificial in the way that it's used. And these people have gone back to the land, and they realized and recognized that if they did not make some extreme choices, they were going to destroy the planet Earth. And she wrote this in 1976. Talk about prescience, right? If they weren't going to destroy the Earth, they were going to at least destroy humankind, you know? And she's a poet, she's an activist, and she was speculating on these things in the 70s, you know. Or Donna Haraway, or cyberfeminists from the eighties, you know, Zeros and Ones by Sadie Plant. All of that was particularly important to me as I was developing as a digital artist and a practitioner working with emerging technologies. And one thing that I think is really important for us all to remember is, you know, a new app does not a paradigm shift make.

Scott: Another interesting thing that I think is concerning is sort of the geopolitics, or the colonialism, around data collection. You know, we have certain standards of safety, for example, for self-driving cars in the States that may not exist in other geopolitical areas. So what's to stop a Tesla or a Waymo or anyone else from going in and collecting data elsewhere, where the risks to human life are taken more lightly, and then bringing that data back, sort of importing data? Because if you talk to, you know, Kai-Fu Lee, author of AI Superpowers, or other people in this space, basically all the data collection happening, the stuff that powers AI, this sort of oil of AI, is being imported into the US and China and nowhere else.

Amanda: Right.

Scott: And so there's this kind of new colonialism in other ways. Or you think about sort of the intersection of technology and politics, and you think of what's happening in the EU with Brexit. There is a sort of fraying, or a tension there, with geopolitics, and we sort of again point to these theatrical visions of cyborgs and all these things. We forget that in the very near future, technology that's being exported by China, technology that's heavily intertwined with the Chinese government, will likely be adopted by some EU countries and create sort of these friction points. How do you have a supranational geopolitical structure in the EU where suddenly China is effectively powering all the IoT in, let's say, Bulgaria, for example? Because people need better connectivity, they adopt a 5G standard, and suddenly you have the Chinese government involved in sort of running smart cities in an Eastern European country. And so these are the kind of rubber-hits-the-road, I think five-to-ten-year, questions.

Amanda: Or, two, we're so focused on the physicality of borders and yet data and technology are inherently borderless. And so I think what you're touching on is we really need to be thinking more intelligently about how that information is shared, but also how it's impacting people differently based on culture and social norms, et cetera, in different parts of the world.

Carla: There are these inequities across the world, and even in terms of IP, you know, if you think about how China deals with IP. I recently had an issue with one of my works of art that was being distributed and sold in China, and there was really nothing I could do about it, you know, and these kinds of things. And so, yeah, that all just ripples out.

Amanda: So you both have an understanding of education and an interest in how we get people to a more interdisciplinary state. How do we start to think through educating the next generation so that they're better equipped than we are to help influence some of these topics, and also have more of an interdisciplinary approach? Certainly Jen Shin, who we had on a few weeks ago, tweeted recently that she basically didn't think she was going to need math in her professional career and sort of stopped focusing on it. And I think that's something you hear a lot from people: well, I know I'm not going to be a mathematician, so why do I need to take calculus, or why do I have to take these classes? And we've explored whole language as a way of teaching English or teaching a language. Do you think that there's an approach to education that can help us get closer to a more interdisciplinary approach to learning?

Carla: In my department, one thing that I am rallying for is more humanities, for example. Because even in my department, we're an art and technology department at an art school, but sometimes the students come in with this expectation, because this is Gen Z and millennials and there are not a lot of job prospects, and so they're wanting to kind of just learn these skillsets. But my theory is, if you're just learning those skillsets... the thing I want to prepare them for, and we as a faculty cohort want to prepare them for, is adaptivity: being able to adapt, and having a diverse, kind of universal skillset that deals with poetics as much as it deals with, you know, coding, and understanding philosophy and the philosophy behind something, understanding semantic approaches to things and metaphors and idiom, you know, which is all part of the design process. And so, so essential. Because they can learn this stuff online too. Why pay a lot of tuition just to learn skillsets that are going to change? The software is going to change. I mean, I used to teach ActionScript and Flash. Remember that? [Laughter] Yeah. And so it's really more a way to approach technology with a kind of elasticity, and to also imagine new roles for themselves.

David: Scott, how do you deal with the question about the four-year degree? Is that something that you think is going to still be around in the future? Or is this something where we need micro-credentials and MOOCs and things like that?

Scott: Yeah, absolutely. I think…the big question of education: are you educating for a particular skillset? Is it a sort of insurance product where you're insuring against future, you know, joblessness? Are you investing in your future, or are you consuming something that's enjoyable to you? Right. Is education a consumption, an investment, or an insurance product? These are the large questions that we have. Or is it to train citizens? Is it to train holistic individuals who can work through and be adaptable to a future that is uncertain? So the four-year degree, whether it's four years or five years or three years, these are debates that one could have. I think the concept of learning collaboration... I think we live in a short-form content world where people have shorter and shorter attention spans. I read something recently about how somebody had posted a question on Quora, the question-and-answer website, and they said, how should I learn empathy? I know that in my future job I need to be empathetic.

Amanda: Oh lord, this is terrifying.

Scott: And somebody had responded, well, you should read books about empathy, and, facepalm, you want to say, well, actually, you should go on a trip. You should enjoy another culture. You should talk to somebody who is very different from yourself. You should read a book of fiction that puts you in the 18th century.

Amanda: Go back to kindergarten.

Scott: And so I think we're looking for these shorter, quicker answers to things that sometimes require sitting down and, like, learning a language, learning an instrument, learning to play, learning to collaborate. These are things for which I don't think there are shortcuts. So I think in the future we have to sort of blend digital skills with these analog skills that don't go away.

Amanda: Well, I think we're seeing this in K through 12 education too, where we over-indexed on leveraging technology in the classroom and giving kids iPads or computers. And now you see a lot of charter schools or private schools that have really gone completely back to no technology, no screens, and yet inner-city schools are now on technology, and it's this lag between where education theory is going and where it's been over the last five years. And we're starting to realize that engaging children, I think you brought up gamification earlier, but really getting kids involved in conversations about theory and topics that are maybe more macro, and then the learning comes from understanding. You know, if you're trying to understand animals, you start with the biodome rather than starting with an individual species. And I think that's just interesting. Maybe we haven't figured out exactly where we're going with that yet, but all of a sudden teachers are saying, you know what, this whole screen thing really isn't working, and we need to start thinking about how we engage students in ways that are more compelling and more human.

Scott: I think that the way we teach technology, we can sort of metaphorically teach in the same way that we might teach biology or other domains. We think, let's teach a kid HTML or JavaScript or CSS. But that sort of zooms all the way in; it's like teaching somebody about elephants rather than about ecology and about the biodome.

Carla: It's like Powers of Ten, you know, the Eames movie from the seventies. It's a powers-of-ten approach where you go macro and micro. And I think that is so essential. Yeah, agreed.

David: So how do we bring tech to the macro level right now, if we're kind of lacking that larger kind of viewpoint?

Carla: Well, one thing, in curriculum, like a four-year education for example, is everything is set up so that you're learning a particular skill set. And what we have is we have kind of tech courses, and then we have studio courses, and then we have a combined course. So one is where they're really conceptualizing and problem solving and doing a lot of critiques, and there's a lot of conversation; one where they're basically learning skills but not applying them specifically to anything, because sometimes that can be too taxing; and then one where they're kind of putting it all together. And so each semester is built upon another semester. I think that is kind of important, because it's a scaffolding of knowledge. And this whole idea of the T-shaped student: you still need to kind of go deep into something, but then you have that T at the top where you're branching out and bringing in all of these influences while you're going deep into whatever your practice may be and whatever kind of technology you're grappling with. Because, like, I've worked with AR, VR, I've worked with all sorts of things, but I also know I'm not an expert in everything. And so when I'm working on larger, more ambitious projects, I bring experts in to work with me, just like a company would. As an artist, I employ that too. And that's important.

Amanda: I mean, if you look at universities like Brown, where you don't have a major, right, and you don't have grades either, or whatever they've chosen, or Gallatin at NYU, where you're choosing a colloquium and you're defending a thesis in college rather than actually declaring a major, it gives you the leeway and the flexibility to say, I'm interested in a topic, but inherently that topic is going to be multidisciplinary. And I may be taking classes in the business school, I may be taking classes in the art school, communications, et cetera. But ultimately that ladders up to one cohesive conversation that I have to be able to defend in front of my peers, but also my teachers. And I think that's a compelling approach. Maybe it's not perfect, but especially looking at the four-year degree and trying to bring some of those other perspectives in, it certainly seems like one way to start to accomplish that goal.

Carla: One way we address that is through electives, but it is still, you know, pretty structured. And the one thing I've found, though, is sometimes for students it can be stultifying if there's no structure, particularly for undergraduates. So I think it's a delicate balance. Right now, we as a species... it is really difficult for us to access everything, to unpack everything. We process more information in a day than, you know, a person 500 years ago did in a lifetime. And so in education, the one thing that is important, why teachers are still important, why we hopefully won't be automated away so that students are just taught by AIs, right, is that we have empathy. And we also know when we need to set parameters for students, because they can be paralyzed by all of their options and choices; that can be just as stultifying in the technological age.

David: That brings up a question, though. Do people need a major for a sense of identity, as a form of identity?

Scott: As a form of identity, I think that's fine. I think we have to get past the one-and-done nature of framing a diploma, putting it on the wall and saying, "I'm a computer scientist, therefore I'm relevant in the future." Because the truth is, coding languages are changing on a monthly, annual basis. And so, interestingly, Zach Sims, one of the founders of Codecademy: when they were building Codecademy here in New York, Zach was a political science major at Columbia, and he tried to hire the coders to build Codecademy and couldn't find the coders that had the right skills, even those that graduated from top computer science programs at all sorts of great universities. And it was sort of proof of exactly the product-market fit of Codecademy: the fact that even the best computer scientists in America didn't have the most relevant coding skills for the modern world, because they'd been trained in C++ and sort of the underlying mechanics of how computer science works, but they didn't necessarily know Ruby on Rails. They didn't necessarily know some of the more relevant modern languages that he required for building Codecademy. So I love the metaphor of a passport when we think about education. We debate sort of the plane ticket mentality of flying to one destination or another, my destination's relevant and yours is not. But instead, think about this like a passport, where we each collect stamps from various places and try to build out the most holistic passport that we can. And so if you're somebody who loves the theater arts, take a data science class, and if you're somebody who loves computing, take a standup comedy class if it scares you to get on stage and forces you…

Carla: And I've got a joke. [Laughter] So I just wanted to couch my presentation today. I hope you all don't expect too much out of me, because, like, when I was in school I was a C++ student.

Amanda: No, but it is interesting. I mean, certainly at IBM, and this is not a plug, but we spend a lot of time helping our clients figure out how to maintain skills within their organization, and every time you implement new technology, that means completely transforming the way that your workforce works together, but also how they have the skills to stay relevant. I think the best companies in the world recognize that sending people out for a degree is not necessarily going to a workforce make. And so you need to have on-the-job training and really think about how there are practical skills that come along with each job and role, but also sort of higher-level disciplines that people need to be aware of, and that even within an organization it's cross-disciplinary.

Carla: In terms of obsolescence: in the field of the arts, Christiane Paul, for example, who is a curator at the Whitney Museum, is now working with technologies that are going obsolete and with how we can preserve these works of art, because artists have been making works with computer technologies for over 40 years. And so they are really working on preservation methods. And I wonder if that's something in the corporate sector. Is there any issue, or are there thoughts, about preserving past technologies? I mean, are you thinking about those things? [To Amanda]

Amanda: I think obsolescence is a huge challenge, not only for technology companies. If you think about science and how so much information is historically in paper form, you can actually go back and physically access that. But the digital world is fleeting at all times, and if you think your photos are safe on Facebook, you'd better think again, right? You back up those files. But yeah, I think it's a very unique challenge, and it's something that we always, I mean, I'm speaking for humankind, which is a really dangerous game, but I think that we always assume that the next solution will come out that will help us fix that problem. And you see a lot of technology companies that are taking that onus on themselves, because they're realizing that this magical solution is not going to happen unless we start to think through how we actually, even from generation to generation of an individual technology, make sure that there's continuity of information, and that we can actually maintain sort of the status quo while pushing forward the boundaries of what we're trying to accomplish.

Carla: Yeah. Like in all things, it’s finding that balance.

Amanda: Life is balance. [Laughing]

Carla: It really is.

Amanda: Yeah. Well, this was amazing. Thank you so much for joining us.

Scott: Thank you both.

Carla: Thank you. This was fun.

David: It was.

Amanda: Thanks for listening. We will have more content out to you next Friday.

David: Hey, and if you like what you heard, don’t forget to like and subscribe.

Jason: And rate and review and tell your friends. And we also put another thing out this week: we put up video from our last New York event.

David: Oh, it’s good.

Jason: Which you can find at…

Amanda: IBM.biz/AIForEveryone

David: Hey, that was collaboration.

Jason: Thanks for listening.

 
