Product design, ethics and human-centric digital transformation
Product design, ethics, & human-centric digital transformation with Kate O’Neill & Jennifer Shin.
Does human-centered design correlate with ethics, or is all product design strictly business-driven? In this episode of thinkPod, we are joined by Kate O’Neill (founder and CEO of KO Insights) and Jennifer Shin (data science expert, Founder at 8 Path Solutions). We dig into the news regarding Stanford’s newly announced Institute for Human-Centered Artificial Intelligence, whether ethics has a central role in product design, and the intersecting roles of consumers, government, and companies. We also touch upon aligning business objectives with human objectives, the terminology of bias, and our conflicted relationship with data security.
Some of the questions we tackle include:
-How successful will Stanford’s new Institute for Human-Centered Artificial Intelligence be?
-Does human-centricity correlate with ethics?
-What is the personal responsibility level needed in protecting our data?
-Is there a movement within startups to alter data collection practices?
-How should we approach bias?
Connect with thinkLeaders & our guests!
Kate O’Neill @kateo
Jennifer Shin @jennjshin
Some quotes from our discussion:
“I feel like a lot of universities make this announcement every few years…But I’ve seen a lot of these initiatives and I haven’t seen that much produced by them. I see a lot of the faculty join, their names are on the website, but if you actually start looking at what results have come out of that, I haven’t seen that many. So I am kind of interested in seeing what comes of it.” –Jennifer Shin, discussing Stanford’s new Institute for Human Centered Artificial Intelligence
“It seems like the time is now that we’re starting to have these conversations in more practical terms and more grounded reality. So I’m excited to see what comes of it, cause I think it is imperative and I think we’re starting to realize the sort of emerging nature of the opportunity and threat that AI and intelligent automation represents. So if it takes, you know, forming these lofty goal-representing think tanks to do something about it and to form some set of guidelines that people can sort of widely agree to and apply in business, I think that’s a great approach if that’s going to work.” -Kate O’Neill, discussing Stanford’s new Institute for Human Centered Artificial Intelligence
“The validation of data is actually really important. I think a lot of times people don’t think about it.” –Jennifer Shin
“We are looking at a three-part problem. One is, companies definitely need to figure out what their best approach is going to be that aligns with how they make the best business decisions and what’s good for the people on the other side of those decisions. People need to be responsible about the data we share, and we need to become more sophisticated about what it means to participate online and share our data and trade that for conveniences and, you know, security and things like that. Government needs to step in where those things aren’t incentivized appropriately.” –Kate O’Neill
“It’s about what the results mean and how you can use it and what you can’t do with it. Because every model has assumptions. You have to know what the assumptions are. If you break the assumption, the model does not work.” -Jennifer Shin
“Our philosophy (at my previous company, Meta Marketer), and it remains my philosophy today, is that if you focus on making a better customer experience, you will be more profitable as a company.” -Kate O’Neill
“I see way bigger violations happening outside of the business environment [using data inappropriately], in fact. The biggest violations are actually from the political campaigns…They’re using the same analytics you have in private industry and businesses in a different way. And in a way that actually is potentially much more damaging.” -Jennifer Shin
“As someone who used to be a tech writer in Silicon Valley, I heartily endorse [that programmers understand what they’re building]. Trying to get programmers to document what they were doing in their code was incredibly difficult.” –Kate O’Neill
“I always tell people bias in statistics is a technical term. Literally your expected value minus the actual, it’s very much a number…There is no model out there that you’ll build in statistics that does not have a bias. There will always be a bias.” -Jennifer Shin
“I use this term human-centric digital transformation in my work because I think it’s really important within the construct of business that we don’t abstract too far from the human level and one of the ways that I think business tends to do that is this fixation on profit as the only measure of success.” –Kate O’Neill
Amanda: Hey everybody. Happy Friday.
Jason: Happy Friday.
David: Happy Friday.
Amanda: This is the think leaders team. Amanda.
David: This is David.
Jason: And Jason
Amanda: So we were joined today by two very brilliant women. Kate O’Neill, who’s the founder and CEO of KO Insights, and Jennifer Shin, who’s the founder at 8 Path Solutions.
David: It was a great discussion. Jason, what did you think?
Jason: It got very math heavy.
Amanda: It did. I actually liked the balance of the math with sort of the more brand orientation.
Jason: Yeah, it was a nice crossover and you got to wear your data science hat, Amanda.
Amanda: For like two seconds, which is basically all it’s due. [Laughter]
Jason: Yeah, I was very impressed.
David: So no matter what your interest is, if it’s math and brand or human-centered design, I think we’ll scratch your itch on this one.
Amanda: Yeah, so we talked about human centered design.
Jason: Yeah, we did. That was, that was the main focus of this, but it went in a few crunchy and less crunchy directions.
Amanda: Tangential discussions.
Amanda: What’s interesting to me is that coming from a marketing background that we’re having these conversations in depth about math and data now in a way that is very meaningful.
David: Well didn’t you say, Amanda, on your last show that everybody really is a data scientist now?
Amanda: I’m a little afraid of your recall of things. [Joking]
David: I have total recall.
Amanda: But also just thinking through bias and ethics and what those things actually mean because we talk about them a lot and they are a little buzz wordy, but they’re important. And so breaking down what that means in statistics as well as in business.
Jason: Yeah. Well we also have been hammering away on this show for like the last three years about the rising number of hats that the CMO has to wear.
Jason: And the increasing amount of knowledge that that office has to hold. And this is just more of that, but it’s nice to see that, I think, people are actually more fluent in that than they were three years ago. People are actually better data scientists now and thinking about ethics.
Jason: Like, imagine the CMO office having to think about ethics and data science and all these things as we’ve been suggesting they might have to do for a couple of years now.
Amanda: It’s interesting because, while that’s true and in business there is a greater consideration of data and information, what we were also talking about with Jennifer and Kate was [that] consumers need to be more aware and they need to take more responsibility on themselves around these topics. And recognize that when you’re putting information out into the world, it’s probably not that private.
Jason: Hmm. And part of a marketer’s job is educating consumers. We’re a part of that ecosystem.
David: I think that’s one of the major takeaways, at least that I took from it: it’s all these different players, right? It’s the tech industry, it’s the tech workers, it’s us as users becoming savvy digital citizens.
Amanda: It’s the government.
David: What’s their role? Even Zuckerberg now talking about government’s role, so everybody is always focused on this.
Jason: He’s running. [Joking] Did we break that news in the show?
Amanda: No, we’re not there. We wish. On that note, let’s turn it over to the actual professionals in the room, Kate and Jennifer.
David: Happy listening.
Amanda: Hey everyone. I am joined by Kate O’Neill, who is the founder and CEO of KO Insights, and Jennifer Shin, who is the founder at 8 Path Solutions. Thanks for joining.
Jennifer: Happy to be here.
Kate: Thanks for having us.
Amanda: So Stanford announced that they’re creating a Human-Centered AI Institute. And given your backgrounds, which we will get into, I’m wondering how we feel about this, how we are thinking about AI and its impact on the human condition, and bringing that human-centered piece into this conversation in a meaningful way the way that Stanford has with this announcement.
Jennifer: I feel like a lot of universities make this announcement every few years. I’m on the faculty at Berkeley, I teach at NYU. So, you know, I obviously believe in education and all of that. But I’ve seen a lot of these initiatives and I haven’t seen that much produced by them.
Jennifer: Alright. I see a lot of the faculty join, their names are on the website, but if you actually start looking at what results have come out of that, I haven’t seen that many. So I am kind of interested in seeing what comes of it.
Jennifer: Because from what I can tell, it’s a lot of fancy people on the board. A lot of fancy people affiliated [with it]. Right. But I mean also fancy people have jobs and they’re busy and I’m not sure what their contribution really will be in this instance.
Amanda: Yeah, it’s like the best intentions, best laid plans.
Kate: But I think the time is different, maybe, from what you’re describing as the historical precedent for this. And it seems like the time is now that we’re starting to have these conversations in more practical terms and more grounded reality. So I’m excited to see what comes of it, cause I think it is imperative and I think we’re starting to realize the sort of emerging nature of the opportunity and threat that AI and intelligent automation represents. So if it takes, you know, forming these lofty goal-representing think tanks to do something about it and to form some set of guidelines that people can sort of widely agree to and apply in business, I think that’s a great approach if that’s going to work.
Kate: We’ll see.
Amanda: Do we feel that human centricity in this case corresponds to ethics? I mean, is that too far of a leap, or is that kind of the direction that people are going, where in order to be ethical you need to be human-centric?
Kate: It seems to be part of the problem set right now. How do you quantify or how do you define what ethics are as it relates to data, as it relates to machine learning, as it relates to AI in general? And I think probably a group like this is sort of assuming that they may be in the best position to take that on and to articulate a set of guidelines that are the codification of what ethics really are when it comes to these technology applications.
Jennifer: So I think the thing is, ethics in reality is legal, right? It really comes down to what someone can be held accountable to and when you can’t, right. Whereas product design is not; it’s business-driven. In that sense, they’re not the same.
Jennifer: I think, ideally, everyone wants to believe that they’re going to be the same or that somehow we can link the two, but they kind of work against each other, right? If you want to make more money, you want looser regulation, right? And then of course people want everything to be more regulated. But the problem with that, even, is if we hand it over to a government agency, right, you kind of have to think about who’s running a lot of these restrictions. And they’re not always the best ones; they’re run by people who, say, don’t build technology or don’t understand technology. So I think it’s a lot trickier in reality than the idea of it.
Amanda: Our CEO has come out and said, she said this at Davos I believe, that essentially, I’m paraphrasing, if we as companies do not start to self-regulate, then we will have regulations, and very likely those governmental regulations are going to be way more strict than what we actually need to do in order to protect humans or our customers. And I think it’s interesting that you’re bringing up the legal piece of ethics, because I do think there’s this boundary between good business and being able to continue to thrive as a business and making the right steps, sort of to Jennifer’s point, in order to avoid overly constrained regulations, if that makes sense.
Jennifer: Yes. I think she’s bringing up a really good point, right? Which is, if everyone’s just fast and loose with these things, like data privacy, how we’re building things, you know, security, then of course, as soon as there’s a huge controversial issue that comes up, the government’s going to step in and have to regulate, because there’ll be an outcry and they’ll want to avoid the outcry.
Jennifer: It’s better for everyone really. But I think that’s the unfortunate part: people don’t want to really think about that side of things because they’re making a lot of money, doing great, everything’s fantastic. But there’s the other side. As soon as something goes wrong, people will get very concerned and upset. And what’s so interesting about AI and ethics right now is I don’t hear people in technology talk about the danger of AI. I hear everyone else outside of technology talk about the danger of AI.
Amanda: Is that an information asymmetry or is that inconvenience?
Jennifer: Oh, we are nowhere near the point where everything’s so automated that we should really be worried that AI alone is going to destroy everything.
Jennifer: The more likely scenario is a person decided to use AI for the wrong thing, knowing the wrong things about it, and now we might be screwed because the logic is flawed. That’s my biggest fear, honestly. Someone sets something on course, iterates, and really just isn’t doing it right. And I’ve made that mistake myself, like with, say, AWS. If you ever did auto scaling, it turns out auto scaling is great unless it uses the wrong metric and keeps shutting down and exponentially increasing your hard drive, right. Then you get billed a lot of money, right? So even in the nicest scenarios, right, you can end up having AI not work in your favor.
Kate: I feel like we must have different circles within technology because I definitely hear people within technology talking about not necessarily the dangers of AI, but certainly the implications of AI at scale. And that I think is what we’re trying to get ahead of. To your point, the more you get ahead of regulating or deciding what the practices are, the more you don’t have to be reactive in the moment when something goes wrong.
Kate: So we are looking at a three-part problem. One is, companies definitely need to figure out what their best approach is going to be that aligns with how they make the best business decisions and what’s good for the people on the other side of those decisions. People need to be responsible about the data we share, and we need to become more sophisticated about what it means to participate online and share our data and trade that for conveniences and, you know, security and things like that. Government needs to step in where those things aren’t incentivized appropriately, where there’s not financial incentive, where there’s not the appropriate alignment between those things; that’s where the right-size regulation will fit in. So those three parts need to be in balance, and, you know, the right balance is going to be up for discussion. But I think all of those pieces are relevant to the process of figuring out how we’re going to make it into the next stages of this.
Amanda: It’s interesting that you bring up the impetus on people to be responsible about their data. Because I think we’re in a consumer economy, and this may be a controversial statement, but I think that consumers are very quick to put it on the companies for collecting data. And we’ve seen this in the backlash with Facebook, right, the misuse of data, and I’m not defending Facebook by any means, nor am I condemning them, because there are a lot of platforms out in the world that are doing similar things. But I do think that people have been putting a ton of information out into the world, and then they’re somehow surprised when that information is being used in ways that they didn’t necessarily know it was going to be used. And the Internet is a vast place. And to think that somehow by putting something behind a password or putting something behind a privacy wall where you have to request friendship or something, that you are somehow protected in what that information is and how it’s going to be used, is a little bit crazy.
Kate: The metaphor that’s imperfect, but that I think of with this, is roads and crossing roads. You know, if roads were first designed for horse-drawn carriages and it was reasonably safe as a pedestrian to go across the street, that doesn’t mean that you no longer have to become more sophisticated as automobiles start using those roads, and as 18 wheelers start using those roads. You can’t claim naivete if you’re just crossing the street and you assume that things are still happening at the rate of a horse-drawn carriage. But at the same time there is some responsibility for the 18 wheelers themselves, and for the government to figure out what are the obligations of those 18 wheelers in using that roadway. So I think there’s a kind of multifunction there, that the people who are interacting with the street need to understand why they’re interacting with the street, what they’re doing, what the risks are when they do that. But also we need to figure out what the right amount of traffic and the right amount of usage of that is, and what the restrictions need to be, and what the appropriate safety and precautions need to be to make sure everyone is kind of protected in appropriate ways.
Jennifer: The difficulty with things like this, though, is, especially with development that’s happened in the last 10 years. I remember having a lot of these conversations with the people who are now up there, you know, up the ranks; they’ve done very well for themselves. And we used to have this conversation about, you know, what data is available via API. Developers have known that it’s been available for a long time.
Jennifer: It’s just now that people are more aware of it publicly, I think, and also, obviously, of what the use was, that people are more outraged by it. But from the technology standpoint, it’s been there this whole time. And I tell people, I’m not just making this up. You can go look in a textbook somewhere and you’ll see that it shows you how to access that information, right?
Jennifer: Twitter, Facebook, they’re classic examples, and those tell you exactly how to get information about demographics and about people and who the customers are. The other side is, I’ve had, for instance, political campaigns come to me over the years and ask me to pull that information from, say, LinkedIn or other places. Now, I had an ethical concern about that. You know, coming from corporate, I know legally speaking they can come back later and say, hey, you did something wrong. But a lot of the developers had a hard time believing that; they were younger. They had less of the sort of exposure to the corporate environment where they could see that there are repercussions sometimes. But that’s a tough thing when you’re kind of outnumbered by people who are generally less experienced and don’t see anything wrong, because the laws don’t say I can’t do it, right? And with newer technologies and newer development, the bigger issue is, if it doesn’t exist yet, then how do we get people to be aware of the fact that it can always come back and be a problem later on?
Amanda: So you hit on a few things that you’ve done in your past. Do you want to take a second to just backpedal a little bit? I’d like to ask you both to walk through your backgrounds and your path through tech, and sort of where you are now in your career.
Kate: I have a 20-plus-year background in various parts of technology that came about as a result of studying languages in undergrad. And I was supervising the language laboratory at the University of Illinois at Chicago when I saw the graphical web for the first time. And it blew my mind, and I thought this is going to change everything. So I got very curious and wanted to learn how to build a website. So I built what turned out to be two of them, one of the first departmental websites at UIC, which got noticed by someone at Toshiba, because in those days, in 1994 or whatever, people were making manually curated lists of the websites that were new every day. Right. So Toshiba actually ended up recruiting me to come out to California to build their intranet for them, which I didn’t know how to do, but nobody else did either. So I figured it out. A lot of my subsequent years were like that, you know, nobody knew how to do this thing, but I was very curious and passionate about it. And so I figured it out. One of the things was being one of the first hundred employees at Netflix; I was the first content manager. I did a lot of the development on the site and led a lot of the projects that brought things like dynamically personalized content to the home page, things like that that are now standard.
Kate: That are now standard in e-commerce, but were firsts at that point in 1999, 2000, 2001. So I think, as you were saying, Jennifer, I was very close to a lot of these kind of early data decisions, and the fact that we knew how to do this stuff, the fact that we knew information that was passing around about people back then and could customize behavioral experiences, turned into even more of a career for me. 10 years ago I started a digital strategy and experience optimization agency called Meta Marketer, and a lot of the work we were doing was behavioral optimization: using a lot of the data that people had online, in analytics, to be able to customize the experiences for them. And our philosophy, and it remains my philosophy today, is that if you focus on making a better customer experience, you will be more profitable as a company. I think it’s just a question of having the right incentives and motives in mind and not being driven entirely by greed and profits. So that’s where we come to sort of the tech humanist era of my work: bringing these things together, and making sure that the business objectives align with the human objectives, that we can actually bring those forward at the same time.
Amanda: That’s cool. It’s crazy to think that you were thinking about personalized content at the turn of the century.
Kate: Twenty years ago. [Laughing]
Amanda: Yeah, and there are a lot of brands that are still terrible at it.
Jennifer: Yeah, I was about to ask if you did it in Notepad. But then you said ’94. I was like, there wasn’t Windows 95 in ’94, so I don’t think it was. [Laughing] I know, and it’s funny, because the reason I bring that up is I’m helping the American History Museum get data from Magellan, which is a satellite from the nineties. And so I try to explain to people, why is it so hard? Well, back then there was no Windows 95, and you can’t find any of the context for the kind of work you’ve got to do and the type of data you’re dealing with. Which is, it’s images, but not images when it’s transmitted.
Jennifer: You’ve got to re-render it.
Kate: [To Jennifer] What are some of the other projects you’re working on now?
Jennifer: So I do a lot of teaching. I teach at NYU: data visualization, data science, business analytics. I also have the day job, and I have my company that I created seven years ago now, I think. And I started out in consulting, I was in finance, hedge funds, private equity; the economy crashed and I went, maybe it’s a good time to look at another career. [Laughing] Probably because all the jobs were going away, I mean, like, record numbers, and I was still getting calls. But you know, it’s interesting when the economy crashes, because they don’t need sophisticated skills anymore. They just need to not hemorrhage money. So accounting was more popular suddenly than, say, more advanced modeling. So you know, I switched careers, and I realized that people were having a hard time understanding that there was going to be this big data movement. And I put out all these grant proposals at NASA and NSF, and no one thought it was going to be anything. So it’s interesting to have gone through that experience and now, you know, 10 years later, have it be such a field that everyone’s interested in. I feel fortunate in just being able to still do scientific work, right: furthering science, furthering engineering, helping those people who actually are really skilled in what they’re doing, and experts, who can’t just go back and kind of learn a whole new technology, and still working with those types of groups, helping further the bigger cause in technology, right. And then also math as well, being able to kind of reconnect with that community and not only just teach it but find new ways to teach with technology. Like, I was teaching this week and I showed my students: everything you do in Excel with a formula, you should be able to calculate. They’re learning statistics, but they’re learning, like, how to run SAS. That’s not statistics, though. I actually did it.
And at some point a student was like, “Wait, what did you just show us?” Because he didn’t understand: the numbers were exactly the same, but they were actually done differently, one’s calculated, one’s from the formula through Excel. But that’s the sort of thing I think you really need people to understand. From my work over the years, that’s what I’m hired to do: figure out where the discrepancy is, right? Whether it’s done properly, right. The validation of data is actually really important. I think a lot of times people don’t think about it.
Amanda: The thing that’s crazy to me about quantum computing, for example, is that we literally have no idea what happens in that machine in some cases, and mathematically that’s kind of mind-blowing, because one thing that is true of math is that there are answers. And so to have a machine that’s creating some sort of dataset and turning it into an outcome without having a clear explanation of why is really fascinating to me, as a former data scientist and somebody who’s an enthusiast.
Jennifer: Yes, I would definitely not use it unless you can interpret the results, right? Statistically speaking, as a statistician, I think that’s really important: statistical analysis. I always tell people the “statistical” part, calculating it, that’s only half; the other half of that term is “analysis.”
Jennifer: It’s about what the results mean and how you can use it and what you can’t do with it. Because every model has assumptions. You have to know what the assumptions are. If you break the assumption, the model does not work. Right.
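[Editor’s note] Jennifer’s point that a model is only valid under its assumptions can be made concrete with a short sketch. This is an illustrative example, not from the episode: a plain-Python least-squares line fit applied once to data that satisfies the linearity assumption, and once to data that breaks it.

```python
def ols(xs, ys):
    """Simple least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

xs = [x / 10 for x in range(-50, 51)]  # symmetric grid from -5.0 to 5.0

# Linearity assumption holds: the fit recovers the true line y = 3x + 1.
s1, b1 = ols(xs, [3 * x + 1 for x in xs])

# Linearity assumption broken: fitting a line to y = x^2 on a symmetric grid
# mechanically "works" but returns slope ~0 -- a meaningless answer.
s2, b2 = ols(xs, [x ** 2 for x in xs])

print(f"linear data:    slope={s1:.3f}, intercept={b1:.3f}")  # ≈ 3.000, 1.000
print(f"quadratic data: slope={s2:.3f}, intercept={b2:.3f}")  # ≈ 0.000, 8.500
```

Both calls return numbers without complaint; only knowing the assumption tells you that the second result can’t be used.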
Kate: I was gonna say, it’s interesting too that you bring that up, because it brings us back to the discussion about AI ethics and the whole notion of explainable AI. Do you feel the same way about the explainability of AI? That you should be able to back something out, and if you can’t back it out, you shouldn’t use it? Is that kind of what I’m getting from what you’re saying there?
Jennifer: Yeah. Cause I mean, really, at the end of the day, it’s going to be some sort of algorithm, right? Which is basically, you know, some set of equations, again, some sort of model, and then some assumptions that you made to start that. There is no universal model that answers everything. If you think about just the world as it is, you come with one idea, there’s always an opposite side of that. It’s almost very zen, right?
Jennifer: Everything’s going to have a contradiction if you don’t have boundaries on what it is you’re seeding. And I think the really important part is actually putting it down on paper. Having some record of it is the other part that I think people tend to neglect cause they’re like, “I got to build fast.” Like that’s great, but you also need to know what you’re building right? That’s also important.
Kate: As someone who used to be a tech writer in Silicon Valley, I heartily endorse that. Trying to get programmers to document what they were doing in their code was incredibly difficult.
Amanda: This is a leap, so bear with me for a second. I think that there’s something in assumptions and understanding business purpose, right? That when you’re making an assumption there is an inherent bias, but there’s also an inherent thought process behind it, and I think, as an organization that’s starting to use data, when you’re making those assumptions, oftentimes you’re coming to it with whatever your business purpose is or what your business outcome is. And I’d be interested, Kate, in your perspective additionally, because I think that with digital transformations a lot of times there’s a mapping of what those actual intentions are. And then Jennifer, on your side, I think helping to think through the bias statement, and whether, when you’re coming to these assumptions, you should be doing it with the outcome in mind, or really, as a mathematician or a statistician, is it your sort of ethical duty to make sure that you’re making those assumptions without thinking about the outcomes or trying to drive to a specific answer?
Kate: I use this term human-centric digital transformation in my work because I think it’s really important within the construct of business that we don’t abstract too far from the human level, and one of the ways that I think business tends to do that is this fixation on profit as the only measure of success. Whereas I feel like if you take a business back to, often, its origin story, you can usually find some sense of why the company exists in the first place, what they set out to do or to make or to change in the world. And that sense of articulated purpose, it doesn’t have to be a humanitarian purpose, but just a sense of why the company exists and what it’s trying to do at scale, is really what I think helps drive the sense of digital transformation as well. So you can get a clear understanding of what it is the company is trying to achieve at scale and what they need to do to amplify that objective through culture, through brand, through operations, and finally through the data model and through the technology they use to amplify and accelerate that. Without that, I feel like you can be off in a very unfocused direction. You can go in any direction really, and it won’t necessarily suit your original purpose. It won’t necessarily suit the company in the long run, and it certainly won’t necessarily suit humanity. The strategic purpose that sort of precedes that work, I think, is incredibly important, and it’s what keeps the human in focus in that work. The other part of that that sort of helps prove that idea is that, in my view of all the work I’ve done around humanity and around technology, I’ve come to this understanding that what I think makes humans human is a sense of meaning. We crave meaning. We seek meaning. We ponder big questions, we want to know why things are the way they are.
We think about what we’re communicating to each other in all of our language spoken and unspoken, like these levels of meaning from the semantic all the way out to the existential or cosmic are part of what makes us, us. And I think purpose is the shape that meaning takes in business.
Kate: It’s what binds a team together. It’s what creates morale and a sense of autonomy, that people really want to do the thing the business exists to do. So if you can really get that clear and really codify that into culture and into brand, it creates this kind of unified flow within the company, and it creates a real magnetism from outside the company, to the market. So that’s a really important construct, I think. Then we come to the bias question, and I’ll be quick on that because I’d love to hear, Jennifer, what you have to say on that. But I feel it’s pretty evident that we encode our own selves into the code we write, the machinery that we create. So our biases are in there. Our values are in there. And it really behooves us, I think, to be very clear and explicit about what we have codified into those machines, so that as those scale through automation and through AI, we will see the best of ourselves scale as well.
Kate: Jennifer, I’d love to hear your thoughts.
Jennifer: I always tell people bias in statistics is a technical term. Literally, it’s your expected value minus the actual value; it’s very much a number, right? If you’re going to build a model, it’s not this values-based idea. In statistics, at least, it’s very specific terminology, so that’s all it really is: a differential. There is no model out there that you’ll build in statistics that does not have a bias. There will always be a bias. Then we look at other things, like as you grow the number of samples, the data points you get, does it at least converge towards something, right? There are other things we do to look at the bias. It’s not as simple as “Is there bias or isn’t there bias?” I think when people talk about it, and you’re talking about it in the world of data, this is where there’s a lot of misconception about what bias really is. When we estimate something, we’re never going to know all the information. It’s not possible. So you’re always going to be off by some amount. Now, I think the bigger question that people are really trying to get at is, “Was it on purpose?” But intent is not something that you can just figure out from looking at code or looking at data. You kind of have to look at people. What did they gain from it? What was the incentive, what was the advantage, right? It’s really about why they did something more so than whether or not it’s in there in the code. If you start going down the road of trying to say someone did something based on just the code, that’s really difficult, because sometimes we write code and we don’t realize that maybe there is a bias. It’s not always so evident. We don’t know everything about every scenario. One example might be including a variable that you didn’t realize might be impacting your results. There’s a difference between an honest mistake in that sense, not realizing something, versus it being something intentionally done.
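[Editor’s note: the statistical sense of bias Jennifer describes, the expected value of an estimator minus the true value, can be illustrated with a short simulation. This is a sketch we added for readers, not something from the episode; the example estimator (the “plug-in” variance, which divides by n) and the function names are our own choices.]

```python
import random

# Statistical bias: E[estimator] - true value.
# Classic example: the "plug-in" variance estimator (dividing by n rather
# than n-1) is biased low; its expected value is (n-1)/n times the true
# variance, so the bias is about -true_var/n and shrinks as n grows.

def plugin_variance(sample):
    """Variance estimate dividing by n (the biased version)."""
    n = len(sample)
    mean = sum(sample) / n
    return sum((x - mean) ** 2 for x in sample) / n

def estimate_bias(true_var, n, trials=20000, seed=42):
    """Approximate E[estimator] - true value by repeated simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(0, true_var ** 0.5) for _ in range(n)]
        total += plugin_variance(sample)
    return total / trials - true_var

# The bias never vanishes for finite n, but it converges toward zero:
print(estimate_bias(1.0, n=5))    # close to -0.2
print(estimate_bias(1.0, n=50))   # close to -0.02
```

This also illustrates her convergence point: the differential is always there, but a well-behaved estimator’s bias shrinks toward zero as the sample grows.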
Jennifer: So I think that’s really the thing we should think about more: people create algorithms, and of course data can also have bias, but maybe not intentional, meaning there could be a differential in, say, socioeconomic classes. It might not necessarily be the person who wrote the algorithm. It could be the people who collected the data. It can exist elsewhere within the process of getting these models out there. And so I think we should be careful about figuring out where the bias comes from, right? What the real motivations were. We still have to go back to people and think about why someone would create something with bias, if in fact we believe there is bias in there.
Kate: But I think the bigger takeaway, and my sense of this tech humanist idea, is that from a business leader standpoint, it needs to be important that business leaders recognize that that exists, right? That there is bias in algorithmic models. And inherently, as you say, statistically there is this concept of statistical bias, right? But then we load the other semantic meaning onto that, and we’re talking about bias on a more sort of socioeconomic or cultural level. And there’s that too, in many cases. And there are these unintended consequences, like, you know, YouTube sort of leading people into increasingly extremist content because they’re looking to create more engagement and those videos create more engagement. So the model leads to more engagement, which unintentionally leads to more extremism. That’s not intentional bias or an intentional flaw, but it’s there, and it’s a real-world consequence. So I think there’s this kind of bigger discussion that business leaders need to have and be aware of, which is: you’re still responsible. You still need to pull that out and figure out what you are going to do in the real world about the consequences of the actions that happen through these biases, through these unintended consequences, and through the models that fundamentally fuel your business.
Amanda: Yeah. Bringing up sort of the human in this. I think it’s interesting, especially from a brand standpoint where when we talk to chief marketing officers, chief experience officers, everyone’s searching for that one to one addressability, but inherently in a model you’re looking at a large swath of a population and you’re making assumptions about behavior that’s based on many, not on an individual. And so over time, the way that that experience is built is really based on not you as a person, but rather your sort of statistical peers. I think it’s an interesting conundrum as marketers, as organizations where we’re trying to get more personal, but in doing so, we’re actually getting even further from it being about that one person and rather about a whole population set.
Jennifer: Well, actually, statistically I think it’s somewhat the opposite, in a way, right? Because the reason we use statistics is that you take a sample of the population and you expand it out to, say, the entire US population. You take, say, ten people from the local area, ten people from the next local area, and then you want to get an idea of what everyone across the US is doing.
Jennifer: So I know everyone wants to make everything addressable, but to your point, I mean from a business standpoint, it’s not necessarily profitable, right? To get that.
Kate: Or effective.
Jennifer: Yeah. This is the thing: I think everyone’s so caught up in this idea of doing something that they’re not thinking about the fact that if I can get something for cheaper, it’s better. For instance, population projections, right? If you have a sample and that sample is going to converge to the US population, it costs less money to use the sample, and it’s going to get so close to the population. I mean, you save money by not having to get every single person to give you information. So that’s kind of the idea behind it, if you really think about why we use population models in statistics. And from a business standpoint, it saves you money. I mean, think about the census: it’s every ten years. By the time we get that data, it’s, you know, ten years too old for whatever anyone’s going to use it for. So I think that’s the other side of it: you think about what you really get for the amount of effort you put into building an algorithm that’s so customized. If you’re not actually getting business value back, and customers don’t feel like there’s a real benefit to it, then you may lose money on building that model.
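[Editor’s note: Jennifer’s sampling argument can be made concrete with a small simulation we added for readers. The error of a sample mean shrinks roughly like 1/sqrt(n), so accuracy gains flatten out quickly while survey cost keeps climbing, which is why a modest sample beats polling everyone. The synthetic “population” below is our own illustrative assumption, not real census data.]

```python
import random

# Build a synthetic population of 1,000,000 survey responses
# (mean 50, standard deviation 15), then estimate its mean from
# samples of increasing size. The error falls off like 1/sqrt(n),
# so going from 1,000 to 10,000 respondents costs 10x more but
# only buys about a 3x accuracy improvement.

rng = random.Random(0)
population = [rng.gauss(50, 15) for _ in range(1_000_000)]
true_mean = sum(population) / len(population)

for n in (100, 1_000, 10_000):
    sample = rng.sample(population, n)
    sample_mean = sum(sample) / n
    error = abs(sample_mean - true_mean)
    print(f"n={n:>6}: estimate={sample_mean:.2f}, error={error:.3f}")
```

The diminishing returns in that loop are the business case she’s describing: past a certain sample size, extra data collection costs more than the precision it buys.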
Kate: There are different applications for this, and the closer you are to a logged-in, intimate experience, with, like, a bank, let’s say, the more you might want a one-to-one sort of experience, something that simulates the intimacy of interacting with a teller or someone who really understands your need. But it’s still going to be in context. You’re still going to be in the context of being a banking customer at that moment. You don’t necessarily want the bank to be addressing you about your son’s soccer game or something like that, right? There’s an overreach that can happen with that. Those are many of the risks we run up against when we start looking at the blended big data picture and how companies can put that into play with behavioral targeting and things like that these days. I don’t think we need addressability. I don’t think we need one-to-one in most of the business contexts where companies are striving for it.
Kate: I think we just need to understand relevance and context as it relates to where the human is in the world and what we’re doing when we find the brand, when we interact with the brand. My book before Tech Humanist, Pixels and Place, is largely about dealing with this offline and online integration conundrum. So how do you figure out, if you’re a retailer and someone’s coming into your store, how do you interact with them based on the interaction you’ve already had online? How do you bring something meaningful to that interaction that respects the fact that this person has already invested some research and some time into sort of figuring out your brand and what they want from your brand, while also not overreaching and making them feel like you’re being a jerk right now?
Amanda: Being creepy. [Laughing]
Kate: Being a creep. It’s a hard line to walk. I think that’s where the word empathy gets overused, to the point that it almost doesn’t have any meaning anymore, but it’s still very important, I think, to step into what is going to be appropriate and relevant for somebody in that context, when they approach the brand, when they interact with the brand at that moment, in that place. You know, given what you know about them and given what they know about you, how can you create the most meaningful experience in that moment, using all the data at your disposal but not using it in a way that overreaches what is appropriate?
Jennifer: And that’s one of the things I actually teach my students: context, to your point. I tell them, what’s the difference between, say, customer acquisition versus retention? Even things like, you know, customer service when someone calls. There’s a Harvard Business Review article I had them go through that gives an example of a customer service person: you call, and they already know the problem you’re dealing with. The better way to do it is to actually track each time they call and what they called about. So when they call, you go, “Hey, I know you called three times prior about this issue,” rather than having the person be exasperated and have to re-explain why the last thing didn’t work. Right? Yeah.
Amanda: There is literally nothing worse than getting transferred on a customer service call and having a next person have absolutely no idea what you were just talking about.
Kate: I did an article a while back about whether a bot should have to tell you it’s a bot, and it ended up being part of Tech Humanist. Some of the research I did in writing that article was about what consumer sentiment is around interacting with automated service bots, chatbots, and the main takeaway seems to be: as long as it’s going to get the job done, as long as my data’s going to be secure, and as long as my information is going to be transferred from point to point within the workflow, bring it on. That sounds like a great way to get fundamental business questions handled. Like, if I have a login problem or something like that and all I want to do is get my password reset, a chatbot is hands down going to be the best way to go about that, barring some sort of assistance integrated into the website itself. In whatever context, it’s going to be the simplest, and it preserves the state of the interaction. So that, yeah, like you’re saying, you don’t have to explain, “Well, as I just told the customer service agents before you, here’s my situation.” [Laughing]
Amanda: Yeah. SmarterHQ did a survey of consumers and came out with what I guess is essentially being called the privacy paradox, which is that 79% of those consumers were worried that companies knew too much about them, but at the same time 90% of them were willing to give behavioral data if that meant they were going to have a better shopping experience. So it’s like, we’re terrified, and yet our experience is so important to us at this point that they were like, eh, at the end of the day, as long as you’re going to use it effectively and keep me safe at the same time, I’m all in.
Kate: Yeah, we have such a longstanding relationship with this tradeoff between our personal data, the privacy and security of that data, and the conveniences that are afforded to us. So as long as we believe that data is going to be kept private and secure, there’s usually a reasonable exchange of “I can give you data as long as you give me conveniences or discounts,” or whatever, something that feels like an accommodation for that data. I think it’s just getting more complex. The more people realize how much of it is out there, and the more people become attuned to just how much data sort of floats out in free space, conceptually, about themselves, the more it starts to become “How do I know that not only are you not going to abuse this data now, but that you’re not going to sell it in the long term?” And I don’t know that consumers necessarily know how to articulate all these concerns, but there’s this emerging sort of bubble of concern that maybe our older notion of privacy and security is insufficient for what we’re going to see going forward.
Amanda: It’s man and machine. Like, people you are a part of this. [Laughing]
Jennifer: Right, right. I mean, that was one of the big things I was always concerned about with the rise of all these startups: what happens when the startup gets bought out? Sometimes companies buy startups for the data. Startups, you know, have you fill out forms and disclose information, and there’s nothing in there that says, if we get bought out, this is what happens to your data. So I think that was one of the biggest concerns I had as all of these startups are out there getting acquired, and people don’t really talk about it.
Amanda: I’m going to bring us full circle with a hypothesis that, please, tear apart. You brought up this idea of purpose and brands, and I think a lot of what we see in the startup environment, based on how venture capitalists invest, et cetera, is that you have these companies starting up whose pitches are “We’re going to be the Uber of healthcare” or “We’re the X of this,” and rather than having an actual brand purpose, they’re just bringing some element of another company into a new market. Meanwhile, we’re talking about Stanford starting a program around ethical AI, or I guess that’s not really what it is, it’s human-centered AI, at the heart of Silicon Valley. And I wonder if there is something in that, as you bring up, Jennifer, this idea that a lot of these companies get acquired. Is there an important geographic element to Stanford launching this, and is there maybe a movement starting to exist within the startup environment where we’re going to see more responsible data collection and use from some of these early-stage companies going forward?
Kate: I feel like I’m hearing that as the zeitgeist. As I was working on Tech Humanist and as I’ve put it out into the world, it seems to coincide with an awful lot of tech CEOs, many of them based in Silicon Valley, talking about a humane approach to technology or humanizing technology or some combination of those terms. So I do feel like this is a moment that we’re in where culturally we all kind of understand that there seem to be these kind of enlarging, em-biggening [laughing], let’s say, consequences of what it’s gonna mean to have robotic process automation and AI and Internet of Things and big data and all of these kinds of emerging…
Amanda: All the buzzwords.
Kate: But all of the emerging forces that kind of bring data and technology to bear in ways that really scale their experiences and consequences as they interact with the human environment. Like all of us in our human day to day experiences are going to be increasingly encountering that mesh of technology and data. So I think it is increasingly important that we have this conversation. And even if it’s just a sort of proactive PR move…
Amanda: Yeah, the lip service.
Kate: Yeah, I feel like we’re at least hearing that conversation be raised. I’m seeing it an awful lot from CEOs of tech companies, so I think it’s encouraging at least that the message is out there. Even if it is lip service at this point, we’ll see where it goes from here.
Amanda: Yeah, I agree. Cannes Lions just announced that the CEO of Cambridge Analytica will be there this year. That is a bold move, man. I don’t know how that’s going to work out for you. But interesting.
Jennifer: I find that idea very interesting too, this idea that you can’t fail if you’re high up enough. That’s a problem, a real problem, right? If you’re in a position like that and you made something fail, or, you know, you used data you shouldn’t have, shouldn’t there be bigger repercussions as a society? Like, we should not want to go hear them speak. And I think that’s part of the issue that comes up. In a lot of the data stuff that I’ve seen, I see way bigger violations happening outside of the business environment, in fact. The biggest violations are actually from the political campaigns, right? Where they need to target people for very motivated reasons. In business, you want to make money. It’s a much more general idea. You can make money in a lot of different ways. In political campaigns, when you’re trying to get someone elected or trying to get someone into a position of power, there’s a very specific motivation driving that. There’s more of a driver, more incentive. They’re using the same analytics you have in private industry and business in a different way, and in a way that is potentially much more damaging.
Amanda: Are you going to participate in digital census now that they’ve launched? Are you all in on digital or are you keeping your information behind your locked door?
Kate: Are we still going to be around? The robots won’t have killed us yet? I don’t know. What about you? [To Jennifer]
Jennifer: Well, I actually know people at the Census Bureau, so I feel like I have to be an early adopter here. I know they have worked really hard coding it.
Kate: Alright, I’ll follow Jennifer’s lead.
Amanda: Awesome. Well, I learned a lot. Thank you so much for being here. People need to be more responsible about their data. That was a key takeaway, but they also need to hold people accountable and bring retribution to those who are abusing the data.
Jennifer: I feel like this was a successful podcast.
Amanda: Awesome. Thank you.
Jennifer: Thank you.
Kate: Thank you.
Amanda: Thank you for listening.
Jason: Like, subscribe. But what else do they do, David?
David: Find thinkLeaders on Spotify, iTunes, Google play, soundcloud or wherever else you listen.
Amanda: All the places.
Jason: Google plus [Joking]
David: We’re everywhere.
Amanda: All right. Cheers for the weekend.