AI for the Enterprise

How can AI improve the employee experience?


iTunes | Spotify | Overcast | Pocket Casts | more!

How can AI Improve the Employee Experience? In this episode of thinkPod, we are joined by x.ai co-founder & CEO Dennis Mortensen and Ben Jackson, founder of For the Win. We talk to Dennis and Ben about hiring algorithms and the danger of bias, whether HR teams are equipped to make data-driven decisions, Inbox Zero versus Inbox Infinity, and the possibility of cultural change. We also discuss whether engineers can predict human behavior, the ethics of employee nudging, how startups will start operating like a mid-sized European soccer team, and the definition of a sandwich.

Some of the questions we deal with include:

-How is AI being utilized to improve hiring and how talent is attracted?

-Is there a difference between an employee experience and a customer experience?

-How do we start to think about employee output and helping keep employees engaged?

-Have we all become data scientists?

-Is it appropriate to nudge employees?

-Will email persist in the future?

“I’m just so optimistic that AI, or just productivity software in general, as of this very moment might just be able to remove many of them [time-wasting chores] so that we can get back to what we were hired to do.” -Dennis Mortensen

 

“I think one of the big challenges with algorithmic bias in hiring is that very few HR teams in my experience are even remotely equipped to understand the inner workings of the tools that they’re deploying for these things.” -Ben Jackson

 

“I think I’ve surrendered now to the degree for where I just assume everyone’s reading everything. And anything which I do in private is really a public post at some point in the future and I need to specifically go out of my way if I want something in private.” -Dennis Mortensen

 

“I don’t know that I believe it’s easier to fix the algorithms than to fix the culture…I’m a great believer in the flexibility of the human brain.” -Ben Jackson

 

Connect with Dennis, Ben, and IBM thinkLeaders:

thinkLeaders @IBMthinkLeaders

Ben Jackson @BenjaminJackson

Dennis Mortensen @DennisMortensen

 

Read the transcript of our conversation below, which has been lightly edited for clarity – and tune in for new episodes every Friday!


[INTRO]

Amanda: Hey everybody, this is the thinkLeaders team. On Friday, March 29th, we made it through yet another week. Congratulations, us. I’m Amanda Thurston.

Jason: I’m Jason.

David: And I’m David.

Amanda: And today we had the opportunity to speak with Dennis Mortensen, who’s the cofounder and CEO of x.ai, and Ben Jackson, who’s the founder of For the Win. We talked about a lot.

Jason: That was a lot. It was very long. Such a huge conversation.

David: I’m feeling a little jealous too about their inbox zero. I need to kind of go back and redo my inbox. It’s looking a little messy.

Amanda: What I think I was most surprised by in that conversation was that they were also on board with my inbox infinity approach. It’s okay if you’re at zero or infinity, but if you’re somewhere in between, that’s not good.

Jason: Right? You gotta go maximalist or minimalist and nothing else.

Amanda: I appreciate their optimism that there are going to be solutions for me on email.

Jason: There was a lot of optimism in a lot of directions around this. We did not just talk about email. We talked about a lot of AI interfacing with hiring and workplaces and employees and customers and all those things.

Amanda: Inspiring joy in employees.

David: But there was certainly some disagreement, though, about how we can do that inside the workplace. Don’t you think?

Jason: Yeah. There is pro-people optimism, pro-AI optimism, and a healthy degree of skepticism in both directions. Yeah. This was a good one.

Amanda: Ethical design. I thought that was an interesting discussion, that we should be designing technology for all outcomes, nefarious and good, and that really should start on day one in the engineering.

David: I really responded to the idea of trust. Right. Do we trust our employees when we’re using software that’s going to nudge them or is that kind of Orwellian? Is that too much? I thought that was a big area of discussion.

Jason: And trust works in all directions in that exchange too.

Amanda: Okay. Well, I feel like we’ve given enough teasers at this point. We should just get into it.

Jason: Let’s do it.

David: This is going to arrive in your inbox.

[INTERVIEW]

Amanda: Hey everyone. I am joined today by Dennis Mortensen, who’s the cofounder and CEO of x.ai and Ben Jackson, who’s the founder of For the Win. Welcome.

Ben: Thank you.

Dennis:  Thanks much for having us.

Amanda:  Thanks for being here. Can we do quick intros? Can you tell us a little bit about yourself and what you do?

Dennis: Sure. I run a tech startup in New York that’s called x.ai. And we’ve spent the last five years trying to craft these intelligent agents that can schedule meetings on your behalf. So imagine having a human assistant you might pay 60k for. Now you can get access to one of these assistants, pay eight bucks, and hopefully be equally happy.

Ben: I run a consulting firm called For the Win, HR consulting for startups. Typically startups that are a little bit naturally more averse to HR.

Amanda: All right. So you both work in the sort of human industry, right? Either replacing humans or helping humans, as the case may be. Can we talk a little bit about how AI is being utilized to improve hiring and how talent is being attracted?

Dennis: I think it’s interesting for anybody listening here to go back in their email archive and find that initial offer letter where they got hired to do some seven bullets in some posting that was put in place. Look at those seven bullets, go to your inbox, look at the last 100 emails, and see how many of those emails you can match to the seven bullets that you were hired to do.

Amanda:  [Laughing] Yeah.

Dennis: And if it’s not all of them, what’s remaining is probably just a set of chores that you will have to do to do your job. So if you’re an account manager, you weren’t really hired to schedule meetings, you were probably hired to do demos or speak to customers. If you’re a recruiter, you weren’t really hired to file reference checks and upload them to Greenhouse or what have you. You were probably hired to speak to candidates.

Amanda: Right.

Dennis: And I think there’s just a whole host of chores that somehow came along with the jobs that we have today, where I’m just so optimistic that AI, or just productivity software in general, as of this very moment might just be able to remove many of them so that we can get back to what we were hired to do.

Amanda: I mean, technology at its best removes friction from people’s lives, right? And I think what you’re getting at is really empowering the employee to do the higher value work that we actually want them to be doing rather than all the bureaucratic grunt work that no one wants to do.

Dennis: What I like about that little thought experiment is that it’s one part real, because we all kind of sit and stare at that inbox and it’s a little bit scary, and all of the information is already digitized. As in, it is already in digital form in your inbox. It might be unstructured data, which we now need to figure out how to get a handle on. And my own little slice of the universe, which is that random request where, “Hey Dennis, I’m in Manhattan in May. Can you meet up for a Diet Coke?”, I need to somehow figure out how to solve. That is something where we are so close now to being able to go deploy these agents in many forms to remove all those chores.

Ben: I think a lot of people are gravitating towards AI for hiring because they’re overwhelmed.

Amanda: Yeah.

Ben: You know, on the one hand, technology has made it much easier to find a job. It’s also made it much easier to apply to a job. You can click a button and import your LinkedIn CV. And I think there’s a little bit of a tragic glut of information right now, both for candidates, who have hundreds of companies they can apply to at the click of a button, and then, as a result, hiring managers who are flooded with literally thousands of resumes.

Ben: I was talking to somebody who works in a media company yesterday. They were hiring for an editor position and they manually went through over 1,100 CVs. And the process for that, if you’re doing it by hand: basically you look at the cover letter, you have about five seconds to read the first paragraph or so, and you just check to see, did they mention this job and did they give me a good reason to explain why they’re interested in this job. And you can lop off about half of them that way.

Amanda: Wow.

Dennis: I am empathetic to the challenge of having to read some 1,100 cover letters and only having so much time to do it. And I’m a fan of deploying some sort of machine solution to it. I do think there are two buckets. I happen to be in the less controversial bucket, which is that we work on the very mechanics. So you do find 20 of those individuals in that pool of 1,100 with whom you probably should have some initial screening call, or whatever your process might look like, and I’m in the business of making sure that that can be scheduled. Those are just the very mechanics. The part which you talk about is in the decision-making bucket, where you now need to go deploy an algorithm and feel okay and comfortable with whatever decision making you have deployed here. And that is not easy. As in, not easy to get right. It’s very easy to implement. It is super easy to create some sort of filter so you go from 1,100 down to 20. Getting it right is a four-hour kind of podcast. But it’s a super interesting space to be in.

Amanda: Yeah. And we talk about bias a lot. And this is a space where it’s particularly poignant to start to consider how you eliminate people and what criteria you use and how you do it in a way that allows for diversity of all kinds in the resumes that actually get pushed to the top.

Ben: I think one of the big challenges with algorithmic bias in hiring is that very few HR teams in my experience are even remotely equipped to understand the inner workings of the tools that they’re deploying for these things. In much the same way that when people talk about unconscious bias, they say like, you know, the brain is a black box. If you’re trusting your gut, you’re not really conscious of what’s the inner workings there. These HR teams are basically working with a black box that’s telling them who’s worth looking at, and if you don’t know how that was built, if you don’t know the data that it was trained on, if you don’t know the biases of the people who designed it and how thoughtful they were about mitigating bias, then good luck.

Amanda: Yeah.

Dennis: I am actually optimistic, because as you alluded to, if we want to change whatever humans we put in place today, that’s going to take one or two generations. It seems completely naive to suggest that through good training, education, and a change of morals, they will come out in two years and be better recruiters and a little bit less biased than they were before. I think that is just set. I have no aspiration of any of those humans changing. I do think, though, that software where we have some built-in positive bias, which is not that we don’t want bias, we just want a positive one, can be implemented much, much faster, and not a generation out. It’s just a very dangerous territory to be in, and we need to pay attention to anything which we implement.

Amanda: Yeah, certainly the first step is having a conversation about it, which we are definitely having. But I do think that it’s harder to change an entire culture of an organization than it is to fix an algorithm. That being said, a lot of the training data that we’re using is a black box, and I think people still feel a lot of discomfort with the information that goes into these outcomes.

Ben: I mean, I’ve never tried to fix an algorithm. Let me walk that back. I’ve definitely fixed some algorithms lately. I’ve never tried to fix a hiring algorithm, but I have worked on cultural change. And when I think about the sheer amount of difficult work involved in eliminating bias from a hiring algorithm dealing with hundreds of thousands of candidates, and when I compare that to the incremental work of doing cultural change inside a single organization or even a hundred organizations, I don’t know that I believe it’s easier to fix the algorithms than to fix the culture, to be quite honest. I’m an optimist and I’m a great believer in the flexibility of the human brain.

Dennis: I’m a…

Amanda: I feel like we’re coming to an impasse on this one.

Dennis: I’m impressed that you find that level of positive foundation in the humans which I’m also looking at. And when I look at them, I don’t see willingness to change. What I see is some version, call it 0.8, and that is the version I got, and that’s the version that I’m going to be sticking with until they retire from the workplace.

Amanda: We’re going to follow up and see who’s right in a couple of years. Just shifting gears a little bit: is there a difference between an employee experience and a customer experience, in your opinion, at this point? We’re talking about those employees, helping them to change or to adopt new tools, etcetera. It’s not that different from what we’re trying to get customers to do on a daily basis as salespeople or marketers or brands. How are those things colliding, or how are they different?

Ben: I think a lot of people have figured out that your employees have a choice to work for you. And it’s almost starting to become a cliché now that your employees are your customers, at least in my world. I think, if anything, I’m starting to think of the employees as even more demanding than the customers. And the reason is that, you know, if you want a better Internet connection, there are many places in New York where there’s nothing you can do to get one. There are lots of state-sanctioned monopolies. There are unofficial monopolies like Google. But if you want someone to pay you money to give them some results, no one has a monopoly on that. And so ultimately, if we’re thinking about who can vote with their feet the most easily, in many cases it’s the people who work for you.

Dennis: So I’m on my fifth startup, and I’ve been a fan of this idea that whoever we get together, founding team, early employees, and as we grow, we turn up at one fixed address. This is called a job. We meet early, we work late, we do what we’re supposed to do, and then we get paid come the end of the month. I think I’m changing a little bit, and slowly, and this is a counterargument to my comment from before where I don’t believe people can change.

Amanda: So, now you’re arguing with yourself? [Laughing]

Dennis: 25 years in, I think I might be changing this. So the whole justification for the corporation is that there’s a very high transaction cost in having some service done outside of the corporation. It’s simply just cheaper and more efficient to have an employee. So if you’re large enough, having a new button designed by a contractor is suboptimal, so you have a UI/UX person on staff, and he or she will have a whole host of things they’re supposed to do. I do think, though, with both the increased set of demands from the set of employees you have and the decreased transaction costs in the marketplaces, where if I do want a button, or do want something transcribed, or do want any number of things, that there might be a crossover in the not too distant future where this whole idea that this startup should really be 200 people down on, you know, 100 Broadway, perhaps should be 15, but the rest is really just a network that is loosely associated with this idea that we’re trying to solve for. I’m not sure I see this current upward trend of the employee having ever more power being sustainable. I think they might just work themselves out of the corporation and into a much, much looser setting in the future.

Amanda:  Okay, so what does that look like?

Dennis: Like a mid-sized European soccer team, where you have some campaign, it’ll take a season, you get some good players. For every launch, which is a game on Sunday, you have a set of people who come in and help. You pay as needed. If you win some, you can buy some better players. If you don’t, you change them out.

Amanda: So they’re all contract workers?

Dennis: I think so.

Ben: It sounds like what you’re saying is that as employees become more demanding of their employers that in many cases employers will decide that the cost of full time employment is no longer worth that trade off. Is that what you’re saying?

Dennis: That’s exactly what I’m saying. And I think, coming back to the idea that there must be some sort of transaction cost, it’s extremely low inside the organization. I can just send an email, even if you have no kind of project organization. Even with all the processes that we put in place, it is still slightly easier than imagining doing it with outsiders. But I think that is about to change, where you’ll see even some organizations take a module, that Exchange server or some other module, and say, you know what, we’re going to buy that elsewhere, because we’ve reached a point where that seems almost easier, since then we don’t have to work through the friction of the organization.

Ben: I think that it’s very easy for senior executive leaders to undervalue full-time employment. And the reason I think it’s very easy for them is that a lot of the costs of turnover, a lot of the costs of hiring, frankly don’t show up neatly on their balance sheet. So no one is accounting for all the salaries that are paid out to people working at a net negative output over the course of three months and then maybe working at 50% output for the next three months. Very few HR teams that I talk to can tell me offhand what their average cost per hire is, including the operating expenses for the salaries of all the recruiters, the job boards, and all the time they spend. And when senior leadership doesn’t have visibility into those things, it’s very easy to dismiss them and say that they don’t exist.

Amanda: How do we start to think about those things? How do we start to think about employee output and helping keep employees engaged?

Ben: There is a wonderful graph. It’s called the employee lifetime value graph, by Greenhouse. Basically, it shows you, without putting real hard numbers to it, what the typical trajectory of an employee’s output looks like. And I’m going to draw it in front of you, even though our listeners can’t see it, but it starts at a negative, eventually moves up to a positive over time, and then gets a little bit higher as you develop them. And then at the point where they check out, and maybe give notice or maybe don’t if you’re not lucky, it just drops off precipitously. And each one of those points where it changes is an inflection point that you can end up influencing: either by using more evidence-based hiring to figure out who the best person for a particular role is so that they ramp faster, or by onboarding them and giving them all the context they need to do their job, by developing them, or just by creating an environment where people are retained longer.

Dennis: So we’ve surprisingly seen that be one of our sales arguments. Really any software product today, if sold into the enterprise, will have some sort of ROI argument, where you give me $100,000 and then over time I’ll give you 150k. That’s probably a good argument. But if they didn’t buy that, where we ended up many times was this idea of: I hear you, Dennis, perhaps there is some return, yeah, the opportunity cost, you’re not full of BS, but… And I liked the idea of just applying a little bit of joy. If I ask any one of my teammates, “Do you like scheduling meetings?” “No, I [freaking] hate it.” If I can remove that, I might have injected just a small portion of joy. And we actually ended up in many of our pitches flipping this whole set of arguments: forget about the return. The cost of this solution is really, it’s a Slack-type price, so it’s not dramatic. Just imagine the amount of joy you can inject for a small amount of money.

Ben: It’s interesting, because your customer experience is your customer’s employee experience. You know, if there’s one thing that I keep coming back to that’s very, very different on consumer product teams versus HR teams, it’s that the quality of the data those people have on their customers is an order of magnitude more sophisticated than the quality of the data that these HR teams have on their employees. And it’s very difficult, not impossible, but difficult, to collect the level of data on your employees that you need to make data-driven decisions.

Amanda:  It’s interesting because marketers have become data scientists, right? We’ve seen that progression over the last six years.

Ben: HR teams are becoming data scientists. Have you met the folks at Dropbox?

Amanda: And that is the direction we’re moving in. Yeah, you have to have that data layer in order to be able to make people happy. And we talk about bots being your brand advocates, but your employees are your brand advocates. I mean they are the first line of defense in talking to people about your company, but also attracting talent and bringing in the best of what’s out there. And so you would think that it’s just logical.

Ben: I agree. And I think also, not only are they your best advocates, but they’re also potentially your largest detractors. And I am consistently, I don’t want to say surprised anymore, but I guess a little bit disappointed by the gap between how most executive teams treat, say, their App Store reviews versus how they treat their Glassdoor reviews, and how often those execs are checking the App Store to see, well, how is our app doing, versus how often they are going to Glassdoor and saying, well, what are our employees’ concerns? How can we address this? Read that little note that says “advice to management” and see, maybe they’ve got some advice for me.

Amanda: How do we feel about technology like Keen or Vibe that are AI products that are actively monitoring either emails or Slack engagements, ways that employees are communicating with each other to get a finger on the pulse of their emotional state, if you will.

Ben: For me, if an algorithm is monitoring my Slack conversations, especially if it’s monitoring private conversations, frankly, as an employee, I kind of want to meet the CEO of that company and sit down with them for about 30 minutes to understand whether or not they have a moral compass. Because the level of trust that I am extending to that company is higher than the level of trust I extend to my employer themselves.

Dennis: Yeah. Yeah, that’s definitely true.

Dennis: I think I’ve surrendered now to the degree for where I just assume everyone’s reading everything.

Amanda: Everybody’s reading everything.

Dennis: And anything which I do in private is really a public post at some point in the future and I need to specifically go out of my way if I want something in private.

Amanda: So Dennis has a stack of burner phones in his bag, is what we’re finding out? [Laughing]

Dennis: My point is that if you really want something to be confidential, I think we need to both agree on what particular digital channels we are using and how we are going to handle the very information that we are discussing. My kind of flip side of that: I’ve become slightly more comfortable in how I interact with digital channels. But I think, in all fairness, I’ve also been just a slightly nicer guy, because if you know that what you say will be some tweet at some point, or some screenshot on some platform, you just think it through twice.

Ben: But what does that do to trust? I just keep thinking about this investigation that happened in Brazil somewhat recently, called the Car Wash investigation. And there was one quote from a politician that really stuck out to me, which was that the only place you could have a safe conversation in the capital, Brasília, was in the middle of a swimming pool, because everybody was recording all of their conversations with everyone else. The corruption was so widespread that the only way they could keep themselves out of prison was by having enough dirt on everyone else that they could cop their own plea deal. And in a world where anybody can screenshot what you say and tweet it, or where everyone is listening, I worry about what that does to trust.

Amanda: I agree. I think that it’s a really interesting conundrum.

Dennis: I agree…

Amanda: Dennis doesn’t believe in trust either.

Dennis: I don’t think we should conflate trust with transparency.

Amanda: Okay.

Dennis: And what I think I might be suggesting here is just that most of what you do would probably be more healthy if it was out in the open. This is not some sort of inner Mark Zuckerberg of me; it’s just one way of asking: anything which you discuss in some executive meeting is supposed to be for the better of the organization, so what is it in this meeting that can’t be exposed to the remainder of the organization? It doesn’t ring right to me. So I’m just extremely positive towards the idea of being overly transparent. I just tend to believe that I win more by being aggressively transparent.

Ben: I like defaulting to transparency. My concern comes when transparency is no longer optional.

Dennis: Fair.

Ben: And I can think of many, many, many, many, many situations, conversations in a boardroom meeting or in any other meeting where things like employee dismissals or impending layoffs or any number of sensitive subjects just can’t be transparent. This is why HR teams are so lonely. [Sardonic]

Amanda: [Laughing] We really need to implement employee engagement for our HR teams so that they can be happy too.

Ben: I’m certain that a lot of them have a lot of thoughts on that.

Amanda: I mean, I think Dennis is promoting the Elon Musk approach of transparency, which has sometimes worked out well for him and other times not so well. But it is interesting to think about the difference between transparency and forced transparency, and I agree. I think a lot of times where the technology breakdown happens with AI is that people feel like they’re being forced into a dataset.

Ben: The people who are designing the algorithms in many cases are engineers, and in many cases have been engineers for a very long time. I have been an engineer for a very long time, so I can say this from my own experience. When you’re talking to a computer for eight to 14 hours a day, that is time that you are not spending interacting with people. And when you don’t have that regular, constant interaction and that feedback loop, you become less effective at predicting other people’s behavior and their reactions, because you don’t understand what’s behind that behavior, because you haven’t, by virtue of your profession, put in the time to get that feedback loop. And I think oftentimes when people look at moves by large companies, they say, oh my God, these people, it almost feels like they have no shame, or like they have no feelings, no empathy. In many cases I believe it’s because the controls are not built in at the design phase to ensure that all angles, not just the most optimistic scenarios, are covered. Does that make sense?

Amanda:  Yeah.

Ben: It’s the same as, you know, when you see a website break on a different-size screen because they only tested it on the smallest iPhone.

Amanda: Right.

Ben: But for people.

Amanda: It’s thinking about human centric design and behaviors in that process.

Ben: And being thorough about it. You know, there’s a school of thought that I love, it’s called red teaming.

Amanda: Okay.

Ben: Something that the US military and lots of other militaries love. But it’s basically the art of predicting or imagining all possible futures, including the worst possible futures that could be out ahead of us and working backwards from those nightmare scenarios…

Amanda: Like one with deviant bots that are trying to use algorithms against us? That would never happen, though…

Ben: Oh, of course not. And those bots will never be able to look at the video footage of people throwing rocks at Teslas. You know, if you’re designing a hiring algorithm and you were approaching it the way that a red team would approach it, instead of saying, “How can we make this find the best people as quickly as possible?” you say, how can we do that, and also, let’s imagine how this can be used by bad actors. Let’s imagine how this can be used by good actors in ways that end up producing bad results.

Amanda: Right. Yeah. What are all the possible externalities?

Dennis: So in some applications you would want to implement for serendipity and randomness, where in your simplest implementation, if you have to pick 20 out of that 1,100 from before, you might want some version of one or two random candidates, because we must assume that whatever algorithm we build will not be a hundred percent accurate.

Amanda: Right.

Dennis: That doesn’t exist, and given it’s not a hundred percent accurate, how do I then create some version where I might stumble into good news? And there are plenty of applications where that is of value, and I actually think in HR, serendipity and randomness are real variables which we should implement against.
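To make Dennis’s point concrete, here is a minimal sketch of what reserving “serendipity slots” in a shortlist could look like. This is not x.ai’s product or any vendor’s algorithm; the candidate pool and scores below are invented for illustration.

import random

def shortlist(scored_candidates, k=20, random_slots=2):
    """Pick k candidates: the top-scoring ones, plus a few random picks.

    scored_candidates: list of (candidate_id, score) pairs from some
    ranking model that we assume is *not* 100% accurate.
    """
    ranked = sorted(scored_candidates, key=lambda c: c[1], reverse=True)
    top = ranked[:k - random_slots]      # best-scoring candidates
    rest = ranked[k - random_slots:]     # everyone the model passed over
    lucky = random.sample(rest, min(random_slots, len(rest)))
    return top + lucky

# Example: 1,100 applicants with made-up scores, pick 20 with 2 random slots.
pool = [("cand-%d" % i, random.random()) for i in range(1100)]
picks = shortlist(pool, k=20, random_slots=2)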

Ben: I think that a lot of teams also are not necessarily aware of tools that are out there to help you at design time with these algorithms. I’m actually thinking about something IBM released that’s open source specifically for helping mitigate bias in the data sets that you’re using to train these algorithms, helping to mitigate bias in the models themselves and then helping you check the output after you’ve released it to make sure that you’re not generating tons of false positives or tons of false negatives based on attributes that just shouldn’t be classified.
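Ben doesn’t name the toolkit, but one plausible reading is IBM’s open-source AI Fairness 360 (aif360) library. A minimal sketch of that workflow, with a hypothetical hiring data set and invented column names, might look like this:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical historical screening decisions; the file and columns are placeholders.
df = pd.read_csv("past_hiring_decisions.csv")

dataset = BinaryLabelDataset(
    df=df,
    label_names=["advanced_to_screen"],    # 1 = candidate got a screening call
    protected_attribute_names=["gender"],  # the attribute we want to audit
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Measure bias in the training data before any model is built.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact before mitigation:", metric.disparate_impact())

# One pre-processing mitigation: reweigh examples so the training signal is fairer.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
rebalanced = rw.fit_transform(dataset)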

Amanda: Yeah, I mean, OpenScale is something that we’ve been thinking about for a while, just because as you start to integrate different platforms’ worth of data, and you have a data lake or multiple data sets feeding into that, a lot of times it’s really hard to correlate what information is being used and how those outcomes are actually coming to fruition. I do think that we’ve seen solutions like that before, though, in commerce, where we think about how we connect the back office to the front office, so it’s not a new idea. And I think that as we’re evolving AI, it’s basically like all of these issues have existed before, where you didn’t have data continuity between systems, or you didn’t have information that was properly getting to the technician or the person within the company. It just seems maybe scarier and more complicated because of the scale now, but none of these are new issues. We’ve been down this road before.

Ben: I think people underestimate how difficult it is to define what they’re looking for. There was this controversy a year or two ago about what exactly the definition of a sandwich is. Is a hot dog a sandwich? You know, basically the only thing that’s not a sandwich, according to one of my friends, is a bowl of soup. And if we can’t agree on what a sandwich is, for God’s sake, you know, how…

Amanda: It sounds like a subreddit board.

Ben: R slash, what is sandwich.

Amanda: No, it’s so true. And a lot of it is gut feel in the hiring process. And how do you remove bias from gut feel? I mean, that’s the hardest one of all, I think.

Dennis: But also because you have an infinite number of variables that you could take into consideration. And there’s a whole host of variables, certainly around the team, that tend to be excluded or only exposed in the most minimal way. Meaning that you can hire the best engineer in the world. He’s an —. He doesn’t work well with the team. But certainly from an algorithmic point of view, he wins on each one of the parameters that we checked against. But if he can’t work with the seven people out here, he’s just not a good fit. So how do they become an input? But they can’t even be a fixed input, because that team will change; it’ll be a dynamic team over time. So it can’t even be a fixed model from a vendor. There needs to be some sort of fluid model dependent on this moment in time or the particular project you might be working on. So there’s this infinite set of variables where we won’t ever get it right. And that’s where I kind of come back to. You call it gut feel, and I call it perhaps a certain serendipity and randomness, but it’s kind of the same thing, where there might be some things that can’t be fully predicted.

Ben: I think you could probably train a machine learning algorithm to detect —. Feed it enough content, enough tweets or enough…

Amanda: I have one Twitter account they can definitely use for that.

Dennis: I think you can just call me. I have a good internal API for that.

Amanda: Switching gears. So this is a leading question because I know how one person is going to answer it, but we have been predicting the demise of email for as long as email has existed. Especially in the marketing world. Do we think that email will persist?

Dennis: Yes. We do. And it must survive, I think, at all costs, because it’s the last open messaging platform we have out there. All the rest we have is this set of small gardens, and I’m not so sure that’s a future I want to live in. And email, at least as a protocol, is fully democratized. As in, you and I can set up an email server in the basement right now and run our own node on that kind of messaging network, and I think that is absolutely fantastic, and we should all really work towards making sure it survives. We should also help each other figure out how to overcome some of the unfortunate struggles that came along with managing an inbox at work, and we can’t all be as anal as you and I here, right [looking at Ben], running inbox zero.

Ben: Team Inbox Zero.

Amanda: No you’re not. [Wondering]

Ben: Most days, but not all days.

Amanda: Oh wow. You guys cannot be my friends.

Dennis: I came to this office here [podcast location] with one email in my inbox. So again, coming back to where you started: email deserves to survive. It’s a fantastic protocol. It is not owned by anybody and it’s been extremely robust and secure for decades on end. We should then also start to work on our inbox. And what we’ve seen, at least up until this point, has been mostly processes. As in, you come up with some sort of idea for how you best manage your inbox. You read it in the morning and in the afternoon and do inbox zero. You run it like some sort of river. You can have all sorts of processes which you deploy, but they are really based on willpower for the most part. But I do think we are now on the verge of starting to see real technology come along and help us with our inbox.

Ben: I think the inbox is a little bit like a super highway in the sense that creating more lanes will inevitably create more traffic.

Amanda:  I want to be clear, the different inboxes within Gmail have not helped me. They have only created more opportunities for me to not read an email.

Dennis: I don’t use the whole tab idea. As I see it, that is just taking a nightmare and slicing it into four small ones, and I’m not sure that was a good idea.

Ben: On my personal Gmail, the only thing I use them for is to drive a set of filters that pushes them into my archive and marks them as read.

Amanda: Okay.

Dennis: Filters are your friend. If we want to talk about software versus traditional human processes, anybody who runs an inbox and has not set up a single filter is almost at fault. You must have a set of filters.

Amanda: That’s a bold statement right there.

Dennis: If you run an inbox, there has to be a set of emails arriving that come at some recurring cadence, where you certainly need to have the information searchable, but you don’t need it to infect your inbox.
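As a rough illustration of the kind of filter Dennis is describing, here is a small sketch that sweeps recurring notification email out of an inbox while keeping it searchable in an archive folder. The server, credentials, sender address, and folder name are all placeholders, and this has nothing to do with x.ai’s product.

import imaplib

IMAP_HOST = "imap.example.com"               # placeholder mail server
USER, PASSWORD = "me@example.com", "app-password"
NOISY_SENDER = "notifications@example.com"   # a recurring, low-value sender

with imaplib.IMAP4_SSL(IMAP_HOST) as mail:
    mail.login(USER, PASSWORD)
    mail.select("INBOX")
    # Find every message from the noisy sender still sitting in the inbox.
    _, data = mail.search(None, '(FROM "%s")' % NOISY_SENDER)
    for num in data[0].split():
        mail.copy(num, "Archive")                # keep it searchable
        mail.store(num, "+FLAGS", "\\Deleted")   # flag it for removal from the inbox
    mail.expunge()                               # actually remove the flagged mail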

Ben: I feel like the fact that we think filters are necessary for email use feels to me like not super reasonable. And I say this from the perspective of an engineer: I have lost count of the number of times I’ve heard one of my colleagues say some variant of “read the manual” to a user. There is actually a phrase from the old-school IT support world. It’s pronounced, I think, PEBKAC. It stands for “problem exists between keyboard and chair.” Many technologists, and I count myself among them, frankly, there’s a little bit of, I don’t want to say derision because that sounds too mean, but just, “Oh, these poor people who, you know, can’t use the brilliant system that people like us have created to make the world a better place.”

Amanda: It’s not very customer centric.

Ben: It really is not.

Dennis: Okay, I’ll push back on that just a little bit. I agree with the idea that my mom won’t set up any filters; that is just not happening. I do think, or am trying at least to allude to the fact, that we’ve had not quite three decades of stagnation on email client innovation, but almost. As in, if I look at my email client, where I have a set of folders on the left-hand side, my unread and read emails at the top, and a pane where I’ll edit a new email at the bottom, it kind of looks like what I had many, many moons ago. I do think this might be the moment in time where we’ll start to see some of these ways of organizing become automatic. My mom, if we continue to use her as the example, doesn’t use a text expander. If you don’t have a text expander, you’re just signing up for extra pain.

Ben: For the listeners, a text expander is an app that will take a small amount of text that you type and turn it into a book in your email, if you want.

Amanda:  Thank you.

Dennis: So I have… twice a week somebody will ask me if I want to angel invest, and that’s super kind and nice, and I want to be kind of nice in my response, but I don’t angel invest. So I just write “no angel,” and it expands into a more polite version of: hey, I see where you’re coming from, I’m also into ___, go go on that, but no. I have all sorts of things like “yes hire,” “no hire,” “meet 200,” which means I’ll have a long elaborate version of “we should meet on 200 Broadway.” But my point is that she does not have that. What you do have now, though, is these little mini text expander versions that are coming with Gmail. She’ll write “see you” and then Gmail will write the next eleven-some-odd characters.
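For readers who have never used one, the mechanic is simple: a text expander is just a lookup from a short trigger to a longer canned reply. A toy sketch follows; the snippets are paraphrased from Dennis’s examples, not his actual templates.

SNIPPETS = {
    "no angel": (
        "Hey, I see where you're coming from and I'm flattered, "
        "but I don't angel invest. Best of luck with it!"
    ),
    "meet 200": "Happy to meet. How about our office at 200 Broadway?",
}

def expand(text):
    """Replace any snippet trigger found in the text with its full expansion."""
    for trigger, expansion in SNIPPETS.items():
        text = text.replace(trigger, expansion)
    return text

print(expand("no angel"))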

Ben: Have you noticed a spike in her enthusiasm?

Dennis: There are more thank yous, and that’s nice. Whether it’s her or Sergey, I don’t know. But I actually do think we might be on the verge of just more email client innovation, or perhaps that’s just a dream of mine.

Ben: What I like about inbox infinity is that it flips the dynamic of email. You know, in the past I’ve heard email described as a to-do list that the rest of the world gives to you without your permission. And with inbox infinity, if you want to talk to me, the onus is on you to email me at a time when I’m probably checking email, with a subject line that will catch my attention and tell me what’s inside the message, and with few enough sentences that I’m not giving you 15 minutes of my time for free to get to the point you’re trying to make.

Ben: If it has too many words in it, there’s no way I’m reading it.

Dennis: We are on the cusp of this happening with the messaging platforms like Slack, where it’s suddenly socially acceptable to tell other people that I missed that Slack, it got lost in that stream, and that seems to be, to a large degree, socially acceptable. Right?

Ben: Totally.

Dennis: But it’s not yet as socially acceptable to say, “no, I didn’t read your email.” That’s got another set of connotations attached to it, but I think we might now start to see some of the same set of emotions.

Amanda: I’ll certainly be pushing for that.

Ben: I see it as it has become more socially acceptable across the board to simply say, I did not see your message regardless of the platform you sent it to me on.

Amanda: Have you heard of email debt forgiveness day?

Ben: I have.

Amanda: Because I’ve never actually actively participated, but it’s probably something I should.

Ben: I received an email from Matt Lieber, I believe it was on email debt forgiveness day. He is the president, the CEO, the person who runs Gimlet Media, which is the company that makes Reply All, which is the podcast that came up with email debt forgiveness day and mainstreamed it. I had emailed Matt Lieber many, many months earlier, right when Gimlet came out, just like, hey, love what you’re doing, I’m a technologist, if you have any questions, I’m here, blah blah blah. He got back to me like six months later. He was like, so, how’s it going? It’s email debt forgiveness day. And I got that and I was thrilled.

Amanda: The thing is, he clearly read it and he flagged it and it was in his brain somewhere that he had to respond to you. So there was some consideration given. It’s just sometimes hard to reply.

Ben: I was sitting with a bunch of other people in like a stadium-size email purgatory, and I’m okay with that.

Amanda: Yeah. All right. Well thank you guys for joining.

Ben: Thank you for having me.

Dennis: Oh, this was fun.

Amanda: Thanks for listening.

David: Don’t forget to subscribe.

Jason: And like, and rate and review.

Amanda: We’re really shameless.

Jason:  I know. See you next week. [Laughing]

 
