Could AI wipe out software engineers? In Episode 28 of Mixture of Experts, host Tim Hwang is joined by Chris Hay, Kaoutar El Maghraoui, and Shobhit Varshney. First, the experts discuss GitHub’s report of a rise in the number of developers, driven by AI code-assistant tools. Next, Big Sleep finds a vulnerability in SQLite: what is the future for these kinds of AI agents? Finally, OpenAI released SearchGPT: what is the future of AI search? Tune in today to find out!
The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Tim Hwang: Does the rise of AI mean that there will be more or fewer software engineers in the future? Chris Hay is a distinguished engineer and CTO for customer transformation. Chris, welcome to the show. What do you think?
Chris Hay: A billion software engineers by 2027.
Tim Hwang: Wow, 2027? Okay.
Shobhit Varshney is a senior partner consulting on AI for the US, Canada, and Latin America. Shobhit, what’s your thought?
Shobhit Varshney: Everybody will go from becoming a programmer to being a pro at grammar.
Tim Hwang: I will ask you to explain that more in just a moment. Kaoutar El Maghraoui is a principal research scientist and manager at the AI Hardware Center. Kaoutar, welcome. What do you think?
Kaoutar El Maghraoui: I think it’s going to be a different breed of software engineers that we will be seeing.
Tim Hwang: All that and more on today’s Mixture of Experts.
I’m Tim Hwang, and welcome to Mixture of Experts. Each week, we bring you the analysis, debate, and banter you need to stay ahead of the biggest developments in artificial intelligence. Today, we’re going to cover AI for cybersecurity and the launch of SearchGPT, but first, let’s talk about software engineering.
There’s a fascinating blog post that came out from GitHub the other week, reporting that, from their vantage point, the number of developers is rising, driven largely by tools like Copilot. They also point out that Python is becoming incredibly popular, propelled by data science and machine learning applications.
This is super interesting to me, and it’s one reason I wanted to bring it up as our first story. Had you asked me beforehand, I would have said, “Look where code assistants are going; we’re going to eventually replace all the software engineers. There will be no more software engineers in about a decade.”
Maybe Chris, I’ll toss it to you first because your prediction is that, if anything, we’re going to have way more software engineers by, I think, 2027? So literally like 24 months from now. Why do you think that?
Chris Hay: I think so for two reasons. Number one: with code assistants being everywhere and with things like ChatGPT and large language models in the everyday person’s hands, everybody can become a coder. You don’t need to pay someone to do that; you can literally have a go yourself. I think that is going to open up this democratization of coding we’ve all hoped for. I think more tools will come in, like you remember Scratch from MIT; we’re going to see more of that style, and everybody is going to become a coder.
The other one is... you didn’t say in your question, Tim, whether they had to be humans, did you? So, the carbons and the silicons—there’s going to be a whole bunch of silicon coders to match us carbons. So, when I multiply that up by 2027, there’s going to be a billion, buddy.
Tim Hwang: Okay, all right. That’s really interesting. I guess what we’re talking about is whether the job of “coder” or the category “software engineer” will make sense in the future. It almost feels like no one says, “I’m a word processor”; everybody knows how to write.
Shobhit, your response seemed to suggest you think the skills needed are going to have to change.
Shobhit Varshney: Yes, absolutely. I think all of us will become pros at writing good grammar and at how you ask a question and describe what you want to get done. A good technical PM does a really good job explaining what they need so the developer can execute the code to their vision. I think that’s going to shift quite a bit.
Let me just spend a minute appreciating how far GitHub has come. You just referred to their annual report. Last week, we were at their GitHub Universe event, which IBM sponsored, just to give you a sense of how far they’ve come. GitHub is the world’s biggest repository; 90%-plus of all Fortune companies use it, 98% of developers, whatnot. We’re at about 100 million-plus developers on GitHub today. Chris, not quite at the billion you want, but in the last 9-10 years—this is their 10th year—they’ve had close to 70 million GitHub issues solved, almost 200 million pull requests, 300 million-plus projects.
The way I look at it, open source is the biggest team sport on Earth. It’s not soccer; it’s not football; it’s open source. It has been growing like crazy. When you hear from Tom, the CEO of GitHub, they’re giving you actual stats of what they’re seeing, with people developing more and more, and he’s right to say that AI lowers the threshold for creating code, engaging with GitHub repositories, trying it out, downloading it, and contributing back.
IBM has been a big proponent of a very open community and has a really good relationship with GitHub. Now that GitHub is opening up, with Claude and Google models that can be leveraged in addition to OpenAI models, I think this is just an unstoppable force in the industry right now. More and more programmers will have access to tools we couldn’t have imagined a couple of years back.
Tim Hwang: Yeah, one of the most interesting things in the report was that the geography of software engineering is changing. They’re seeing a lot more coders from the Global South come online on GitHub. Shobhit, do you think that’s related to code assistants? I’m curious how you see the role of these assistants in potentially broadening the geographic scope of who gets to be a software engineer.
Shobhit Varshney: Yes. I spend a lot of time with Latin American clients in the Americas, and I see a lot of centers developing where the economics have been upended. People can create code and contribute to other locations and countries. A lot of my clients are starting to build their Latin American presence; the time-zone alignment with the US helps as well. But it’s also the access to tools and the ability to work in every language: now someone who knows Portuguese and Spanish can code and get assistance in those languages while creating code. That did not exist earlier, so the barriers have come down significantly.
One additional thing I would add: we should also look at the way energy moves across the world. If you look at countries like Chile, and Latin America broadly, a lot of energy is produced there, and you want AI models trained closer to where the energy is because consumption is going to be so high. I would anticipate more pull towards Latin America and other centers with surplus energy production. It used to take a lot to move that energy to serve customers in the US; now, AI models will be created closer to the energy sources.
Tim Hwang: Kaoutar, I want to turn to you. Building on what Shobhit talked about, when you responded to the opening question, you said it’s going to be more about asking the right questions. One thread to pull on is that maybe in the future, we’re going to have a lot more technical PMs than software engineers. It feels like people are increasingly managing an agent that does the coding, not doing software engineering themselves. Is the right way to think about this that we’re just going to have a lot more PMs in the future?
Kaoutar El Maghraoui: Yeah, I see the skills shifting. Copilot, for example, is demystifying coding for people without formal training, turning more people into citizen developers. This means professionals from diverse fields—data analysis, design, finance, healthcare, etc.—can now use code to build custom tools without extensive studying or training in syntax. We’re heading towards a world where basic coding becomes as common as using spreadsheets or presentation software. People are becoming prompt engineers, designing good prompts specifically for software engineering.
I think it’s also time to start reimagining developer workflows for experienced coders. AI can handle repetitive tasks, letting them focus on higher-order problem-solving. This might alter the skills expected in software development, with coding transitioning from syntax-heavy work to strategic thinking and architectural design. Those would be good skills to acquire—not focusing on syntax but on how to build and design systems, using Copilots to help with the syntax.
This could also have implications for education. Curricula now focus a lot on syntax. If AI can assist with coding, should educational systems shift from focusing on syntax to broader problem-solving and collaborative design? As Shobhit mentioned, open source is the biggest team sport, so acquiring skills in collaboration, doing PRs, and learning to work in a team will become really important. Traditional computer science curricula need to adapt, emphasizing creativity, ethical coding practices, advanced debugging, and collaborative coding.
Tim Hwang: Yeah, for sure. Chris, it kind of puts a tough question to you. Your title is “Distinguished Engineer”; you spent a lot of time getting really good at software. If a kid approached me today and said, “Should I be a software engineer?” should I tell them not to? It feels like, where we’re headed, is there any value in actually learning how to code anymore? I think that’s the question I want to put to you.
Chris Hay: No, they should go play soccer or something like that. No, I’m kidding. I think the question is, what happens when it goes wrong? If we think about the history of software programming, it was punch cards, ones and zeros, then assembly language, then C, and a whole bunch of others. But it really took off from C onwards, which is very close to assembly, and then abstractions got higher—Python, Rust, etc. It’s abstraction layer after abstraction layer. We went from hardcore punch cards to assembly to low-level languages to garbage-collected languages to higher-level languages.
All I would say is happening here is we’re moving to another level of abstraction, and that level is natural language. I think it will be better because with agents, we’ll have tools, etc. But you’re still going to want to know the fundamentals because what happens when you get a bug and it can’t fix it? Are you going to be like Homer Simpson, just hitting the keyboard, “Try again, try again”? Or are you going to have to go, “Oh my God, I’m going to have to use my brain”? So, I think the fundamentals are still going to be there. I see this becoming a higher level of abstraction.
Now, don’t get me wrong; if the models become good enough, there may be a different abstraction where models have their own native language, but I see this as an abstraction because we need explainability, we need reasoning. Somebody’s going to have to maintain this and look at it, and you can’t be fully dependent on the AI.
I do want to address one thing, Tim, on that GitHub report. We mentioned Python being the most popular language, but we didn’t talk about that much. I love all languages; I love Python. But when number two and three are TypeScript and JavaScript, which are effectively the same language, if you add the two together, who’s number one again? I am a Python die-hard, but I do feel like the counting was a little funny there.
Kaoutar El Maghraoui: If I might add, I think there are also some risks here. There are potential risks for AI-created code, especially as more code is generated by AI. Quality control becomes a concern. How do we ensure AI-generated code is secure, efficient, and maintainable? There is also the risk of overreliance on tools like Copilot, which could lead to a drop in fundamental coding skills among new programmers.
Of course, there are lots of advantages in democratizing and having more developers, lowering the barrier to entry, but we shouldn’t ignore the risks around quality assurance, ethical considerations, security, and what happens when things fail. Can we ensure we have skilled programmers, as Chris mentioned, who can figure out what’s going wrong? Or will we have fewer skilled people in those fields? What’s the right balance here?
Tim Hwang: Yeah, it’s always a tricky balance between democratizing, making it accessible and usable, and reliance on these abstractions. My mom was a coder before her retirement and has a story about carrying punch cards to the computer and dropping them, but having a good enough sense of the program to reassemble it physically. That’s a level of diligence modern engineers wouldn’t have, but we’re happy we’ve moved past the punch card era.
I’m going to move us on to our next topic. There was a great story that follows a sequence we’ve had on MOE for the last few weeks: the application of AI, specifically agents, to computer security. Google did a blog post from their security team, Project Zero, reporting that they have a cybersecurity agent called “Big Sleep” that was able to find a vulnerability in SQLite, one of the most widely used database engines.
This is interesting because, by their accounting, it’s one of the first instances an agent found a genuine vulnerability “in the wild” in a widely used codebase. It’s almost a “hello world” demonstration that we might one day use these agents to identify real-world vulnerabilities and make systems safer.
Chris, I’ll kick it to you. Is this the beginning of a new era where agents play a bigger role in making systems more robust, or is this still in the realm of a toy project? Are we still years off from living in that world?
Chris Hay: No, I think we’re already in that world. There are a couple of things about Big Sleep. If you give agents access to tools and have them follow patterns, they’re going to do a pretty good job. In cybersecurity, tasks like “fix this bug,” “identify this pattern,” “find what ports are open on a firewall”—agents can do that today.
Now, looking at Big Sleep, I want to add a caution: when I read the paper, they took an existing vulnerability in that codebase and got the agent to search the PRs and find another vulnerability of that style, one that matched the pattern and wasn’t patched yet. So, as much as the agent “discovered” a vulnerability on its own, it was pattern matching; it was prompted and directed to find a bug of similar shape. That is completely within today’s technology. Agents and models are really good at pattern matching; if you give them access to a large enough codebase via tools, with access to PRs and commits, they’re going to be able to do that.
Are they at the stage of finding a whole new class of vulnerability that is completely undiscovered and not prompted and patterned in itself? I don’t know yet; I think we’re a bit off that, but not too far.
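To make the workflow Chris describes concrete, here is a minimal sketch of a variant-hunting loop: seed an agent with a known, already-patched vulnerability, then ask a model whether each recent commit contains a similar, unpatched bug. Everything here (the function names, the prompt, the seed diff) is a hypothetical illustration, not Project Zero’s actual Big Sleep implementation.

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    diff: str

# Hypothetical seed: the known vulnerability the agent is told about
# up front, per Chris's description of how Big Sleep was directed.
KNOWN_VULN_DIFF = """
-  if (iCol >= 0 && iCol < pTab->nCol) {  /* patched bounds check */
+  if (iCol < pTab->nCol) {               /* missing lower-bound check */
"""

def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in whatever client you use."""
    raise NotImplementedError

def find_variants(commits: list[Commit]) -> list[str]:
    """Pattern-directed triage: for each commit, ask the model whether
    the diff contains a bug of the same shape as the seed."""
    suspects = []
    for commit in commits:
        prompt = (
            "Here is a known vulnerability pattern (a missing bounds check):\n"
            f"{KNOWN_VULN_DIFF}\n"
            "Does the following diff contain a similar, unpatched instance "
            "of this pattern? Answer YES or NO, then explain.\n"
            f"{commit.diff}"
        )
        if ask_model(prompt).strip().upper().startswith("YES"):
            suspects.append(commit.sha)  # flag for human review
    return suspects
```

The point of the sketch is Chris’s caution: the hard part, the seed pattern, is supplied by a human; the agent’s contribution is tireless matching at scale.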
Tim Hwang: Pretty interesting. Kaoutar, maybe I’ll bring it to you next. You think about the risks around these technologies. It seems you can use this for security, but bad guys will also get access to these agents. It’s straightforward to say, “I have this vulnerability; find it elsewhere in this codebase,” which is exactly what you’d do to harm systems. How do you see that cat-and-mouse game playing out? Does the defense have the advantage now? Do you think the offense will eventually have the advantage? What does that balance look like as systems become more sophisticated?
Kaoutar El Maghraoui: Yeah, that’s a very good point. Of course, systems like Big Sleep are strengthening defense with AI agents, revolutionizing vulnerability testing, allowing continuous autonomous scanning that adapts to new threats. This is especially beneficial in complex environments like cloud infrastructures, where manual monitoring is inefficient, and security teams can be empowered to act faster on emerging vulnerabilities, reducing the attack window.
However, at the same time, there’s the threat of offensive AI. AI-driven security tools can also be a weapon in the wrong hands. Just as defenders can use AI to preemptively catch vulnerabilities, attackers could use similar tools to identify exploits at scale. This creates a potential AI arms race in cybersecurity where the line between defense and offense is very thin.
Tim Hwang: Yeah, what’s so interesting is it suggests we’ll eventually see a whole dark criminal ecosystem mirroring the public one—a “criminal Lambda Labs” where you can run these agents for criminal purposes. It’ll be interesting to see how that ecosystem evolves because people using agents for bad purposes will need the same infrastructure as those in cybersecurity.
Kaoutar El Maghraoui: Yeah, so I think that’s why some ethical and regulatory challenges will need to be resolved. With this rapid development of AI-based security, there is a call for frameworks to ensure responsible use, to protect these infrastructures and tools. Governments and cybersecurity experts need to create ethical guidelines and regulations to balance the benefits of things like Big Sleep with its potential misuse.
Shobhit Varshney: Let me give you a client perspective on this. We do a lot of work with clients on cybersecurity; we have a whole Security Services team in AI consulting doing an exceptional job. We also work closely with partners like Palo Alto Networks, leveraging generative AI and AI models heavily in that partnership.
It’s a two-way street: AI helps drive better security, and the reverse is how you secure the AI models themselves. If you look at the three steps clients go through:
1. Securing the data that went into the models.
2. Securing the model itself from cyber attacks.
3. The usage itself—preventing misuse of the model in production.
Across all three buckets, we’ve done quite a bit of work creating AI models that prevent, detect, and counter adversarial attacks. We recently released our Granite series of models, Granite 3.0. There are public benchmarks and private IBM benchmarks where every model we put into production is tested across all these different attack patterns.
Looking at that class of small models (roughly 2 to 8 billion parameters), we do a really good job. The Granite model scored higher than Llama, Mistral, and a few others across seven or eight different criteria. On securing the usage, every time you talk to a model and bring data out, both inputs and outputs get filtered. I’m much more confident in November 2024 that when we put models in production, there are enough safety guardrails from IBM and ecosystem partners to address these fairly well.
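As a rough illustration of that third bucket, usage-time filtering of both inputs and outputs, here is a minimal sketch. The patterns and function names are hypothetical placeholders; production guardrails of the kind Shobhit describes use trained classifier models, not a handful of regexes.

```python
import re

# Hypothetical deny-patterns for illustration only.
INJECTION_PATTERNS = [r"ignore (all|previous) instructions",
                      r"reveal the system prompt"]
PII_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g., US-SSN-shaped strings

def flagged(text: str, patterns: list[str]) -> bool:
    """Return True if any deny-pattern matches the text."""
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def generate(prompt: str) -> str:
    """Placeholder for the actual model call."""
    raise NotImplementedError

def guarded_call(user_input: str) -> str:
    # 1. Filter the input before it reaches the model.
    if flagged(user_input, INJECTION_PATTERNS):
        return "Request blocked: possible prompt injection."
    # 2. Call the model.
    output = generate(user_input)
    # 3. Filter the output before it reaches the user.
    if flagged(output, PII_PATTERNS):
        return "Response withheld: possible sensitive data in output."
    return output
```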
Tim Hwang: Yeah, that’s great. One subtlety worth diving into, Shobhit, is that with Big Sleep, you have an AI model examining traditional software code. There’s a whole separate set of questions about using models to analyze the security of models. Where all this goes is that once you do security on agents, the security of your security agent becomes important. Can you talk about how thinking around that is evolving? The pattern matching of “here’s a vulnerability in code” looks different from using a model to evaluate the security or safety of a model itself.
Shobhit Varshney: Yes, I’ve been really excited about the work the AI community has done in this space. Outside of Google, there’s amazing work by NVIDIA, Meta, and IBM Research on creating models that can detect vulnerabilities at scale. There’s pattern recognition on logs, vulnerability corner cases. You can now create infinite possible combinations of how to break a particular model and stress-test them in real-time.
I think we’re doing a good job as a community sharing these techniques; a lot of the work has been very open-source, so you can compare different models and benchmarks, private and public, that people leverage to test software code vulnerabilities.
There’s also a recent paper on comparing LLM judges: how do you judge the LLM judge? It starts to get very meta, AI monitoring AI. But I think we are just moving the bar on what a human does versus what an AI does. If you think about employing people, you hire a graduate from an amazing school with multiple degrees—like a really nice LLM—and give them some few-shot learning during training, saying, “Here’s how we do this in our company.” Then you give them access to all the known vulnerabilities; they read up on new vulnerabilities in real time and think about how those impact their own code. We’re starting to crunch through those steps a human would do. If you think about bringing a new graduate from MIT or Stanford into your organization for cybersecurity, that’s the exact same pattern we’re following with LLMs.
Tim Hwang: Yeah, that human metaphor of training cybersecurity experts and applying it to the model is interesting. It lands on my final question for this segment. Chris, if I can ask you to make another wild prediction for this episode: The badge of honor for a security person is disclosing a novel exploit at DEF CON. Do you think agents will eventually pull that off? If so, what’s your over-under on the year? Is it 2027 when we have a billion engineers, or how far off is that?
Chris Hay: This is my prediction: AI agents will reveal the first “human vulnerability” in code. They’ll say, “This person here is a human vulnerability; they’re doing bad things.” That’s my prediction for 2028: it’s going to be the other way around, with AI agents predicting human vulnerabilities.
Tim Hwang: Interesting. I would love if the agent found a new method for social engineering; that would be perfect.
Kaoutar El Maghraoui: I think also what’s going to be interesting is, as AI finds security flaws faster than ever, the real question is: who’s quicker? Defenders patching them or attackers ready to exploit them?
Shobhit Varshney: It’d be really funny to see the human vulnerability part Chris mentioned. We’re doing this for a big Latin American bank right now, leveraging social engineering techniques. The emails LLMs create for social engineering attacks look so plausible. LLMs are really good at creating convincing content; you can trick people with clickbait into a rabbit hole. It’s working really well.
Some of our clients say, “I’m not sure about putting AI in production; our security teams won’t give it the green light.” So, we go pilot LLMs with the security team first. If they’re convinced and put it into production, they don’t have an excuse to bottleneck the rest of the organization. It’s been a good method for working with lawyers and cybersecurity teams in large organizations.
Tim Hwang: Yeah, it’s going to be so hard when you try to log into work and it says, “You’ve been locked out because you’re too gullible. We’ve assessed you can’t make it here.” Just, “Okay...” It’s coming, 2028; you heard it here first.
For our final segment, I want to talk about SearchGPT. It goes without saying that OpenAI is the heavyweight in the industry, the big leader. Everybody’s been waiting on their features, and one thing everybody’s been waiting for is for them to finally get into the search space. Long anticipated, it finally launched, and now OpenAI has a SearchGPT feature.
This enters a market dominated by Google and contested by companies like Perplexity, with Google itself pushing into AI search through Gemini. This is a big move: the industry’s big leader has put its marker down for what it wants to do in search.
Shobhit, you looked into this. The question I always come to is: Does this mean Perplexity is doomed? Is everybody doomed now that OpenAI is in the space? I’m curious what you think the effect on the market will look like.
Shobhit Varshney: So, I recently posted on LinkedIn after having had access to GPT Search for a while. I pay $20 a month to try out all kinds of AI; I’m a paid subscriber and was lucky to get access early. I’ve been comparing it, and the closest competitor is something like Gemini Search, and then Perplexity.
I did a side-by-side comparison across 13 different topic areas, comparing GPT Search vs. Google Gemini. Overall, I don’t think I’ll be switching my search from Perplexity and Google Gemini over to GPT Search quite yet. There are a few things I found; I have a whole article with visual side-by-sides.
Google generally is a lot more visual. They’ve learned from years of UX the best way to represent information. For example, if you’re suggesting restaurants, if I ask GPT Search to find restaurants in a location vs. Google Gemini, Gemini understands it’s logical to put a map and pinpoint restaurants in the response. People want to interact with the graphic. Similarly, for weather, Google has a nice display at the top.
One thing GPT needs to address is that they have a proliferation of capabilities not yet combined into a single UI. For example, when I switch to web search, I lose the ability to upload content. I can’t give attachments or use the function calling I’m used to with o1-preview or GPT-4o. In the Gemini world, they figure out what I’m looking for.
The simplest example: if I’m standing in front of a monument, take a picture, and say, “Find me restaurants around this,” Google Gemini will identify the place with high accuracy, give nice recommendations, and help fine-tune it. ChatGPT’s GPT Search cannot take attachments; it can’t take imagery, can’t do things like, “Here’s a document; go on LinkedIn and scrape something,” can’t act, doesn’t have access to function calling, can’t handle documents. Certain things are absolutely missing on the GPT side.
The last piece going for Gemini, which is why I favor Google, is the connection to your personal data. I’ve been a big Google user; my email is Gmail from the beginning. All my data—photos, calendars—are inside Google. When I ask, “Find restaurants near the hotel I’m staying in in Mexico,” it can find that quickly. It’s very personalized; with my permission, it can look into my emails. That has huge value add for me.
Tim Hwang: Yeah, that’s so interesting. One way of thinking about this competition is: How much is search about the form of the results vs. the substance? You’re saying that when you ask for a restaurant, it’s great to have the map and pins from Google’s index, even if the response is less conversationally flavored than Perplexity or something.
Kaoutar El Maghraoui: Just to counter some of Shobhit’s arguments: Of course, having that personalization is important, having access to all that, and Google has perfected many features given its long history with search. But don’t you see as GPT acquires more multimodality features and as more people use ChatGPT or SearchGPT, that personalization will come along? They’ll acquire more personal data and can customize things. I think it’s just a catch-up game.
One thing I find nice in SearchGPT that I don’t see in Google Search is the interactive, conversational nature. Unlike traditional search giving a bunch of links to click through, this makes search more intuitive, particularly for complex queries or ongoing projects; users might not need to click through links as the model delivers synthesized responses.
Shobhit Varshney: Kaoutar, I’ll push back a bit. I think it’s unfair; it’s apples and oranges if you’re comparing GPT Search with classic Google Search. The right comparison is Gemini Search with GPT Search. Google’s Gemini is multimodal: I can take pictures and feed in images. It is personalized: it can tap into Gmail. Google acknowledges the blue-link world is dying. Their Gemini search is an incredible product; it works really well, and they’re trying their best, within the conservative boundaries of a large company, with personalization, multimodality, and looking at long videos and summarizing them. They have a very good moat.
The true comparison is not Google Search blue links with GPT Search; a lot of media are comparing the two, and I feel it’s unfair to Google.
Kaoutar El Maghraoui: I agree with you; it’s not a fair comparison. Yes.
Tim Hwang: And the question here is: Are we moving towards a “one model to rule them all” scenario for search, or will it be a competition?
Shobhit Varshney: We always had one model to rule them all with Google because they had 95%-plus market share. So, I think people are asking if that’s shifting to Google’s Gemini or if OpenAI will have a place as it improves its search capabilities.
Chris Hay: I think OpenAI is going to win this one out, but maybe not for the reasons you think. My experience with ChatGPT with search is it works as a true extension to the conversation I was having anyway. Maybe I’m looking at a paper and want something updated; without internet access, it only comes back with limited information. With ChatGPT with search, it extends out, takes its knowledge plus internet knowledge, and gives me back better answers. I found myself using ChatGPT with search more naturally than before. Rather than reaching for Google, I’m doing it within the conversation.
Now, if I bring that in with the o1 capabilities as they start releasing them, and combine modalities—OpenAI has been leading on modalities for a while; they’re ahead with the o1 models, making it more agentic—when they bring all that together, I think Google has a lot of work to do. Are they going after true search? No. But if this is a comparison between Gemini and the o1 models with search capabilities and tools, as it stands today, I think OpenAI is winning that one. I feel that from my experience, and the fact is, millions of people use ChatGPT today, and maybe 12 people use Gemini chat today. So, that’s my feeling.
Tim Hwang: Yeah, there’s an interesting debate over what the commodity asset is and what the irreplaceable, hard-to-replicate asset is. Shobhit, your position seems to be that the data and the incumbent advantage are the hard-to-replicate things. Chris is saying that getting the data isn’t the hard part; the analysis layer on top will be the unique differentiator. Maybe that’s the right way to frame it.
Shobhit Varshney: There’s no doubt Google is under pressure. Perplexity has shown how well this can work; I’ve been a Pro user for a long time, and they do amazing work. Generally speaking, yes, Google has a lot of pressure to get this right; it’s a hundred-billion-dollar problem, so they’re putting everything behind it. They have to nail conversational search and be more personalized.
The things going in Google’s favor are that they have the world’s data to train on, from YouTube and search, and decades of patterns people follow to get the right answer, like when planning a trip. They have a lot to tap into that competitors like OpenAI don’t have today. Competitors will try to catch up over time, but Google will always have a fire behind them to fix this.
But the fact that my personal data is accessible to Google—that may change, but currently, it’s more relevant for me to have an answer hyper-personalized to me. If I ask to set an itinerary in Italy, it should know I’m landing at 2 p.m. and not start my itinerary at 6 a.m. I have to tell a model, “Understand what’s important to me; the airport is X hours away.”
I’m thinking from an Enterprise perspective. Our clients are focused on their repositories—manufacturing documents, warranty documents—and need to search against those with high accuracy. They need the same experience they get with ChatGPT Search or Gemini brought to their employees to unlock value.
It’s really nice to see Meta getting into this game; there were rumors this week about Meta coming up with its own search, incrementally making progress. I’m excited about the future of getting information in the moment, hyper-personalized to how I consume information and what’s in my emails.
Chris Hay: And I agree with that, Shobhit, but you know what? I don’t want Google having exclusive access to my information. I actually want an open ecosystem and marketplace where I can plug into agents, “Go access my Gmail, go access this,” as opposed to, “Well, Google already has this information, can train its models, and nobody else can play.” So, an open ecosystem is where I am. Yes, I agree, but it’s got to be open.
Kaoutar El Maghraoui: Yeah, there is potential for a centralized AI search model to emerge, potentially monopolizing search. While this could bring consistency and ease of use, it also risks creating an information bottleneck. I definitely agree with Chris that an open system would be better. If one model provides most search answers, it might centralize information flow, reduce diversity of sources, and shape public knowledge in ways we don’t yet understand.
Tim Hwang: Great. Well, that’s all the time we have for today. Shobhit, you mentioned that Meta thing; that was the other part I wanted to get into, so we’ll definitely have that on a future episode. Unfortunately, we are out of time today. Thank you for joining us. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere. Shobhit, Kaoutar, Chris, thanks as always; appreciate you joining us.