Doug Lenat on Cyc, a truly semantic Web, and artificial intelligence (AI)

Doug Lenat is a prominent researcher in artificial intelligence (AI) and founder of the Cyc project. The Cyc Knowledge Server is a large multicontextual knowledge base and inference engine developed by Cycorp. The Cycorp Web site explains the goal of Cyc as breaking "the 'software brittleness bottleneck' once and for all by constructing a foundation of basic 'common sense' knowledge — a semantic substratum of terms, rules, and relations — that will enable a variety of knowledge-intensive products and services. Cyc is intended to provide a 'deep' layer of understanding that can be used by other programs to make them more flexible." Lenat talks about where Cycorp is in pursuit of that goal, and about what he calls a truly semantic Web.

Scott Laningham (scottla@us.ibm.com), developerWorks Podcast Editor, IBM developerWorks

Scott Laningham, host of developerWorks podcasts, was previously editor of developerWorks newsletters. Prior to IBM, he was an award-winning reporter and director for news programming featured on Public Radio International, a freelance writer for the American Communications Foundation and CBS Radio, and a songwriter/musician.



16 September 2008

developerWorks: I'm Scott Laningham for developerWorks. I'm here with Doug Lenat, founder and CEO of Cycorp in Austin, Texas. So nice to be with you, Doug, today.


Lenat: Thank you, Scott.

developerWorks: Let me ask you right off the bat about this idea of a brittleness bottleneck that I've seen in some of the things that you've written about. What exactly do you mean by that, and how does that come into play with the work that you're doing here?

Lenat: When I was building programs back in the 1970s and 1980s, I kept coming up against the same brick wall over and over again in different areas — in machine learning, in natural-language understanding, in speech understanding. Namely, our programs would have a little bit of success, but they just wouldn't scale up. They'd get to a certain point where, if we were trying to produce journal articles, they would produce one or two or three running examples, and the articles would come out fine. But then we'd turn to an unanticipated situation, the next case. And they'd almost be like the Wile E. Coyote and Road Runner cartoons, where they had run off the cliff and didn't realize it, and would give the wrong answer because they weren't looking down, as it were.

Guest: Doug Lenat

Doug is one of the world's leading computer scientists, and is both the founder of the CYC® project and the president of Cycorp. He has been a Professor of Computer Science at Carnegie Mellon University and Stanford University. He is a prolific author, whose hundreds of publications include the books Knowledge Based Systems in Artificial Intelligence (1982, McGraw-Hill), Building Expert Systems (1983, Addison-Wesley), Knowledge Representation (1988, Addison-Wesley), and Building Large Knowledge Based Systems (1989, Addison-Wesley). His 1976 Stanford thesis earned him the biennial IJCAI Computers and Thought Award in 1977. He was one of the original Fellows of the AAAI (American Association for Artificial Intelligence).

So the programs didn't have a model of where they were and were not capable. They didn't have enough general knowledge that, when they got into a situation that was unexpected, they could fall back on more and more general knowledge the way you and I do. And they couldn't analogize to far-flung situations and experiences like we do because they didn't have any far-flung situations and experiences.

Essentially — and this will sound almost like a tautology — they could only do what they were programmed to do. And that kind of brittleness was acceptable in certain applications. But as we began to build programs that pervaded our everyday life, suddenly we were in a situation where there's real danger, where you begin to give programs control over people's health, over property and wealth, over things like traffic, over things like financial transactions and stock purchases and so on. So now, suddenly, human life and human property are at risk. Effectively, we're placing more and more power in the hands of idiot savants. We wouldn't do that if they were human idiot savants, no matter how good they were at their narrow sub-sub-sub-specialty. And it's almost a kind of blind spot that we as a culture are doing that with program idiot savants today.

So it was clear that what we needed to do was to sort of pull the mattress off the road. When I was driving into work today, there was a mattress in the road blocking one of the two lanes, and a traffic jam was forming. Everyone would slow down, and as they got up to it and saw what the problem was, they would merge into one lane and continue on. And it occurred to me that all of us, and I was just as guilty as everybody else, saw that and said, you know, somebody ought to stop and pull the mattress off the road.

developerWorks: Right.

Lenat: But for my own local mini-maxing, my own local optimizing, I just sort of shook my head and kept on driving. And the traffic got worse and worse and worse. In much the same way, I believe that a lot of software developers, and a lot of computer scientists and artificial intelligence researchers in particular, have recognized this kind of brittleness bottleneck as the mattress in the road. It's the reason why speech understanding, speech-recognition accuracy, and precision are not significantly better today than they were 35 years ago. Computers are faster, but all that means is that you can get to the same 90-something percent level a lot faster; you still can't quite get to human levels. You can get there on a $400 computer instead of a $400,000 computer. You can get there in real time instead of a thousand times real time. But you still aren't talking to your computer, because those last few percentage points require understanding enough about the real world, enough about the context and the domain that you're in, that you can actually disambiguate a noisy speech signal.

The same with natural-language understanding. Why is it that we put up with search engines where you type in, effectively, bags of keywords? Why can't you sit there and type in even a simple question — like, "Is the Space Needle taller than the Eiffel Tower?" — and get an answer to your question? It's basically because the software is just a little bit too brittle. It's good enough to find documents that have these terms in them, sort of like the dog bringing you your newspaper in the morning. But, like the dog, it doesn't actually understand any of the stories it's bringing you. And it doesn't really understand what you're asking when you ask a question.

developerWorks: So this journey you're on here at Cycorp, is it about, not finding creative ways to get around the mattress, but figuring out ways to remove the mattress?

Lenat: Right. I began almost kind of a crusade, almost starting a movement, back in the early 1980s of, you know, we're mad as hell. We're not going to take it anymore. We've got to do something about this mattress in the road. We've got to come together. And I was hoping that, as a community, we could come together and codify enough real-world knowledge, build what we would call today an ontology, and attach rules and assertions and constraints to that skeletal ontology that basically everybody could begin to use it as a framework. Not so much to establish one standard vs. another but just if there were enough of a framework there, then we could keep from doing this divergence all the time. We could keep from succumbing to this brittleness bottleneck.

So we did some back-of-the-envelope calculations — "we" being Alan Kay, Marvin Minsky, Ed Feigenbaum, John McCarthy, and myself. We sat down, and we came at the calculations in different ways. One person looked at the number of words in languages. One person looked at articles in encyclopedias. One person looked at the rate at which you burn concepts into long-term memory and how long you have, essentially, between ages 0 and 10, let's say, and so on. And all the estimates pointed to a smallish number of millions of things we would have to tell the system in terms of general rules, plus a lot more specific information, which, presumably, it could learn once it had enough general knowledge. Much like a child: Once it has enough knowledge about the world, it can learn by having experiences: by talking with people, by reading things, by watching what else is going on around it, and so on.

Lenat: So we basically looked at the knowledge-engineering work that would be required, and we decided it would be on the order of 1,000 person-years of effort. That was a somewhat depressing thing, and it was very difficult to get the community behind it. Essentially, academia and industry were, and are, set up in a way that fights against any large Manhattan Project-like effort forming like that. Basically, individuals are incented to be different at all levels, from graduate students up through professors. Companies are incented to be different and to look toward locally improving something today or next quarter or next year, not doing something that's going to pay off in 10 or 20 or 30 years.

So it was very frustrating. And just at that time, Admiral Bobby Inman was forming MCC, a research consortium here in Austin, Texas, to combat the Japanese fifth-generation computing effort, because Japan had vowed to do in software and hardware what it had just done in consumer electronics and the automotive industry — namely, wrest control away from the West once and for all. So the Justice Department granted a dispensation for a consortium to form — 25 large American companies pledged many millions of dollars a year for a decade to fund high-risk, high-payoff, long-term R&D that would help the overall competitiveness of the United States.

So Inman came to see me. I was a professor at Stanford at the time, with about seven graduate students. And he said, "Look, Professor Lenat — you do the math. You've got like eight of you here. How long is it going to take you to put in 1,000 person-years of effort? So do you want to do the first small step in that process, or do you want to come to Austin, Texas, work for us at MCC, and maybe live to see the end of this, because we'll have 40 or 50 people working on it rather than half a dozen?" It was a pretty convincing argument. Bob Inman was actually the most convincing and impressive boss I ever had. And I came to Austin and was principal scientist at MCC for 10 years, from '84 through '94, building the first part of the Cyc Project, building that codification, that ontology of hundreds of thousands of terms, and organizing them, and putting in enough information to constrain them, and so on. Then, at the end of 1994, we spun out Cycorp. And since then, we've been operating pretty much just a couple of miles away from the MCC building here in Austin, continuing that process and transitioning it into commercial applications as well.

developerWorks: Where are you now along the 1,000 person-years curve?

Lenat: We actually have ... it's been 25 years. We've just crossed the 900-something person-year mark this year. We're actually going to hit the 1,000 person-year mark next year, I believe.

But the good news is that, thanks to about five big errors canceling each other out, we're on schedule — namely, we have primed the knowledge pump, if you will. We've got enough of an ontology built, and we have enough assertions about those terms, that we can begin to push this out to the outside world as something that people can build on, something that they can use. Software developers can basically take this and incorporate by reference all of the work that we've done so far and essentially leverage it. So when it's time to think of a new ontology for some very particular application, instead of starting with a blank piece of paper, you can start with the Cyc ontology. You can start with the Cyc knowledge base and think of extending what's there already into the new domain, into the new area, rather than building a taxonomy and an ontology and a knowledge base from scratch.
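
A minimal sketch of that "start with the existing ontology and extend it" idea, in Python. All of the names and the miniature taxonomy here are invented for illustration; this is not Cycorp's actual API, just the shape of the idea: a new term hooked under an existing one inherits everything already asserted above it.

class Ontology:
    def __init__(self):
        self.genls = {}   # term -> set of direct generalizations (superclasses)
        self.rules = {}   # term -> list of constraints attached to that term

    def add_term(self, term, generalizations):
        self.genls.setdefault(term, set()).update(generalizations)

    def add_rule(self, term, rule):
        self.rules.setdefault(term, []).append(rule)

    def all_generalizations(self, term):
        # Walk upward through the taxonomy, collecting every ancestor.
        seen, stack = set(), [term]
        while stack:
            for g in self.genls.get(stack.pop(), ()):
                if g not in seen:
                    seen.add(g)
                    stack.append(g)
        return seen

    def inherited_rules(self, term):
        # A newly added term gets every ancestor's rules for free.
        rules = list(self.rules.get(term, []))
        for ancestor in self.all_generalizations(term):
            rules.extend(self.rules.get(ancestor, []))
        return rules

kb = Ontology()                      # stand-in for the existing Cyc ontology
kb.add_term("Person", {"Agent"})
kb.add_rule("Person", "hiring date cannot precede birth date")

kb.add_term("TrainConductor", {"Person"})   # the one new frontier term
print(kb.inherited_rules("TrainConductor"))
# -> ['hiring date cannot precede birth date']

The point is the economics: extending into a new domain means adding a few terms at the frontier, while everything already asserted about the ancestors comes along automatically.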

developerWorks: How is that working now? What are some of the things that Cyc is actually being used for right now?

Lenat: We're seeing Cyc used for a lot of things we hadn't even expected it to be used for, but we're happy to see that. One of those is a kind of semantic searching, where people are putting in what appear to be natural-language queries. They aren't fully understood, but they're partially parsed, partially understood. Fragments are produced, and the person looks at the fragments and says, "Yes, these few are part of what I was asking about; these few aren't," and so on. And now you use all the knowledge in Cyc, both general knowledge and domain knowledge, to combine those fragments into one single meaningful query. Much like atoms with valences and folding constraints and all sorts of things, there's often only one way that these fragments could all be part of one single meaningful query that the user might have asked at that point. So the system is able to reason its way to figuring out in very precise detail, even in the case of a complicated question, what it is that the user had in mind. It may paraphrase that back in English to the person, and usually they'll say "Yes," and then the system will go off and answer that question. So that's one application.
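
Here is a toy illustration of that fragment-combination step, in Python. The fragment types and relation signatures are invented, and the real Cyc machinery is far richer, but the constraint-satisfaction flavor is the same: argument-type constraints, like chemical valences, often admit only one way to assemble the fragments into a single query.

from itertools import permutations

# Each relation declares the semantic types of its argument slots,
# the way an atom's valences constrain what can bond to it.
RELATIONS = {
    "tallerThan": ("Structure", "Structure"),
    "locatedIn":  ("Structure", "City"),
    "mayorOf":    ("Person", "City"),
}

def assemblies(fragments):
    # Yield every (relation, argument ordering) that type-checks.
    for rel, sig in RELATIONS.items():
        if len(sig) != len(fragments):
            continue
        for perm in permutations(fragments):
            if all(ftype == want for (_, ftype), want in zip(perm, sig)):
                yield rel, tuple(name for name, _ in perm)

# Fragments recovered from a partial parse of "Is the Eiffel Tower in Paris?"
fragments = [("EiffelTower", "Structure"), ("Paris", "City")]

print(list(assemblies(fragments)))
# -> [('locatedIn', ('EiffelTower', 'Paris'))]  -- the one assembly that fits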

The second application, which you can think of as the next phase after getting a logical version of the question the person wants answered, is doing a kind of database integration: actually, a virtual integration of structured information sources. If you think in terms of data warehousing, there's kind of a quadratic cost to a data-warehousing solution: if you have N data elements, you almost have to do an N-squared kind of alignment of them. But instead of that, imagine yourself as a human being, with everything you know, reading one book after another. You don't do a kind of N-squared correlation every time you read new material. Instead, you assimilate the new material in a linear fashion into everything you already know.

developerWorks: Exactly.

Lenat: So think of Cyc as a kind of interlingua, a kind of growing central hub. The spokes are the new information sources that it's being told about. And so, data element by data element (in the case of a relational database, for example, for each field of each table), we write a Cyc rule that explains the meaning of that field, like hiring date or birth date or whatever. Then, even if the information is coming from multiple sources, you have all of Cyc's ontology and knowledge base sitting there. So, for example, Cyc knows that, in general, people shouldn't be hired before they're born. And so even though this information came from two sources, there's some contradiction here: maybe this isn't the same person, because it looks like this person was hired at this job two years before they were born, which seems pretty unlikely. So that's an example of virtual integration, because the separate sources are still allowed to remain separate. The third-party organizations that caretake and extend each one still do that.

But as Cyc is given a problem, it breaks it down into sub-problems and sub-sub-problems, and eventually the leaves of that tree unify against these little rules like I was just describing, which essentially say, "If you go off and issue this SQL to this particular server, you'll get the answer to this sub-sub-sub-question. If you issue this SPARQL to this RDF triple store, you'll get the answer to that question over there," and so on. Then, by putting those together logically, you get what from the top looks like data integration; but, in fact, it's a kind of virtual integration. So that's the second use.
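
A sketch of that hub-and-spoke decomposition, in Python. Everything here is simplified and the names are hypothetical: in a real deployment, each leaf rule would issue SQL to a relational server or SPARQL to a triple store, whereas here the sources are in-memory tables so the example runs on its own.

import datetime

# Spoke 1: stands in for a rule that would issue SQL to an HR database.
def hired_date(person):
    hr_table = {"Pat Smith": datetime.date(1990, 6, 1)}
    return hr_table.get(person)

# Spoke 2: stands in for a rule that would issue SPARQL to a triple store.
def birth_date(person):
    triples = {"Pat Smith": datetime.date(1992, 3, 14)}
    return triples.get(person)

# One rule per data element: the hub records only what each field *means*,
# that is, which logical predicate the field answers.
LEAF_RULES = {"hiredDate": hired_date, "birthDate": birth_date}

def ask(predicate, person):
    # Decompose the question down to the leaf rule covering this predicate.
    return LEAF_RULES[predicate](person)

def sanity_check(person):
    # General knowledge in the hub: nobody is hired before being born.
    # The sources stay separate; the contradiction surfaces at query time.
    hired, born = ask("hiredDate", person), ask("birthDate", person)
    if hired and born and hired < born:
        return (person + ": hired " + str(hired) + " but born " + str(born)
                + " -- probably two different people, or a data error")
    return person + ": consistent"

print(sanity_check("Pat Smith"))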

There is a third use, just to give you a very different one, that some people are pursuing: trying to bring an extra dimension of unpredictability into game characters. So when you have games with nonplayer characters and you're interacting with them (your avatars are going up and talking with them, asking them questions, looking for them in different places in town, or whatever it happens to be), if you're not careful you end up with the same kind of brittleness bottleneck we were talking about before, where the characters are very brittle. A character might say something like, "You know, my mother always warned me about trusting strangers." And you might say something like, "Well, what was your mother's name?" And the character might say something like, "I don't understand you," or "I don't know what 'mother' is," thereby completely shattering the illusion of reality. So people are thinking in terms of injecting more depth into these characters, so you could ask them things that were not just the three items that advance the quest or advance the story line. You could ask them anything you'd expect to ask someone who is a conductor on a train, or anything you'd expect to ask someone who's at a counter in a convenience store, whatever it happens to be in that game environment. A lot of games have gone way out of their way to put the player in a situation where everybody and everything else is sort of dead, and the only thing moving is an enemy that needs to be shot at. And in a way, it's very sad that we've gotten into this mode of video games where everything is prey, because that was just the easiest way to avoid the situation that, if you actually stop and talk to other people, they exhibit this kind of brittleness. So those are just a few of the places where this is being used.

developerWorks: You talked a little bit about the semantic Web. It sounds like in many ways, Cyc can be a catalyst for the realization of it. But the overall vision of the semantic Web: is it a vision that you feel is realizable soon? Is it too big? Is it too small?

Lenat: I think that the Web in general, and the semantic Web in particular, are a series of false peaks, in the way that when you're climbing a mountain, you see something that appears to be the peak. And as you get closer to it, you realize that it was simply obstructing your view of something higher, which is really the peak. But then you get to that, and it turns out that was a false peak as well.

developerWorks: So, a horizon then?

Lenat: Yes. When the Internet started, people believed that if you could simply wire up a network of computers, then they could share applications. People wouldn't even have to be aware of what machine they were running their applications on. It would all fit together, if only we could wire it up. So we wired it up. That was in the late '60s or early '70s. Then, as people did that, they said, "I guess we need to agree on things like ASCII and some other things." So they began to agree on various character encodings and things like that. And then they ...

developerWorks: Standards and stuff?

Lenat: Right. Exactly. But every one of those was really a false peak. So a few years ago, people said, "You know, if only we could have bags of XML terms that we could all agree on, standardized XML terms, then our applications would talk to each other," and so on. In many ways, this is a lot like what happened one generation earlier with EDI (Electronic Data Interchange), which didn't really succeed. And the problem, in EDI and in the initial semantic Web dream I just recounted, is that having agreement on term sets, and even on taxonomies of term sets, is not enough. It's a false peak. It's a false peak because without agreeing on most of the meaning of most of the terms, you have the appearance of agreement without real agreement.

We could take a concept like employee. And we sort of understand what employees are. If you see that the term "employee" is there, and it has specializations and generalizations, that gives you a rough sense of what an employee is. But here's a company for whom seasonal workers are not counted as employees; over here, they are. Consultants over here are counted as employees; over here, they're not. And company vehicles: This company includes its forklifts; this one doesn't. This one includes the cars that are leased for its executives; this one doesn't. And so on.

So you begin to get to the point where, if all you're looking for is information retrieval, if all you're trying to do is be a good dog running to fetch the paper for its master, then that kind of disagreement doesn't matter too much. But if you're trying to really answer questions like, "Which companies in Europe in the last five years have had the largest percentage increase in employees?" then you really will begin to get genuinely wrong answers if you have this kind of impedance mismatch, if you have these small diversions and divergences in the meaning of these terms. And it's even worse if you ask a question that requires multiple steps, because then you're compounding these impedance mismatches.
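
A back-of-the-envelope illustration of that compounding (the numbers are ours, purely for illustration): if each term alignment along a reasoning chain silently diverges from your intended meaning with probability \varepsilon, then a k-step answer is trustworthy only with probability

P(\mathrm{correct}) = (1 - \varepsilon)^{k}, \qquad \text{e.g. } \varepsilon = 0.05,\ k = 5 \;\Rightarrow\; 0.95^{5} \approx 0.77

so even a modest 5-percent-per-step mismatch leaves a five-step answer wrong almost a quarter of the time.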

developerWorks: So it's the data minus the intelligence?

Lenat: That's absolutely right. So it's not just the vocabulary. It's not just agreeing on taxonomies of terms. You actually have to agree on an order of magnitude more things, namely, assertions about, and constraints among, and definitions of, etc., all of these terms. And it's not just that you know a term like "microwave oven." You know a dozen things about microwave ovens, and that's really what it means for you to understand why "microwave oven" is different from "dishwasher" or something like that. It's not because you know the terms; it's because of the things you know about each one.

And I said there were actually two things missing. In addition to content, the other thing that's missing from the semantic Web dream is context — namely, you need to understand when this was said, and who said it, and were they making a joke or were they serious, or is this what some group wants you to believe, or at what level of granularity is this true? Every year in physics, we learn that what we learned the previous year was just a lie approximating something more true, which is what we're learning this year. And then next year, of course, we'll find out why that was wrong, and so on. In much the same way, the level of granularity is important to state for each assertion as well. So you end up with a dozen different dimensions of metadata that need to be stated for each piece of data.

And so having that context, and having that content of the meaning of the terms — not just the dictionary definition but the things that people know about them that are usually true, heuristically true, and so on — those two elements are missing from much of the semantic Web dream the way it's currently portrayed. Now, if you look at the leaders and gurus of the field, people like Tim Berners-Lee or Jim Hendler, they absolutely get it. And they will tell you that all of what's being done today in the semantic Web, Web 2.0, Web 3.0, and so on: those are really just small steps toward what I'm talking about. And that, sooner or later, we'll get to the real semantic Web, the real Web which shares information — sort of the way the dream is of organizations sharing information, of you being able to go to the Web and ask a question and get an actual integrated answer that depends on multiple pieces of information, that depends on several steps of reasoning, and so on. That's more like Web 6.0.
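
As a sketch of what "a dozen dimensions of metadata per assertion" might look like as a data structure, here is an illustrative Python fragment. The field names and values are invented; Cyc's actual mechanism for context is its system of microtheories, which is considerably more elaborate.

from dataclasses import dataclass
import datetime

@dataclass
class Assertion:
    content: str                   # the claim itself
    source: str                    # who said it
    as_of: datetime.date           # when it was said
    granularity: str = "everyday"  # everyday physics vs. relativistic, etc.
    intent: str = "sincere"        # sincere, joke, propaganda, fiction ...
    default: bool = True           # heuristically true; exceptions allowed

newton = Assertion(
    content="momentum = mass * velocity",
    source="classical mechanics",
    as_of=datetime.date(1687, 7, 5),
    granularity="everyday speeds",  # the approximation later physics refines
)

def applicable(a, need_granularity):
    # A reasoner can decide whether an assertion applies in this context,
    # instead of treating every stored sentence as timelessly, literally true.
    return a.intent == "sincere" and a.granularity == need_granularity

print(applicable(newton, "everyday speeds"))   # -> True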

developerWorks: So you would describe that as it's no longer about the data — it's about reasoning about the data?

Lenat: And it's about having the knowledge about the terms, not just having the terms.

developerWorks: Because to be able to reason, you have to have all of this context. The data is no longer the focus, right?

Lenat: Right. So think of the danger of the current semantic Web as producing the veneer of intelligence and the veneer of integration, without having real semantic integration.

developerWorks: I read an article about a talk you gave at South by Southwest (the Interactive conference), and you were using the HAL 9000 computer from 2001: A Space Odyssey as an example of the stuff you're talking about. Can you just summarize again your thoughts about why HAL was an example of the danger of turning over control, or of coming to the conclusion that we have artificial intelligence when we don't really have it?

Lenat: One of the things that the media inevitably do, because, of course, people like drama, is to have killer robots and programs going wild and attempting to take over the world, for some reason.

developerWorks: "The Matrix," "The Terminator."

Lenat: Right. Exactly. And that's because, of course, if they simply continue going about the business that they were programmed to do, that wouldn't be very interesting entertainment.

developerWorks: And why are they always so mean? Why is more knowledge equated with the desire to eliminate everything other than ...

Lenat: Right. Well, I think that's in a way a false analogy to human beings, namely, imputing to machines the same sorts of drives that have emotionally ruled our lives as animals for hundreds of thousands of years. So on the one hand, there is no a priori reason why programs would have to have those sorts of ...

developerWorks: Greed and fear and all these ...

Lenat: Exactly. In other words, the desire to serve man, to do their job effectively, could be instilled in them. In many ways, Isaac Asimov's three laws of robotics were sort of charming and antiquated. But on the other hand, something like that can and should be part of the programs we build, and really is part of the programs we build. Namely, the programs don't go out and try to do things on their own, because they're designed to meet the particular goals of their applications. And so, just as a matter of course, they do what they were designed to do.

But HAL is a good example, because it does point out a case where the kind of brittleness bottleneck we were talking about is what led HAL, in that story, to commit murder. A lot of people who have read "The Sentinel," or read the 2001: A Space Odyssey book, or saw the movie, don't realize why it was that HAL killed the crew of the Jupiter mission. They think he just went crazy or something like that. In fact, what happened was that HAL was ordered early on never to lie to the crew, which seemed like a good rule at the time. And then, just before the mission launched, because there was this secret aspect to the mission, HAL was ordered not to tell the crew why they were actually going to Jupiter space. So when somebody asked why they were going, HAL had a dilemma — namely, he couldn't lie and he couldn't not lie. So he found a mathematically elegant solution, which was to kill the crew. If the crew were dead, he wouldn't have to lie and he wouldn't have to not lie.

developerWorks: Right.

Lenat: So the problem is that nobody ever told HAL that lying to someone is better than killing them. And if HAL had understood that, he would have said, "Well, you know, sometimes you've got to break the rules, so ..."

developerWorks: Brittleness bottleneck right there.

Lenat: Absolutely.

developerWorks: Extremely brittle.

Lenat: Absolutely. So, you know, it's that kind of knowledge — knowledge that we take for granted, knowledge that people have by the time they're three or four years old, in terms of the value of, for example, human life vs. the value of getting a dent in your car by swerving and hitting a tree or something like that. It's just something we don't really think about. Or conversely, if you see a McDonald's bag sitting in the roadway, you don't think anything of running it over vs. swerving and possibly getting into an accident, and so on. It's that kind of knowledge that is keeping computers from driving cars today, for example.

A lot has been made of the DARPA Grand Challenge and of autonomous vehicles driving, even offroad, and so on. The reason they drive offroad is because it would be very dangerous for them to drive onroad. Their programs are still idiot savants.

developerWorks: So what if they run over a cactus, right?

Lenat: [LAUGHTER] Well, it's a lot better than if they, you know, run over a ...

developerWorks: Child.

Lenat: Right, or a dog running in the street or something like that. So you end up with a situation there where a task which is actually harder for humans — namely, driving offroad in the desert — is actually easier for computers to do than driving on normal city streets in traffic and so on.

But anyway, the problem with the HAL vision is that there are some things HAL is portrayed as having that are still eluding us, like image understanding, and natural-language understanding, and speech recognition, and so on, at which we are still doing just a mediocre job because of the absence of this big body of common-sense knowledge, general knowledge, formalized, organized, and so on. And that's really what Cyc has set out to do. And now that it's available, everybody can use OpenCyc for free. They can use the OpenCyc ontology completely freely for commercial applications. We also make the entire Cyc knowledge base available for development purposes for free. So basically, if people want to, they can take what we have and build on it, and hopefully build programs that are a little bit less brittle, and hopefully get to advanced functionality faster, because they can make use of the Cyc inference engine, they can make use of the natural-language interfaces we've built, they can make use of the ontology we've built so far, and build on our shoulders instead of on each other's toes.

developerWorks: How far do you feel you are from having Cyc, and its use, at a point where the bottleneck is gone?

Lenat: In terms of the numbers, in terms of percent of doneness, it sounds depressing: we're maybe 2 percent of the way there. However, in terms of what we have to do to prime the pump in order for this to take off, in a kind of analogy to Wikipedia, I think we are effectively 100 percent of the way there — namely, there's enough sitting there that Cyc itself can begin to actively help in its own continuing education and building, and policing, and consistency checking, and so on.

developerWorks: So then it speeds up, you reduce that time.

Lenat: Absolutely. So there's a kind of positive acceleration now that we're beginning to see. And software developers can begin to think of this as a new paradigm for software development. Instead of telling the program what to do, step by step by step, think in terms of telling the program what to know — what additional things does it need to know that it doesn't already know? And think in terms of having a dialogue with an already slightly intelligent agent, so that you're incrementally approaching competence at a task with this piece of software, rather than engineering it in a micro way all along.
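
A toy contrast between the two paradigms, in Python. Both halves are invented examples: the first hard-wires one question procedurally; the second states facts and a rule and lets a deliberately tiny forward-chaining loop derive the answer, which is the "tell it what to know" flavor Lenat describes.

# Paradigm 1: tell the program what to DO.
def is_grandparent_of(a, c, parents):
    for b in parents.get(c, []):
        if a in parents.get(b, []):
            return True
    return False

# Paradigm 2: tell the program what to KNOW.
FACTS = {("parent", "Ann", "Bob"), ("parent", "Bob", "Cal")}
RULES = [
    # if parent(X, Y) and parent(Y, Z) then grandparent(X, Z)
    lambda facts: {("grandparent", x, z)
                   for (p1, x, y1) in facts if p1 == "parent"
                   for (p2, y2, z) in facts if p2 == "parent" and y1 == y2},
]

def infer(facts, rules):
    # Apply every rule until no new facts appear (forward chaining).
    while True:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        if not new:
            return facts
        facts = facts | new

print(("grandparent", "Ann", "Cal") in infer(FACTS, RULES))   # -> True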

So it's a very different paradigm for software development than we've seen so far. In some ways, it's reminiscent of the old expert-systems dream, but the difference is that without this enormous substrate of common-sense knowledge, of general ontological terms, and so on, the expert systems were very much in this Wile E. Coyote mode of standing out on thin air in a very brittle environment. So in a way, you can think of this as a quarter-century diversion we've taken so that we can now go back to expert systems and do them right, with actual ground under their feet.

developerWorks: How tough do you think it is for developers, software developers, to make that paradigm shift? And do you see it happening?

Lenat: I would say that it is a real paradigm shift, and some developers naturally make it and some don't. It's almost a kind of predisposition, just like some people are good at hacking and some people aren't; some people are good at tennis and some people aren't. This is partly a learned skill, but partly a kind of ...

developerWorks: An intuitive thing?

Lenat: Right — an intuitive ability or sense. Some people are good, for example, at educating others, and some aren't. So this is a new paradigm in which what's really important is the ability to introspect clearly, the ability to articulate things clearly, the ability to see generalizations and connections, the ability to interact with another entity. So I would say the people who are good at pair programming would be good at this new paradigm. The people who do best when they're left completely alone at their workstation will have a harder time adjusting to this new paradigm.

developerWorks: Well, then, how would you like to see, or how do you think, that reality should impact the way people are taught, in academia, for example? Is there a shift that needs to go on there to anticipate this kind of skill that you think some have and some don't have as strongly? Maybe some of it can be taught and some of it can't. But are there things that academic institutions should begin doing now to encourage this?

Lenat: That's a good question, Scott. I think that in the long run the answer is absolutely yes, and I can give you a few examples. In the short run, it's hard to know exactly what the best thing for them to do is. There are constantly new trends in education; I haven't checked what the latest trend is, because my daughter is now grown and out of school. [LAUGHTER] But I believe that some of the trends that involve groups of ...

developerWorks: Teaming and things like that?

Lenat: Teaming together in very small groups to get things done, rather than every student working absolutely independently, would be a good thing. In the longer run, instead of thinking of textbooks as static objects, think in terms of the Web, and intelligence on the Web, and intelligent software as something that you, the student, and your group of fellows have a dialogue with. So you have a kind of argumentation process constantly going on, rather than something being spoon-fed to you that you're supposed to somehow memorize. That's the main difference I would look for, and hope to see, in the way the education process changes.

developerWorks: The kind of work you do — innovation, creativity — these are obviously key elements in what's going on in your life, in your experience. What are your thoughts about those things? What have you learned over the years about how you're able to enhance your own innovative thought and creativity, and about the types of things that really encourage that vs. discourage it?

Lenat: I would say that the real key is being genuinely passionate about what you're trying to pursue. So find something that you can be excited about, that you really believe will make a difference. And then when you talk to other people, people will pick up on that. And people are very receptive to supporting people who are passionate about things, because they know they're going to put in, you know, 200-percent effort. They know they're going to try to make it really succeed, and so on. And so I found that there's a kind of positively reinforcing process where, if you're daring, if you're trying to do something which is really out there, but you really genuinely believe in it and you're excited about it, you can get other people to give you enough rope to pursue that sort of foray. And I strongly encourage the listeners to look at their own life and think about, you know, are you really excited about what you're doing? And if not, decide what you can do to change that. Decide what would make a difference in the world and find a way to do that.

developerWorks: The very thing you're talking about brought Admiral Inman into your life, didn't it?

Lenat: Absolutely. That was just one example. I have a kind of reverse paranoia, I suppose: every now and then, I think that the world is plotting to do me good. [LAUGHTER] But I guess I try to ignore that and let it.

developerWorks: Doug Lenat, Founder and CEO of Cycorp. Thank you so much for your time. It's been a great talk.

Lenat: Thank you, Scott.

developerWorks: Learn more about Doug's work with Cyc at Cyc.com. This has been a developerWorks podcast. developerWorks is IBM's premier technical resource for software developers, with tools, code, and education on IBM products and open-standards technology. Find us at ibm.com/developerWorks. I'm Scott Laningham. Talk to you next time.
