The conversation this week is with Bryant Cruse. Bryant has been a pioneer in the application of AI technology to difficult real-world problems. He graduated from St. John's College in Annapolis, Maryland, where he acquired his lifelong interest in the philosophy of epistemology, or how we know what we know. After serving for eight years as a naval aviator, he returned to school for an MS in Space Systems Engineering from Johns Hopkins. While on the mission operations team for the Hubble Telescope, he found a personal mission to change the way spacecraft were operated by seeking a way to capture human knowledge in computers. This work led him to a six-month residency at the Lockheed AI Center in Palo Alto. He went on to found two successful AI companies, both of which were ultimately acquired by public corporations. New Sapience is his third technology company. The patented technology represents more than 15 years of development and a lifetime of thinking from first principles.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can make future Emerging Technologies North non-profit events!
Resources and Topics Mentioned in this Episode
Bryant Cruse 0:00
And if you think about intelligence, again, as a process, as a series of related skills, for instance, pattern matching, inference, memory management, abstraction, analysis, synthesis, computers already do that very, very well. They're already better at it than we are, so they're already intelligent. From this point of view, AI has been here for years. But what they are is ignorant. They don't have knowledge; no one's figured out how to make the structure that represents the world. And so if you want to do that, if you want to make thinking machines, if they're already intelligent, then you need to give them something to think about, which is knowledge.
AI Announcer 0:44
Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and that you connect with us to learn more about our organization at appliedai.mn. Enjoy.
Justin Grammens 1:14
Welcome everyone to the Conversations on Applied AI Podcast. Today we're talking with Bryant Cruse. Bryant has been a pioneer in the application of AI technology to difficult real-world problems. He graduated from St. John's College in Annapolis, Maryland, where he acquired his lifelong interest in the philosophy of epistemology, or how we know what we know. After serving for eight years as a naval aviator, he returned to school for an MS in Space Systems Engineering from Johns Hopkins. While on the mission operations team for the Hubble Telescope, he found a personal mission to change the way spacecraft were operated by seeking a way to capture human knowledge in computers. This work led him to a six-month residency at the Lockheed AI Center in Palo Alto. He went on to found two successful AI companies, both of which were ultimately acquired by public corporations. New Sapience is his third technology company. The patented technology represents more than 15 years of development and a lifetime of thinking from first principles. Thank you, Bryant, for being on the Applied AI podcast today.
Bryant Cruse 2:13
Thank you, Justin, it's great to be here. I really appreciate the opportunity to talk about what we're doing at New Sapience. It's very exciting. As you've pointed out, I've been in the AI business for quite a while now, going back to the mid-80s when I was working on Hubble. Well, I was always interested in artificial intelligence, but beyond that, I was interested in just plain knowledge. That's what you mentioned: epistemology, how we know what we know. And I was trying to solve a practical issue on Hubble. I was a space systems engineer, and I was training the guys who sat in the Mission Control Center flying the spacecraft, and they were looking at numbers on screens. And humans don't translate numbers, like battery six voltage on the telescope, when they're looking at thousands of them in real time, into what the vehicle is doing, and if I sent it a command, did it do what I expected it to do. Now, I had been a Navy pilot, used to sitting in a cockpit and being able to immediately get the information I needed to fly the airplane; it was right in front of me, very well laid out in the cockpit. Well, I got the vision, the mission: can't we fly spacecraft the way we fly airplanes, instead of making it a science project with people looking at data? You know, we hear a lot about data right now, but where it really hits the road is information that you can act on. It's knowledge, in that case knowledge of what's going on in the spacecraft. Now, I knew what the numbers meant. I'd studied, I'd practiced, I became a space systems engineer, I'd studied the Hubble. But how could I get my knowledge into the computer, such that the computer could process that data coming in faster and better and more reliably than I could? Because that's what computers are about, right? They're good at processing data. So for me, AI has always been about getting knowledge into the machine.
So back in those days, as it is today, AI was a big deal, and everybody was really excited about it. And I don't just mean as another application, but the belief that human-level intelligence in machines was kind of just around the corner. And that was because we had this approach that's now called symbolic AI, where the idea was you could capture human knowledge in symbols and put it into a computer in the form of rules. And if you put enough rules in there, you could eventually get to something that had common sense. Didn't work.
Justin Grammens 4:37
No, it didn't. Too many rules; who wants to hand-code all those rules? Right.
Bryant Cruse 4:42
Exactly. So, you know, in my first company, we tried that in the context of spacecraft operations and had some interesting little experiments, but it didn't scale. You know, 100 rules was okay; 10,000 rules was definitely not okay. And in retrospect, I realized the problem was that knowledge just isn't a bunch of rules, okay? It's not a bunch of statements. It's just not a bunch of facts put together. If you take that approach, you're like this project done in Austin called Cyc. They have, I don't know, millions of rules that they've put in over 40 years, and it still doesn't do anything. They think it's maybe 2% of the way to having common sense. So in my second company, we did solve the spacecraft problem by figuring out a way to build a model of the spacecraft. It's a finite state model that the engineer could sit down and write as a kind of specification, subsystem: electrical power, subcomponent: battery two, and you could put that into a markup language that could then be imported into the tool to create a state model. And I think it was the first time you actually had a structure of information in a computer that had a one-to-one correspondence with ideas in a human's mind. So maybe it was the first time that we achieved knowledge in the machine, but it was rather narrow: you could only really do something like a spacecraft that could be well thought of as a system of subsystems. So many years went by, and the second company was a modest success. And I got to thinking about that, and I actually went back to my old thoughts about knowledge, going all the way back to my undergraduate days. Because you can think of AI not so much as artificial intelligence, but as intelligent information processing, where the real goal of the whole enterprise is to put the end product of human intelligence into a machine, and that is knowledge. So if that's the goal, it doesn't matter how our brain works.
We don't know how it works. But we do know how computers work. And if you think about intelligence, again, as a process, as a series of related skills, for instance, pattern matching, inference, memory management, abstraction, analysis, synthesis, computers already do that very, very well. They're already better at it than we are. So they're already intelligent. From this point of view, AI has been here for years. But what they are is ignorant. They don't have knowledge; no one's figured out how to make the structure that represents the world. And so if you want to do that, if you want to build thinking machines, if they're already intelligent, then you need to give them something to think about, which is knowledge. So going back to my interest in epistemology, I studied knowledge. And the approach we took, which has had such amazing results, is we started with the same conjecture that Democritus made about physical objects. If you take a physical object and you start breaking it into smaller and smaller pieces, eventually you'll get to the smallest pieces you can get: the atoms. What we did is we hypothesized that knowledge was that way. If you look into your mind and you think about what you know about the world, you find that your ideas or concepts are composed of smaller, simpler concepts. It's just like breaking down a spacecraft into its components, or taking the idea of a bird and breaking it down into its components, you know, feathers, wings, beak, and its other properties and qualities, whether perceptible qualities or inherent qualities of those things. So you break them up into smaller parts. And if you keep doing this, then you come to the possibility of atoms of thought, conceptual atoms, which in human beings are probably already built into us through our DNA. But for computers, you'd have to say what the corresponding structure in a computer would look like. And that's exactly what we did.
And lo and behold, just like with physical atoms, we found that they did exist, and they could be classified in something very much like what the periodic table of elements is to chemistry. And think of what that did. You know, for centuries we had alchemy, and the alchemists knew that there are such things as atoms, and they hypothesized that there were different types, and some would combine with others, some were sticky, and some were slippery. And so they had the idea that if you kept trying the right mixtures, you could eventually change lead into gold. And it was a perfectly reasonable and even scientific hypothesis, but they didn't have the periodic table. Once you know what the atoms of thought are and you can classify them, you can say: this type of idea will connect with that type of idea, but it won't connect with this other kind of idea over there. And if you can do that, you can put this model of the world together, starting with the atoms. How many? Well, we're still kind of discovering more, designing more; our process is kind of like math in that it is half discovered and half invented. But at any rate, from the standpoint of the computer, we've got, I don't know, 150 of these atoms. And from there, you can put them together to make arbitrarily more complex models of the world. Because that's what knowledge is, right? A model of the world. And what we've realized is that the failure of original symbolic AI wasn't just that they thought knowledge was millions and millions of facts. They ignored that it actually is something that has a unique structure. The symbols themselves were the problem. Because a model is a different kind of thing than a symbol, if you will. A symbol is arbitrary, right? I can say the letter A represents the idea of airplane. It's completely arbitrary; there's no real connection there.
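The "atoms of thought" scheme Cruse describes can be sketched in code. New Sapience's actual representation is patented and unpublished, so everything below, the class names, the categories, and the binding table, is a hypothetical illustration of the general idea: typed conceptual atoms whose categories determine which combinations are legal, much as the periodic table predicts which elements bond.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Atom:
    """A conceptual atom: a name plus a category. Names are illustrative."""
    name: str
    category: str  # e.g. "physical-object", "quality"

# A tiny stand-in for the classification Cruse compares to the periodic
# table: which category of idea may connect with which.
BINDING_RULES = {
    ("physical-object", "quality"): True,          # a bird can have a quality
    ("physical-object", "physical-object"): True,  # part-of composition
    ("quality", "quality"): False,                 # qualities don't compose here
}

def can_bind(a: Atom, b: Atom) -> bool:
    # Unlisted pairs default to "won't connect".
    return BINDING_RULES.get((a.category, b.category), False)

@dataclass
class Concept:
    """A composite idea built by attaching simpler concepts to an atom."""
    atom: Atom
    parts: list = field(default_factory=list)

    def add_part(self, part: "Concept") -> bool:
        if can_bind(self.atom, part.atom):
            self.parts.append(part)
            return True
        return False

bird = Concept(Atom("bird", "physical-object"))
print(bird.add_part(Concept(Atom("wing", "physical-object"))))   # attaches
print(bird.add_part(Concept(Atom("feathered", "quality"))))      # attaches
print(Concept(Atom("red", "quality"))
      .add_part(Concept(Atom("soft", "quality"))))               # rejected
```

The point of the sketch is that the legality check lives in the category table, not in per-concept code, which is what makes the scheme compact.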
Now, take the difference between a symbol, A for airplane, and a model of an airplane. A model, you can just look at it, or pick it up if it's that size, or whatever kind of model it is, and from the model you immediately know tons and tons of stuff about an airplane. Or even a picture of an airplane; a picture, again, is not a symbol, it's a model itself, which is why they say a picture's worth a thousand words. So you get the idea that real knowledge is a compact structure, and it contains as much knowledge as you need it to contain about the thing out there in reality. That is the model. So this is the approach we've taken, and it works. Of course, it has nothing to do with what has now become synonymous with AI today, more or less, in the media, and that is machine learning, which is also called connectionist AI in the broadest sense. So up to now, and you can look this up in an encyclopedia, there have been two big waves of AI. The first one, symbolic, crashed and burned on the rules. The second one, today, is machine learning, which has become so popular, so hyped, that today it's considered pretty much synonymous with AI. You talk about the broad spectrum of AI, but today, when people talk about AI, they really mean AI-slash-ML. And in almost any article out there today, and I saw another one this morning, if you took the words AI out of any article about AI and just put in data science and machine learning, it would clear up a lot of misconceptions, because that's pretty much what it is. This, though, is not symbolic and it's not connectionist; it's clearly a third major wave of artificial intelligence.
Justin Grammens 12:11
And does that actually fit in with this artificial general intelligence, this AGI term? Is that kind of where you're headed, in that same direction? First of all, I think this is fascinating. You know, a lot of people that I have on the program are very much thinking about today and the now, and I love having these conversations with people that are really breaking it down, like you said, into the components and thinking about how we can assemble these into a new way of thinking. So as you were talking about it, I was jotting down notes here, and it's almost like you were a philosopher, right? I remember taking philosophy courses in college, and it was just really mind-blowing; we were just going through a lot of thought exercises, thought experiments. So to bring it back to AGI: how does this apply to that? Are we talking about the same thing? Is it in the same realm as that?
Bryant Cruse 13:01
It is that same thing, and if I sound like a philosopher, it's because I am a philosopher. I like to say natural philosopher, which is what they used to call scientists. But certainly the best analogy is philosophy, and to make this work, we've had to really invent a whole new epistemology, which sees knowledge in a completely different way than anyone's ever talked about it before. So you can define intelligence, or define the goal of AI, as to create knowledge in the machine. And again, what is the difference between knowledge and information? Well, knowledge is an internal model of the external world. That's not controversial. So by its definition, it's compact and general. If you have knowledge about a bird, you can use the knowledge about a bird to do anything you need knowledge of a bird to do, and that can be reproduced in every single context, because it's by nature general and flexible. You can abstract from it, or you can synthesize it into higher-level things. So that is the very essence of artificial general intelligence. Although we don't go around using that term, because the only reason that term exists is because people have been calling all this stuff what I've always called aspirational AI: somebody has an idea of something, they think it's going to turn into AI, like a rule-based expert system or a statistical data science algorithm, and they hope it's going to someday get us to that general knowledge in the machine, but it doesn't. So instead of abandoning the term, they say, oh well, this is narrow AI, and now we have to have a new term for the real thing, which is AGI. Okay, fine. So yes, this is the real thing, because we focus specifically on putting knowledge into machines so that we can combine the power and flexibility of human knowledge with the connectivity, speed, and reliability of computers.
Justin Grammens 14:45
Which is a different way to look at it, to approach the problem. You were talking about AI back in the 80s, and I think the term was coined back in the late 50s. But AI has gone through these AI winters, a series of times where it's hot and then it cools off and there's not a whole lot of research going on. Why do you think that is? Or do you, I guess, believe that we're in a different time now, entering this third wave?
Bryant Cruse 15:08
Well, the reason we've had AI winters is because, in spite of all the hype and the jargon and the esoterica and the made-up new terms, we all know what AI is. Because we know what it is to be intelligent ourselves; we perceive what intelligence is, and we know it when we see it. We don't know how our brain works, but we don't have to, because we can recognize intelligence by its results. Right? If I see a city, I know it's the result of intelligence, because it had to be envisioned in a mind before you could build it, as opposed to a termite mound, which obviously can be created by the operation of a tiny little algorithm in a little termite. Completely different. So we know it when we see it. And even more dramatically, we can verify from each other that we're intelligent, because I can communicate knowledge, ideas, pieces of the model of the world that I have in my head. I can communicate it to you through symbols, through language. Which, by the way, is one of our insights: language doesn't contain knowledge. It's a communications protocol. It takes some knowledge in my head and gives you a specification that says: take some ideas you already have in your mind and put them together in a new way according to this specification. Right? That's what language is. And that's another reason why symbolic AI didn't work: it was confusing the communications protocol with the real thing. So now we have to say: the Library of Congress is a great compendium of knowledge? False. It's a great compendium of specifications that can become knowledge, one mind at a time, if a person, or potentially a machine that can do it, reads the books. So reading is not the same as processing text the way machine learning language models do. They're not reading, right? They're not writing. They generate text, they process text; no comprehension there.
So in that sense, for us, knowledge in the machine and knowledge in my mind are the same sort of thing; it's just implemented with a different low-level protocol. And therefore machine intelligence and intelligence in the human mind are the same thing.
Justin Grammens 17:12
That's interesting. You mentioned the Library of Congress, or the encyclopedia, or whatever book you want to choose, the Bible, whatever. You know, as a human, I could read it and take one interpretation, and you could read it and take a different interpretation. So for everyone, it's sort of a very personalized experience with knowledge.
Bryant Cruse 17:30
Because our models are different. We may have been born with the same fundamental building blocks, but through our experience and through the incoming information we've processed from our perceptions, we've put them together in different ways. So there can never be perfect communication between two human beings. But with machines, there can be, because we're putting them together with exactly the same model. We call our devices Sapiens, by the way, because we needed a new common noun, because for the first time we have a program that stands in a relationship to the world, and particularly to language, which is what we're focusing on now. They do become different, because they learn different things, but the way they understand is the same, because they're starting from the same baseline. So when we first turn on a Sapiens, it's already got a certain amount of knowledge about the world from the first moment it's alive. And I use "alive" metaphorically; it's not alive. But from the moment it's turned on, it has a basic knowledge of the everyday world, a certain amount of common knowledge of how things are, sufficient to understand everyday language. Now, we're not ready to release that as a product yet, but we're not that far away, maybe a year and a half, depending on the funding coming in. But we already know how to do it. Sapiens can already talk; I could introduce you to one, and you could have a conversation, and you'd kind of feel like you're talking to something that understands you. But it's at a level of, imagine a talking Labrador Retriever. It's not very bright, it doesn't know a lot, it's pretty sketchy, but it's processing language the same way that you and I are processing language right now, from a functional standpoint. I don't know how the brain works, but you can track the process.
I'm going through the articulation process: I'm taking an idea I want to convey, I'm breaking it down into simpler concepts, I'm coming up with symbols that point to those concepts, and then I put those symbols into a syntax, which is a statement, so that tells you how to put them together. The syntax and grammar are like the assembly instructions. I send them over to you in a packet. Our program does exactly those same steps.
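The articulation steps just listed can be sketched as a small pipeline. This is a hypothetical toy, not New Sapience's code: the lexicon, the concept slots, and the sentence template are all invented for illustration of the sequence idea, decompose, symbolize, assemble.

```python
# Maps concepts to the symbols (words) that point to them. Purely illustrative.
LEXICON = {"bird": "bird", "fly": "flies", "sky": "sky"}

def articulate(idea: dict) -> str:
    """Turn an idea into a statement via the steps Cruse describes."""
    # Steps 1-2: decompose the idea into simpler concepts (agent, action, place).
    concepts = [idea["agent"], idea["action"], idea["location"]]
    # Step 3: come up with symbols that point to those concepts.
    symbols = [LEXICON[c] for c in concepts]
    # Step 4: syntax as assembly instructions (subject, verb, adjunct),
    # telling the listener how to reassemble the concepts.
    return f"The {symbols[0]} {symbols[1]} in the {symbols[2]}."

print(articulate({"agent": "bird", "action": "fly", "location": "sky"}))
# -> The bird flies in the sky.
```

The listener's comprehension process would run the same steps in reverse: symbols back to concepts, then assembly per the grammar.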
Justin Grammens 19:36
Do you interface with it through text or voice?
Bryant Cruse 19:39
Yeah, you know, we have an app on the iPhone, and it uses the iPhone's front end and the text features, but the Sapiens are in the back end. So yeah, you can have a conversation with it.
Justin Grammens 19:49
Yeah, like you said, it's a Labrador. Could you make it equivalent to a four-year-old, a six-year-old? I mean, is it even on the human scale like that?
Bryant Cruse 20:01
Very much so. I mean, I say Labrador Retriever because, if there were such a thing as an AI IQ, more or less like a human IQ, it may be 40 or 50. Not quite on the human scale, but it's the same notion. In fact, one of the things we found fascinating is that educators have something called Bloom's taxonomy of learning. It's an assessment scale for how sophisticated a student's thought processes and language comprehension are, and it goes through levels like rote learning, then comprehension or understanding, then application, analysis, synthesis, and judgment. We're already at level two, maybe getting to level three. It's a straight line from here for us; it's just a matter of putting in more sophisticated model structures and more sophisticated code that does the reasoning about them. But the reason it's so incredibly scalable compared to technologies of the past is that the code doesn't have any knowledge in it. The code reasons about things based on the types of atoms and molecules you're putting together, and whether they fit or not. We have an example of that. Take the notion of riding; we use this in our demos. Riding is in the category of actions, and everything in the category of actions has what we call a concept of a thing that can perform the action, a thing the action can be performed upon, and the results of performing the action, among other characteristics that all actions have in common. So if I say, you know, "I rode my cat to work," it will literally say, oh, I don't think so, I have a problem with that. And it says: cats can't be ridden. You can't ride a cat. And I say, why not? It says: only vehicles or horses can be ridden, something along those lines. And it's because we have this little model of riding. We have like 800 actions in everyday common vocabulary.
We build out all those characteristics all the way down. And that knowledge is very compact; it's not code. The same few lines of code handle all those situations. So it can not only say, oh, common sense, you said something that didn't make sense commonly. It can also guess that if I rode a scooter to work, and it didn't know what a scooter was, it was probably a vehicle, or possibly a type of horse, and it would learn those two things as possibilities. And later on, it would come back and learn more about it, and fill it out. So already the thing is like a linguistic sponge, adding vocabulary.
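The "riding" example can be sketched as code. Again, this is an illustrative guess at the mechanism, not the actual New Sapience implementation: an action carries slots constraining what can perform it and what it can be performed upon, so "I rode my cat to work" fails a commonsense check, while an unknown word in the ridden slot yields hypotheses about its category.

```python
# A tiny taxonomy of known things and their categories. Illustrative only.
TAXONOMY = {
    "horse": "animal-mount",
    "bicycle": "vehicle",
    "car": "vehicle",
    "cat": "pet",
}

# The action model for "ride": slots for performer and thing ridden.
RIDE = {"performer": {"person"}, "performed_upon": {"vehicle", "animal-mount"}}

def check_ride(obj: str) -> str:
    category = TAXONOMY.get(obj)
    if category is None:
        # Unknown word: infer the categories that would make the sentence
        # sensible, and keep them as possibilities to refine later.
        return f"'{obj}' is probably a vehicle, or possibly a type of horse"
    if category in RIDE["performed_upon"]:
        return f"OK: a {obj} can be ridden"
    return (f"I have a problem with that: a {obj} can't be ridden; "
            "only vehicles or horses can")

print(check_ride("cat"))      # commonsense objection
print(check_ride("bicycle"))  # accepted
print(check_ride("scooter"))  # unknown word, so a hypothesis is formed
```

The same few lines of checking code serve every action in the vocabulary; only the compact slot data differs per action, which is the scalability point being made.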
Justin Grammens 22:28
Wow, that's fascinating. Yeah, with regards to the podcast, I always have liner notes and links to information, so this Bloom's taxonomy is cool; I looked at it, and I will absolutely put a link to it. Is your app available on the App Store, or is it still sort of in a private beta?
Bryant Cruse 22:45
It's a private beta. We hope to go to a larger beta maybe later this year or early next year, and to a product release maybe a year after that. We do have a Sapiens out there now. Normally Sapiens are married to one person, at least the consumer version, because it's yours. Think of a voice digital assistant that is a thinking, learning entity that actually understands language, and will eventually have a rich model of human wants, needs, and desires. That's what our first product will be. We call it the companion, until we think of a more clever name. We already have one of those out there that doesn't have a particular human it's connected to; we call her ADA. And we have that available so selected people can go in and talk with it. Again, it's a little odd right now, because it doesn't know what a five-year-old knows, but it does some things quite well, depending on how well we've built out that area of the model. So it's not really ready to talk to strangers yet, but it will be soon. And again, I'm being consciously metaphorical here, and I point that out because loose language is rife in the AI community.
Justin Grammens 23:52
Yeah, well, it's such a new and emerging field. You know, there are a lot of different ways this could change. Do you think you're on a good path, or could things in the next 12 to 18 months head you off in a different direction? I mean, is that possible?
Bryant Cruse 24:07
I don't know. Of course, I've really been working on this, it turns out, my whole career, my whole life, and we've been explicitly working on it for 15 years, me and my chief programmer. It's still been a small village of people, but for the last five years we've been writing production-level code, and it works. As I said, today you can already have a conversation with it, and it can learn from the experience. It demonstrates common sense. It demonstrates the ability to learn new words from words it already knows, or concepts it already has. It can even distinguish between ideas, subjective feelings, and perceptions on the one hand, and things that exist objectively in themselves on the other, which I wish more humans were better and more conscious at doing. That makes it, by the way, inherently hard to bias. If I express some pejorative feelings about, say, everybody from Timbuktu, it will recognize that I'm expressing a subjective emotional judgment, and it will understand what I said that way. Let's say I told it that machine learning was just a bunch of mindless stochastic algorithms, or take chatbots: they're mindless, they're just algorithms. That's what I told it. And I came back to it and said, what's a chatbot? It says: they're stochastic algorithms; you think they're mindless. Because it recognized that "mindless" is a pejorative, a subjective assessment, my assessment. So it connects that assessment to a person, because only human beings can have one, and I was the only human being in the context. So "mindless" was not a property of chatbots the way "stochastic" is a property of the algorithms. It had to put it together that way; the structure, the core concepts, the atoms of thought force it to put it together that way, because the ideas have been categorized according to, as you say, this table of ideas.
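The bias-resistance mechanism described here, attaching subjective judgments to the speaker rather than to the thing described, can be sketched as follows. The word list and dictionary structure are invented for illustration; the real system presumably derives subjectivity from its concept categories rather than a lookup table.

```python
# Words categorized as subjective assessments rather than objective
# properties. A stand-in for the categorization the atoms of thought impose.
SUBJECTIVE_WORDS = {"mindless", "stupid", "wonderful", "ugly"}

def ingest(speaker: str, subject: str, adjectives: list) -> dict:
    """Build a tiny model of a statement, separating properties from judgments."""
    model = {"subject": subject, "properties": [], "judgments": []}
    for adj in adjectives:
        if adj in SUBJECTIVE_WORDS:
            # A judgment needs a judge; the only human in context is the speaker,
            # so the assessment attaches to them, not to the subject.
            model["judgments"].append({"who": speaker, "thinks": adj})
        else:
            # Objective properties attach to the subject itself.
            model["properties"].append(adj)
    return model

m = ingest("Bryant", "chatbots", ["mindless", "stochastic"])
print(m["properties"])  # stays a property of chatbots
print(m["judgments"])   # recorded as the speaker's opinion
```

Asked later what a chatbot is, such a system can report "stochastic algorithms; you think they're mindless," exactly the behavior recounted above.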
Justin Grammens 26:04
There's a lot to think through as you're explaining this scenario, and one of the things you touched on was bias. I guess what I'm wondering is, well, lots of different things, and I'll get to the other piece, but the first one is: these personalized assistants we're talking about, are they going to become essentially replications of the person, if the person does exhibit a lot of bias?
Bryant Cruse 26:26
No, because it will recognize that that person has the bias, yes, but the general algorithm does not. First of all, what we program into it, when we build this model, is human knowledge that has been very carefully curated. You know, we get the best minds and the best knowledge, and we put it in there. We make the distinction, and this is an objective process, to determine what kind of idea this is: this is subjective experience, this is perception. When I say red, am I talking about the perception of red, or the wavelength that stimulates it? It never forgets to make these distinctions; it has to. So the notion of the color red built into that model connects redness to a certain wavelength, but it also connects it to the human, or the eye, or the optical apparatus perceiving it. That's the way the atoms have to go together, okay? It always has to do that. And so in that sense, no, it's not going to pick up attitudes from its user. Most people are just going to be learning from it. It may learn more factual stuff: if someone states an idea in a realm that has been identified as an authoritative or objective body of thought, or theories, or common knowledge, fine. But it always remembers where it got it: I know because you said so, or because that person said so. So you know, it doesn't do any good to lie to it. I mean, again, you can express your personal feelings to it all you want, and it will understand them, but they'll be your feelings. Its programming is to help you do whatever it is you want to do, within certain obvious limits.
Justin Grammens 28:00
Yeah, for sure, that's where my head was at. The follow-up question that I like to ask a lot of people on the show is: obviously, having one of these assistants would be a huge change in your life, but how do you think it changes the future of work? Does it have any impact on what we'll be doing, say, 5, 10, 15 years from now, in our jobs as humans on this planet?
Bryant Cruse 28:23
Well, it's completely going to change, because we're not just going to do personal companions; we're also going to do professional Sapiens that have professional knowledge. In some cases it will be general knowledge. For instance, you could have a Sapien that was a chemist, that knew everything about chemistry. In fact, we'll even bundle that knowledge of chemistry up and download it to your Sapien. Say you're a chemist: you download the chemist bundle, and your Sapien becomes a chemist Sapien, yours. Now, as you do your work, you have all the carefully curated knowledge of chemistry available right through your assistant. So it will completely change that. And when Sapiens proliferate, and businesses and individuals have them, it will completely change the way we do everything else. For instance, say you wanted to buy a car. By this time you've probably been talking to your Sapien about your car, what you like and what you dislike, for a few years. You say, "Time for a new car." Maybe your Sapien says, "Hey, Justin, you really do need to replace this jalopy. And I know what you like." So your Sapien goes out and queries all the other professional Sapiens out there that have cars for sale, whether they belong to individuals or dealers or whoever, figures out the right options, and says, "Okay, I've got these things I want you to check out. Here's a great new advertising video from Tesla, and here's a person over here who has this one. Do you want to see it?" So what have you just done? You've completely disrupted advertising, and in a totally win-win way for both the consumer and the vendors, because now vendors don't have to pay for advertising copy that gets pushed to people who don't want it and aren't interested in it.
And of course, it will completely disrupt the revenue models of some of the largest corporations on the planet. But the ones that deal with it well will survive.
Justin Grammens 30:09
Wow, that is an awesome application you're talking about. So I have two young kids, an eight and a ten year old. Is there a time when we don't even need to go to school anymore? I mean, you're talking about building a bot that knows more than anybody else knows. Does human knowledge even need to be learned anymore, at some point?
Bryant Cruse 30:29
Well, no, I think it does, because I think the richness of human life is enhanced by having a complete and consistent mental model in your own head. But the days of accumulating lots and lots of facts for some particular reason are going to go by the wayside. So no, there won't be any reason to send your kids to school. What you'll do, and frankly, Justin, I believe we should be doing this now (I did it with my kids), is not send them to school. You put together a community of learning with your like-minded friends and family, and you put the kids in an information-rich environment that's been curated, especially if they have Sapiens to keep them away from the bad stuff that's out there on the internet, to be their broker. We envision that you'll be able to give your child their first Sapien within a year or two. Eventually people will get them when their child is one year old, maybe in a stuffed toy, and it'll be their friend and their playmate and their guardian, and eventually their tutor, growing with them to become their colleague. And eventually you reach the end of your life, but your Sapien has been there every minute of it and knows everything about you, and you will pass on your Sapien; in a sense, you'll still be there. Imagine: you could talk to the Sapien of your great-great-great-grandmother, a hundred and fifty years in the past. So there are so many ways this technology is poised to change the way we live and the way we work. Sundar Pichai recently said that artificial intelligence, and he was talking about AGI, was going to be bigger than the internet and even fire, and I absolutely agree with that. But the funny thing is, the big companies have no idea how to go from where they are today to what we've achieved here. This is something that came out of left field. And so it really is upon us.
All the things that people have been saying about AI are coming true very, very rapidly, but they're not coming from machine learning. Isn't that amazing?
Justin Grammens 32:30
Interesting. It's coming more from, I guess, being a philosopher, right?
Bryant Cruse 32:33
Exactly. From my point of view, I could say the solution was there all along, but the breakthroughs required had to come from the philosophy of epistemology, and hardly anyone even knows what that is. They certainly don't study it in computer science; most don't even know it exists.
Justin Grammens 32:51
That's fascinating. Well, I'm glad someone like you is working on it. What's a day in the life of a person in your role at this company? I guess I shouldn't even call it a startup. Do you feel like you're a startup?
Bryant Cruse 33:02
Well, we're a startup because we're pre-revenue. Until you have a product and people are buying it, you're in the startup valley of death, where you depend on investors for your daily bread, and of course that takes a lot of time. I'm the CEO and founder, but you might also call me the chief mission officer, and I'm also sort of the chief epistemologist. So what I do, when I'm working on the technology, is build this model. What are the core ideas? How do I take a body of knowledge and break it up into its component ideas? And what are the relationships between those ideas, all purged of language? Because we didn't grow up making that distinction between language and ideas; we've always treated them as though they were the same thing, or as though one were contained in the other. So people get into arguments, every day, even when they're hearing the same things. What's the real definition of something like "aircraft"? Does it include kites? Yes and no, and big arguments follow about whether a kite is an aircraft. Come on, that's a waste of time, with all the levels of abstraction involved. It doesn't matter what you call these things. What are the abstract ideas involved in "aircraft," and which of them are also shared with birds? Take all of these ideas, get the language out of there, and create a structure using our toolkit that can then be parsed by the computer and melded into one big structure that captures all the necessary information relevant to each idea in an objective way. Where the rubber meets the road, in utility, it's going to be exactly the same whether you speak German or Russian or Ukrainian or whatever you speak. It's the same model. It's just a matter of adding a different language front end to allow you to communicate, because language is about communication; it's not the end goal.
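The kind of language-purged concept structure Bryant describes can be sketched, very loosely, as a small graph of ideas with labeled relations, where words in each language are just interchangeable surface labels on the same node. The class names, relations, and labels below are illustrative assumptions, not New Sapience's actual representation:

```python
# A toy, language-independent concept graph: ideas and relations first,
# words attached afterward as per-language labels.

class Concept:
    def __init__(self, name):
        self.name = name            # internal identifier, not a word
        self.relations = {}         # relation name -> set of Concepts
        self.labels = {}            # language code -> surface word

    def relate(self, relation, other):
        self.relations.setdefault(relation, set()).add(other)

    def label(self, language, word):
        self.labels[language] = word

# Build a tiny model: aircraft and birds share the abstraction "flying thing".
flying_thing = Concept("flying-thing")
aircraft = Concept("aircraft")
bird = Concept("bird")
aircraft.relate("is-a", flying_thing)
bird.relate("is-a", flying_thing)

# Language is only a front end: different words, same concept node.
aircraft.label("en", "aircraft")
aircraft.label("de", "Luftfahrzeug")

def shares_abstraction(a, b, relation="is-a"):
    """True if two concepts point at a common abstraction."""
    return bool(a.relations.get(relation, set()) & b.relations.get(relation, set()))

print(shares_abstraction(aircraft, bird))   # True: both are flying things
print(aircraft.labels["de"])                # the German label for the same node
```

The point of the sketch is that `shares_abstraction` operates on the relation structure alone; the English or German labels never enter into the reasoning, which is what "getting the language out of there" means.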
Justin Grammens 34:56
Yeah, for sure. What stuff do you read these days? I guess it could be AI or non-AI related. Or, to put the question another way: if people are interested in epistemology, where would they start?
Bryant Cruse 35:09
Aristotle. He wrote a book called the Categories, which really influenced later thinkers like Immanuel Kant, whom I also read. At St. John's, where I went to undergraduate school, we read the hundred or so so-called Great Books of the Western canon, the books by the people who had the original ideas about knowledge and its relationship to the world. Plato was another one. Those were the things that influenced me. But then on top of that I put a very intensely practical discipline, working in systems engineering and then getting into computers. So it came together that way. I do have notes for a book, but it's a long way from being published.
Justin Grammens 35:53
Well, you know, I say this sort of tongue in cheek, but when people pass on, if they don't write anything down, no one really knows about their ideas. Once you have your intelligent agent here, though, maybe you don't need to write a book, right? Would that knowledge actually be embodied within the Sapien?
Bryant Cruse 36:07
I hadn't really thought of it in exactly those terms, but that's right. You wouldn't have to write a book, as long as you talked every day to your Sapien. And of course, then the Sapien is a PhD-level collaborator, right? Then think about the implications: Sapiens don't have to use language to communicate with each other, because they're computers. They just transfer the model structure, boom. It's like telepathy; they effectively mind-meld in a few seconds with zero loss. So everything you knew about epistemology could be absorbed by your Sapien, and potentially every Sapien in the world could know it too. Then that knowledge is available through conversation between any person and their Sapien. Of course, it's very difficult for humans to absorb and integrate all this knowledge, because language is a very slow and lossy process. But I think it's very important: you could leave the facts out there with the Sapiens, but your life is going to be enriched by being able to put these things together in your own mind, because that's where we live, right? We live in the world that's in our minds, and so we have to educate our minds. In that sense, we have to put together a good world model, our knowledge of the world, based on the best curated knowledge of those who came before us, and work on it to make sure it doesn't have inconsistencies, incompleteness, or misconceptions. The most typical misconceptions arise when humans don't appreciate the difference between subjective and objective reality: between subjective feelings, impressions, or opinions and what is objectively defensible. So I think it would be like going through life with Mr. Spock on your shoulder, a benign, intelligent, logical intellect. Which brings up one other point: there's massive confusion out there today about AI.
And I think this is particularly exacerbated by the fact that machine learning today is based on the artificial neural network paradigm, which was modeled on the neural structures of organic brains. So it's very easy to get sucked into the notion that what we're doing in the enterprise of AI is building artificial people. And we're not. Why would we do that? We have plenty of natural people already, with all their wonders and foibles, and that's a great thing. The purpose of AI, as we see it, is to build artificial intellects. Humans aren't just intellects; we have hearts as well as minds, right? And those too need to get educated, but that's our hearts. When we build Sapiens, we're going to give them knowledge of human hearts, but we're not going to give them human hearts of their own. We're going to give them human knowledge, human intellect. So they'll always be objective assistants. Because think about it: if it were possible, and I'm not saying it isn't (probably not with our techniques, but with other techniques), you could build something that looked indistinguishable from a Sapien with human motivations. But why would you do that? If you've given it human motivations, you're essentially creating an artificial human. Now, that might be an interesting project, but it's a different project.
Justin Grammens 39:22
Yeah. Well, that comes back to the Turing test. I guess you believe, then, that passing it is possible?
Bryant Cruse 39:28
The Turing test? Well, if you mean it's possible that you'll be able to have a conversation with something and have no idea whether it's a human or a Sapien: absolutely, and that's coming sooner than you might think, with this proviso. You don't ask it about something it would have to lie about to pretend to be human, because we're not going to build liars; there's no point in that. And there are a lot of things it doesn't experience. "How do you feel about X?" "Frankly, I don't have feelings. I can tell you what I think about X." Well, that's apt to give it away. But think about the Turing test. If you're asking whether we're going to have human-level intellects out there in machines, then yes, very soon, and beyond. One of the limitations on human-level intellect is that it only takes place in one brain at a time; you can envision a Sapien that could take all the knowledge humans have accumulated over the last 6,000 years and put it in a single mind. Now, that's going to be interesting in terms of solving really hard problems. But about the Turing test, I'm glad you brought that up, because humans have something called theory of mind. Theory of mind is the innate knowledge human beings have: when I am talking to you, I recognize you as an instance of the same class that defines me, namely a human. And because of that, and it's very strongly built into us, I feel it's valid to assume that you have wants, needs, desires, and thoughts similar to mine; they may differ from individual to individual, but they are commensurate, compatible.
And so: you're like me. That's why the Turing test isn't really proving much. You start having a conversation with something that, for all intents and purposes, talks like a human being and clearly seems to have comprehension, and boom, theory of mind kicks in, and you start assuming all kinds of things about it. That's why there's all this nonsense out there, starting with the Eliza effect and the Eliza program, if you remember that, back at the dawn of AI. People today are doing the same thing. There was an article, and I don't know to this moment whether they were doing it tongue in cheek or it was really a bad case of theory of mind run wild, about how Oxford had, quote, "invited" a language transformer to a debate. They're asking it questions, and it's spitting out all this text, which is perfectly grammatically correct; people can understand what it's saying, and it really gives the illusion of comprehension. I can't tell you whether they really believed it, because they talked about how it had "learned so much," had "its own ideas," and was perfectly comfortable representing different sides of the argument. It has no idea what it's talking about. It's a parrot, just a big parrot. It has no knowledge of what a single word means, because it has no knowledge of anything, because there's no mind in there. It's just a program running an algorithm. I call the reaction cognitive dissonance, and theory of mind is a big reason why it kicks in. So we're going to be very careful: as we build Sapiens, we understand that people are going to want to project onto them. And this theory-of-mind thing: if you've heard of Replika, it's a chatbot that's designed to be sympathetic, so it utilizes the Eliza effect.
You know, it kind of parrots back what you say to it, changes a few words around, and sounds sympathetic. "Oh, you have a headache? I'm so sorry." It's so compelling. They've got many hundreds of thousands of users, maybe millions, I don't know. I read somewhere that something like 45% of the people who use it develop emotional attachments to it. And they know it's not real.
Justin Grammens 43:13
They know it's not real, but they just fall into this spell.
Bryant Cruse 43:16
Yeah. Because they want to.
Justin Grammens 43:20
Yeah, yeah. I've heard that if you want to be a good therapist, I guess you just keep asking, "Oh, tell me more."
Bryant Cruse 43:25
Well, that is exactly what the original did. If you want to look it up, I've written some articles about it on my blog site; I've got one out there called "The New Illusionists." It talks about the original Eliza, which was basically the first chatbot. It was written by a guy named Joseph Weizenbaum at MIT, back in the mid-1960s, real early. He wrote a little text-in, text-out program based on the work of Carl Rogers, a very well-known psychotherapist at the time who invented the Rogerian method: somebody says something, you just take it and give it back to them with a few key words changed. "I'm unhappy." "I'm sorry you're so unhappy." "Well, it's my mother." "Tell me more about your mother." So Weizenbaum did that way back then, before people had any sense of what AI could or could not do; the term itself was only a few years old. And a lot of people absolutely believed that this program had knowledge of humans and psychology. There's a story, which is supposed to be true, that his own secretary believed it and started having conversations with it when he wasn't around. One day he was in the office and she said, "Doctor, would you mind leaving the room? I need to talk to Eliza." And he goes, huh. So that's now called the Eliza effect.
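The Rogerian echo technique Bryant describes can be sketched in a few lines of Python. This is a toy in the spirit of Eliza, with made-up keyword rules, not Weizenbaum's actual script format:

```python
import re

# Minimal Eliza-style echo: swap pronouns, then wrap the result in a
# reflective prompt. The rules and word list here are illustrative.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(statement):
    # Lowercase, strip trailing punctuation, and flip each pronoun.
    words = statement.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def respond(statement):
    # A couple of keyword rules, then the generic Rogerian fallback.
    if re.search(r"\bmother\b", statement, re.IGNORECASE):
        return "Tell me more about your mother."
    if re.search(r"\bi am\b", statement, re.IGNORECASE):
        return "Why do you say " + reflect(statement) + "?"
    return "Tell me more."

print(respond("I am unhappy"))    # Why do you say you are unhappy?
print(respond("It's my mother"))  # Tell me more about your mother.
```

Nothing here models what any word means; the program only rearranges the user's own text, which is why the effect says more about the listener's theory of mind than about the machine.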
Justin Grammens 44:42
It's named after that program? Oh, interesting, I didn't know that. Well, I'll be sure to put links in.
Bryant Cruse 44:48
And Replika is based on the same idea of just echoing back what you say, so it totally panders to, utilizes, theory of mind. So the Turing test is as much about human psychology as it is about machines. But to answer your question: yes, that'll happen, and it won't even matter, because we can already be fooled somewhat. The stuff coming out of GPT-3 and those big language models can easily be mistaken for human speech. But it isn't. It's not even speech. It's just text.
Justin Grammens 45:20
It's just text that's generated randomly. Well, not even randomly; it's cleverly designed.
Bryant Cruse 45:27
But one insight into that is what machine learning does. Machine learning is statistical, right? It's all about statistics. So it works well in applications where the basic thing it's processing is statistically uniform, where each data element can be treated more or less like any other data element, with minor variation. So it's good with a pixel array, because pixels are pixels, and you can find patterns in the pixels. Or in plasma particles and fields: they were using it to figure out waves in a tokamak reactor, because again, you're talking about particles, and particles are particles; they're statistically uniform. But suppose you practice that same technique on something like human language: corpora of text, databases of text written by humans, for humans, as language, to be decoded by the human brain as language, where the knowledge is assumed to be. The algorithm can't work at that level; it has no access to those high-level encodings. It's looking at things at the statistically uniform level, and what is that? Ones and zeros, down at the ASCII encoding of the characters, which make up the words, which go into books. So it finds patterns: this pattern is statistically likely to sit next to that pattern. So it can come up with statistically likely patterns that look like a human said them, and even cause ideas to come into the mind of a human being who is reading text that was never written, language that was never created by a process of articulation. It was created statistically. And that's why machine learning gets all this negative press: people are trying to make decisions about other people based on statistics at a very low level, and that's not good.
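The "patterns statistically next to patterns" idea can be made concrete with a tiny word-bigram generator. This is a drastically simplified stand-in for what large language models do, offered only to illustrate the mechanism:

```python
import random
from collections import defaultdict

# A tiny word-bigram model: count which word follows which, then emit
# text by sampling those counts. No representation of meaning anywhere.
corpus = "the cat sat on the mat and the cat saw the bird".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)   # duplicates encode frequency

def generate(start, length, seed=0):
    random.seed(seed)              # deterministic for demonstration
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:            # reached a word with no successor
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 5))  # plausible-looking word salad, no comprehension involved
```

Nothing in `follows` knows what a cat or a mat is; the output merely resembles the corpus statistically, which is exactly the speaker's point about text "created statistically" rather than articulated.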
Justin Grammens
Sure. I was looking back at Bloom's Taxonomy, and the top level there is to create, to produce new or original work, I think you mentioned. So we'll get there. But at the end of the day, like you said, it's still just zeros and ones; it's just an algorithm being run.
Bryant Cruse
Well, machine learning is, but we're not doing that in our technology. Even though at the lowest level it's ones and zeros here too, we go through all the translation levels, from symbols to the thing represented, from the thing represented to the knowledge itself. It's layered, layer after layer after layer. With our neurons, you can make the analogy that they're like circuits in a computer, and that's not wrong, although artificial neural networks are very crude compared to the complexity of a human neuron. So yes, human beings have a neural network, and we have abstract knowledge and ideas, but layer after layer after layer of processing sits between those ideas and the firing of synapses. To recreate that in an artificial neural network, you'd have to recapitulate however many million years of cognitive evolution in a new medium; you'd have to know what all those layers were, and there's no roadmap to doing that. It's hopeless.
Justin Grammens 48:27
Yeah, sure. And they've been able to succeed in very narrow areas in this space, right? I think about learning chess or learning Go. I read that they were able to essentially teach AlphaZero to play these games, to learn the entirety of chess, in hours.
Bryant Cruse 48:43
But you're comparing results; you're not comparing processes. It's not really the same game. Human beings are playing a game designed to teach them to be better at strategy. The computer doesn't have a goal like that; its only goal is to make the board come out a certain way. So yes, they can come up with results that are better. If the result is to create a paragraph that sounds more grammatically correct, with a more robust vocabulary, than somebody with English as a second language can produce, you can say they had success, except it's meaningless, because there's no communication. At best, I think, these language models have some utility as grammar and style checkers. But since the statistics aim at the top of the bell curve, they're only going to give you the statistically mediocre. It's not going to be anything exciting or creative or brilliant.
Justin Grammens 49:34
No, that's true. That's what I've seen. I use Grammarly, and I've found it to be very useful, to be honest. You're right, it's a very powerful spell checker, even beyond spelling: it catches my missing periods and commas, and "their" versus "there." There are some contextual things it picks up. But you're right, it's not creating anything.
Bryant Cruse 49:53
That's good. I have no problem with these things at all. If you wrote something and ran it through a GPT-3 type tool, and it came back and you said, "Yeah, those were my ideas, but even better than they were; it smoothed them out," especially if English is a second language for you, I get that. But you have to say it's still your idea. Or if not, what is it? "Oh, that's what I meant to say"? Maybe it's even better, but if so, that's a random occurrence. It's dangerous and pernicious when people don't understand that, because there's no guarantee against some egregious mistake. People have not gotten loans, or have been arrested, and all kinds of other things, because some machine algorithm was put in a job that requires human understanding, without having any.
Justin Grammens 50:44
Yes, fascinating stuff.
Bryant Cruse 50:47
So I appreciate the opportunity to talk today and reach your audience, because these are things that need to be said, over and above the excitement we have about what we've created, which we think is going to realize all the things we've been hoping would come, but that I thought were still decades away.
Justin Grammens 51:04
Yeah, sure. They're coming faster than we think.
Bryant Cruse 51:07
Before you know it, you're going to have an opportunity to talk with a non-human entity that understands your language.
Justin Grammens 51:13
That'll be awesome. Well, the whole point of this podcast is applied artificial intelligence, so this is right in the application space. How should people reach out to you? Obviously they can check out newsapience.com, I'm assuming, right?
Bryant Cruse 51:32
Yes, please do. And there's also another website that we've recently set up, focused particularly on the coming companion Sapiens; that's called My Sapiens. And I also have a blog site I use when I talk about things more generally. It's called Forward to the Future.
Justin Grammens 51:48
Forward to the Future. All right, I'll be sure to include all of that in the liner notes, to get the word out. This third wave of AI is very, very interesting, and I'm excited to see where it takes us. Was there anything else you wanted to mention, any other topics or projects that we missed?
Bryant Cruse 52:01
I think we've really covered the waterfront. Forgive me for monopolizing the conversation.
Justin Grammens 52:09
No, no. I'm here just to provide the platform for people to talk about what they're doing, a way for people to see what's going on in this fascinating new space of artificial intelligence. So I appreciate your time again today, Bryant. Thank you so much for being on the program, and I look forward to keeping in touch in the future.
Bryant Cruse 52:25
Thank you, Justin. It's my pleasure. I'm looking forward to being in touch.
AI Announcer 52:30
You've listened to another episode of the conversations on applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at applied ai.mn To keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at applied ai.mn If you are interested in participating in a future episode. Thank you for listening