Conversations on Applied AI
Welcome to the Conversations on Applied AI Podcast where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of Artificial Intelligence and Deep Learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
Peter Scott - What AI Means for Your Life, Your Work, and Your World
The conversation this week is with Peter Scott. Peter is the author of Crisis of Control: How Artificial Super Intelligences May Destroy or Save the Human Race. He holds a master's in computer science from the University of Cambridge and worked for NASA's Jet Propulsion Laboratory for 16 years. Today he spends his time as an author, futurist, and business coach, focusing on assisting clients through exponential change. He has appeared on radio, TV, and podcasts, in addition to giving a TEDx talk. He has a new book out called Artificial Intelligence and You: What AI Means for Your Life, Your Work and Your World.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup, and help support us so we can put on future Emerging Technologies North non-profit events!
Resources and Topics Mentioned in this Episode
- Crisis of Control by Peter Scott
- Artificial Intelligence and You by Peter Scott
- Neuro-linguistic programming
- C.P. Snow
- Artificial Intelligence and You podcast
- Moore's law
- Artificial general intelligence
- Capsule neural network
- HumanCusp.com
Enjoy!
Your host,
Justin Grammens
Peter Scott 0:00
We're seeing AI surprise us in ways: it does things now that we used to think would require human-level general intelligence. It doesn't mean that we have that in AI yet, but we see AI doing things that crack a boundary we thought was a lot further away. And so it prompts us to ask, well, what is this thing? And who are we?
AI Announcer 0:23
Welcome to the Conversations on Applied AI Podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy.
Justin Grammens 0:54
Welcome everyone to the Conversations on Applied AI Podcast. Today on the program, we have Peter Scott. Peter is the author of Crisis of Control: How Artificial Super Intelligences May Destroy or Save the Human Race. He holds a master's in computer science from the University of Cambridge and worked for NASA's Jet Propulsion Laboratory for 16 years. Today he spends his time as an author, futurist, and business coach, focusing on assisting clients through exponential change. He has appeared on radio, TV, and podcasts, in addition to giving a TEDx talk. Finally, he has a new book out called Artificial Intelligence and You: What AI Means for Your Life, Your Work and Your World. So I'm very interested to dive into that a little bit more and talk about those topics during this podcast. Peter, thank you so much for being on the program today.
Peter Scott 1:36
Thanks, Justin. I love doing this. Thank you for welcoming me.
Justin Grammens 1:39
Awesome. I mentioned you worked for JPL there. How does somebody working for JPL, I guess it was maybe in the 90s, early 2000s or so, get into artificial intelligence? And what have you been doing since then?
Peter Scott 1:50
I've continued to work for them as a remote contractor since then. But being self-employed means that I've been able to branch out and do these other things that you see with artificial intelligence. That was really born of the convergence of my work in technology and understanding artificial intelligence, and having children, which prompted me to use the background I acquired in the softer topics, which was sort of a reaction to being a computer programmer all day: becoming a coach, as you mentioned, and doing neuro-linguistic programming. And then I realized there was a necessary convergence of those two worlds, like C.P. Snow talked about, that we have to bring together, ultimately, for a successful future with artificial intelligence. I owed it to my children to do what I could about that, and I was in a position to actually communicate about it. And so that's what I've been doing.
Justin Grammens 2:50
That's fascinating. You know, I read one of your quotes as I was researching who you are, and you said, "I believe a partnership between the human development and software development fields is the key to our future." Could you elaborate a little more on that? I thought that was an interesting quote, this partnership between the human development and software development fields.
Peter Scott 3:09
Well, just look at how AI is being applied and researched right now. You see that all over the place in my podcast, which is also called Artificial Intelligence and You. I've been interviewing lots of people over a huge range of topics that intersect with artificial intelligence, and I've been struck by how many of them are people that combine cognitive science, neuroscience, and machine learning; that seems to be a common pairing of topics at university research and teaching levels. Ditto for philosophy and artificial intelligence, ditto for ethics and artificial intelligence. Twenty years ago, those did not intersect in my world. And here we are seeing psychology and AI being talked about in the same breath by many, many people, with lots of applications. There are business-capable and monetizable applications right now being used for therapeutic benefit, like a therapist that you can talk to at three o'clock in the morning when your regular one might not answer the phone. There are so many examples of this that we're finding. We could talk about why that's happening, but the fact that it is happening is just undeniable the moment you start poking at the ways that AI is evolving on the leading edge right now.
Justin Grammens 4:33
Yeah, let's actually talk about that. Is there some sort of convergence you feel is going on? Like, what are the major shifts happening in the technology to put us in this position today?
Peter Scott 4:42
Sure. Well, I think one of the reasons for this is that AI is evolving at phenomenal speeds, driven by Moore's law, among other things, and infusions of huge amounts of money; a billion dollars will make a big difference. We're seeing AI surprise us in ways: it does things now that we used to think would require human-level general intelligence. It doesn't mean that we have that in AI yet, but we see AI doing things that crack a boundary we thought was a lot further away. And so it prompts us to ask, well, what is this thing? And who are we, in that respect? Why do we not understand this thing called intelligence, or consciousness, or even sentience, as much as we thought? What is it? Suddenly those questions seem a lot more important. It's funny: you can be an AI developer trying to get a machine learning model to work while it's misclassifying cats, so your reaction to some of this existential-type dialogue is that it's overblown. But increasingly, a lot of serious researchers and prominent people central to the field of AI development are saying we're heading in a direction that we need to explore further. And it's generating issues right now, obviously, with ethics and questions of bias and so forth, that are immediate.
Justin Grammens 6:14
As you start thinking about intelligence, then it feels like now we're putting on a lot of our philosophical hats, right? And maybe, is it causing us to sort of step back and say, Well, what does it really mean for us to be intelligent as a race? And how are we really any different from all these machines?
Peter Scott 6:27
Yes, that comes up a lot, because we don't have a definition of intelligence that is adequate to measure where AI is. It's fine for putting a number to human beings, although it's hard for the people that actually have to develop those instruments. But when it comes to looking at AI and saying where it is on that scale, that's different. It's clearly a lot spikier compared to us: we are measuring general intelligence in human beings, but in artificial intelligence we have narrow intelligence that sometimes looks like general intelligence. I mean, we're talking around this, but take the specific examples of what the large language models are doing right now: creating conversation, answering general questions, creating these amazing images from prompts that stunned me in how consistent they were in their realism.
Justin Grammens 7:29
You know, you got me thinking about this whole idea of generative art, and that seems to be a really hot topic these days. We've had natural language processing, we've had computer vision, text-to-speech, speech-to-text, we've had some of these other things. But I think what's really hitting me over the past couple of months is that now we're generating all sorts of poems, we're generating music, we're generating artwork. What do you think about that? Have we crossed into a new threshold? Are things different now because of this?
Peter Scott 7:59
Well, it provokes very heated conversations about creativity and what we thought that was, because for humans to do this kind of thing requires something that we often characterize as a moment of divine inspiration, communing with something higher than ourselves, to come up with something that moves us the way that Beethoven, Mozart, and Van Gogh do. Now we can train large language models to do what they do, or did, and produce things that evoke the same effect. And that makes people angry, because it did not stem from a human being transcending the human condition to do it. There are all kinds of philosophical conversations about that. But now we can get into, well, where's the utility of creative works? Sometimes we experience them in and of themselves, for what they are. So you go and attend a concert, you go to an art exhibition, and then you want to know something about the composer, you want to know something about the artist; if it's an AI, that's going to color your interpretation. But if you're watching a movie and there's music behind it, that music is going to shape the way you react to the movie, yet you do not have those questions. An AI could easily generate that music, and probably figure out the emotional beats it needed to hit in order to generate it as well. Which obviously has implications for people that do that kind of work for a living.
Justin Grammens 9:36
That's great; you're touching on the next thing I was going to ask about. With regard to writing and publishing these books, and doing TEDx talks and things like that, the other thing you do a lot is consult, right? You consult with businesses and coach them on how they can stay relevant with this new technology. What are some things you're talking to businesses about today related to artificial intelligence?
Peter Scott 9:58
Well, we are living in a world where there's some fragility to our psychology these days, from being under assault by so much change that we don't understand, propelling us into the unknown. And so we feel fragile in that respect. And businesses can feel fragile, if their people think, what's the chance of my job going away or being automated? What's the chance of this industry being superseded? So the people in charge of those businesses need to understand what they can do about that. How is the ecology that they're in evolving? What's the likelihood of there being some sort of existential upset in their sector? What does AI mean, and what can it really do and not do, over what sort of timeframes, in their business? And then, what do they need to do to allay the fears of their people? Where should they be looking to use AI, and where should they be looking to use their human capital? What sorts of things are under threat? It can be really hard, as you know, to understand where AI is really leverageable, where humans have the edge, and how we should capitalize on that. So it's about giving people, ultimately, agency, to counteract that feeling of encroaching helplessness about going into an unknown where we don't know, by definition, what's happening. But that's also true of the people that are taking us there, right? The people driving these high-technology companies have a sense of what they want to do with the technology, but an exciting day is when it surprises them. The difference is that they're in charge; they are the ones driving the car, and the people who feel like they're not driving the car, the passengers in the back, are the ones that have the most fear here. I like to say that exploration is when you visit the unknown; disruption is when the unknown visits you. The big difference is agency. How do you feel about that?

So a lot of what I do is about helping people feel a sense of agency, like they have some control, some way to move the needle, or some understanding of where this car is going.
Justin Grammens 12:21
And I guess you talk about how businesses can utilize capital appropriately. What's the advice you're giving to companies based on certain skill sets, certain job titles and roles?
Peter Scott 12:32
It depends upon the sector. So say you've got a traditional product maker, for instance, something like detergent, for an obvious example, or a supermarket chain. AI is something that's leveraged at scale by people that have a lot of data. So if your competitor is Walmart, you're in trouble if you don't have as much data as they do, because they can afford to develop one piece of AI that scales across their entire enterprise to leverage all that data. So you have to find where you can develop a large body of data that is unique within your industry. If you can't, then you may want to look for an exit plan; maybe your most valuable asset is your real estate. Then there are traditional service industries, like insurance. Now the actions of people like adjusters can be replaced by AI: there are insurers where someone can take pictures of their car damage through the phone, and an AI assesses it and adjudicates the claim on the spot. Think of who that's replacing in traditional businesses right now. But those people are very experienced at dealing with humans and relationships, and that's useful. They have internalized and learned customer relationships; put them in roles where that matters, where that continues, and you can develop that. That's useful capital. Looking at a car and figuring out how bad the damage is? No, not any longer. And then we have disruptive ones. You might have disruptive products, think iPhone, that sort of thing, and your best asset is the creative thinking that came up with that, because you've got some understanding of markets and people: what will people use that they don't think they need, or don't know that they need? That understanding, applied to the creative thinking, is useful.

You can use AI in your development and manufacturing, but you've already got the best human capital. And then, obviously, you can have disruptive services, going back a ways to things like search engine optimization. Right now, I guess a disruptive service hot off the press would be large language model prompt generation. You're seeing people selling this service of constructing the prompts to get the best kind of image out of these models: someone says, I want a picture of a dog juggling watermelons on a unicycle, and they can do that. So there's no competition yet from anyone else, or from AI, so you want to leverage your first-mover advantage, and probably, in this case, you're actually using AI to do it. Just like Uber and Airbnb were disruptive services that disrupted service industries: they disrupted lodging without owning any property, and transportation without owning any vehicles.
Justin Grammens 15:41
Sure. Well, it was interesting that you started by saying it's all about the data, right? That's really the linchpin in all of this. I guess you're not intelligent unless you actually have some data to run things off of, so it all kind of starts there.
Peter Scott 15:55
Well, AI isn't intelligent without data, but humans can infer things from much less data, and that's leverage. So if you can find applications that depend upon humans inferring conclusions from limited data, AI does not know how to deal with that at the moment; it needs hundreds of thousands, preferably millions, of data points. But when it's got that, then it will do better.
Justin Grammens 16:23
That's interesting, right. So if you're in an industry where you don't have a whole lot of data, but you can infer things, like you were saying with regard to customer service, it's the more human-touch side of things, in my mind. I've shared this on the program before, but my dad was a physician for, you know, 40 or 50 years or so. And so I always kind of like to joke with him, you know, hey, look at all these AIs now that are replacing your job. And he always tells me, well, but they don't have the human touch, right? They don't have the sympathy and the empathy that somebody can have as a doctor, to truly treat a patient. An AI can go ahead and find cancer faster than a human, but that's not what the whole game is.
Peter Scott 17:01
Right. And so we need to use it appropriately.
Justin Grammens 17:04
Yeah. You know, I was thinking about artificial general intelligence, right, and I'm wondering, what's your perspective on that? Are we getting close to reaching it? Do we ever think we're going to reach it? What does AI look like to you in 5, 10, 15 years?
Peter Scott 17:19
Wow, thanks for the easy one. You know, artificial general intelligence is an unknown distance off. And so there are a lot of people thinking, okay, maybe that's five years, let's pour a few billion dollars into this. And so there are incredible amounts of money being sunk into many projects to develop artificial general intelligence, where something could be as smart as a human of any level. Frankly, if it were a five-year-old, that would be enough, because we know how to teach a five-year-old, and you could do that with an AI. The consequences of having that in a computer are clear to someone who's already been in the news: Vladimir Putin said whoever figures this out will rule the world. Obviously, he's got the sort of mindset that leads to that sort of conclusion. So the big question boils down to, when is that? If it's soon, well, great. But even John McCarthy said, way back, this could be five years, it could be five hundred, and we still don't know. Yet we keep finding that we manage to do things we thought would require artificial general intelligence without it, just like when Deep Blue beat Garry Kasparov at chess and became champion. Douglas Hofstadter said, my God, I used to think that playing chess required thinking; now I realize it doesn't. What he meant, he said afterward, was: yes, it does require advanced thinking for a human, but not for an AI. We see a microcosm of this dilemma in self-driving vehicles right now. How many billions of dollars have been spent on those over the last eight years? We still don't have the Level 5 autonomy we were led to expect was imminent, and it looks further away than ever at this point, as though it really does hinge upon having general intelligence. And if it does, then it is probably more than five years away. I keep thinking that when the other shoe drops on that, we will see an AI winter; everyone I've asked about this says no. We'll see.
Justin Grammens 19:33
Yeah, you were talking about the self-driving car. You probably saw in the news the Tesla robot that they released last week or so, which I think was not very impressive in a lot of ways, in my opinion.
Peter Scott 19:45
But I thought it was impressive in how light it was. It's more visually appealing than Boston Dynamics' robot, which is threatening.
Justin Grammens 19:55
Yeah, it is. That reminded me to go back and watch the thing that they did, where the robot was running around a curve and doing a flip off a table and stuff like that. You're right, it actually felt like I was watching an episode of Black Mirror, you know, the show. But the Tesla robot was kind of walking very gingerly; it seemed like a very early prototype. And again, not to knock them too much, it's a hard problem to solve, like you're saying. A couple of years ago people were saying, oh, these robots are going to be walking around, they're going to be all over, within the next three to five to ten years. And I don't know, it still feels like it's a ways off, in my opinion.
Peter Scott 20:37
It's a problem that is, I can't count how many orders of magnitude harder than driving a car. And if we can't drive a car, the idea that this robot is going to be able to do things that are useful, I don't know how to quantify the likelihood of that. One of the things that I thought was notable was that they didn't say what you would use this for. They showed it being used to carry packages around an office and water plants. Now, granted, it's early days, fine. But you haven't told us anything about what it might do in the future. Even last year, when they had a human on stage, all it did was breakdance.
Justin Grammens 21:19
Right, right. So where's the end game? There needs to be some sort of business value in any one of these propositions for entrepreneurs and other businesses to jump in and start developing the technology. It's usually not done just for the sake of fun. I'm trying to think, why would somebody actually go through this, develop something that could breakdance, and just sort of leave it there? I feel like it would just die out.
Peter Scott 21:41
I think they've got to give us the application. My go-to example is a plumber replacing the P-trap under a sink. If you get something that can do that, then you've got a useful application, and that's really, really hard to get a robot to do, given the number of different problems you have to solve. The fact that no one has suggested these might be able to do that says, well, then what are you going to use them for? Give us some idea.
Justin Grammens 22:13
Yep, for sure. Well, you know, how do people get into this field? That's one of the questions I like to ask people that are on the show, because they usually have spent a number of years thinking about artificial intelligence or even applying it. If I were an up-and-coming college student, for example, or just starting in this field, how do you suggest people get into it?
Peter Scott 22:33
I've asked a lot of guests that myself, and it's fascinating, because I go really broad on the podcast. I've talked to philosophers and CTOs and computer scientists and teachers and ethicists and anthropologists. Wherever you are, there's a vector to being involved in AI. It's kind of like Kevin Kelly of Wired said once: for the next 10,000 startups, the model is going to be take X and add AI. Whatever you're in, there's an intersection of AI with that. For instance, microbiology: take the example of AlphaFold having recently decoded all 200 million known proteins into their structures. The number we had before that was about 190,000, obtained at great expense, and then it went and did the rest essentially overnight, essentially for free. Now, there is no way that that doesn't create entire new industries. I don't know what they are, but you can't increase the size of human knowledge in some space by a factor of a thousand and not expect that to happen, and that's in some areas of medicine alone. So you could be in psychology, and you could be looking at how AI can understand and help psychologists. You could be in literature, and you could be looking at how AI can find connections between authors and works that you didn't think about before, or you could be using it to help with your writing prompts. You could be in sports, and it could be analyzing competition, or how to be more competitive. Give me anything at all, and I can show direct, useful, monetizable connections with AI.
Justin Grammens 24:21
Yeah, well, that's the beauty of it. That is the absolute beauty that kind of got me into this whole space: as you say, it can actually be used everywhere. I've even heard people that have been on the program say, you know, even as a software engineer, which is kind of my bread and butter, that's what I came up through, you're going to be expected to understand this technology, and no matter what you're doing going forward, it's going to be a piece of what you do. Something to consider.
Peter Scott 24:46
And if you're a kindergartner... I mean, in Finland they teach AI now, starting from grade one. I don't know what that curriculum looks like, but I know how I would do it. In China, they do it from kindergarten. So you could be a teacher, and there is another intersection with AI. It's important to understand these tools in the same way that you need to know how to drive a car or operate a computer. We should be teaching our children, here's how to use AI. By the time they leave school, it'll be a different kind of AI, but in the same way it's useful to know how to use Google now to search for things, it's useful to know how to use artificial intelligence to get answers.
Justin Grammens 25:32
Well, there's another question I like to ask, because it's so broad and huge: how do you define artificial intelligence? Do you have a sentence or two description that you typically use?
Peter Scott 25:42
If you look at a technical definition that computer science uses, I don't think it's useful to the majority of people. You know, it's about software that learns; even then, that's probably more of a subset, that's machine learning. I say that what's useful to most people is artificial intelligence being something that can do something a human could do but couldn't really write down the steps for, couldn't describe. Now, that's a subset of what artificial intelligence really is, but I think it's the one that means the most to people who are not in the field: to think about AI doing things that we know how to do but can't describe. Because it illuminates this paradigm shift away from the old adage that computers can only do what they're programmed to do, which is no longer true. If that were the case, computers couldn't recognize faces, because you can't tell how you recognize faces. And we still don't really know how our computers are doing it; we just know that we can train them to do it.
Justin Grammens 26:48
Yeah, that's definitely true. You're right: traditional programming is basically, there are inputs that are put in, there's some logic that happens, and outputs come out. And what machine learning talks about is the other way around: it's the data that's put in, and the output is the logic. And you're right, there's a lot of black-box stuff going on. It's crazy: Google Photos will recognize faces of kids 15 years later. They're a toddler, and then 15 years later it can pick them out: yep, that is Jimmy. And it's like, wow, it's amazing they can do that. And you're right, no one could possibly program the logic, the if-then-else statements, to make that happen. It's fascinating. But you're right, we know that it works at a lot of different scales, and some of it is a bit of a black box. And I think maybe that's why we start getting into this philosophical question of, well, does this thing have consciousness or not, right? Is it doing things that a conscious being would do? There's a whole rabbit hole there. Again, you and I have talked to people that can talk for hours about those sorts of things, and I think it's needed. There's the whole ethics of artificial intelligence: are you talking to companies a little bit about the AI that they use, whether it's being used responsibly or not? Are companies even thinking about this?
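(Editor's note: the inversion Justin describes, where the data goes in and the "logic" comes out, can be sketched with a toy perceptron. This is purely illustrative and not from the episode; the learning task and all values are made up.)

```python
def train_perceptron(data, epochs=25, lr=0.1):
    # data: list of ((x1, x2), label) pairs. We never write the rule down;
    # training adjusts weights until the model reproduces the examples.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred                  # compare output to desired output
            w[0] += lr * err * x1           # nudge weights toward the examples
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The OR "logic" is supplied only as input/output examples:
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

After training, `predict` behaves like an OR gate even though no if-then-else encoding of OR was ever written; the learned weights are the "logic" that came out of the data.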
Peter Scott 28:08
Oh, they're thinking about it a lot. There is an entire sub-industry now providing AI ethics services to companies who want to ensure that they are using it ethically and responsibly, and to know how to do that. So there are people whose business model is providing that service.
Justin Grammens 28:30
Yeah, I talked to a guest recently, and I just kind of said offhandedly, well, we've kind of got to get this right the first time. And she agreed; she's like, yeah, that's the whole thing we're really trying to push: if you're using bad data and you're training bad algorithms, there are going to be bad outcomes.
Peter Scott 28:47
Sure, AI will let you make the same mistakes only faster and at scale.
Justin Grammens 28:52
Yeah. Well, so your book, Artificial Intelligence and You: What AI Means for Your Life, Your Work and Your World, where can people pick that up?
Peter Scott 28:59
Amazon. If you want to get it from a bookstore, they can order it from the Ingram catalog, though that will probably take a little bit longer. But every country has got Amazon; you can get it there.
Justin Grammens 29:09
What was your process? I haven't written a book; it's kind of on my bucket list, though. I really think I'd like to at some point, but I always like to ask people who are published authors: was it hard? How long did it take? Is it something you're going to do again? I mean, obviously, this is book number ten for you, so you've already done it again.
Peter Scott 29:26
And there were other books on other topics before that. But I need enough time after writing a book for the amnesia to set in before thinking about writing another one, because, well, an analogy might be women giving birth: if you ask one while she's in the process of doing that, hey, what do you think about doing another one? She will, like, break something. A little while later, it seems like a good idea. And so with a book, it's much the same way, because the process of creating one is exponentially rising work. Initially you think, you know, I can do this in just a couple hours a week, or maybe an hour a day, just a little steady work, and it will just sort of emerge gradually. There may be some books that come out that way; not mine. Eventually it took setting a deadline, and then it turned into a whole lot of work; it just ramped up and up and up. That happens when you get to the point where you have a commitment to someone to have a result by a particular time. Maybe that's an editor, maybe that's a reviewer, maybe it's something else.
Justin Grammens 30:35
Gotcha. Well, is it safe to say the next book (and I'd say when you do it, not if; given what you've been writing, I think you'll do another one) would still be on artificial intelligence? Or are you dabbling in some other areas now as well?
Peter Scott 30:47
I can't imagine it being another area than AI at the moment, because it covers all of them. That's unlikely. But I just don't know what angle the next book on AI might take.
Justin Grammens 31:01
Gotcha, gotcha. Well, yeah, I mean, I started this podcast because I really liked applications of artificial intelligence, and really having conversations, so Conversations on Applied AI just sort of fit. You're right, it is such a broad topic. And it's been so fun for me to interview people like yourself and others who are doing some really good thinking and working in the space, and opening up businesses' eyes to how they can use the technology. Are there other things that I maybe missed, that we haven't really covered, that you wanted to talk about specifically?
Peter Scott 31:33
Well, I think one thread worth picking up on was when we were talking about how Google Images can recognize someone fifteen years later, and then whether these things are conscious. One way to puncture that balloon is to look at how image recognition can be fooled, or even to ask it to identify the areas of an image it's inferring most of its data from, and find that it can be looking at a picture of a dog, and yet its biggest cue comes from somewhere that's not even part of the dog. And we can add noise, or what looks to us like noise, to an image, and the AI goes from thinking something is a poodle to thinking it's an ostrich, with much greater confidence. We can game it that way. It's not thinking about these things the way that we do, despite the fact that we can look in the network and identify things that are doing what parts of our neural network, our optical cortex, does, where they recognize diagonal lines and certain features. It's just not processing the same way. So it can achieve the same results, and it can achieve results that humans can't, like the example you gave, but it is in many respects fragile. You can train an AI to play Breakout, or the game Pong, or many video games; then you move the paddle a few pixels, a difference that's imperceptible to a human and makes no difference to human play, but now the AI is stuck. It has to train all over again. It's that fragile.
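The adversarial fooling Peter describes can be sketched with a toy model. This is a minimal illustration, not any real classifier: the "poodle"/"ostrich" labels, the linear weights, and all the numbers are made up, but the mechanism (a tiny gradient-sign perturbation flipping the predicted label) is the same one used against real image networks.

```python
import numpy as np

# Toy stand-in for an image classifier: a fixed linear score over 100 "pixels".
# score > 0 -> "poodle", otherwise "ostrich". (Labels and weights are
# illustrative; a real classifier would be a deep network.)
w = np.linspace(-1.0, 1.0, 100)

def predict(x):
    return "poodle" if w @ x > 0 else "ostrich"

# An "image" of mid-gray pixels, nudged so the model says "poodle"
# by a modest margin.
x = 0.5 + 0.02 * np.sign(w)

# FGSM-style attack: step each pixel slightly against the score's gradient
# (for a linear model, the gradient with respect to x is just w).
eps = 0.03                     # 3% of the pixel range: hard for a human to see
x_adv = x - eps * np.sign(w)

print(predict(x))              # poodle
print(predict(x_adv))          # ostrich: tiny noise flips the label
```

Because the perturbation is aligned with the gradient on every pixel at once, its effect on the score adds up across all 100 pixels even though each individual change is imperceptible; that is exactly why high-dimensional images are so easy to attack.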
Justin Grammens 33:13
Yes, yes, you're right. And I guess that goes to the point that we're in this narrow AI space, right? I mean, ultra narrow. It's one thing to say, okay, we're going to train things that can see images, and train things that listen to music, and those are totally different skill sets. I do know you can do some transfer learning and get at that, but you're right: if things go kind of off the rails a little bit for a lot of these AI models, they don't really know what to do. They can't really adjust. Very, very narrow versus general.
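The transfer learning Justin mentions, reusing features learned on one task for a new narrow task, can be sketched in miniature. Everything here is a placeholder: the "pretrained" weights are random stand-ins and the data is synthetic; the point is only the structure, a frozen feature extractor plus a small new head trained on the target task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these weights were learned on a big source task; we freeze them.
W_frozen = rng.normal(size=(16, 8))

def features(x):
    # Frozen ReLU feature extractor: never updated for the new task.
    return np.maximum(0.0, x @ W_frozen)

# Small synthetic dataset for the new, narrow task.
X = rng.normal(size=(50, 16))
y = rng.normal(size=50)

# Transfer learning in miniature: fit only a new linear "head" on top of
# the frozen features, here by least squares instead of gradient descent.
F = features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)
pred = F @ head
```

Only the 8 head weights are fit to the new task, which is why transfer learning can work from far less data than training a model from scratch; but the frozen features still bound what the new task can express, which is part of the narrowness being discussed.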
Peter Scott 33:40
Yeah. When we see changes in that, I think it will be the first indication that we're on a road that leads to artificial general intelligence.
Justin Grammens 33:50
And how do you think we're going to accomplish that? Or does that just remain to be seen?
Peter Scott 33:54
I don't know. That's way out on the bleeding edge. I mean, Geoff Hinton talks about things called capsule networks that may be robust with respect to things like transformational changes, rotations and such in images. But I do not know where that research is.
Justin Grammens 34:11
Crazy. Crazy. Yeah. And I wonder if it circles back; my head started going, well, maybe you've got to get away from machines and get back into organic chemistry and human biology again to make this happen.
Peter Scott 34:26
Which is why, as I mentioned at the beginning, I find so many people working in neuroscience and AI, trying to find that crossover. They say, well, we know this now about the human brain, we're learning all of these things about what's inside there; can we do that in a machine?
Justin Grammens 34:42
Yeah, for sure. Well, how do people reach out to you, Peter? Find you on LinkedIn, is that the best place?
Peter Scott 34:47
Sure, you can find me on LinkedIn, and you can also go to humancusp.com; there are links to the books and the podcast there.
Justin Grammens 34:59
Excellent. Well, Peter, thank you so much for being on the program today. It's been a true treat. Thank you for sharing all your knowledge with our community; I thought we had a really good conversation. So I look forward to keeping in touch with you in the future, and I guess together we'll sort of explore this area and see what comes here in the coming years.
Peter Scott 35:15
Oh, thanks, Justin. Loved every moment of it.
AI Announcer 35:18
You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at AppliedAI.MN to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at AppliedAI.MN if you are interested in participating in a future episode. Thank you for listening.