Conversations on Applied AI

David Espindola - What AI Means to Us as Human Beings

February 13, 2024 Justin Grammens Season 4 Episode 3

The conversation this week is with David Espindola. David is an entrepreneur, futurist, keynote speaker, award-winning author, consultant, and advisor to businesses, nonprofits, and academia. He founded Brainyus, a company that applies the principles of transformative purpose, lifelong learning, and servant leadership to guide human-AI transformation. David has been featured in U.S. and international media, ranging from Fox News to TV Global. He is the award-winning author of Soulful: You in the Future of Artificial Intelligence and The Exponential Era, and an advisory board member of the Technological Leadership Institute at the University of Minnesota. David continues to give back through his work with Feed My Starving Children and other nonprofits.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can make future Emerging Technologies North non-profit events possible!

Resources and Topics Mentioned in this Episode

Enjoy!

Your host,
Justin Grammens


[00:00:00] David Espindola: I like to think about the deeper questions, right? So I, I try to think about what AI means to us as humans, right? As, as human beings. What's fascinating to me is that, you know, for the first time in the history of humanity, we now have this entity of sorts that could become just as intelligent, if not more intelligent than we are.

And I wonder about, you know, what does that mean to us as a human being? What, you know, what's, what's the significance of that? How is that going to change our role in the world?

[00:00:41] AI voice: Welcome to the Conversations on Applied AI podcast where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning.

In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at appliedai.mn. Enjoy!

[00:01:11] Justin Grammens: Welcome, everyone, to the Conversations on Applied AI podcast.

Today, we're talking with David Espindola. David is an entrepreneur, futurist, keynote speaker, award-winning author, consultant, and advisor to businesses, nonprofits, and academia. He is the founder of Brainyus, a company that applies the principles of transformative purpose, lifelong learning, and servant leadership to guide human-AI transformation.

David has been featured in U.S. and international media, ranging from Fox News to TV Global. He is the award-winning author of Soulful: You in the Future of Artificial Intelligence and The Exponential Era, and an advisory board member of the Technological Leadership Institute at the University of Minnesota.

David continues to give back through his work with Feed My Starving Children and other nonprofits. Thank you, David, for being on the program today.

[00:01:57] David Espindola: Thank you, Justin. It's an honor being here with you. I appreciate the invitation. Awesome.

[00:02:02] Justin Grammens: Well, man, you've done a lot and you continue to do a lot, as I had mentioned here during the intro, writing multiple books, being a board member and working with nonprofits and whatnot, maybe you could give a quick, I guess, background with regards to sort of how you got to where you are today.

[00:02:16] David Espindola: Yeah, I'd be happy to do that. So just a little bit of background: I come from the technology industry. I started my career in Silicon Valley. I had an engineering background, so I started as an engineer. I worked at two very fast-growing tech companies in Silicon Valley, one of which did an IPO. So that experience was really tremendous for me, because I could see firsthand the power of technology and the power of entrepreneurship.

I still carry that with me to this day, just, you know, the power of that experience. And then after that, I decided that I wanted to go work for a large company. I definitely wanted to be in software. I knew software was going to be the place for me, but I wanted to work for a large company. So I went to work for Oracle.

I like to say that I survived 39 quarters at Oracle. So anyone that has worked for a large software company, you know, it's understandable that every quarter is a battle. And so that was a great experience for me. I loved working at Oracle; I worked with a large number of clients, you know, four to 500 companies that were going through changes, business transformations, software implementations, and so on.

And then I decided that I wanted to pursue a CIO career. So I actually went to work for a boss that I had worked for at Oracle. He had pursued a CIO career. He became a CIO, and he was very generous in helping me get the experience that I needed to become a CIO. So, you know, before that my experience was primarily with business systems.

I had no infrastructure experience, but he gave me responsibility over the infrastructure at a couple of different places. So I got that experience, and then when the opportunity came knocking, I became the CIO of Flexera Software out in Chicago. And then after that, what happened is there was a change in the organization; the CEO left.

And I had been traveling for 20 years in my career, you know, 10 years as a consultant at Oracle. And then the next 10 years, I was actually working in Chicago for this boss that I mentioned. He's based out of Chicago and I didn't want to relocate. So I was traveling back and forth. So that was 20 years on the road.

And I said, you know, that's enough. I don't want to do that anymore. And I decided to just do my own thing. So that's, that's how I got here.

[00:04:35] Justin Grammens: Nice. Nice. Good. And so how long have you been doing your own thing then? It's been about seven years? Seven years, congratulations. Yeah, no, and you talk about being an entrepreneur.

You've been living it here for seven years. And I mentioned during the intro here, you've actually authored two books. What drove you to want to write them?

[00:04:51] David Espindola: Yeah. So, you know, one of the things that I started to realize was that things were changing at a speed that I had never seen before. Just the pace of change was really extraordinary.

And I started to think about what the impact would be to business and society in general. And that's when I decided to write my first book. And through my connection at the University of Minnesota, IEEE was creating a new series. They wanted to get authors to talk about technology, innovation, and management.

And so I wrote a book with my partner, Michael Wright, called The Exponential Era, and the whole idea behind the book was to really help companies and executives realize that the pace of change was really becoming exponential. And a lot of it was driven by Moore's Law, which basically says that, you know, computing capacity is doubling every 18 months to two years.

And with the digitization of everything, right, there are several other platforms that were also growing exponentially, you know, things like blockchain and robotics and biotechnology. And all of these platforms were converging, and the convergence was just creating transformation at a pace that we had never seen before.
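
As a rough, back-of-the-envelope illustration of the doubling David mentions (not from the episode; the function and numbers below are just a sketch), here is how an 18-month doubling period compounds over time:

# Illustrative only: normalized computing capacity under a 1.5-year doubling period.
def capacity(years_elapsed, doubling_period_years=1.5):
    return 2 ** (years_elapsed / doubling_period_years)

for years in (3, 6, 9, 12):
    # 3 years -> ~4x, 6 -> ~16x, 9 -> ~64x, 12 -> ~256x
    print(f"after {years} years: ~{capacity(years):.0f}x the computing capacity")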

And the question that I was trying to address is how are companies going to deal with these changes? Because, you know, having spent decades dealing with change management and knowing how hard it is for companies to change, I started to think, you know, if companies don't embrace a faster change management process, they're going to be left behind.

So in that book, I offer a suggestion for a methodology that companies can use. I didn't try to reinvent the wheel. I was just leveraging things that were already out there that were known, you know? So in the world of software, we're very used to the agile methodology. In manufacturing, we have lean. In design, there's design thinking.

You know, the military has the OODA loop. So we combined all of these things into a methodology for companies to deal with these changes. And a lot of it revolves around, you know, becoming a learning organization, right? Being constantly in that learning mode and doing experiments and seeing what works and what doesn't work, but also being ambidextrous, meaning, you know, you have to run the business as is; you have to dedicate some resources to continue running the business.

But at the same time, you've got to be looking to what's coming next and then dedicate some resources to be working on the next opportunities. So that was the first book. And what I realized at that point in time, as I was talking about these different platforms, was that there was one that, I thought, was really going to explode very, very quickly.

And that was AI. So this was like two or three years ago, when I had that realization. You know, this was before ChatGPT came to the fore. And for folks like us that are in the industry, you know, we knew about AI. AI is nothing new; it's been around for a long time. But back then it was only known within, you know, tech companies or maybe in academia. The general population really didn't know anything about AI until ChatGPT exploded.

So the timing was really, you know, luck on my part. I took two years to write the book, and the book came out in May of last year, which is really, you know, when AI was hitting the media like crazy.

[00:08:27] Justin Grammens: So, walk me through it. So The Exponential Era came out first, right?

[00:08:31] David Espindola: Yeah. So The Exponential Era came out in 2021.

[00:08:35] Justin Grammens: And then you're a soulful.

[00:08:36] David Espindola: And then this year I published Soulful: You in the Future of Artificial Intelligence.

[00:08:42] Justin Grammens: Great, great. Are you working on another one?

[00:08:43] David Espindola: Not currently. So, you know, the reality of book writing is such that it's a lot of work, and it really requires a tremendous amount of dedication.

I took advantage of, you know, periods of time in my life when I had the time and I had, you know, the ability to focus on those things. But it's, it's a lot of work and I, I need a break now. Yeah.

[00:09:06] Justin Grammens: Well, I have a copy of it right here. I think it's fabulous. Thank you. And you know, I have a huge amount of respect for you for taking the time to put your thoughts into words.

And I maybe mentioned this to you at one point: it is an idea on my bucket list. You know, I've written small chapters, little pamphlets and stuff like that, little ideas, but to bundle it into one cohesive story, I think, is fabulous. So nice work on that. And, you know, I love how you sort of talked about what the exponential era was kind of about, because it made me think about the singularity and this idea. I remember seeing Ray Kurzweil speak at the University of St.

Thomas, actually. I want to say it was maybe, boy, I was in graduate school, so it was probably like the mid-2000s or so. But I was completely blown away by this idea of just, you know, how fast things are moving, and that there's Moore's Law, sure, that's around computing power, but I feel like AI is moving even faster than that, right?

And I wanted to get you like maybe your thoughts around, you know, are you concerned, do you think it's moving too fast? What are some of the things that you're seeing out of the industry as you're talking with, with businesses?

[00:10:06] David Espindola: Yeah, so, absolutely. You know, I don't think we've ever had anything move as fast as AI.

This is unprecedented. And so obviously it raises a number of issues. You know, one of the concerns that I have is the technology is moving at a pace that's much faster than our ability to, you know, deal with the ethical ramifications, to come up with reasonable regulations and things of that nature.

And there are, you know, several ethical concerns regarding AI. But I lean towards being optimistic, and I think the benefits that we're going to get from AI outweigh, you know, the risks and the concerns, which are real concerns. I'm not a proponent of, you know, the idea that AI is going to take on, you know, the Terminator, Skynet kind of thing.

I just don't think that is a concern that we should have, at least at this point in time. But I think there are concerns around privacy. There are concerns around biases. There are concerns about, you know, misinformation, deep fakes. All of those things are things that we need to consider and really be able to balance, you know, regulation so that it does the job of avoiding some of these negative consequences, but at the same time, not,

you know, slowing down the progress, because there are other countries, other people that are going to be moving full speed ahead with AI, and there's no turning back. I mean, we've got to keep going.

[00:11:41] Justin Grammens: Yeah, yeah. I mean, to maintain our position in the world. And one of the things that I think about is this tool; in some ways, it feels like third-world countries could use this tool.

It puts us all on a, on a level playing field in some ways, right? I mean, you know, you can take somebody who maybe hasn't grown up with, you know, the years and years of software development experience and put this tool in front of them. And they can kind of overnight not become an expert per se, but they definitely have a huge advantage, right?

So it feels to me like, and again, this is just sort of off the cuff a little bit, but for countries that maybe would be laggards in a lot of these areas, now it can essentially put a lot of intelligence in the hands of a lot of people that maybe wouldn't have had access to it in the past.

[00:12:24] David Espindola: Yeah. I think AI has huge implications for education. And to your point, I think it's going to democratize access to education, to information, in ways that, you know, we haven't seen before. The challenge, I think, with the developing countries is that a lot of these countries, in the last several years or decades,

have really relied on using human capabilities at a lower cost, to be able to market, you know, labor as an arbitrage. And with things becoming more and more automated, I don't know that we're going to need all those human resources anymore. So, you know, this raises the question that a lot of people are asking

about what's going to happen to the job market, right? Is AI going to take people's jobs? And I think that, initially, we're actually going to create more jobs than we take away. But I think in the long run, you know, I'm sort of aligned with the way Elon Musk thinks about this: I think in the long run, there'll be fewer jobs available.

But on the other hand, and again, this is my optimistic side, I think we're going to create new forms of production that could potentially create abundance, right, so that everybody can have their material needs met. The question is how do we distribute that in a way that is fair and that allows everybody to live a life of dignity and a comfortable lifestyle.

[00:13:55] Justin Grammens: True, true, true. Yeah, so when you go into an organization, you know, they hire you to maybe come in and talk with them. I mean, obviously you have a wealth of knowledge that you bring in; you've written these books, you know, and you've been looking at the industry for many, many years now. What are some of the initial conversations, I guess, that you have with them, and kind of what's your strategy going into it?

[00:14:13] David Espindola: Yeah, so I'm actually spending a lot of time focusing on education around AI. And so the conversations that I'm having are all about how we help upskill the workforce so that they are better prepared to collaborate with AI and be effective in their jobs, because AI is going to impact all of us. And the sooner we embrace it, the sooner we start to learn how to work with it, the better off all of us are going to be.

So I think a lot of organizations are still in the process of trying to understand what AI means to them, how it is going to impact their business. You know, some companies have made good progress in working the fundamentals: things like getting their infrastructure, you know, in place or moving to the cloud, getting the data governance process figured out, and getting, you know, the data classified and cleaned.

So those are all fundamental steps that you need to take in order to benefit from AI. And I think a lot of companies are working through that process right now. But at the same time, you know, you need to prepare your workforce to be able to deal with AI, because jobs are going to change drastically and companies are worried about that.

So I'm actually working in partnership with the University of Minnesota to develop educational programs for local companies here in Minnesota that want to upskill their workforce. And we are talking to a lot of executives here in town about working together with them on this.

[00:15:41] Justin Grammens: That's fabulous. I'm an adjunct at the University of St.

Thomas and, you know, students, I have them write a term paper, you know, I have them summarize articles that they find, and I am certain they're using ChatGPT. That's something I would absolutely expect that they would do, because I guess if I was in their shoes, I probably would as well. And I think there's huge value in that, right?

If you have a blank page, how do you actually start getting ideas? So there's nothing wrong with that, but you know, that only goes so far, you know, it can help them write their code. But we just actually finished off our class with a capstone project. And so last Thursday, everyone had to come up with a capstone project.

So this is a machine learning AI and IoT class. So students needed to get some sort of device out in the field, get sensor data from it, train a machine learning model, and have it basically interact in a real world setting, you know, at the edge. So one of the students actually used a camera to notice if you were falling asleep at the wheel, right?

So, coding that isn't actually too hard. There's a lot of platforms; there's a lot of drag-and-drop stuff you can do. But it was the idea that they came up with, and also the presentation. I mean, there's a whole sort of aspect, and then being able to explain, like, if you had another 16 weeks, what would you work on next?

And where are some of the holes in the system? Obviously this thing isn't production ready, right? So to me, I really try and nip at the edges around, like, what are some things that I'm going to kind of stretch you on and you're going to have to think through. And go ahead, use the AI for what it's good at, which is, it'll generate code, it'll give you ideas to write stuff, but you still need to sort of have the human aspect of presenting and working through

the scientific method, in a lot of ways, right? So, it was an interesting test.
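
For readers curious what that kind of edge check can look like in code, here is a minimal sketch, not the student's actual project: it assumes a webcam, OpenCV, and OpenCV's bundled Haar eye cascade, and it simply raises an alert when no open eyes are detected for a couple of seconds of frames.

import cv2

# Minimal illustrative sketch of a drowsiness check (assumptions: webcam at index 0,
# roughly 30 fps, opencv-python installed with its bundled Haar cascades).
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)
closed_frames, ALERT_AFTER = 0, 60  # ~2 seconds at 30 fps; tune for the real device

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    closed_frames = 0 if len(eyes) > 0 else closed_frames + 1
    if closed_frames >= ALERT_AFTER:
        print("ALERT: possible drowsiness detected")  # a real system might buzz or vibrate
        closed_frames = 0
cap.release()

A production system would train a proper model on labeled driver footage, as the student did, but the loop structure (capture a frame, run inference locally, act at the edge) is the same idea.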

[00:17:15] David Espindola: I really applaud you for doing that, because, you know, I remember when ChatGPT first came out, a school district in New York prohibited the use of ChatGPT. And I'm thinking, well, you know, we can't keep these kids in a bubble, right?

Obviously, you need guardrails. You want to be able to help them work with the tool in the right ways and avoid some of the negative aspects of it. But, you know, you can't just prohibit it. I mean, what do you think they're going to do when they go out in the real world? Right. I mean, they're going to work with these tools.

So the best thing that we can do as educators is to really prepare them to work with these tools and leverage the tools. And I think, like you said, you know, if professors are worried that they're going to use ChatGPT and cheat on their papers, they're just not being imaginative enough, because, like you said, it's about the ideas, it's about, you know, presenting, it's about working in teams and collaborating.

Those are the skills that we need to develop in our students, because that's what they're going to find when they go into the real world. So I would change the emphasis. I think education, you know, we can spend a whole other hour here talking about education, but I think education is primed to be disrupted.

I think AI is going to be very disruptive to education. We really need to rethink the pedagogical methods that we use and the approach to education in general.

[00:18:35] Justin Grammens: Yeah. Yeah. So it sounds like you're working with the University of Minnesota on some, I guess, some initiatives there.

[00:18:42] David Espindola: Yeah. So like you said, I'm on the advisory board of the Technological Leadership Institute. And one of the things that we did as part of the advisory board that I thought was really outstanding was to help TLI introduce more ethical discussions within their program and the curriculum. And I think that was very timely, because now we have, you know, AI, which is raising a lot of ethical issues, and I think a lot of executives

really need to think through the ethical use of AI. So just having more of that ethical conversation in education, I think, is absolutely critical. So I was very pleased to see the progress that we made from that standpoint. And now we're, you know, starting to think about what the next steps are. And I think, you know, it goes without saying that AI

is going to be one of the most important, you know, technology innovations that are going to hit all organizations. So how can we, as an education institution, help local companies become prepared for that?

[00:19:49] Justin Grammens: For sure, for sure. Yeah. And I think we need to get ahead of the curve in some ways. You know, I feel like Minnesota is an interesting place.

I feel like we have a lot of universities, a lot of schools. We're really, really known for the amount of money we spend on education. But yet I feel like we're not so good at entrepreneurship and startups and helping companies sort of get going here in Minnesota. I don't know if you have any thoughts on that or experienced any of that.

[00:20:14] David Espindola: Yeah. And I think, you know, that's so unfortunate, right? Because we have this tremendous history. You know, Minnesota used to be the hub of computing, right? Before Silicon Valley, even. And we lost that. But I mean, we have so much potential here. You know, we have so many highly educated, very smart people.

And I think we really need to focus on turning that around and helping Minnesota become a leader in startups and innovation and technology. The potential is here; we've just gotta be able to, you know, get it to the next level.

[00:20:50] Justin Grammens: Yeah, for sure. Well, that's another plug for the Applied AI group, I guess: having more people come out and engage with the community here can only help to further the discussion.

That's one of the reasons why I think this group is so important here in the Twin Cities.

[00:21:02] David Espindola: Yeah. And you know, once again, I want to really thank you for your leadership in that area. You're doing a tremendous job in bringing the community together and providing education, providing a forum for people to talk about these innovative technologies and AI.

So I'm really appreciative of all the effort that you've put into this. And I think you're making such great progress. I'm really excited about 2024 because I think there are a lot of great things coming.

[00:21:30] Justin Grammens: Yeah, for sure. Well, thank you, David. I appreciate that. No, I'm really, really happy to do it.

And I get a huge energy boost whenever I get a chance to have people on this program and talk about what they're doing. What sort of content, where are you looking these days? Are you reading a lot of books on this, on these subjects? Do you subscribe to articles? And some of this sort of leads into the next question.

Like, how would you recommend people maybe start if they're brand new to this space, where would they go?

[00:21:55] David Espindola: Yeah, you know, there's no lack of information. There's no lack of resources, right? I think that the challenge sometimes is curating these things and figuring out, you know, what's most relevant to you.

What do you want to learn? Where do you start? And that's why, you know, this program that I'm working on with the University of Minnesota, I think, is going to be very helpful to organizations, because a lot of them are struggling with the exact same questions: you know, what does this mean to me? How do I start?

Where do I go? Where do I find information? There are tremendous, very, very good online classes, education, resources that people can tap into. Deeplearning.ai is one that I really like, with Andrew Ng and his team. They're doing a fabulous job. They're always coming up with new classes on the latest developments in generative AI and LLMs and fine-tuning, all that stuff.

I've taken a couple of those classes, and all the vendors are coming up with classes as well. Amazon and Google and Microsoft, all of them have fantastic resources that are available, most of them for free. Universities are coming up with courses. MIT has excellent executive-level courses, and I think people should take advantage of these resources to become educated.

And then there are books. You know, I wrote my book realizing that there are going to be a million other books out there about AI; every week there seems to be a new one that gets published. So there's a tremendous amount of resources out there from that standpoint. So I think people need to really spend the time looking at these sources of information, finding the ones that are most relevant to them, engaging with the community, you know, participating in activities like all the things that you're doing with Applied AI, learning from others, experimenting.

You know, this is back to lifelong learning, right? We all need to be in this growth mindset, figuring out how we can spend more of our time learning, because learning is going to be essential to our future. So we need to embrace that.

[00:24:06] Justin Grammens: Yeah, for sure. You know, you were mentioning books being published; boy, there have been a ton of books now that AI has published, right?

Authors have basically just brought them online, and they're very transparent. They're like, we just wrote this with AI. You know, you talk about replacing jobs per se. I mean, what does that make you feel like as an author?

[00:24:25] David Espindola: You know, that's an interesting question, because that's the opening of my book.

That's how I opened the book, because there was a guy that I connected with on LinkedIn. He's an author; he has written 12 books, a very successful keynote speaker. And one time he posted on LinkedIn, he said, I'm not writing books anymore, you know, because AI is going to write all the books. So I'm not doing this anymore.

So one of the worries that I had when I was writing my book is I was thinking, gosh, I'm going to write this book about AI, and by the time I publish it, it's going to be obsolete, because things are changing so fast. And, you know, the publishing industry is very old-fashioned. It's one of those industries that's been around for,

you know, decades, and they still pretty much work the same way that they worked 50 years ago. But now we have these other sources; you know, for people that want to self-publish, Amazon offers tremendous capabilities for people that want to do that. If you want to use AI to help you write books, then, you know, that's a great way for you to get ideas to get started.

Now, if you're just going to have AI write everything for you, then I think it loses its meaning, because, you know, what is it that you're trying to do? Are you trying to contribute with your own ideas, or do you just want AI to provide its own? And so if everybody takes that approach, then it becomes meaningless, right?

It's indistinguishable, one book from the other; it's all gonna be the same thing written by AI. So the way I think of AI is the way I think of it for everything that we do, our jobs: AI is a tool that you can use to help you be better at your job. So if you're an author, definitely use AI to do your research, to find other sources of information.

You know, sometimes as an author, you have the blank stare; you look at a blank piece of paper and you don't know where to start. AI can be great at helping you get started, you know, but don't have AI write everything for you. You do the writing, right? And you do the editing and you do the tweaking and, you know, make it your work.

Be proud of what you're producing, instead of having AI write it for you.

[00:26:39] Justin Grammens: Yeah, yeah, for sure. Great point. Great point. And I typically ask people, well, how do you define AI? You actually answered that for me. Yeah. You said AI is a tool that you can use to be better at your job. Is it pretty much as simple as that, if you wanted to sum it up?

[00:26:52] David Espindola: Yeah, I think so. I think, you know, technology is neither bad nor good, right? It's neutral, and it's really up to us how to use it. And this goes back to, you know, I see a lot of negative commentary, you know, on social media and so on about the dangers of AI. But, you know, we can go back to the beginning of humanity where, you know, we had fire, and fire was a new discovery.

And you can use fire to cook your meals, or you can use fire to destroy things, right? So it's the same with AI.

[00:27:25] Justin Grammens: Totally, totally. Well, yeah, so we have liner notes for this episode, so we'll have links off to where people can get your book, your website, and some of the stuff going on with deeplearning.ai.

How do people reach out to you specifically?

[00:27:37] David Espindola: So I am active on LinkedIn. If you just look me up on LinkedIn, David Espindola, that's one way to connect with me. I do have my personal website, davidespindola.com, if you're interested in learning more about some of the things that I do, and if you want to connect with me, you can send me a note through the website.

And then for people that are here in Minnesota, you know, I'm very interested in engaging with the community here around AI, so just being part of all the things that you're doing with the Applied AI activities, the conference and the meetups, the podcast that you're doing right now. If people want to connect, I love meeting people and talking about technology and talking about AI in general.

[00:28:25] Justin Grammens: Yeah, for sure. For sure. Yeah. And I see you, you know, you're speaking at a couple of different conferences coming up here, so we'll see you more and more out in the community. Is there anything else that you wanted to talk about, I guess, that maybe I didn't touch on? Anything in particular?

[00:28:38] David Espindola: So I try to think about what AI means to us as humans, right? As human beings. And what's fascinating to me is that, you know, for the first time in the history of humanity, we now have this entity of sorts that could become just as intelligent, if not more intelligent, than we are. And I wonder about, you know, what does that mean to us as human beings?

Or, you know, what's the significance of that? How is that going to change our role in the world? So that's something I like to think about, and something I wrote about in the book. And I have my own thoughts on that. I think, you know, as we move towards a world that's more automated, and where AI may be doing a lot of the things that we do today, I think the

question of purpose is going to become huge, because I think we're going to really, perhaps, have more time available to try to understand what's our purpose. What are we here for? What is it that really matters? What's important to us? So I think being purposeful in what we do is important. And, you know, it's interesting, because I think COVID, in a way, the silver lining there, was sort of a wake-up call for a lot of people, because, you know, we were getting into this mode of, I like to say, robots are becoming more like humans, whereas humans are becoming more like robots, in the sense that, you know, you're just busy, busy, busy without really thinking, why am I doing this?

What's my purpose, right? And I think with COVID, where we had to stop, people had more time to really reflect on that. And I think a lot of people really changed careers, changed the way they approach work. And I think this new generation is really a lot more purposeful than my generation was, and I think that's a good thing.

And so my expectation is that in the future, we're going to be more focused on the big questions, the deep questions that we all need to think about.

[00:30:49] Justin Grammens: Wow. Lots to take in there. I mean, I guess my first question is, what does it mean to be more intelligent? Have you thought about that?

[00:30:57] David Espindola: Well, you know, so that's a really good question. And, you know, I try to keep things relatively simple when it comes to that, in the sense that, you know, we're currently at a stage with AI known as artificial narrow intelligence, right? So it's narrow; it's also known as weak AI. And the reason why it's narrow or weak is because today AI can do a few things better than human beings can, but AI doesn't have the general cognitive capabilities that we human beings have.

And so we are on this quest for the next level: AGI, right? Artificial general intelligence. And there's a lot of debate out there as to, you know, how soon we're going to get to AGI. There are people that think, you know, with Gemini coming out and the demo that they did, people are going, oh my gosh, we are here.

It's AGI. But, you know, you have people like Kai-Fu Lee, for instance, who is the former president of Google China, and he thinks we are nowhere near reaching AGI, because to get to AGI we need some breakthroughs that we haven't made yet. You know, we had this breakthrough with LLMs, with generative AI, but, you know, we don't

really understand consciousness. We don't really understand some of the other cognitive processes that human beings have that make us unique. And so, you know, one analogy that I like to refer to is, you know, if you think about just picking up a jar of water from the refrigerator and pouring water into a glass, you know, a three-year-old can do that, right,

without even thinking about it. But if you get a robot, an AI, to do that in an unfamiliar environment, it's extremely difficult. We haven't been able to accomplish that yet. And I think it will be decades before we get there, because we don't fully understand all the intricacies of intelligence. So it's debatable whether we're going to get to AGI or not, and how soon that's going to happen.

Now, I do think that if we do get to AGI, that's where it gets really interesting, because from AGI to ASI, which is artificial superintelligence, I think that's going to happen super, super fast. And that's when AIs start to create new AIs and become, you know, self-learning entities. Now, whether it's going to become sentient, I personally don't believe that's going to happen.

But I think we will have general purpose AIs that are going to be super intelligent. And the consequences of that are beyond our ability to imagine what's going to happen.

[00:33:34] Justin Grammens: Yeah. Agreed. Yeah. You're basically standing up another race, you know, of people, right? Or organisms, I guess, in some ways. And they're not carbon-based, they're silicon-based, I guess, is what I've heard it called, but they can function just as well as we can in our society.

So how does that go back to what you're saying? How does that affect our role and why we're here and what we are doing?

[00:33:59] David Espindola: Yeah. Now, the one thing that I tend to lean towards in this conversation is, you know, there's something about being human that's very different than a machine, and that is the fact that, number one, we have free will, and, number two, we have intrinsic motivators, right?

So a machine doesn't really have an intrinsic motivation. Everything that a machine does is based on objective functions that we impart on those machines. Now you could argue, you know, it goes back to that paperclip problem, right? Obviously, if I tell the machine, your objective function is to produce as many paperclips as you possibly can, and you don't put any guardrails in there, the machine is going to go crazy and use up all the resources in the universe to create paperclips, right?

Yeah, yes. And you could interpret that as motivation. But again, somebody had to establish that objective function; otherwise the machine just wouldn't do that, right? So I think that's a big difference between machines and humans.
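
A toy sketch of that point about objective functions (purely illustrative; the function names and the budget are made up): the machine optimizes exactly what it is given, so any guardrail has to be written into the objective itself.

# Naive objective: more resources consumed means more paperclips, with no limit.
def paperclips_unconstrained(resources_used):
    return resources_used

# Guardrailed objective: production beyond the designer's resource budget counts for nothing.
def paperclips_constrained(resources_used, resource_budget=100):
    return min(resources_used, resource_budget)

print(paperclips_unconstrained(10**9))  # happily rewards consuming everything
print(paperclips_constrained(10**9))    # capped at the budget someone had to specify

Neither function "wants" anything; each just returns whatever value a human decided to maximize, which is the distinction David draws between imposed objectives and intrinsic motivation.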

[00:34:57] Justin Grammens: Good point. Yeah. And you believe that sort of this next generation, I guess, is going to be more thoughtful, more purposeful, because these AIs are going to be, hopefully, taking out a lot of the mundane work.

I love what you said there about, like, who is the machine now, right? Because, you know, when we first started creating these machines, they were supposed to handle all these tasks, these minute tasks, and now we're using AI to actually do some pretty creative work, right? AI can spit out some great visuals now, you know, with DALL-E; it can create music, it can create poems, it can create poetry. And now we're the dumb humans, and I'm sitting here replying to emails at, you know, midnight, going back and forth with people.

So it definitely feels like the roles are reversed. But I'm wondering, I guess, when you were saying about, you know, being able to sort of see what our purpose is: the other variable that I feel like I've seen in my life is that it takes age. It takes wisdom to get to a certain point, and I think it maybe happens once you're over 40 years old, but people hit this sort of thing where it's like, okay, now I want to create a legacy, or I want to understand why I'm really here.

Like, I've achieved something in life. So I don't know, I'm just kind of ranting a little bit here, but I feel like there's also an age component to this as well.

[00:36:09] David Espindola: Yeah, I think you're right. I think, you know, we all go through different stages in life, and as we get older, we get more mature and understand better who we are and what we want to do with our lives. You know, and there's also the sense of mortality, right?

So we become more aware of our own mortality. And that triggers some of these, you know, thought processes of, well, you know, based on current life expectancy, I only have so many minutes left, right? So what do I want to do with each minute that's available to me right now? So we start to think more about those deeper questions.

But I think one scenario that I find fascinating for our future is, you know, imagine if, through AI and through what's known as the creation-based production system, which is different than the extraction-based production system. And this is something that Tony Seba and James Arbib have written about and have researched; you can look them up, and they've done a great job researching this.

And you can look them up and they've done a great job researching this. But their whole hypothesis is that we are moving towards this production, creation based production system, where instead of extracting things from the Earth, We're going to be able to create, produce things on a molecular level, right?

So with things like precision fermentation, stem cells, and other approaches, we can produce organic and inorganic materials. You know, there was just a study that came out that talked about the fact that AI has helped us uncover thousands of new materials. So think about the possibility that we can now have much stronger, more flexible, cheaper materials that don't require, you know, doing things that could potentially be detrimental to the environment.

So if we get into this future where we have sustainable abundance, right, what does that mean to us as human beings? What are we going to do with our time? If there's no work, but, you know, our material needs are met, and we have all this time on our hands, what are we going to do with it?

You know, what are we going to turn into? Are we going to become, you know, addicted to drugs, to other things, to pass our time, or are we going to become more introspective and think about these big questions, and think about our purpose? So those are the kinds of things that I think about, that I think are really fascinating.

[00:38:32] Justin Grammens: Yeah. Yeah, for sure. Super fascinating. Awesome. Yeah, I think there's that movie, I think it's WALL-E, where these people end up just basically becoming, you know, overweight and lazy, and they just lay in their chairs all day long. And it was just this big sort of commercial scheme to get everyone to do that.

But that's where society ended up going in that particular movie. Hopefully we don't go there.

[00:38:55] David Espindola: Yeah. I mean, just think how easy it would be, you know, with mixed reality, right? If you don't want to live in reality, just put on your headset and go into a mixed reality mode and just do that all day long.

Right. But then it goes back to, why am I doing this, right? What's my purpose? It goes back to the fundamental question of what is your purpose?

[00:39:13] Justin Grammens: Yes, for sure. Sounds like an episode of Dark Mirror, or Black Mirror, I guess, right? If you've seen that show before. That's crazy, that's crazy. Mind-bending stuff.

Well, awesome, David, I appreciate the time today. So thank you so much for being on the program and sharing your thoughts and feelings and experiences, and your book, Soulful: You in the Future of Artificial Intelligence. We'll definitely be promoting that here on the program, and people should definitely go out and buy a copy of it.

And yeah, thanks again. I appreciate it.

[00:39:37] David Espindola: Thank you, Justin. It's been a pleasure. I had a lot of fun doing this with you. And again, congratulations on all the great work that you're doing in bringing the AI community together. Thank you.

[00:39:49] AI voice: You've listened to another episode of the Conversations on Applied AI podcast.

We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn.