Conversations on Applied AI
Welcome to the Conversations on Applied AI Podcast where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of Artificial Intelligence and Deep Learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
Pete Edstrom - Using AI in Your Product and Educational Journey
The conversation this week is with Peter Edstrom. Peter is a proactive, methodical leader and team player with 24 years of software experience, adept at building teams and fostering organizational alignment to solve complex business and software problems. He's skilled in product management, agile development, and process improvement, with a track record of collaborating with various teams to create meaningful change in the organization. Throughout his career, he's had a diverse, wide range of experiences, from designing a public media multi-touch book for the iPad to working on a government bioterrorism alerting platform. He's currently a Director of Technical Product, Unified Enterprise Search at Optum.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future Emerging Technologies North non-profit events!
Resources and Topics Mentioned in this Episode
- Optum
- Behavioral Grooves
- Sora
- Pomodoro Technique
- Unconfuse Me with Bill Gates
- Exhalation by Ted Chiang
- Stories of Your Life and Others by Ted Chiang
- Accelerando (Singularity) by Charles Stross
- The Age of Spiritual Machines by Ray Kurzweil
- Never Eat Alone by Keith Ferrazzi
- Atomic Habits by James Clear
Enjoy!
Your host,
Justin Grammens
[00:00:00] Pete Edstrom: Certainly you can read books, I think. In today's age right now, the cycle time for an idea to be written and published might be a little bit too long, especially for things that are changing as fast as they are. I do love the idea of using AI to help you learn a new thing, you know, so act as a professor and give me a syllabus for, you know, whatever this thing is that you want to learn.
Okay, now break it down into chapters. What are the first things that I need to learn? You know, doing some self-study, but then coming back and saying, okay, I've done some study on this, you know, give me a, uh, a ten-question quiz, you know, multiple choice, and then grade me on it, right? That back and forth with the AI is going to be immensely valuable, and it will go at exactly the speed that you're ready to go at.
[00:00:51] AI Voice: Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today.
We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy.
[00:01:22] Justin Grammens: Welcome everyone to the Conversations on Applied AI podcast. Today we're talking with Peter Edstrom. Peter is a proactive, methodical leader and team player with 24 years of software experience, adept at building teams and fostering organizational alignment to solve complex business and software problems.
He's skilled in product management, agile development, and process improvement, with a track record of collaborating with various teams to create meaningful change in the organization. Like myself, he's curious and an explorer of new technologies. In fact, he reads about 30 books a year and loves learning, exploring, and understanding new things.
Throughout his career, he's had a diverse, wide range of experiences, from designing a public media multi-touch book for the iPad to working on a government bioterrorism alerting platform. He's currently a Director of Technical Product, Unified Enterprise Search at Optum. So thank you, Peter, so much for being on the Conversations on Applied AI podcast today.
[00:02:14] Pete Edstrom: Man, lots of laughs from the past, that's great. Thank you so much for the kind intro.
[00:02:19] Justin Grammens: Good, good. Yeah. Well, I love to have people sort of get a sense of, maybe it's a little bit of a background where you are today. Obviously you're doing a lot of really cool stuff at Optum, but over the course of the years, I think you and I have been kind of working in this field for more than 20 years or so.
Maybe you can give us a short sort of summarization, I guess, of how you got to where you are today.
[00:02:37] Pete Edstrom: Uh, let's see here. Well, so today I'm a director of technical product management, and as any product manager will tell you, the path to get there is a lot of zigzags. So I suppose the cliff notes are, I started with a computer science degree from the U of M and did some software development for a while, did some project management, some strategy work, and eventually kind of worked my way into product, so leading teams and doing that kind of thing.
There was a point where I had small kids at home and learning the next new language in software had to happen at eight o'clock at night when all the kids were finally down to sleep and that was just not sustainable for me. So I'm like, I need something that I don't need to learn a brand new language all the time on.
And yeah, so that drove me here.
[00:03:20] Justin Grammens: Gotcha. Gotcha. Well, it sounds like you came from a technical background. Is that a good path, you think, for people? Do you think it's valuable to maybe start out with a computer science degree and work your way into the path that you did? Or do you think, I mean, I guess I would say nobody really needs it per se, but I'm always curious because I, I see people who are CEOs and they kind of came through a technical path, or people that are CIOs kind of came through a technical path.
So just kind of curious to see what your thoughts were. Are you happy that you sort of followed that path?
[00:03:46] Pete Edstrom: It's a place in time kind of a thing. So I think the technical path worked really well for me. I mean, I wrote my first line of code on a Commodore 64 when I was six years old. So it's pretty deeply embedded in who I am.
But you know, today with all the ability to write code with an AI assistant, there's definitely some good things about a computer science degree that you wouldn't want to lose. You know, the ability to critically think and to think through a particular process. You know, those are valuable, but I don't know that I would, uh, recommend that for somebody that's just getting into the field at this point.
[00:04:18] Justin Grammens: Gotcha. Gotcha. Well, it was awesome that you had a Commodore 64. I, I had a VIC-20, which was like the sort of smaller, much smaller step brother, I guess, of, of that. But yeah, those early days of basically writing BASIC code, I think, is interesting. Now, you mentioned writing code. Yeah, I mean, it's been really interesting over the past 12 months or 16 months or so since the release of ChatGPT, and that's just one of many tools, obviously, but, you know, it's just becoming better and better.
You know, these large language models are starting to understand things better and better. And, you know, it's almost like you don't need to understand the underlying infrastructure at some point, right? At some point, we stopped actually writing Assembly, right? We kind of leveled up.
[00:04:57] Pete Edstrom: Yeah.
[00:04:58] Justin Grammens: So is that what you're sort of seeing in, in your role in the team that you're working with?
Are you kind of seeing this AI kind of allowing us to level up?
[00:05:05] Pete Edstrom: It's letting everybody level up regardless of what level they're at, which is I think bringing democracy to all of the, uh, folks. So if you're a fantastic programmer already, it's really just going to make you all the more so, and if you're, um, you know, a junior engineer and just getting started, it's going to help you, you know, be all the better.
I don't usually write a whole lot of code, uh, these days, but, you know, over the last few months, I, you know, I fired it up and put together a couple of scripts. I was actually, uh, trying to get one AI to talk to another using OpenAI's, uh, API. Fascinating process. 95 percent of the code was written by the AI.
You know, I, I came in and I, I helped with some debugging to figure out, oh, you're not understanding things quite right here, but it understood a lot of things that I didn't, like how to make the call to the API and how to process the results that it's getting back. And that, that really made it go fast.
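If you want to experiment with something similar yourself, here is a minimal sketch of wiring two assistants together through OpenAI's chat completions API, with one model playing the agent and the other simulating a caller persona. The system prompts, model name, and turn limit are illustrative assumptions, not the actual script described in the episode.

```python
# A minimal sketch of getting one AI to talk to another via OpenAI's API.
# Prompts, model name, and turn limit are illustrative assumptions, not the
# actual script described in the episode.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

AGENT_PROMPT = "You are a scheduling assistant. Ask the caller one question at a time."
CALLER_PROMPT = "You are simulating an impatient caller who answers questions vaguely."


def next_line(system_prompt: str, transcript: list[str], even_lines_are_mine: bool) -> str:
    """Ask one model for its next line, given the alternating transcript so far."""
    messages = [{"role": "system", "content": system_prompt}]
    for i, line in enumerate(transcript):
        mine = (i % 2 == 0) == even_lines_are_mine
        messages.append({"role": "assistant" if mine else "user", "content": line})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


transcript = ["Hi, I'd like to set up an appointment."]  # the simulated caller opens (even indices)
for _ in range(5):  # cap the back-and-forth at a handful of turns
    transcript.append(next_line(AGENT_PROMPT, transcript, even_lines_are_mine=False))
    transcript.append(next_line(CALLER_PROMPT, transcript, even_lines_are_mine=True))

for line in transcript:
    print(line)
```

Swapping in different caller prompts (friendly, aggravated, off-topic) is one way to run the kind of automated QA pass Pete describes next, without sitting through every conversation yourself.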
[00:05:55] Justin Grammens: Yeah. You've shown off some of that work. I think the project that you're talking about at some of our Applied AI meetings, maybe you want to tell the listeners a little bit about, you know, what was the impetus behind it? And there was a true use case here. You hadn't really just, like, made something up for fun.
[00:06:09] Pete Edstrom: But I mean, yeah, you get the use case to get you started and then you have fun with it. Right. So the use case here is, can I build an agent or an AI that has certain behaviors and qualities to it? Can I get it to achieve a task through the course of a conversation? So, uh, you could say, I want it to understand your schedule and to schedule an appointment.
I want it to ask you, you know, a series of 10 questions and, and synthesize that answer, um, back to you. So, you know, you can build a prompt that talks like, you know, so the agent is talking to you and tries to guide you through those questions and get to an answer. But these AIs aren't a hundred percent deterministic, right?
So they all behave a little bit differently. And frankly, people aren't deterministic either. Right? So people are going to come in and, you know, I'm going to ask you, what's your favorite color, and you're going to tell me lawn chairs, right? And it's like. Hold on. That's not a color, right? But we got to figure out how to recover from that and keep working forward.
So if you build this agent that goes through and tries to ask these questions, the next question is, well, does it actually get through all those questions? Does it get through them when I'm friendly and kind and I answered the color question with an actual color? Does it get through the questions when I'm aggravated and hurt and really don't want to be on this phone call to begin with? Does it answer the questions if I just have lots of questions back? You know, what do you mean, color? What do you mean, what's my first name? I don't, I don't understand what that means. I don't call it a first name.
Right. And so, like, building these AIs to talk back and forth against each other really helps you test that out, because, you know, to put on my QA hat for a little bit, quality assurance, I can't necessarily go and test a hundred or a thousand or 10,000 of these scenarios by actually having a conversation with this AI. I mean, I could, it would just take me weeks.

[00:07:59] Justin Grammens: Well, and it's kind of all about efficiencies and, you know, what can we have computers do that humans either aren't good at, or kind of don't want to do, or shouldn't be spending time on.
[00:08:07] Pete Edstrom: Yep. Absolutely. Yeah.
[00:08:08] Justin Grammens: So getting these two things to communicate with each other, like you said, you, you were able to explore and find out sort of how they were able to respond.
And are you creating a product around this thing, I guess, putting on your product management hat? Do you think there's something here?
[00:08:23] Pete Edstrom: I think there's definitely something here. I will refrain from talking too much about that at this point.
[00:08:27] Justin Grammens: Yeah. Yeah. Gotcha. Gotcha. That's great. It's amazing to see what people have started putting out as far as GPTs go out on the, uh, GPT store, I guess I always want to call it the app store because it feels like we're in the early days of the app store, but yeah, there's just, there's all sorts of interesting stuff out there.
And what I'm seeing is people maybe aren't leveraging those tools as much as they should. At least, at least I'll raise my hand and be like, yeah, you know, I'll just use regular old ChatGPT and ask it the same type of questions. But, like, for example, there's like a Visio or other charting, you know, basically diagramming tool that I've used that allows you to essentially generate diagrams really, really well, like much, much better, of course, than ChatGPT would.
And you would kind of expect it to do that, because it knows how to generate images and knows how to generate flow charts and knows how to connect all the dots. So as people are starting to build more and more, in some ways, focused or narrowly trained models around a specific subset, I think it's just going to be really, really interesting, because that's where you're going to start getting things that you're like, wow, this thing is really starting to understand the richness of the conversation that I'm trying to have with it.
[00:09:31] Pete Edstrom: Yeah, absolutely. It's easy to forget that we are in the bell curve of adoption. We are still in the early adopters phase. You know, when I talk with other engineers, software engineers that could easily make use of these tools, there's still a lot of them that are not doing it. They're not firing it up. They're not curious about it, or they, they will be curious about it at a later date.
[00:09:53] Justin Grammens: Yeah.
[00:09:53] Pete Edstrom: You know, I talked with somebody actually just yesterday, different field altogether. And, uh, he told me that he was going to wait until like it hit critical mass before he really started firing it up and playing with it.
And it's, you know, I, I totally get it because, you know, you can get burnt out by all the new things all the time. But it's, it's one of those things that's accelerating very fast. And, uh, you know, are we going to get to the point of like a singularity or artificial general intelligence or any of those things, who knows what that will actually look like, but I think we can all tell by the changes that happened just in the last year that it's coming fast.
I'll also just say on the, uh, on the point of agents, I love the idea of agents, like the GPTs and kind of where those things are going. I'm really excited to see like what happens this year, uh, in terms of agents because they're neat today, but they're not super useful. You know, you'll get an agent that can help you write a thing better.
Or the one that I played around with, which was kind of cool, is it hooked into the Bland API and would make a telephone call on your behalf, and then, you know, use whatever prompting you gave it to kind of guide that conversation. And that's cool, but, you know, those things, like, individually need to get connected into that bigger, it doesn't need to be AGI, but it needs to be connected into something that's a little bit bigger, so that, you know, as you're navigating down this road of having an assistant do assistant things for you, it isn't, like, these little micro tasks. You don't want to be a micromanager, you want to give it a bigger task and let it figure out all the steps in between.
[00:11:18] Justin Grammens: Yeah, yeah, for sure. Well, lots of, lots of stuff to unpack there that you were talking about. Software engineers not using some of these tools. I mean, more and more as I'm presenting, that is more around my conversation. Like I'm, my mission now here is to actually, you know, empower people to realize what these tools can do and start using it within their team because there is just so much potential.
And so I'm starting to build a lot of presentations and slides really sort of showing everything from ideation to implementation to ongoing support, like anywhere within an organization you should be starting to explore and use these tools. And you're right, it's still kind of early stage. My story is we're in the days of, like, Newton and, like, Copernicus, like people are experimenting and trying stuff, trying to figure out what's going on under the hood.
And it's still very early. Like, we don't know what the right models are. We don't know what the right training set is on some of these things. We have ideas, but imagine we're back in that type of world; that's really where I feel that we are right now. And it's super exciting, but it's also, things can go a lot of different directions in AI, and especially, you know, you mentioned AGI, it's kind of like, when are we going to reach that?
Are we going to reach it? How quickly? It's certainly probably going to happen more quickly than we can possibly think. People were saying decades away, and who knows. But a question back to you is, like, how do you define artificial intelligence? I sometimes like to ask people on the program here, like, someone says, Pete, what do you, what do you do in AI? How do you, uh, sort of, like, structure your response?
[00:12:42] Pete Edstrom: Well, what I do and how I define it are entirely different. But so, uh, over on the Behavioral Grooves podcast, Tim and Kurt, they were doing some background on where did behavioral science come from? And, uh, there's this tug and pull about how do people, you know, how do human beings behave versus how do they not behave?
And the question is, you know, are they rational or not? And what they were finding is that everybody was very resistant to this idea, you know, especially if you're talking in economic terms, are people behaving rationally? And it seemed like they were not behaving rationally when you start to do kind of the analysis on how people behave in different situations, you know, whether it's, you know, you're talking about loss aversion or anything else.
And so they came up with this phrase that they latched onto, which was people behaved as if they were not rational. We didn't want to concede that people were irrational, but we could say they behaved as if they were irrational. And so for us here, when we think about AI, you know, I think the question would be, well, does it behave as if it were intelligent?
I can probably get, you know, deeply into the whole AGI question and have a lot of fun with it. But at the end of the day, I honestly, I don't care, because what is intelligence? How can you confirm that? Are you intelligent? Am I intelligent? Am I an AGI? Like, we can't really define that, right? We know that we passed the Turing test with flying colors last year with ChatGPT.
So is it behaving intelligently? It might not be intelligent, but it certainly is behaving as if it is intelligent. And so for me, that's kind of good enough. You know, it's like it behaves intelligently. So I'm okay calling it intelligent.
[00:14:21] Justin Grammens: Sure. Yeah. So you're kind of more focused on, I guess, what's the impact going to be?
Because once you sort of put that question of, is it intelligent or not, just sort of out there, like, hey, I don't really care about that or not. Like, what's the impact going to be on us as humans and how we work?
[00:14:34] Pete Edstrom: Yeah, the product manager in me, you know, says, hey, let's talk about the outcomes and, like, let's not worry about the process.
We don't care about the details behind it. Like, did we get to the outcomes that we want? You know, and when I chat with ChatGPT, it gives me the outcomes I want, you know, and it's not too hard to get there.
[00:14:52] Justin Grammens: Yeah. Do you experiment around with some of the other ones, you know, Claude or some of the other ones that are out there, even, even open source models?
[00:14:59] Pete Edstrom: Yeah, I've played around with them a little bit. There's a neat piece of software called Jan.ai, and it lets you run them locally, and you can download a bunch of the different models. It's made me sad to only have a computer with eight gigabytes of memory, but I can still run, run a bunch of them, because, yeah, it does take a lot of memory to run these things.
Yes, I've played around with some of them. I have not tried out the new Claude 3 Opus. I'm curious to try that. I've kind of put all my bets behind ChatGPT, OpenAI at this point. They're kind of the forerunner, and they're at the top right now, and others are trying to clear that. So at some point, you know, they will, and, and maybe I'll switch.
But for the most part, I've been spending most of my time with OpenAI, their APIs and ChatGPT.
[00:15:40] Justin Grammens: Yeah, I remember you saying that some months ago at one of our meetups. You're like, yeah, just, I'm just kind of staying focused. Cause you can get so scattered. I mean, it's just like the industry has got, and of course we just named a few of them here, but there's, there's literally even another dozen more out there if you wanted to start looking around, everyone's getting funded and they're always doing their own thing.
And I think, you know, your point is at the end of the day, you're getting good results with the tool that you have. So let's just keep working on that until you aren't getting good results.
[00:16:05] Pete Edstrom: Years and years ago, there was, so if you remember the days of, like, 43 Folders and some of these productivity hacks out there.
One of the first things that every software engineer seems to do, as soon as they figure out how to, like, deploy an app or write a piece of code, is create a to-do app.
[00:16:20] Justin Grammens: How to manage your to-dos. Like a Trello board or something. Yeah,
[00:16:24] Pete Edstrom: exactly. And during that time, there was always this push for, you know, hey, can we figure out the coolest to-do app?
And hey, you know, you're, you're not the cool kid if you're not using this new one. And so it was like constantly changing, and it was fun for a little bit. But then I realized I was spending so much time, like, chasing down this new productivity hack of this new to-do app. It was like, well, why don't I actually just pick one and call it good and stop spending all my time searching?
So, so I did. And then you don't have to think about those other things. And so that's kind of what I've done with AI here as well. Like, I've, I've picked OpenAI, and not that I'm closed to other options, but I'm mostly closed to other options at the moment, until something else kind of pops up and says, you know, not only is this better, but it's worth switching.
It's worth the effort to switch.
[00:17:12] Justin Grammens: Well, we talked a little bit about, I guess, you know, being intelligent, not being intelligent. One of the things I do like to ask, too, is, like, you know, what are some cool projects maybe you're seeing that are floating around out there related to artificial intelligence that you maybe stumbled across, or even stuff that, well, I guess you probably can't talk much about your internals, but I don't know if there are any interesting tools. But then also, I mean, it's really interesting, because people are scared in some ways about how this is going to change the future of work.
And so, you know, that's a topic that maybe we could explore a little bit, around maybe your perspective on, uh, how you've seen it change and where maybe you think it's going to be changing even more in the coming years.
[00:17:49] Pete Edstrom: You know, there's some cool ones like Sora from OpenAI. I'm sure you've talked about those in past shows already.
It's interesting to see how that's going to come about and, uh, when that's going to be available and what people are going to do with that. Months ago, I made a prediction that 2024 is going to be the year that we get a feature-length movie made entirely by AI. We're in March and we're not there yet, but there's still a lot of year left.
One of the other cool ones that I was playing around with recently was, so you're familiar with GitHub Copilot, of course. So that's what the software engineers use. There's another one called, oh, what is it called? It's just called, like, GPT Pilot, I think is what it's called. It's an open source thing.
It's kind of a proof of concept. It's, it's along the same lines as AutoGPT, which is, I guess, a year old at this point. The idea is, how can we take an AI and not just give it a single small task, but give it a really big one that it has to decompose and then plan for and then step through? So I was playing around with that one for a little bit.
And again, it's still not quite there. But the promise is like close and it's super interesting, right? Yes. What if I have like a literal AI agent, an assistant that literally is a programmer, that literally I can just say, Hey, I want a website that does these three things. You know, maybe have a Pomodoro timer on it.
I think that's what their demo was. I mean, can you build it out for me and make it work, installing Node or whatever other tools it is that you need to make that happen?
[00:19:19] Justin Grammens: Yes.
[00:19:19] Pete Edstrom: There's a lot of problems with that one right now. First off, it's clunky. It asks you for a lot of, you know, can I do this? Can I do that through the process?
You can't take an existing project and plug it in and say, hey, I want you to, like, fix all the bugs. Although I've, I've heard people are using some of, like, the larger context windows, uh, Claude 3 Opus, to, like, you know, bring in a whole code base and, and fix things. But in any case, this, this particular one isn't quite there yet, but it's a fascinating tool.
Love the idea of where it's going.
[00:19:51] Justin Grammens: Yeah. Yeah. As you were talking here, I brought up the website here, so we'll be sure to have links to it as well.
[00:19:57] Pete Edstrom: Did I get the name right?
[00:19:58] Justin Grammens: You did. You did. Yeah. GPT Pilot. And it's interesting. There's a number of different agents. There's like a product owner agent.
There's a specification writer agent, architect agents, you know, developer agents, and it sort of works its way all the way down. So you sort of build these experts in their own domains, which I think is fascinating. The way that I think we're going to see a huge explosion with regards to productivity is training these agents that know something really, really well, rather than trying to boil the ocean.
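As a rough illustration of that role-per-agent pattern, and not GPT Pilot's actual implementation, here is a hedged sketch of chaining a few specialized prompts so each "agent" hands its output to the next. The role names, instructions, and model name are assumptions made up for the example.

```python
# A rough sketch of chaining role-specialized "agents," loosely inspired by the
# product owner / spec writer / architect / developer roles discussed above.
# This is not GPT Pilot's code; roles, prompts, and model name are assumptions.
from openai import OpenAI

client = OpenAI()

ROLES = [
    ("product owner", "Restate the request as a short list of user stories."),
    ("specification writer", "Turn the user stories into concrete acceptance criteria."),
    ("architect", "Propose the files, functions, and libraries needed."),
    ("developer", "Write the code for the plan, one file per fenced code block."),
]


def run_pipeline(request: str) -> str:
    """Pass one big task through each role; every role sees the work done so far."""
    work_so_far = request
    for name, instruction in ROLES:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": f"You are the {name} on a small software team. {instruction}"},
                {"role": "user", "content": work_so_far},
            ],
        )
        work_so_far = response.choices[0].message.content
        print(f"--- {name} ---\n{work_so_far}\n")
    return work_so_far


run_pipeline("Build a small website with a Pomodoro timer on it.")
```

Each role only has to be good at its own narrow step, which is the "don't boil the ocean" idea in miniature.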
[00:20:24] Pete Edstrom: Yeah. I want to get back to your question on, you know, how does this impact people's work, but before I do that, I just want one other aside on the AutoGPT from a year ago. So if, if you happen to be playing around with that one, and I'm sure it's changed since:
I had turned that one on and put in the ElevenLabs, uh, text-to-voice API keys for it. It was absolutely fascinating, because as it took each problem and tried to decompose it, it would then spin off separate agents to try to solve different things, and their implementation would actually change the voice of each different sub-agent.
So as you're listening to this, you know, it would be, you know, it handed off to Sally or to John or to somebody else that had a totally different voice. And they would be like, I'm here to, like, browse the web and solve this one thing. And then, okay, I've done that, here's the answer, and hand it back.
And it was almost like listening to a room full of agents talk with each other. It was, it was kind of delightful.
[00:21:18] Justin Grammens: That is awesome.
[00:21:19] Pete Edstrom: Yeah. But in terms of, like, how is this going to impact work? I don't know, like, I don't even know where to start on this one. So, I don't want to be all doom and gloom, because to be sure, I am super excited about this technology and I think it's absolutely fascinating.
I don't think we can stop it. And frankly, I don't know that we want to stop it, right? Because the idea of all these AI agents being able to do all this work that typically we would have done ourselves or handed off or hired somebody to do, like, having that ability and that power is fantastic and awesome.
And I can't wait to have all of that at my fingertips, right? But it's also terrifying, right? Like, I look at my own job, I look at my friends' jobs, and I'm like, how much of their job could be replaced by an AI? Maybe not today, right? But in a year from now, in five years from now, like, it starts to make you kind of worry.
So, I think ultimately there's going to have to be some kind of government involvement here that makes sure that there's stability for people. I worry that, you know, that transition from where we are today, to realizing we need that, and then having that finally implemented, is going to be maybe a rough handful of years.
So that part I'm not looking forward to, but once we kind of clear that, then I'm, then I'm pretty excited about where we're going.
[00:22:42] Justin Grammens: Yeah, I would agree with pretty much everything you said here. It's just, it's just advancing so fast, right? So as I'm sort of reading a lot of these, you know, books on the subject, which will be my next question to you, what's some of the cool stuff that you're reading on?
Yeah. What's so fascinating is just, you know, like how long it took a lot of these prior technologies or industrial revolutions to sort of make their way into society, right? Things, things didn't happen in 18 months. Like, you didn't just flip a switch and 18 months later everyone had electricity, for example.
Or all of a sudden everyone's driving a car. So these things sort of took time for them to, you know, kind of be sort of immersed and tested and used, and the impact sort of known. And so what's happened in the past, you know, 16 months or so, whatever it is, has just, I think, completely got people worried, because it's happened so fast.
And like you're saying as well, the next 16 months are going to be just as fast, if not faster, right? I mean, who would have thought that Sora, in my opinion, who would have thought that Sora would have come out here in March, right? Or, like, you know, February. I mean, it was just literally, you know, I would say, you know, a number of months ago, six months ago, people were saying, ah, there's no way there's going to be really, truly video generated off of a text prompt, and all of a sudden we already saw that, which is already way ahead of schedule. So yeah, I think that's kind of what has a lot of people worried, I guess.
[00:24:02] Pete Edstrom: I don't know if you listened to, uh, Bill Gates's, uh, Unconfuse Me podcast with Sam Altman, but in that episode, Sam said at one point that they're seeing somewhere around a 3x improvement in productivity from their software engineers.
And if you run the math on that, you know, that 3x productivity increase, it's insane after just a few iterations. So I ran the numbers on that one. Like, okay, so when is the singularity going to happen if you actually have a 3x improvement in productivity? Now, keep in mind,
like, so this is, so if in 2023 it took a year to write a piece of code, what that 3x means is that you will write an equivalent piece of code by the end of March. So then, in the three months after March, you're going to do something again, and then it'll be a month and a half,
[00:24:56] Justin Grammens: right?
[00:24:57] Pete Edstrom: And then shorter and shorter and shorter, right?
If you run that math, you hit the singularity right on July 1st. Now, is that a real date? Well, it's a real date, but is that really when it's going to happen? Totally made up, but, you know, the math is, is reasonably sound on it. So now, whether or not we're actually getting 3x out of that, and maybe more to the point, like, are we able to build on top of each 3x that we hit, that's going to be kind of a key factor in, in whether or not, you know, or how quickly we get there.
[00:25:27] Justin Grammens: Yeah. And it, it really has that compounding effect is what you're talking about here. And so, yeah, once you start doing 3X on top of 3X on top of 3X, it just. It compounds really, really quickly. So yeah, that's, that's July 1st of this year, right? So we're, we're just like this year, it's just a couple of months away.
[00:25:42] Pete Edstrom: And it's, it's totally fictitious, right? Like I said, we start with one year in 2023, and then on January 1st we started the doubling, totally making things up, pulling it out of the air. But it was, you know, it was a fun thought exercise to see, like, well, how quick could we actually get there?
Honestly, if you pick different numbers, so like, obviously if you pick like 10x or 100x, which I've also heard, you know, credible people talk about, you know, that puts us at the singularity back in January. Obviously that didn't happen. But even if you take a, like a more modest, like a 20 percent improvement with each iteration, I think that was just under five years.
So it still happens really fast.
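To make that back-of-the-envelope math concrete, here is a small sketch that sums the shrinking cycle times as a geometric series. The starting point (a year of work beginning January 1, 2023), the speed-up factors, and the cutoff are the made-up assumptions from the conversation, not measured data.

```python
# Sums the shrinking cycle times from the thought experiment above. The start
# date, speed-up factors, and "runaway" cutoff are made-up assumptions from the
# conversation, not real measurements.
from datetime import date, timedelta


def total_months(shrink: float, first_cycle_months: float = 12.0, floor_days: float = 1.0) -> float:
    """Add up cycle lengths until one shrinks below `floor_days` (our stand-in for runaway)."""
    total, cycle = 0.0, first_cycle_months
    while cycle * 30 > floor_days:
        total += cycle
        cycle *= shrink  # each new cycle takes `shrink` times as long as the last
    return total


start = date(2023, 1, 1)  # "in 2023 it took a year to write a piece of code"
for label, shrink in (("3x faster each cycle", 1 / 3), ("20% faster each cycle", 0.8)):
    months = total_months(shrink)
    finish = start + timedelta(days=months * 30)
    print(f"{label}: about {months / 12:.1f} years, landing around {finish}")

# With 3x, the series 12 + 4 + 4/3 + ... converges to about 18 months, which is
# where the tongue-in-cheek July 1st, 2024 date comes from; the 20% case lands
# at roughly five years.
```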
[00:26:22] Justin Grammens: Yes. Yeah. And, and then again, it, it depends on, in some ways it depends on adoption, you know, people that are actually starting to use these tools and stuff. So you've got a lot of people that are pushing back. It was interesting as you were talking, this was just something that came across my desk that was on YouTube that this high profile guy was kind of talking about.
His thinking is, is that actually OpenAI already has GPT 5 and they're really not going to release it. They're, they're literally holding back because it's such a game changer, even on top of what 4 did. You know, it made me sort of sit up and take notice. I was like, interesting, why would a company do that?
But based on kind of what you're saying and his conversation with Sam Altman, there might be some truth to that. I have no idea.
[00:27:04] Pete Edstrom: It's a wonderful conspiracy theory. I've definitely heard it many times myself. Only people in, uh, OpenAI's company are actually going to know whether or not that's true, but I would say there's a lot that makes sense there.
Not necessarily with OpenAI, but in terms of, like, who builds that first AGI computer. Let's back up and not say AGI, but something that's autonomous enough that you can give it a pretty big problem and it can go and solve it without, like, going off the rails and doing something irrelevant or unhelpful, right?
You know, if, if OpenAI had a tool like that to allow them to build GPT-6 or 7 or 8, you know, yes, I think they want to ultimately release that, but also they might want to make use of it first for their own progress, you know, to make sure that they've got enough of a head start before other people are going after it.
[00:27:54] Justin Grammens: Yeah. Yeah. And it's tough for them because they're in this sort of dual relationship type thing, right? Like they are a commercial company, but their mission was really to make AI safe and have it do good, right? So it's an interesting sort of dichotomy with them in particular being the leader in the space, but also having, frankly, a lot of responsibility.
[00:28:16] Pete Edstrom: You know, and I think they willingly taken on that responsibility, which, you know, I applaud. And it's, it's fantastic to see the, the leadership that they're, they're attempting to kind of drive forward with it. Certainly they're getting pressure from others to release things faster. And sometimes that's, you know, good for humanity.
Sometimes that's, you know, maybe questionable. But I do like the idea that it's maybe not a hundred percent, uh, driven by the market because I don't necessarily trust that as being, you know, good for humanity. Yeah.
[00:28:46] Justin Grammens: What stuff are you reading these days? What, what keeps you busy?
[00:28:49] Pete Edstrom: You know, you said 30 books a year, and I have not done that in the last little while here, but some of the ones that I've read more recently that are fun, so just in terms of, like, folks that like short stories and sci-fi, kind of thinking about possible futures: Ted Chiang wrote Exhalation and Stories of Your Life. So two, two separate books. For those of you that saw the movie Arrival with Amy Adams a few years ago, that movie actually originally came from a fan that wanted to promote Ted's work and get it a little bit more well known.
Delightful stories in there, kind of thinking about what if scenarios and, and other things. The other one that's, that's also kind of fun that I read last year was called Accelerando. And this, this one's actually going to be probably pretty interesting to your audience. It follows a family, a series of people that bridges the singularity and describes kind of a what if scenario.
What would that world look like? You know, how do people go from being normal people, to being normal people that are augmented with technology, to then moving more into the digital space, and then what does, like, an all-digital world look like? It was a very fun kind of a thought experiment there as well.
And it's, it's a fun ride.
[00:30:08] Justin Grammens: I was just going to say, I took a quick look. Yeah, it was written, it was published in 2005. And so it's, it's so interesting to me that some of these people have such good insights into what the future would look like. And some of them, like, really can write stuff that kind of stands the test of time.
[00:30:23] Pete Edstrom: Yeah. That book leans a little bit into, I think, economics. And so, like, this idea of Economics 2.0. So what, what does it look like when you are creating instruments or laws or contracts or other things to generate money, in a way that you can, like, fire them up in milliseconds and then, you know, drop them instantly, and that kind of thing.
That's a side of it that I hadn't really thought about before. And it was, uh, um, definitely very interesting. For anybody that wants, uh, a historical read that, honestly, I don't even know if it would still hold up anymore: what I'll say is that a part of the inspiration for me being interested in AI was, uh, Ray Kurzweil's books.
So The Age of Spiritual Machines, The Singularity Is Near, those kinds of things. I know one of those was really, really long. So I don't know if they still hold up today, but they were definitely very impactful, at least to me, when they came out quite a while ago.
[00:31:20] Justin Grammens: That's awesome. Yes. Yeah, yeah, for sure.
So we'll, we'll be sure to put links off to these books. I remember Ray Kurzweil actually coming to the University of St. Thomas, uh, when I was going to school there at some point in the early 2000s. They had an event, and I was like, I, again, I hadn't really heard of him, you know, I was like, oh, interesting.
You know, whatever, whoever this, this guy is who's sort of a, uh, sort of a futurist, I guess, is what he calls himself. So I went there and listened to him speak, and I left there and I was just like, this is fascinating. You know, just what he was talking about, essentially the compounding effect of these technologies, right.
And how, you know, it took us thousands of years to come up with fire and hundreds of years to come up with the bow and arrow and tens of years to come up with some breakthrough in new technology. So it's just the fact that it's speeding up. And so, yeah, his books are, his books are amazing. So for sure, I will for sure put links off to those.
And so I guess books are a good way that you sort of got into this industry, and kind of the way that you sort of hop around with regards to being an explorer, exploring new technology or whatever the new interesting thing is. What advice would you give to people? I mean, obviously pick up a book and start reading it, but do you have other things that maybe, if someone's coming out of school, things that they should attend, or people they should talk to, or classes maybe they should take, or things maybe even, as I'm kind of riffing here off the cuff, like maybe things they should avoid doing?
Like you talked about maybe spreading themselves too thin and trying everything that's under the sun. Kind of cast your mind back: if I was, like, an intern, how would you maybe direct me?
[00:32:47] Pete Edstrom: There was a great book called Never Eat Alone. I'll give you another book here. Yeah. Yeah. Never Eat Alone. And the whole point of that book was to really kind of reach out and do networking, to not necessarily sit at your desk, and to do everything in person.
I'm a big believer in that. Like, I love my introvert time and sitting down and, like, learning a new thing. But whether it's career or just personal development that you're, you know, kind of striving for, getting out and talking to people, getting out and talking to people that you're maybe uncomfortable talking to, you know, so going to a meetup or a conference or an unconference or any of these things, and spending time to actually, like, intentionally talk to other people, I think is a fantastic way to grow
a career. Certainly you can read books; I think in today's age right now, the cycle time for an idea to be written and published might be a little bit too long, especially for things that are changing as fast as they are. I do love the idea of using AI to help you learn a new thing. You know, so act as a professor and give me a syllabus for, you know, whatever this thing is that you want to learn.
Okay, now break it down into chapters. You know, what are the first things that I need to learn? And then, you know, doing some self-study, but then coming back and saying, okay, I've done some study on this, you know, give me a, uh, a ten-question quiz, you know, multiple choice, and then grade me on it, right?
That back and forth with AI is going to be immensely valuable, and it will go at exactly the speed that you're ready to go at. So leaning into, like, those types of tools. You know, I set myself reminders to say, hey, don't forget to spend an hour, you know, learning about new things, you know, once a week, right?
One hour doesn't seem like a lot, but if you try to do it like every week or even a handful of times every week, like it can really add up and just keeping things moving forward a little bit at a time is going to be great.
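If you would rather script that study loop than run it by hand in the ChatGPT window, here is a minimal sketch of the professor-and-quiz pattern Pete describes. The topic, prompts, and model name are illustrative assumptions.

```python
# A minimal sketch of the "act as a professor, then quiz me" study loop
# described above. Topic, prompts, and model name are illustrative assumptions;
# the same loop works interactively in the ChatGPT window with no code at all.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "Act as a patient professor designing a self-study course."}]


def ask(prompt: str) -> str:
    """Send one turn and keep the running conversation so the quiz matches the syllabus."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = response.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text


topic = "retrieval-augmented generation"  # swap in whatever you want to learn
print(ask(f"Give me a syllabus for learning {topic}, broken down into chapters."))
print(ask("I've studied chapter one. Give me a ten-question multiple-choice quiz on it."))
answers = input("Your answers (e.g. 1-A, 2-C, ...): ")
print(ask(f"Here are my answers: {answers}. Grade me and explain anything I got wrong."))
```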
[00:34:51] Justin Grammens: Great advice. Great advice. Yeah, for sure. I love that. I love that idea of actually using the AI and having it question you on stuff. And you're right, books can take a little while. So that's why we do podcasts, right? This has been so fascinating, is people can publish stuff. There's literally, of course, you know, this podcast is one of about a billion that are on AI out there. So I encourage people to take a look, because there's always, always some awesome conversation about the greatest thing, um, by audio.
And I personally find audio a really, really great way, because I can just sort of have it on while I go about my day. So oftentimes I work out in the morning, so I just listen to podcasts pretty much, or audiobooks, but I've been able to ingest a lot more information just using audio than I would have if I had to read stuff.
[00:35:34] Pete Edstrom: Yeah. You know, that's, that's a great point. You know, if you think about all the different ways that you can like learn a thing. So whether it's, yeah, subscribing to some podcasts and listening to them or reading a book or, you know, chatting with the AI or even showing up in person to, you know, an event.
Finding ways to kind of work that into your regular flow, like the podcasts are a great example. Yeah, listen to them when you go out for a walk or a run or when you work out, right? Or like listen to them when you're making dinner. And if you can like get into that habit of just having those things going and helping you kind of step forward a little bit at a time.
I'm going to give you one more book here. Bonus book here for you. James Clear wrote Atomic Habits. You know, so when you think about like how you want to build a habit and learn a new thing, there's all sorts of behavioral science techniques that you can pull in to, you know, chain these things together.
You know, when I make dinner, I'm going to put on a podcast about AI. Right. And so then that starts to trigger your brain. Every time you start making dinner, you're like, Oh, you know what? I was going to do that thing. Yeah. Another great book for kind of helping you figure out how to do the things that you want to do.
[00:36:44] Justin Grammens: Yeah. And I love his stuff. He is amazing. Yeah. And in fact, you can actually buy his journal that he has that basically breaks it down, where you actually choose the things that you want to do, the habits that you want to change, and then you actually have a grid, it's like grid paper, and you just literally check, did I do that today or not, did I do that today or not, and it holds you accountable. Over time, you take a look at this, and this is a part of my morning ritual: basically, what are the things that I'm going to get done today, in a small number?
Here's the three things that are the top things, but it actually forces me to go back and say, look, I actually, I'm not checking these boxes. So yeah, if you go to jamesclear.com, you can order his little journal, which is fascinating. It's really, really good. It's kind of an improved-upon, you know, technique I think that a lot of other authors have, but his is really, really good.
So I can't recommend that book enough.

[00:37:32] Pete Edstrom: This is fantastic. I think he's got one of the biggest newsletters in the country, too. Definitely worth subscribing there, because he's, he's got great advice, uh, just little quotes and things to help inspire you to do new things.
[00:37:45] Justin Grammens: For sure. Cool. Cool. Well, Pete, was there anything else that maybe we didn't cover that you want to talk about, that's kind of top of mind, that you think is important?
[00:37:54] Pete Edstrom: You know, I think my, my main advice to anybody listening here would be: get out and try it. I've been doing some prompt engineering classes at work, and my main message there is that it's not as hard as you think. You don't need to be an engineer to write a good prompt.
You can get in there, you can start, try it out. You're going to find that with a little bit of feedback back and forth, you're going to get exactly what you want out of it. Yeah. Just get out there and try it.
[00:38:21] Justin Grammens: Awesome, awesome. Love the advice, Pete. Well, thank you so much for taking the time to be on the program today.
It's been a great conversation, and I'm sure we'll see you around at all sorts of future Applied AI events and stuff. And yeah, keep, keep doing what you're doing, you're, you're sort of, you know, trying new things and experimenting, which I think makes you a great role model for other people. Oh, and actually, how should people get ahold of you?
Is it best to find you on LinkedIn or your website?
[00:38:44] Pete Edstrom: Yeah, LinkedIn is going to be good. And my email, I don't have a website, so my email would be good: Pete at Edstrom.net. If you find me on LinkedIn and you want to connect, just put in the message that you heard about me on this podcast. I, I tend to only accept invites that are coming from people that I know.
So, so if we've got a good connection there, by all means, let me know there and then we'll, we'll get connected.
[00:39:07] Justin Grammens: Awesome. Awesome. Cool. Well, great Pete. Thanks again. And I look forward to keeping in touch.
[00:39:14] AI Voice: You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization.
You can visit us at AppliedAI.MN to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at AppliedAI.MN if you are interested in participating in a future episode. Thank you for listening.