Conversations on Applied AI

Jim Wilt - Using AI to Make a Difference in How You Work

April 09, 2024 Justin Grammens Season 4 Episode 7

The conversation this week is with Jim Wilt. Jim is a technology executive focused on innovations in aerospace, manufacturing, health, fintech, and retail, just to name a few. Today he focuses, though, on generative AI, augmented software, and platform modernization through engineering empowerment on cloud-native platforms leveraging automated code generation.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can put on future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

Articles & Books

Enjoy!
Your host,
Justin Grammens

[00:00:00] Jim Wilt: I think one of the things I try to discern when I'm working with organizations and engineers, especially, are those that are the get-rich-quick schemers, if you will, versus those that say, I want to make a difference in the way I work, help me with that. Those are the ones you want to invest in. And I think you can see that there's going to be an onslaught, a flurry of everybody's got a GPT to do anything, probably down to how can I best feed my dog and cats, you know, and things along those lines.

And the reality is: get into the technology yourself, build your own, don't worry about what others are building. I mean, if you want to, go ahead, fine, but it's easy enough to build your own, and to learn how to build them to give you better insights yourself.

[00:00:46] AI Speaker: Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning.

In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at AppliedAI.MN. Enjoy.

[00:01:17] Justin Grammens: Welcome everyone to the Conversations on Applied AI podcast.

Today we're talking with Jim Wilt. Jim is a technology executive focused on innovations in aerospace, manufacturing, health, fintech, and retail, just to name a few. Today he focuses, though, on generative AI, augmented software, and platform modernization through engineering empowerment on cloud native platforms leveraging automated code generation.

With all that being said, though, I think this quote from your bio, Jim, actually sums it up really well. You say, what I find most rewarding is when I empower others to achieve goals beyond their own expectations. I think that's awesome. I love that perspective and one that rings true with me as well. So thank you, Jim, for being on the program today.

[00:01:54] Jim Wilt: Yeah, great. I'm kind of excited about it. I've had the opportunity to interact with you two weeks in a row now of the live sessions that you've been putting on. So they've been really

[00:02:03] Justin Grammens: quite exciting. Good, good, good. Cool. Yeah. And hopefully you'll come back to attend more sessions in the future. Sure. So, yeah, I gave a quick sort of brief synopsis, I guess, on maybe like what you're doing today and what you're sort of focused on.

But one of the things that I'd really like to ask people is, you know, kind of how did you get here? You know, maybe you could enlighten us a little bit on, on the trajectory of your career and then kind of what made artificial intelligence kind of light you up and kind of, you know, feel like this is the next, the next thing that you should be investing time and energy into.

[00:02:30] Jim Wilt: That is a great question. It does not really get asked very often, and I appreciate you bringing that in, Justin. When I look at the trajectory that I've had, you know, right out of college, I have a degree in physics. I actually left Michigan and moved to California to write operating systems for Burroughs.

And in that process, I learned so much about writing code, serious code writing, in depth, and the criticality of it, the many aspects. I just was totally entrenched in it and engrossed in it. And that was among my peers in the group that I was in; I was in the data communications module of the operating system.

I found that the interactions of engineers amongst themselves can create some of the most creative and innovative incubation pods in the world. And so there were many things that we would talk about, joke about, bring up in conversation, just, you know, what's going on in the industry. We by no means expected that we knew everything, writing an operating system.

We were not the experts in the industry, because the industry moves too fast. But just as an example of where, I'm going to say, the creative juices started: we were talking about ACK/NAK protocols and things along those lines. One of my colleagues said something about a piggyback ACK. And I go, well, what? And he just said, a piggyback ACK.

This is where you ACK that you got it, but then you ask another question on the back of it. And it's faster. And I thought about that. Well, several years later, we were in a crisis where Burroughs stopped making the front-end processor that allowed terminals to connect to the mainframes that we built. And we needed a way of doing that.

And I was put on a crisis team. Six people, three in Pasadena, California, three in Pennsylvania, to build this intermediary unit that would allow terminals to connect to the mainframe. Because we couldn't sell a single mainframe without that. And we needed a protocol, and I took the piggyback ACK from that conversation and turned it into so much more.

And I invented a protocol that allowed not only piggyback ACKs, but also the concept of what we would call something like remote desktop communications today. This is all before any of this existed. This is before Unix existed, essentially. So it was one of those things where, you know, that conversation spurred on to something and ended up turning out to be a really cool thing.
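For readers who have not run into the term: a piggybacked acknowledgment attaches the ACK for the last frame you received to the next data frame you send, instead of spending a whole frame on the ACK alone. Here is a minimal sketch of the idea in Python; the class and field names are illustrative, not Burroughs' actual protocol:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    seq: int                   # sequence number of this frame's payload
    payload: str
    ack: Optional[int] = None  # piggybacked ACK: last seq received OK

class Endpoint:
    """Holds the outstanding ACK and rides it on the next data frame."""
    def __init__(self) -> None:
        self.next_seq = 0
        self.pending_ack: Optional[int] = None

    def receive(self, frame: Frame) -> None:
        # Remember what to acknowledge; send no separate ACK frame.
        self.pending_ack = frame.seq

    def send(self, payload: str) -> Frame:
        # Piggyback the outstanding ACK on this outgoing data frame.
        frame = Frame(seq=self.next_seq, payload=payload, ack=self.pending_ack)
        self.next_seq += 1
        self.pending_ack = None
        return frame

a, b = Endpoint(), Endpoint()
f1 = a.send("hello")    # seq 0, nothing to acknowledge yet
b.receive(f1)
f2 = b.send("hi back")  # seq 0 from b, carrying ack=0 for a's frame
```

A real protocol would also need timeouts and retransmission for the case where no reverse traffic shows up to carry the ACK; the sketch only shows where the ACK rides along.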

It's just a hallway conversation, or you're at your whiteboard and you're talking about it. That then grew into this product, B974 Load, and I've got an award on the wall now for it. It's kind of an achievement of excellence, their version of a lifetime achievement award, for inventing this protocol that created, I'm going to say, a solution to a very, very big crisis in the organization.

So that was the spark you talked about, and I moved into aerospace from operating systems. And I had the first PC in the company, because PCs were brand new. And then I integrated them, and I think by the time I left one portion of aerospace, there were 10,000 PCs and 15 local area networks that I had set up.

And then I was getting into composites, and building out composites. And everything that I did in building out composites, you know, the tape layup for the B-2 bomber, the fuselage for the IFA fighter, the 787 fuselage, all these things were building out technologies that were yet to be built. They had never been built before.

I remember we were testing the tape steering technology for the IFA fighter on a weekend, and the owner of the company, Edson Gaylord, came in to Ingersoll, and he wanted to know, do you know what this machine does? And we're like, oh yeah, we created it. But we didn't know if he was testing us, because, I mean, they had guards, and it was in a black situation where no one was allowed to know what we were doing.

We thought he was testing us. So my colleague Andy and I are like, no, we're just kind of testing some software here. And he got so excited. He just goes, let me tell you about it. And he went and told us about everything that we were doing, that we had invented. And it was so cool to see someone else's excitement about what you're working on.

So I'm going to say, from the very first operating system component that I worked on, through aerospace, through consulting, through working in large organizations like Microsoft, T-Mobile, I've had the blessed career of being on the bleeding edge of everything. And that's my comfort zone.

And I say that because I think it's a gift. It's something that I was drawn to, but it's something I've been able to participate in through everything in my career. It blows my mind, the things that I've been able to work on, only to find out that they become commonplace at some point. And I find that very exciting.

And I feel very rewarded in that part of my career where, you know, when cloud wasn't called cloud yet, you're building Azure for the first time; when local area networks didn't exist, you're putting in the very first ideas around them, only to find out that this thing called the internet would come into play.

And I remember I was writing mortgage software, which maybe is not the most exciting-sounding thing in the world, the irony of what I'm going to say. You know, it's seeing something for what it's going to be. I don't know that I have a crystal ball, but I get it right more than I get it wrong. And when the internet came out and HTML came out, I went to the director, because we were writing applications for PCs to do mortgages, and I said, I want to take an HTML class. And my director laughed at me.

He goes, why do you want to do that? I said, well, this internet thing is going to be the next platform we're going to want to be building on. He laughed and said, that's just a fad. That'll never go anywhere. Yeah. So I ended up paying for the class, taking it myself. And then the teacher of the class I was taking was starting up a company to build internet applications.

And he found out about us; there were three of us. I left the mortgage company and went to this consultancy, and it just was a life-changing career experience, because that was the new platform. As you can remember, when the internet was a year old.

[00:08:55] Justin Grammens: I must be old. Yes.

[00:08:56] Jim Wilt: I absolutely remember that. But the reality is when you see these things for what they are going to be, the impact they're going to have.

Definitely. That's where my sweet spot is. And so generative AI, it blindsided everybody so fast, you know. It's just amazing. I think that's the coolest thing, when a technology comes out and can do that; it's hard to do that today. It's just so, so, so exciting. I can't get over the excitement I had when ChatGPT 3.0 and then 3.5 Turbo came out. And nobody's got anything that can touch it. You know, nothing from Apple, nothing from Microsoft at the time. Google's thrashing to get something out there. A year later, guess what? Everybody's got something, and it all keeps getting better. Yeah. If somebody were to tell you, Justin, this generative AI is just a fad.

It's going to go away in a month. Would you believe that? No. Absolutely not. Exactly, exactly. And there are things that will follow. Matt in the group, who does the talk with David, and Michael, who has worked in quantum computing, his brain harbors so much knowledge. He's the person to watch, because quantum computing is going to take off at some point.

It's got a long runway, okay; it's not near term. It will be as big, if not bigger, than anything we're talking about right now. It's just going to take a lot longer for that albatross to get off the ground. But it's going to just blow things away. Generative AI is just a precursor to what computing will be in the future, right?

I get excited about that. I think you did too, based on the investments you're

[00:10:34] Justin Grammens: making. Yeah, you know, I think computing itself is just speeding up. You know, the new technologies that are coming out, the pace at which they're being multiplied upon is becoming more and more, you know, scrunched, I guess, for lack of a better term, right?

It's just not taking this linear approach where, like, a year from now, we're not going to be talking about, you know, the next GPT version. It's all going to already have leapfrogged, you know, exponentially in a lot of ways. So that to me is super exciting. And then, like you said, there's convergence of other things that are sort of happening at the same time.

And my background has been really around the internet of things. I mean, I was doing IoT, you know, hardware and sensors and stuff like that, some 15 years ago now, right? And before it was even really called IoT. And, you know, in some ways I'm a little bit cynical, because it feels like it was a fad.

Some companies, you know, sort of made the leap and actually have sensors out in the field and actually doing some very interesting stuff. But a lot of companies did a lot of proof of concepts and a lot of things, and I got wrapped up in a lot of those with these companies that would spend a little bit of money here and there and expect the world to change and they don't realize actually how hard it is to get into production.

That's another aspect of this. You can do a lot of really cool stuff on a Raspberry Pi, but how does that actually work its way into production? And I feel like AI is still in that phase. Definitely, in my opinion. Yes, there are, you know... I went to an event last night. It's INCOSE, the International Council on Systems Engineering.

And the guy was talking about, you know, I don't know how many billions of people, I want to say it was like 8 billion people or whatever, have used ChatGPT now. And obviously that's probably not unique people, but that's like the number of visits to their site, per se. And yes, so people know about it. I still feel like it's not going away, but I'm also a little bit skeptical with regards to, like, what is going to be the next phase of it, right?

Like, how are people going to be then productionizing it? How's the average Joe or Jane going to be impacted by this? And I see it. I see jobs being changed. I see kids education being changed. I see a lot of things that are going to be changed along the way, but I still feel like it feels a little bit fuzzy right now.

I don't know what you see if you have a more of a crystal clear view of where this will be in, you know, six to 12 months or so.

[00:12:40] Jim Wilt: It reminds me of the glass windows at the first dentist's office that I went to, where there were kind of these ribs in the glass, and you could see things a little bit in focus and everything else out of focus, a little bit in focus, out of focus, and it kind of waves like that. I think there are waves of clarity. Generative AI has currently got what I'd say is early-adoption success.

And I think there are unknowns; we're yet to learn what it's going to be able to address. In my own experience, I've seen several, several avenues where it's actually taken root and is doing extremely well. In one case, I have a colleague, you know, he and I were bantering about this over a year ago, and he actually started creating a business proposition out of it.

He's actually doing very well, writing about a hundred thousand lines of code in his company a day, using generative AI to make his engineers 50 percent more efficient. And the way that works is he uses multiple models. One model will read the code that exists today and then explain what that code is doing.

Often it's in what I would say a legacy language, a legacy platform. He'll then ask another model to take that explanation, look at this code, and write it in a modern platform, a modern language. So instead of lift-and-shift into the cloud, he's transformed that thinking into rewrite into the cloud, using generative AI as the crutch, if you will.

It's a tool, it's not a replacement. And I'm going to be very clear about that because the engineers that are doing this actually had to become smarter. Because not only can you ask the generative AI tools to write the code, you ask another tool to rewrite the code that the first tool wrote to clean it up.

And then you have another one that reads it back to you, and you look at the first explanation and the final explanation, and if they're not the same, you're in trouble. Then you need an engineer to look at the code and basically understand it's not going to do any harm. You see, the idea then is it elevates what the engineer's skill set is.

Because if the code generated is doing things that are using patterns, if you will, but in a way that the engineers never coded, the engineer is going to have to learn that coding structure to ensure it's the right way to do it. So they're elevating themselves as well. So what I would say is, it's a really great tool for modernizing your applications.
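The explain, rewrite, clean up, re-explain loop described above can be sketched as a small orchestration function. Here `ask(role, prompt)` is a stand-in for a call to whichever model plays each role; the role names, prompts, and the canned demo stub are all illustrative assumptions, not the colleague's actual system:

```python
def modernize(legacy_code: str, ask) -> dict:
    """Explain -> rewrite -> clean up -> re-explain -> compare.

    `ask(role, prompt)` should call an LLM; swap in a real API client.
    """
    # Model 1: explain what the legacy code does.
    explanation = ask("explainer", f"Explain what this code does:\n{legacy_code}")
    # Model 2: rewrite it for a modern platform, guided by that explanation.
    modern = ask("rewriter",
                 f"Given this intent:\n{explanation}\nrewrite this code "
                 f"in a modern language:\n{legacy_code}")
    # Model 3: clean up what the second model wrote.
    cleaned = ask("cleaner", f"Clean up this code:\n{modern}")
    # Model 1 again: explain the final code, then compare explanations.
    final_explanation = ask("explainer", f"Explain what this code does:\n{cleaned}")
    return {
        "code": cleaned,
        # If the explanations diverge, a human engineer must review.
        "needs_review": explanation.strip() != final_explanation.strip(),
    }

# Demo with a canned stub standing in for real model calls:
canned = {"explainer": "sums the numbers in a list",
          "rewriter": "total = sum(xs)",
          "cleaner": "total = sum(xs)"}
result = modernize("10 S=0: FOR I=1 TO N: S=S+X(I): NEXT I",
                   lambda role, prompt: canned[role])
```

With real models, the two `explainer` calls would of course produce different strings even for semantically identical code, so in practice the comparison step is itself usually delegated to a model ("do these two explanations describe the same behavior?") or to the reviewing engineer.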

That's one application that's getting some traction, and I can't see myself writing anything at all without it now. You know, you've got your third-party products that will scan the code for vulnerabilities, scan the code for best practices, etc. I don't want to name the names of the products, but you know where they are out there.

The key then is, can you use ChatGPT tools to find out the common coding mistakes? I have posted several articles in Architecture and Governance magazine that allow others to see what I've learned in the process of my learning. And this whole coding thing all started when I shared some code with the guy that started this.

I said, I wrote some Python code, I put an error in it, can you find it? And it had to do with one of these organizations that was considering bringing me on as a technical fellow. And they said, we want to know if you know how to code. And I said, so I'd take a coding test? They go, yeah. I'm cool with that.

I don't have any problem with that. I said, and you will too. And they go, what? So I gave them a link and I said, it's on GitHub. There's an error in this code. Can you find it? It's only 69 lines. You should be able to find it. I want to know that you can read code if you're going to ask me if I can write code.

And they kind of laughed, and they could not find the error. Really? Interesting. Yeah. And then my colleague, who's literally a genius at coding, he couldn't find the error either. And he was smart. He goes, I'm going to run it through ChatGPT-4, and I'm going to assume it'll tell me what the error is. And it did.

It told him it was a multithreading error. It was a common shared variable that was not being locked. And guess what? It fixed it. It actually gave a suggestion for it. Yeah. That got us charged up. And then he starts asking it questions. It said, you code Python like you code C#. It's pretty bad. It's pretty ugly. Do you mind if I ask it to clean it up?

I go, no, ask it. He did, and it made it really great. Do I understand what it's doing at this point? No, because I'm not that good a Python developer. But it really did clean up the code, made it more efficient, faster. And you could ask it the same question, what does this code do, as with the original code, and it got the exact same explanation back.
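The bug Jim describes, a shared variable updated without a lock, is easy to reproduce. Below is a minimal sketch in Python (not Jim's actual 69-line GitHub example, which isn't reproduced here): the unsafe version's read-modify-write can lose updates under concurrency, and the fix is exactly what the model suggested, guarding the shared variable with a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def increment_unsafe(n: int) -> None:
    global counter
    for _ in range(n):
        counter += 1      # read-modify-write: not atomic, updates can be lost

def increment_safe(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:        # the fix: lock the common shared variable
            counter += 1

threads = [threading.Thread(target=increment_safe, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, counter ends at exactly 4 * 100_000; substitute
# increment_unsafe and the total can come up short.
```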

So I'm saying, generative AI for an engineer, a software engineer, is a must-have tool, whether you call it a copilot or a companion. It's an augmentation. It's not a replacement. It's an augmentation. It's no different than the accountant that's got that quirky calculator today, still, I don't know why, next to their PC, with paper tape on it.

I don't know why, but they have them. Yes. Yes. It's a tool.

[00:17:25] Justin Grammens: A hundred percent, I believe that. You know, I've started playing around with this over the past six to nine months or so. I installed Copilot, and I was kind of skeptical to begin with. And like you said, name whatever tool you want, there's a ton of them that are out there. But I was like, yeah, you know, let me just sort of see what this thing does.

And it's one of those things where you just kind of maybe just start playing around with it. Hey, generate me a function that does X, you know. But now what I found is, is it's really good at documentation, like it kind of sees what I was writing and it will kind of self document along the way and it will suggest things that I would say in my comments, in my code comments for that, which I think is, you know, fabulous.

And then you can, like you said, clean it up, right? So. I'm a Java programmer by trade, right? So like I'm very much into statically typed languages and defining all your variables up front and stuff like that. And you're right. A lot of these other dynamic languages like Python or Ruby, they're just a little bit more free flowing in the code.

You can tell I'm a Java programmer, but I'm hacking Ruby, for example, or something. So you can definitely do some really interesting code modifications and ask it, hey, how else would I do this loop, right? Is there a more efficient way to do this? Right? Because I have my crutches, I have my certain ways that I always sort of do various, you know, operations, and it just gives me other ideas, and I can accept them or not, right?

That multi threading thing was really interesting. I had to take a quick look cause there's that dining philosophers problem, right? Where it's like, you know, you're sitting next to somebody and you can only eat and it kind of goes around the table and you end up in this deadlock situation. And those are things that, you know, most beginning programmers don't have any awareness of, right?
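For reference, the dining-philosophers deadlock Justin mentions has a classic fix: impose a global order on the forks and always pick up the lower-numbered one first, so no cycle of each-holding-one-and-waiting can form. A small Python sketch (the philosopher and meal counts are arbitrary):

```python
import threading

NUM = 5
forks = [threading.Lock() for _ in range(NUM)]
finished = []

def dine(i: int, meals: int) -> None:
    # Acquire forks in a fixed global order to rule out circular wait.
    first, second = sorted((i, (i + 1) % NUM))
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                pass  # "eat"
    finished.append(i)

philosophers = [threading.Thread(target=dine, args=(i, 1000))
                for i in range(NUM)]
for p in philosophers:
    p.start()
for p in philosophers:
    p.join()
# All five threads complete; the naive "left fork, then right fork"
# scheme can instead deadlock with every philosopher holding one fork.
```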

And I view this tool as you kind of have somebody who is pretty advanced that can sit next to a junior developer, right? I have some, some devs on my team that are like, well, I want to learn it the hard way. Like, I want to like understand all of the code. And I'm like, you know, I get that side of the argument.

Wouldn't it be great if you had somebody with you explaining why, that you could ask questions of? So that's the thing that I think is just so beautiful about this: you can actually ask it questions as to why something is working a certain way or not, and actually get a response. Right. And you can do that 24/7/365. You know, it can be one o'clock in the morning and you can be, you know, doing this.

You want somebody to do a code review for you? I just, yeah, again, it's completely changed the way that I view software engineering. When it comes to code generation, and even when it comes to design, frankly, I'm working on a presentation right now that I'm, I want to kind of walk everybody through everything from UML, like, right, so it won't do UML diagrams, but it'll tell you exactly how to do it.

It'll say make a box. You know, draw one to many relationship this way, and then you can say, well, but I want it to be this, and then I'll say, okay, move the box, right, you know, draw this intermediate, you know, object, and it'll tell you in words what it needs to do. And we're not that far away from it actually being able to draw it, you know, but I think it's a super powerful tool.

I mean, again, you're preaching to the choir here, but there's all sorts of new, different use cases, but then to back it up, you know, when I was talking about, you know, 10 minutes ago, these use cases we're talking about. I'm just wondering how much of the general public sees these, right? How much of this stuff is going to be under the surface that we're going to view it as technologists or doctors, you know what I mean?

Or lawyers or all these types of stuff. But like, how much is the general Joe or Jane who's just out doing their business? How much are they going to be impacted by this?

[00:20:29] Jim Wilt: Yeah, that's a good question. The medical industry, I think, is another use case where you're going to see a lot of traction, but I think there's more hype about it right now because it makes the headlines.

Generative AI diagnoses better than doctors 75 percent of the time, or whatever it says; I don't know what it is. The first time that I had exposure to something like this was in the 90s. Dr. Joel Robertson, the country's foremost brain chemist, was building an application called Next Opinion, NX Opinion, and it was something where he wanted to do diagnosis for patients that can't represent themselves.

And it could be due to them being knocked out, it could be due to whatever; mostly it's an emergency room diagnosis. It had to do with a friend of his whose son died in the emergency room, and it was considered a drug overdose, and he knew it was not. So he did a second autopsy and found out that it was a rare condition.

And the parents said, how do we prevent this from ever happening again? And that's where Next Opinion was born. And it's probably not a current application any longer, but it was a brilliant application, because it could be localized to various countries. And it was free. That was the other thing: when he built it, he said, it cannot be sold.

It has to be free for hospitals to use. Well, that limited it to only teaching hospitals in the United States. It used Bayesian search logic and Bayesian engines, and it used cognitive intelligence as opposed to artificial intelligence. But it was able to augment, if you will: looking at symptoms, creating what I would say is a list of options that the doctor could consider, highly prioritized as probable solutions, as opposed to, you know, the doctor relying on himself.
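The kind of Bayesian ranking Jim describes can be illustrated with a toy posterior calculation. Everything below, the conditions, symptoms, priors, and likelihoods, is invented for illustration and has nothing to do with NX Opinion's real knowledge base:

```python
# Hypothetical priors P(condition) and likelihoods P(symptom | condition).
priors = {"flu": 0.10, "overdose": 0.05, "rare_condition": 0.001}
likelihood = {
    "flu":            {"fever": 0.9, "unconscious": 0.05},
    "overdose":       {"fever": 0.2, "unconscious": 0.8},
    "rare_condition": {"fever": 0.7, "unconscious": 0.9},
}

def rank_conditions(symptoms):
    """Rank conditions by posterior via Bayes' rule, up to a constant:
    P(condition | symptoms) is proportional to
    P(condition) times the product of P(symptom | condition)."""
    scores = {}
    for cond, prior in priors.items():
        p = prior
        for s in symptoms:
            p *= likelihood[cond].get(s, 0.01)  # small default for unknowns
        scores[cond] = p
    total = sum(scores.values())  # normalize so posteriors sum to 1
    return sorted(((c, p / total) for c, p in scores.items()),
                  key=lambda cp: cp[1], reverse=True)

ranking = rank_conditions(["fever", "unconscious"])
```

The point, as with NX Opinion, is that the output is a prioritized list of options for the doctor to weigh, not a single verdict.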

Your programmers that want to learn it the hard way themselves, I respect that. But when you're a doctor and a life is on the line in that situation, I think they're willing to take any kind of help they can get. So it kind of started with that. And then I've been working with a colleague for two years. And he's built out two traditional AI solutions for some of the largest health organizations in our country that are looking at patient records, paper as well as electronic.

And he built out what I would say is one of the greatest, most accurate predictors of patient diagnosis. And at this point in time it is, at best, 85 percent accurate, and that's the best you can get it. And it was never to replace the doctor or replace the nurse. It was to take hundreds of doctors that have to look at thousands of cases a day.

And allow them to make better predictions as to what to do about the cases, for insurance purposes, to fund it or not fund it and what have you. And we can put links to all of this at the end. He wrote an article, Harnessing Generative AI in Enterprises: The Dual Pillars of Knowledge and Prompt Engineering.

And his focus was on something called knowledge engineering. You've heard of knowledge management. Knowledge engineering is taking, what I'm going to say, the knowledge of people in a profession like the health profession, not just the data that they're working with, but the things that they know between their ears. How do you capture that?

And how do you apply that to decision making and better decision making? And I think one of the things that he identified was that's the most difficult place, because you can't, you can't harvest information out of a person's brain. Where a nurse typically knows a patient better than the doctor because they spend more time, they know the family history, and they take only so many notes.

And then, my daughter-in-law is a nurse, and she goes, and we know so much more than even that. And so that's why the best he can get is 85 percent accuracy. He doesn't expect it ever to improve, because there's so much knowledge in people's heads. But it isn't about replacing people. It never has been about replacing people.

It's about taking people that have this knowledge and helping them access it faster, more accurately, and share it with others, when you think about that. So I think the health profession is another use case that really has, what I would say, a real benefit from this new technology, but it's just scratching the surface.

And it probably is going to have to advance significantly more before it's going to take another leap. You know, there's a lot more that will have to happen. And there's another one I wanted to talk about that I just thought I never would have come up with. I was working with an organization on the West Coast, a large retail organization, larger than any that we have here.

And they wanted to operate more efficiently in their stores, essentially. How do we allow our stores to operate more efficiently? What they found is, again, it's the situation where the general managers, the smarts between their ears, know so much more than software can do. And you can tell by their sales, you know, they're nearly a 300 billion dollar organization.

You can tell by their sales, the general managers really know what's going on. So, you know, the question that first came out was, so, do you want us to create a replication of them? Or do you want us to replicate what they do? Oh, no, no, no, we're never going to replace them. They're brilliant. How do we make their lives easier?

How do we get information in front of them so that they can make faster, more accurate decisions? I mean, they're making great decisions as it is. This company has never had a bad year. The question is, how do we help them help themselves? And I said, wow, we'll take that challenge on. And we did build a prototype, a semi-functional prototype.

That was really cool, because we used generative AI to look at the factors both inside and outside the organization: the supply chain, the weather around the supply chain, and the geopolitical climate around the supply chain, to identify when anything in the environment is going to have an impact on their ability to deliver goods.

And I thought that was kind of cool, if we could actually start melding some of these things together, internal factors, external factors. I worked in an organization once where we were talking about a customer 360 view. And I said, I think we need a customer 720 view. And the question was, can you explain that?

And this is, again, early AI stages, when I was doing AI, and it was machine learning, it wasn't generative AI. I said, the information we can get is the 360: everything we know about the customer. We know that we can get that. We can put it in a data lake, a data lakehouse, we can put it wherever it needs to be. But there's a wealth of information about our customer.

That we can also get. Like, if they become Facebook friends with us, or if they become part of, you know, some sort of premium program that we have, we can get so much more information about them. But we don't host that data; someone else does. That's the 720: the other information about them that we can use.

That was a great idea until some of these organizations realized we were doing that and they shut it down. Oh, sure.

[00:27:25] Justin Grammens: Well, the data, the data is the new gold, right? Or the new oil, or whatever they say. Right. And I was just on the podcast talking to somebody the other day about that, in fact.

People have asked me at this most recent event that I was at, you know, up there, it was a Q&A session about AI, and somebody was like, well, who should I invest in? Like, what are the companies to invest in for artificial intelligence? You know, and I'm like, well, there's obviously the people that are at the chipset level, you know, NVIDIA and AMD and those people, because, obviously, there's computation. But I said, in general, follow the data, right?

The companies that have the data are the ones that are going to be the most important. And of course there's the big names, you know, like the Googles and the Facebooks and so on. But there's also a lot of other smaller companies. So I'm consulting with a company right now, I'm on their advisory board, and they're actually getting biometric information around stress levels.

Right. And so they've captured over 5 million heartbeats of people that are first responders — EMTs, police officers, firefighters, people like that. And they have a lot of data around how these people's heartbeats and biometrics behave as they're going into these situations.

And they're just a little startup right now. Of course, you've got to turn that into something actionable. But there are tons of startups out there that are getting all sorts of information. And I said, you know, in general, find the companies that have the data, because that's kind of a precursor to all this stuff, right?

You kind of need that first.

[00:28:46] Jim Wilt: I think you bring up an interesting point. There is no one organization that will lead in this situation. You know, it took us years — we had many leaders in cloud at some point, but ultimately most organizations today are multi-cloud. There are only a handful of organizations I know of that are single cloud — all-in on Microsoft, say.

So I think generative AI is going to follow a similar suit, but out of necessity. We know there's bias, and we know there are hallucinations, right? And so you need to realize that certain models are going to have certain biases, and certain models are going to create hallucinations. There's a third one that I discovered recently.

When I say I discovered it — it's always been there, I've just never had a name for it. It's called blind spots. There are blind spots in these models as well. I'll give you an example. I won't mention names, but one of the providers of these models builds other software too; it's not the only thing that they do.

And I asked their model: tell me about their product ABC. And the response was — I'll read it back to you, it's hilarious — "I apologize, but I'm unfamiliar with product ABC. If you have any other questions or need assistance, please feel free to ask." Okay. So I'm asking their tool about their own product.

And it's a blind spot. Yeah, sure. So I added one word, the word "project": tell me about Project ABC. Oh, it had all the information I needed — pages of information. So these blind spots are out there. Then I asked competing generative AI tools about the same company's product ABC.

Two of them told me about it: "We think you mean Project ABC" — and that's okay, perfect, you corrected me, thank you. But then one of them went into hallucination mode and basically said, "It's a robot, and I'm a robot too." And I was like, where did that come from? Right, right, right. So you're not going to win by investing in one tool.

You're going to win by understanding when to use which tool, and by recognizing which tool is applicable for the type of answers it can provide. And you learn that over usage and over time. So my question is: who's capturing every question they're asking, and then getting a rating on the answer they're getting, so they can understand when to use which tool? One of our organizations, local here in the Minneapolis area, is doing a fantastic job of that.

Brilliant. They have a product out already, in the generative AI space, of their own, that they give to their customers in the legal profession. They're also working on other products, and in conversations with their CTO, they found that certain products cost more than other products for certain types of questions.

And you need to be cost conscious, because you can go bankrupt using these tools if you're not careful. The other thing is to know that a less costly tool is okay for a less important question. So they are now guiding, if you will, the questions toward the most cost-effective option, the best bang for the buck — meaning I don't need the gold answer, I can take the bronze answer

if it's an ancillary kind of thing. Now that's an art when you think about it. And so your engineers now are becoming engineers of the arts in addition to engineers of the technology.
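The cost-aware routing Jim describes could be sketched roughly like this. The model names, quality tiers, and per-token prices below are entirely made up for illustration; a real router would also need per-question importance classification and actual vendor pricing.

```python
# Hypothetical sketch of cost-aware model routing: send each question to the
# cheapest model whose quality tier is good enough for that question.
# Model names and prices are invented for illustration only.

MODELS = [
    # (name, quality tier, cost per 1K tokens in dollars)
    ("small-fast-model", "bronze", 0.0005),
    ("mid-tier-model",   "silver", 0.003),
    ("flagship-model",   "gold",   0.03),
]

TIER_RANK = {"bronze": 0, "silver": 1, "gold": 2}

def route(required_tier: str) -> str:
    """Return the cheapest model that meets the required quality tier."""
    candidates = [
        (cost, name)
        for name, tier, cost in MODELS
        if TIER_RANK[tier] >= TIER_RANK[required_tier]
    ]
    return min(candidates)[1]  # cheapest qualifying model

# An ancillary question can take the bronze answer; a critical one needs gold.
print(route("bronze"))  # small-fast-model
print(route("gold"))    # flagship-model
```

Logging each (question, model, answer, rating) tuple, as Jim suggests, is what lets you tune the tier assignments over time.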

[00:32:20] Justin Grammens: Well, you kind of hit a little bit on this prompt engineering thing too, right? I mean, that one word kind of changed the whole context with the large language model.

Yes. Exactly. Yeah. And again, I don't know what to think about that. People say, oh, prompt engineering, that's the next hot job — earn up to a million dollars a year being a prompt engineer. I don't know if I've seen that play out, but it certainly is a good skill to have.

And like you said, it's one of those things where you just have to experience it and try it in a lot of different ways in order to learn how to interface with these large language models.

[00:32:53] Jim Wilt: I think it's part of, I'm going to say, the next evolution of programming languages: understanding how to put prompts together for the outcomes you're trying to get.

Whether it's to make your code better, to diagnose something, or to predict a way of operating more efficiently, you need to know the prompts to put together. And you also need to understand these copilots, these GPTs that you can create. I think there are 3 million GPTs in the marketplace now. You mentioned about 8 billion users — it reminds me of the days when Visual Basic became popular and everybody was a developer.

Yes. A lot of programs came out, and some of them were... different. So the reality is there's going to be an onslaught. I think one of the things I try to discern when I'm working with organizations and engineers, especially, is who are the get-rich-quick schemers, if you will, versus who are the ones saying: I want to make a difference in the way I work — help me with that.

Those are the ones you want to invest in. And I think you can see there's going to be an onslaught, a flurry, of "everybody's got a GPT to do anything" — probably down to "how can I best feed my dog and cat," you know, and things along those lines. And the reality is: get into the technology yourself. Build your own.

Don't worry about what others are building. I mean, if you want to, go ahead, fine. But it's easy enough to build your own, and learning how to build them gives you better insights yourself. One of the things I did at CodeFreeze this year at the University of Minnesota was demonstrate this. So I built a GPT and a copilot.

Now, you're in Minneapolis, so you know who's in town, and you don't want to just, you know, build one for 3M or build one for Target, right? You want to be careful about this. So I built one for Krispy Kreme — how can you go wrong with donuts? What's great about being able to do these is you can take their SEC 10-K statements, or 40-F if you're in a different country —

which are the government filings. You can take their websites, you can do searches about them, and put all of that into these tools. The model becomes grounded — the technique is called RAG, retrieval-augmented generation. And then you've got a model that is yours. You can ask questions like: how's Krispy Kreme doing? Their profits are up, even though they're operating at a loss.

Because of COVID they had been operating at a loss, but it's less of a loss now and the trajectory is very positive. They're going to do very, very well. Great, that's cool. I can not only invest in Krispy Kreme, I can encourage all my friends to buy more Krispy Kreme donuts. Sure, sure.
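The grounding pattern Jim describes — retrieval-augmented generation — can be sketched in miniature like this. The retrieval here is naive word overlap over a two-document toy corpus (both documents are invented); a real RAG system would use embeddings, a vector store, and actual filings and web pages.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the passages
# most relevant to a question and prepend them to the prompt, so the model
# answers from your documents instead of its training data alone.

def retrieve(question: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Prepend the retrieved context to the question."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Invented stand-ins for a company's public filings and web pages.
docs = [
    "10-K filing: revenue grew 8% year over year despite an operating loss.",
    "Careers page: we are hiring bakers in twelve new markets.",
]
print(build_grounded_prompt("How did revenue change this year", docs))
```

The grounded prompt would then be sent to whatever model you're using; the point is that the answer is anchored to the retrieved text.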

[00:35:23] Justin Grammens: Yeah, you know, I think what's cool about a lot of this stuff is, you're right, it can start to make inferences.

You know, people talk — I post this weekly newsletter and just posted one this morning, actually, and there was an article that was out, still early research-paper-type stuff. But, you know, everybody sort of calls these things — and I have as well — stochastic parrots, right?

They're just predicting the next word, right? And I did a whole large language model talk at Open Source North last year, where I pretty much showed: okay, look at all this data, this is what's happening, and this is how it's building up language and starting to understand language a little bit.

So this research paper that they put out, though, is starting to say that since these models are so large — they don't know yet why or how, or if we can even comprehend it — they're actually showing signs of invention, right? Of actually taking a different path than what you would mathematically assume would be the next one.

And of course there are temperature settings you can play around with; you can throw randomness in there. There are a lot of different ways you can do it. But, you know, I think it remains to be seen how far we push these large language models — and after they start getting to sizes we just can't even comprehend, what's going to happen?

You talk about working on something that hasn't ever been done before. Maybe that's part of what we're talking about here is just the fascination that we just don't know where this is going to end. I

[00:36:45] Jim Wilt: think what's exciting about that in this part of kind of looping back to my origins is you're running full speed.

You're running full speed at something, and you don't know where it is, and you don't know where the brick wall is. You have a brick wall behind you that's beautiful, by the way — the light painting and things like that. You're running head first, and you're going to hit some brick walls at times. It hurts, but that's all part of, I'm going to say, being on the bleeding edge.

The leading edge is a little more predictable. The bleeding edge means blood is involved. And that's unfortunate, but you talked about, you know, context windows, temperature, top-k sampling, nucleus (top-p) sampling — all of these are parameters that allow us to tune things. I think what you're hitting on really well is — and by the way, if you take the parameters out of

the value ranges they're allowed, you can get gibberish to come out of these models. Just absolute gibberish. And so it's interesting: if you take the common thread of what it is doing, it's predicting the next word based on patterns. And I think one of the cool things is they're fed so much data about so many different things, you can actually get patterns to surface that you would not initially see for yourself.

And that gets to the shock, if you will, of what generative AI is doing to the industry right now. Let's just say — I'm pulling numbers out of the air — 50 patterns per human is all that we can really manage in our brains. I don't know the actual number; I'm just saying it's a number.

Easy math, right? And the model is looking at 50,000 patterns. The chances of it showing you a pattern that matches one of yours are actually very low, unless it's a pretty common one — so it's going to show you insights, if you will. That's what we call an insights platform out west: insights about what's going on,

based on the patterns it recognizes from the data it's been trained with — it sees through tokens — versus the insights you would come up with in your head by yourself. Now, it's not there to replace you. It's there to make you think about things differently. I've done a lot of really, really, really hard physics problems in college and in my career in aerospace.

And my wife has a degree in education. So in college, she would sit with me on the physics floor — because I couldn't leave, because I couldn't do the problems by myself. You had to work as a team to figure out these blasted physics problems. And you're at the whiteboard, thinking, sweat rolling down your forehead, and then a preschool teacher — an early-education person — asks you: can you just explain it to me?

What is it you're trying to solve? And you do. And explaining it to someone in a different profession, you realize: oh, I see what I was missing. And now my wife — my girlfriend at the time — with a degree in education, solved the physics problem. She had no idea about physics, but it's because she asked questions differently.

Now think about generative AI. It's essentially allowing us to do that. It's looking at problems differently than we normally would. Traditional AI looks at them the way we would. That's the thing that excites me about generative AI: it isn't traditional machine learning. With machine learning, I teach it to make decisions like I make decisions, from the data that I give it, to repeat decisions I've made in the past.

Okay, generative AI does not work that way. Nor do generative AI and your data scientists work the way your data scientists and traditional AI do. You have to ask: what is it going to prompt me back with? And I love the prompt where you ask: I need to make a decision about where to eat dinner tonight — ask me five questions.

Ask me five questions. Right. Yep. Yeah. Yeah. And it gives you five questions. What are you hungry for? What kind of foods do you like? Blah, blah, blah, blah, blah, blah. You answer them. And it gives you some insights about, you know, that famous Scottish steakhouse, McDonald's.

[00:40:47] Justin Grammens: Yeah. Right. Well, there's even this idea of reverse prompting, right?

So you can ask it: what prompt would I give you if I wanted to get an answer such as X, right? I spoke at a conference last year on this, and the focus was on job descriptions. It was geared toward headhunters — people trying to write good job descriptions to find better talent.

I said, look, you kind of know the skill sets that need to be there — what you want to write with regard to the ideal candidate — but ask the model to tell you the prompt that would actually end up producing this ideal candidate. So it was a fun little twist, and a guy came up to me afterwards.

He's like, I'd never actually thought about it that way. I was always more or less just thinking forward: write me a prompt, or write me the candidate. I'm like, no — you have the candidate. Ask it to tell you the prompt, then take that prompt and ask it, right? And see what you end up getting.

And it got something that was different. Like you said, it generated something different than what they had originally intended, and it actually was better.
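The reverse-prompting trick described here amounts to wrapping the desired output in a meta-prompt that asks the model to write the prompt. A minimal sketch, with an invented example output; the second step (sending the meta-prompt to an actual model) is whatever LLM API you happen to use.

```python
# Sketch of reverse prompting: instead of asking the model for the output
# directly, show it the output you want and ask it to write the prompt that
# would produce it. Then run that generated prompt as step two.

def reverse_prompt(desired_output: str) -> str:
    """Build a meta-prompt asking the model to write the prompt."""
    return (
        "Here is the result I want:\n"
        f"---\n{desired_output}\n---\n"
        "What prompt should I give you to produce a result like this? "
        "Reply with only the prompt."
    )

# Invented example: you already have the ideal candidate description.
job_description = "Senior data engineer: Python, Spark, streaming pipelines."
meta = reverse_prompt(job_description)
print(meta)
# Step two (not runnable here): send `meta` to the model, take its answer,
# and submit that answer back as a fresh prompt.
```

The payoff, as in the conference anecdote, is that the model-written prompt often surfaces angles the human author would not have thought to ask for.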

[00:41:45] Jim Wilt: So, yeah, it's exactly right. And that's, that's the beauty of it is it gives us different facets of looking at the same thing.

[00:41:52] Justin Grammens: Yeah, for sure.

Well, we've been talking about some really, really good applications of AI, because that's really what I like to talk about. And, you know, you mentioned physics. I actually was a physics and a math major, and I view physics as actually applied math. And I ended up changing my major because I was always saying, I don't want to solve for X.

I don't really care what X is, right? What I care about is if I take a football and I throw it, With some velocity, how far is it going to go? Right. And so I actually ended up majoring in applied math. Actually, I was a lot more focused on applied math. I got a minor in physics, but you know, I love the applications of it.

Right. So just more or less, you know, solving problems for the sake of solving problems or working on neural nets. People can geek out on that, but I really love the applications.

[00:42:32] Jim Wilt: Think about what you said — remember back to your math problems, your physics problems. Generative AI has always been on our minds when you think about it, because when you were in college, didn't you go to the back of the book and get the answer?

'Cause it's always four. So, you know, then you work the problem backwards. You can't solve it forward, so you start at the answer and work backwards to see if you can get there. And sometimes you do solve it that way — then you can solve it forward. And I think generative AI is allowing us to look at things not just forward and backward, but sideways and diagonally and so on.

Which is not a bad thing when you think about it. No,

[00:43:07] Justin Grammens: no, not, not at all. Not at all. Well, this, this has been, this has been great. I think one of the other things that I do like to ask here before we're going to kind of wind down here in the next, next five minutes or so is if I was just coming out of school, right?

So what would you suggest that I look at? You know, what courses, what conferences, what books, I don't know. What other stuff are you, maybe would you suggest somebody

[00:43:25] Jim Wilt: explore? So generative AI is so new, and several people have books out there — David Espandol's got his book out, you know. I would say definitely you want that book, but there's going to be a lot in the industry

around knowledge and training, if you will. You have to discern between the training that teaches you how to use specific tools and the general knowledge that's out there. And one of the things I like is that on the ChatGPT OpenAI website, there are a lot of videos just about generative AI in general that are not selling their product or showing how to use their product.

They're getting into the mathematics and the way these systems work, and I think that's a good background for anybody. You don't need to be a mathematician to understand it — it helps, but you don't need to be. And then there are, I'm going to say, a lot of organizations spinning up ways to get people up to speed on these things.

I know that the Architecture and Governance magazine that I've been writing for has an AI article just about every week, and it's generally from someone around the world who's got a really good insight to share. So that's going to keep happening. I would love to see our universities

grasp onto some studies around this. I think initially it's going to be independent studies. So if you're in college right now, go to the chair of your department and say: I want to do an independent study on geology, on literature, whatever — but I want to leverage generative AI as part of this study. Will you allow me to do that, and will you back me in doing it? Getting some momentum behind that would be one of the things I would really, really want to see.

And the last thing I would say is, in organizations, don't wait for permission — go and play with the technology on your own. Don't use company data; there's a lot of public data out there. Go get Kaggle data, start training your models, start playing with generative AI, make tokens from the data they've got, find out what you can do with it.

You're only going to get better by playing with the technology. Don't go at it trying to solve a specific problem initially; go at it trying to solve anything. Just, you know, like I said: where am I going to go to dinner tonight? Those kinds of things — that's where you have fun with it.

That's where the real learning occurs, because you're going to get crap answers, and you're going to say, what did I do wrong? How do I get a better answer? And now you're diving into it. You're tweaking parameters. You know, you're going to learn about hyper parameters. You're going to learn about how to get it to think either more like you or more different than you, whatever it is you're after.

And now you are learning the technology. You're learning how to leverage the technology. You talked about prompt engineering — it is an art of its own. I don't know about paying anyone a million dollars for their prompting skills, but good prompting skills are going to become, I'm going to say, a baseline skill

[00:46:23] Justin Grammens: set for any job.

Right. Yeah. It is. It's going to become table stakes. People are just going to have to be able to ask the right questions, and to formulate them in such a way that some agent — you know, AI, whatever you call it; I like the word agent — is going to be able to interpret them.

Of course, that makes them more efficient. And at the end of the day, yeah, people that are using this are going to become more efficient in their jobs. So that's good. And the other one is Hugging Face, right? It's basically open-source models out there that anybody can pull down and run locally, but they even have a way for you to run them through the browser as well.

So yeah, really, really awesome that there's just stuff out there to do that. Well, good, Jim, this has been great. It's been a lot of fun and it's just gone by super fast, so I definitely want to have you back on the program again in the future. You say this stuff's moving so fast — I'll have you send me any links, and we also do have liner notes as well.

So as we go back and re-listen to the transcript of this, we'll pull out some stuff. But yeah, I definitely want to put that information in. Before I forget, how do people get ahold of you?

[00:47:20] Jim Wilt: So, uh, my LinkedIn connection is probably the best way to get ahold of me. Definitely, anybody that has interest in this — I want to help them out any way I can.

The pioneers are not going to come from my generation. They're going to come from the people coming up who are just genuinely curious about it. And I want to put a fire under those people.

[00:47:40] Justin Grammens: I love it. This is great. It's great. Yep. I think you and I have the same kindred spirit in a lot of ways through this stuff.

So. Well, great. Again, I appreciate you being on the program and sharing all your knowledge here with our community. I wish you the best, and we'll definitely keep in touch.

[00:47:54] AI Speaker: You've listened to another episode of the conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization.

You can visit us at AppliedAI.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at AppliedAI.mn if you are interested. Transcription by https://otter.ai