
Conversations on Applied AI
Welcome to the Conversations on Applied AI Podcast where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of Artificial Intelligence and Deep Learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
David Howard - Why AI Adoption is Harder Than AI Development
The conversation this week is with David Howard. David is a multidisciplinary data scientist and researcher with more than six years of industry experience and eight years in academia. He's been focused on pioneering advancements in generative AI and driving decision-making through insightful analytics, predictive modeling, and advanced data interpretation.
As a Ph.D. and former professor in mathematics with research in computer algorithms, he specializes in harnessing complex mathematical theories and applying them in practical ways to solve real-world problems. Makes a lot of sense being on this podcast because we really love applications. He is currently leading technical advancements at CoxAuto with a robust team of 45 data scientists, where his current role involves educating and mentoring others on the utilization of large language models and generative AI.
Always an educator, like myself, he hosts a bi-weekly seminar called Mundane to Masterful, educating more than 100 people as he demonstrates generative AI and its transformative potential across various industries.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future Emerging Technologies North non-profit events!
Resources and Topics Mentioned in this Episode
- CoxAuto
- Pythagorean Triples
- FleetNet America
- Ready Logistics
- Gartner hype cycle
- Bayesian statistics
- MTEB Leaderboard
- Retrieval-Augmented Generation (RAG)
Enjoy!
Your host,
Justin Grammens
[00:00:00] David Howard: In my opinion, the biggest problem is sort of change management, right? You can build the best model ever, but convincing someone to use it, in whatever venue that looks like, I find is incredibly difficult. And it also leads me, therefore, to say to anybody that's developing in this community, whatever part of AI you're part of, whether you're in data science or you're part of putting models in production: become a better communicator. Convincing others to use your work is a humongous skill.
[00:00:31] AI Announcer: Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today.
[00:01:02] Justin Grammens: Welcome everyone to the Conversations on Applied AI podcast. I'm your host, Justin Grammens, and our guest today is David Howard. David is a multidisciplinary data scientist and researcher with more than six years of industry experience and eight years in academia. He's been focused on pioneering advancements in generative AI and driving decision making through insightful analytics, predictive modeling, and advanced data interpretation.
As a Ph.D. and former professor in mathematics with research in computer algorithms, he specializes in harnessing complex mathematical theories and applying them in practical ways to solve real-world problems. Makes a lot of sense being on this podcast because we really love applications. He is currently leading technical advancements at CoxAuto with a robust team of 45 data scientists, where his current role involves educating and mentoring others on the utilization of large language models and generative AI.
Always an educator, like myself, he hosts a bi-weekly seminar called Mundane to Masterful, educating more than 100 people as he demonstrates generative AI and its transformative potential across various industries. So thank you, David, so much for being on the Conversations on Applied AI podcast today.
[00:02:04] David Howard: Thank you so much. I think you did a better job than I could have myself.
[00:02:08] Justin Grammens: I've done this a couple of times, for sure. Well, awesome. So I told the audience here a little bit about where you're at today, but I'm always curious, you know, what was your trajectory and what was your path, the steps along the way toward getting where you're at today?
[00:02:24] David Howard: Yeah, I mean, in general, I would say when you make a prediction or a forecast, even about yourself, there's going to be some inaccuracy, and that's all okay. You know, when I joined academia, I thought that was my life for the rest of my life. And honestly, I love mathematics, right? I was in combinatorics and graph theory, thinking about mathematics problems that are often esoteric and not necessarily the direction of this podcast, but I loved working on those types of problems.
And then, essentially around 2017, 2018, we realized as a family that we were going to have to move to Atlanta, and that basically necessitated a change, right? If you're in academia, unless you are one of the Andrew Ngs or Geoffrey Hintons of this world, you don't necessarily get the choice of location.
So I knew I had to make a career switch, and given my background in computer science from undergrad, and that even within research I would occasionally use computer algorithms, it made for a natural jump. So I started learning in 2017 and haven't stopped since. And the thinking in data science is very similar, I would argue, to mathematics.
You're still working on mathematical problems; it's just that your tooling is a little different depending on what you want to do, right? You've got visualization tools, you've got algorithm tools, you can bring in knowledge from a particular subject matter space, or become a better Python coder.
Apologies to the R users; I'm not the greatest R coder. I can read the code. But learning better memory optimization, learning, you know, there's a totally infinite supply. As a data scientist, you wear many hats, no matter what you're working on, and so your knowledge can suddenly branch off into all these different areas.
It's super important. I could spend my entire life just learning and not talking to anybody, and it could go on forever.
[00:04:08] Justin Grammens: So, yeah, as a professor of mathematics, you're kind of in this one lane, right? There's just the one thing you're going to be teaching, and this whole data science world allows you a much wider variety
[00:04:18] David Howard: of tools, I would say a wider variety. But there are plenty of opportunities for teaching too. I taught various subjects. If you want to talk about number theory right now, I am ready to give a lesson or two. In fact, it's my go-to sometimes for testing out generative AI.
I like using primitive Pythagorean triples, which you may never have heard of, but you've heard of the Pythagorean theorem and you've heard of 3-4-5 triangles in all likelihood. The whole mathematics behind generating these is a topic that can be covered in intro number theory.
But if you ask these various LLMs, up until GPT-4o they weren't good. GPT-4o, though, does a fantastic job; in the mathematics space, I'm very impressed with that.
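(For anyone who wants to try David's go-to test themselves: every primitive Pythagorean triple comes from Euclid's classical formula, a = m² − n², b = 2mn, c = m² + n², for coprime m > n > 0 with m − n odd. Here is a minimal Python sketch; the function name and search limit are our own illustrative choices, not David's actual prompt.)

```python
from math import gcd

def primitive_triples(limit):
    """Yield primitive Pythagorean triples (a, b, c) with c <= limit,
    via Euclid's formula: a = m^2 - n^2, b = 2mn, c = m^2 + n^2,
    for m > n > 0 with gcd(m, n) == 1 and m - n odd."""
    m = 2
    while m * m + 1 <= limit:  # smallest c for this m is m^2 + 1
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    yield (a, b, c)
        m += 1

for triple in primitive_triples(50):
    print(triple)  # (3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25), ...
```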
[00:05:04] Justin Grammens: Yeah, my undergrad was in applied math and physics, so it's been a long time, but I did love the applications of math.
And so, to your point, I didn't find classical math interesting, like "let's solve for x." I wanted to solve for x because I wanted to figure out how far I threw the baseball, or how far the baseball is going to go based on this velocity, right? But, you know, you're right. That's what got me into software engineering, and I got a master's in software.
And I actually teach at a local university here, in the master's program, so I've got my foot sort of in both worlds. But I've been thinking about the traditional path, trying to get a PhD, and I've talked to the people that have gotten them. It seems like a unique type of thing to do.
It's a lot of work, obviously. I think you probably do it more because it's a passion you want to pursue.
[00:05:49] David Howard: Yes, 100 percent. Do it because it's a passion. If you want to earn, quote, money, if that's your goal, I would say there are faster paths. But in terms of enjoyment and fulfillment, yeah, I loved doing my PhD.
[00:06:03] Justin Grammens: Yeah. Well, good. So after you left academia, right, what was the first company you went to?
[00:06:08] David Howard: Delta Air Lines. I love that company. Just before the podcast, we were talking about how I was up at three o'clock this morning because I still have standby options even after leaving the company. It was a great company to work for.
I really enjoyed building ML models there. I worked with a fantastic team. It was great. I really loved my years at Delta. Nice. And now Cox Auto, which I actually love even more in my current role.
[00:06:32] Justin Grammens: Well, tell us a little bit about that. What is Cox Auto, and what do you do with all these data scientists there?
[00:06:37] David Howard: So, a quick thing on Cox Auto: I'm sure a large percentage of people have not heard of Cox Automotive, but it is actually the biggest player in the automotive space. You probably have heard of companies like Autotrader, Kelley Blue Book, maybe Manheim. These are companies that Cox Automotive owns.
They're based largely in the infrastructure around cars, so they provide dealers with services, they handle transportation, they work in service and repair, all over the place, and stuff with EV batteries now. So lots of different things that they work on. And the company has a sort of internal motto, never published anywhere, that when we hire, we worry about all the job description stuff, but we also don't hire jerks.
I think it really shows in the company. I love everybody that I work with. I mean, we have arguments, don't get me wrong, but it's a fantastic workplace to be in. I work with, like you talked about at the beginning, 45 or 50 data scientists. I also work across product, I talk with senior leadership, and my role is kind of the most senior technical person in the data science community.
So I spend a lot of time, as you talked about, educating others, whether that's mentorship to junior colleagues, or collaborating with other colleagues inside data science and also software engineering and model ops, learning from each other. I'll advise leadership, so I'm in lots of different meetings saying, okay, we have this new product use case; from a technical standpoint, what are the benefits of using certain algorithms?
How difficult is it, what's the potential lift from a technical standpoint? And my boss will also be in those meetings, and she is the most amazing boss, very knowledgeable inside of AI/ML, and she has more of the complete business mindset in that regard: okay, what resources do we have, how well do we think it will be absorbed by various parts of the company?
Or if we're doing something external to dealers, things of that nature. And we work amazingly in tandem. So I'm constantly learning, and I get complete autonomy from her. I meet with her once a week and I'll say, I'm going to work on this today because I think it's useful, and she goes, that's awesome.
Also, these things are coming down the pipe if you have the bandwidth. It's a fantastic partnership we have.
[00:08:40] Justin Grammens: Awesome. That's great to hear. That can be hard to find sometimes; I've worked a lot of different jobs, so you're very lucky to be in that environment now. I mean, so Cox Auto, how long have they been around?
[00:08:52] David Howard: So Cox Automotive specifically, I'm going to get this wrong, and I should know this, but I think it was the Manheim acquisition. Oh man, is it 1980, maybe '75? If I'm off by like 50 years, apologies.
[00:09:09] Justin Grammens: No, I mean, I think the crux of my question is that there's probably been a lot of data science going on within this organization for a long period of time, right?
[00:09:15] David Howard: Yes and no. It's tough to say, right? Because my argument would be, what does data science development even look like over time? Previously there was a bunch of stuff in SAS, and we were using statistical methods and distributions. And before that we had smaller data, so linear regression was still there.
So I don't know how I would define the moment data science started. And I should also say that Cox Automotive is actually part of the larger Cox Enterprises. It's the Cox family, a privately owned company, and they have Cox Communications, which is a cable company, and they own stuff in other forms of media.
I think one of their first acquisitions was the Dayton newspaper in Ohio, I think that's right. So they've been around for, you know, a hundred years or so.
[00:10:00] Justin Grammens: Yeah. So when it comes to automotive, then, you guys are sitting on a lot of data, obviously, right?
I mean, walk me through a typical use case, because eventually we're going to start talking about Gen AI and how Gen AI plays into this. But before Gen AI, what was the typical use case?
[00:10:17] David Howard: I mean, it's tough, because like I said at the very beginning, Cox Automotive has a lot of different branches.
Yeah. So I'm on calls in various parts of the data science community. We have people working on companies inside called FleetNet America and Ready Logistics, and these are both working on optimization problems. There's an exchange where dealers have to deliver cars to other dealerships or other locations, and then there are also the truck drivers that have to haul them, right?
You need to take loads and balance them against your resources. It is a resource scheduling optimization problem, and that is what data science inside is working on right now. But we're also working on auctions. We have the biggest wholesale auction space, dealers selling to other dealers; they bring cars into these Manheim auctions, and there are others, but it's the biggest player.
And now we're talking about models that involve valuations, a lot of wholesale and also public sales valuation. Kelley Blue Book has been around forever, and there's a driving force that's sort of circular logic: we're building models for valuations, but at the same time people are believing them.
So it's a feedback loop. There are all sorts of things in just the traditional ML space. We offer products within Manheim; I was just on a call today about improving our DealShield product, which is basically insurance for dealers when they go to buy a car. Here is this fee that you pay, and you can return the car, no questions asked. It works just like insurance.
So there's a safety net that reduces the variance in terms of loss for the dealers. And now we're working in the generative AI space, like you said, and I have a big hand in that because I'm helping lead the company in that regard.
[00:12:02] Justin Grammens: That's awesome. And so these are situations where people need to generate text, typically, like you say, using these large language models?
[00:12:09] David Howard: Generate text or absorb text. Right now, on the generate-text side, I was working on a POC for call summarization. We have various use cases where, you know, it's a call center, you get transcripts, and we'd like to get a call summary. So with a colleague who is more on the software engineering side making it all happen, while I own the theory and algorithm part of it, we generate an output summarizing calls in a particular structure, and then we're going to work on accuracy for all the information, as well as worrying about PII.
There are other things I'm working on. We have all these data sets, 10,000 or so data sets at Cox Auto; think tabular, columnar data sets, right? Standard row tables. And they don't have descriptions.
So I've written algorithms to basically digest a table, do some basic manipulations on the data, and output a description. That's almost in flight, and it's going to be a big lift for infrastructure in the company. Those are just two, but we have other ones in flight and other people working on things, and we're talking about how to best evaluate. It's, as you know, a very exciting space to live in.
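(David doesn't spell out his implementation, but a minimal sketch of that table-to-description step might look like the following. The column names, prompt wording, and the hand-off to an LLM are illustrative assumptions, not Cox Auto's actual pipeline.)

```python
import pandas as pd

def profile_table(df: pd.DataFrame, name: str) -> str:
    """Digest a tabular data set into a compact text profile that an
    LLM could turn into a human-readable catalog description."""
    lines = [f"Table: {name} ({len(df)} rows, {len(df.columns)} columns)"]
    for col in df.columns:
        s = df[col]
        if pd.api.types.is_numeric_dtype(s):
            stats = f"min={s.min()}, max={s.max()}, mean={s.mean():.1f}"
        else:
            stats = f"{s.nunique()} distinct values, e.g. {s.iloc[0]!r}"
        lines.append(f"- {col} ({s.dtype}): {stats}")
    return "\n".join(lines)

# Made-up example data.
df = pd.DataFrame({"vin": ["1HGCM82633A", "2FMZA5142XB"],
                   "sale_price": [18500, 9200]})
prompt = ("Write a one-paragraph description of this data set "
          "for a data catalog:\n\n" + profile_table(df, "wholesale_sales"))
print(prompt)  # send this prompt to whichever LLM your stack uses
```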
[00:13:23] Justin Grammens: Yeah. Well, what are some challenges you're seeing as you're working through some of this?
[00:13:28] David Howard: So what type of challenges are you talking about? Technical challenges? People challenges?
[00:13:33] Justin Grammens: Everything, because I talk to a lot of companies and they're like, I want to put some AI in my company.
And I'm like, oh man, but first we need to understand what the business use cases are. So there's a lot there.
[00:13:43] David Howard: this for the rest of the podcast, like I feel like me good. So, I mean, I'll try to, you know, sort of limit it a little bit, but in my opinion, actually this stands, you know, AI, not just generative AI, but.
In my opinion, the biggest problem is change management, right? You could build the best model ever, but convincing someone to use it, in whatever venue that looks like, I find is incredibly difficult. And that leads me to say to anybody developing in this community, whatever part of AI you're part of, whether you're in data science or putting models in production: become a better communicator. Convincing others to use your work is a humongous skill.
I need to work on it more myself; I wish I were a better communicator than I currently am. I think I do all right compared to the average bear, but I can always improve, and I can tell you the things I do wrong when I'm communicating and the things I focus on but want to get better at.
So that's one big challenge, the biggest challenge, I'd say. There are also fears, particularly in the gen AI space. We've been terrified of hallucinations for months, years now, about when GPT screws up. And the amazing part is that humans screw up all the time, and we're quick to forgive that: oh, my buddy made this mistake.
It's fine, it's not a big deal. But look at this model: I've seen this one error, oh, it's terrible, I'm never going to use it. Getting that feeling out of people's heads, getting them to understand probability, is next to impossible. Those are some of the big challenges, but there are others.
Measuring how good a response is from unstructured data: I don't think as a community we've really got that. Maybe we never will, right? Numbers are a lot easier to measure than how good my image is, or how good this response from the LLM is.
[00:15:30] Justin Grammens: Yeah, I think we're all sort of feeling our way through it. And you know, when ChatGPT came out in 2022, it sort of just flooded the world with this box where you could ask it any question you wanted.
And so people started asking it questions, and I've seen this before in a lot of projects that we work on: you get to that 80 percent where you're like, this thing's really awesome, and then the cracks start to form. And this isn't even particular to AI. It's really true of just about any new emerging technology, right?
You know the Gartner hype cycle, right? There's all this hype that's built up around a technology, but once it actually goes into practice, there are a lot of overinflated expectations, and it drops into what they call the trough of disillusionment.
[00:16:06] David Howard: Yep, I know exactly what you're talking about.
It's sad but true, honestly.
[00:16:10] Justin Grammens: Yeah, but it's about getting people grounded in what the capabilities are, and probably figuring out how they can use it differently. I mean, you see people that basically bash it down, but I also see people using ChatGPT as a Google search, right? And when I talk to people, I tell them that's not the same thing.
They need to be educated enough to know it's really just predicting the next word, and they need to understand what that means versus just typing in a question you would expect Google to respond to, right?
[00:16:39] David Howard: Yeah, I love the analogy you drew there, right? You go back to the two thousands, when Google basically became a thing, and the bottom line is it just takes time for people. There are those of us that are really excited about it, want to learn more, and really push ourselves to get better. But the standard person, I don't think, is doing that, right? They're doing whatever is important to them, whether it's family or, you know, "I've been Google searching for years now, why do I need to change? It's good enough for me."
It's going to be slow adoption, but I really love you pointing out that parallel, that this is sort of the next generation, I would say, in terms of ways to do your own research. Google had that ability: suddenly information became accessible.
And now, with the large language models, you have the ability to aggregate that at much higher speeds. Of course, we're still worried about the misinformation that exists. I mean, we're getting better, but we're always worried about that.
Previously we would have to spend a lot of time going through web pages, but now, say I want a new nutritional schema, what am I going to do to get healthier? This can recommend some ideas for me and trigger some ideas in my own thinking, faster than me going through page after page of Google search results and maybe reading the same thing and saying, okay, I've done that already.
What else have we got? So it's an exciting place to be, but I think the big thing is going to be time for the human population to absorb all this.
[00:18:01] Justin Grammens: Gotcha. Yeah. So it's just going to take time for adoption to work its way through, and for people to use the tool the right way.
[00:18:07] David Howard: Yeah. That said, forecasting is impossible, and half of what I'm going to say is wrong; I don't know what the future brings. I hate forecasting in general. We do the best we can, but then things like COVID happen and you're like, okay, there go all our models.
[00:18:21] Justin Grammens: Yeah. Well, there's some quote, I think it's attributed to Bill Gates, though I don't know if he actually said it, that we basically overpredict how far we're going to get in the short term and underpredict how far we're going to get in the long term, right?
So we think these things are going to be completely transformative in the next 12 months, and we say, oh, the next version is going to completely blow everything out of the water, when it turns out it's actually becoming more incremental, right? I mean, I know when the first one came out we saw some big changes, but I think in the near term it isn't going to move as quickly as people expect. Now, 10 years from now? We can't even imagine what that looks like.
[00:18:55] David Howard: I mean, I'm excited to see the Sora capabilities, this text-to-video, which we're really just starting to see.
You've obviously seen that stuff from OpenAI's Sora, the videos they create. It's truly amazing, right? That said, you watch one a second time and you start laughing out loud at all the little details: people becoming multiple people, walking on water, the camera pans and what once was a forest is now a big canyon.
It's hilarious, but it's amazing and exciting to see. So I can't wait to see where that goes.
[00:19:27] Justin Grammens: Yeah, for sure. Some of the things I was thinking of here as you were talking: how do you think this is going to impact the future of work?
As you look at it, whether within your organization or, you know, I don't know if you have children or what have you, what do you think the world looks like?
[00:19:42] David Howard: I saw you also have two kids. I've got one of each, two that I love. I was actually discussing this over lunch yesterday with a bunch of friends of mine: what is it going to be like for writing, right?
On one hand, I'd say the eloquence of these large language models, particularly Anthropic's or Gemini or GPT, they're very eloquent in what they write. Let's ignore what they're writing about; the actual quality is there. So what is that going to mean for us? When we have a visual example of, oh, this is good writing, this is great, will that help me from that perspective? Yes.
But now I have this laziness factor, which I'm sure you've indulged plenty of times: oh, please rewrite this email for me, or quickly summarize this. I need to do it, but I don't want to do it. Or this PowerPoint presentation.
And so the amount of actual writing I do myself, I think, is going down. So will that detract? As this becomes more absorbed into the education system, are we going to get better writers on average, or worse? My bet is worse, but like I said, I don't know.
[00:20:49] Justin Grammens: Yeah, it could go both ways.
Absolutely. I mean, I'm lucky, lucky in quotes, in that the classes I teach are all technical; they're around machine learning and AI and IoT. So I don't really have term papers that the students need to write per se. I do say every week, you need to find at least one article and get up in front of the class and present it. So they find an article related to AI and machine learning, and they need to actually regurgitate it, be able to speak to it. And certainly they're using ChatGPT just to summarize the article, but there's still that human side where you need to actually present. So I love that.
And most of the rest, like the capstone project, is all code and all that sort of stuff. And of course they're using ChatGPT to generate their code, but at the end of the day, they came up with the idea for their capstone project, they actually need to get up and present, and they need to find all the hardware.
The project is basically all about the Internet of Things: getting data from the physical world and building a model around it. It could work on computer vision or sound or temperature, whatever it is, but build a model around that and then apply it.
[00:21:55] David Howard: I love that approach.
I mean, as a former educator, and going back to the communication point I made at the beginning, you're basically saying, present this, right? That's so key to helping them become better communicators. And you're still using a hybrid approach, right? Like you said, they're using GPT or whatever LLM to help them along, but they're starting with their own ideas.
And that's, I think, the future of work, just like Google was back in the 2000s. How did we do research before 2000? You'd go to the library, look up a resource, start reading, check your sources, write your citations. Then it shifted to, I'm just going to go Google it and find that information. And now it'll shift to this next thing.
The speed of creating things is going to be great, but there are obviously negatives, and we all worry about the misuse. You don't have to worry about term papers, but educators do have to worry about: I just Googled this topic, I'm going to have GPT write this term paper. And all the educator has to go on is how many times the thing used "delve," and if it used that a lot, then they probably used GPT a lot.
[00:22:57] Justin Grammens: Yeah, well, that does make me think: if these tools are generating the content that's going out on the internet, and now we're scraping the internet to make the next version of these tools, I feel like it's just going to continue to degenerate over time, maybe.
[00:23:11] David Howard: I've heard this.
[00:23:12] Justin Grammens: Yeah.
[00:23:12] David Howard: Yeah. And my boss feels the same way. Maybe that's true, but you also talked about this hybrid approach where we are interjecting our own thoughts and ideas, so that still might be there. But absolutely, there's a circular loop here that could degrade things. Then again, the output is eloquent, so I don't know what that degradation is going to look like.
I'm always excited about the future, because I think none of our predictions are going to be exactly right. They'll be in the area, but to actually see what happens, it'll be so neat to find out. At least I hope it's neat to find out.
[00:23:39] Justin Grammens: Well, it might be that writing without it is going to be the unique, cool thing five years from now.
Maybe it'll be like retro, right? All of a sudden it'll come back in style for people to start with a blank page and just start writing.
[00:23:54] David Howard: I'm curious, you've asked that question of other people on the podcast. What is their general consensus about where we're going to be?
[00:24:02] Justin Grammens: I would say all over the place. A lot of people are scared, you know, that basically AI is going to take my job. But more often than not, people are optimistic that it should be used as a tool and that overall it's going to be a net positive for us. I don't know if I can summarize it much better than that; it probably goes back and forth.
But in general, I ask this question, what do you think about AI and the future of work, and I would say most people right now are generally positive about it: that it's going to take away mundane tasks we don't want to do anyway, that it's going to be a collaborator with us.
It's going to help us get started, right? The moment I need to write a presentation or a speech, I do go to ChatGPT to give me an outline. Why not? And then I pick and choose; I don't take it all. I'm like, well, these are some interesting ideas, and I'm going to ditch the rest. But why not start there?
So yeah, you still need to be the subject matter expert. And like you were saying, if you use ChatGPT to generate your term paper, you still need to do the research to make sure it's actually right, because it won't always cite things correctly.
[00:25:07] David Howard: Yeah, I doubt, knowing some students, that they entirely do that.
[00:25:12] Justin Grammens: Sure, sure, but then they're setting themselves up; they're putting their name on it. At the end of the day, if you're calling it your work, I don't care if you used ChatGPT or not; the quality should be something you can stand behind, right?
[00:25:25] David Howard: I mean, with the understanding that sometimes an 18-to-22-year-old, you know...
[00:25:29] Justin Grammens: Yeah,
[00:25:29] David Howard: yeah, hasn't completely absorbed that concept of what counts as their own work.
[00:25:32] Justin Grammens: I'm lucky in that, yeah, I'm dealing with master's students at the graduate level, so a little different. That's very nice. Yeah. They're a little bit more seasoned, a little bit
[00:25:39] David Howard: sometimes.
[00:25:40] David Howard: I mean, I'm still an idiot in lots of different ways, so...
[00:25:43] Justin Grammens: Yeah, for sure.
You know, let's focus a little bit on people entering the field, right? You and I chatted a little bit about this before the podcast, but say I'm a young mathematician coming out of school, right? I just graduated and I'm curious about this AI thing. And this is one thing I will say: I don't think the schools, my school included, are really good about teaching the latest and greatest things.
And there's nothing wrong with that; universities have typically been behind the curve, right? It takes a while. So if I'm coming out of school right now with an undergraduate degree in math, I'm not sure I've been exposed to large language models or LLM ops or all this type of stuff.
How do you think somebody should at least start exploring this?
[00:26:26] David Howard: So I think you're right in some of that; the university almost can't keep up, honestly. I'm going off on a little tangent here, and I'll come back to the education part. We're moving at the speed of light, it feels like, and as a result, sometimes the quality of the research suffers. I read plenty of articles; with my morning coffee I'm reading Medium articles, and I'm like, okay, this is a good idea, however, your research is based on anecdotes or like 700 examples, and you've got the attention-grabbing headline. So I understand the universities' plight: they honestly can't keep up, and keeping current with AI research is going to be next to impossible. With that said, drawing back a little, in terms of getting into the field:
there are still grounding fundamentals that I think are pivotal. And I worry a little bit, especially seeing the level of acceptance. We get press releases that this large model is doing so well on this data set and is going to be the next best thing. That's what a business does, and I get that. But being able to parse that as a researcher, and really as a scientist, if you're going to be called a data scientist, in my opinion you really have to be grounded and understand basic statistics and basic data science and machine learning principles, right?
Like, I can't train on my test set, right? That's the simple one, and that's potentially all over these metrics; it's hard to tell, because you don't know what these LLMs were actually trained on. So: grounding and fundamentals. And even though it's not directly related, focus on your communication. Like you were talking about earlier with that capstone project: regardless of what you want to do in AI, or even something else, your ability to communicate is, in my opinion, so pivotal.
Even if you're an individual contributor, not a manager, in business you still need to put your point out there. You still need to get people engaged in what you're talking about. I look at every opportunity to improve my communication. We're required once a quarter to do a course on LinkedIn; I don't do AI, I do improving communication. That is always my course there.
And in terms of actual learning, I normally use Coursera, because those courses come out more quickly, or Udemy, or one of the other platforms. And I often pick and choose, where I'm like, okay, I need to learn more about reinforcement learning, let's just look at that piece of this module, and I don't end up finishing courses.
Now, I understand their platforms are built around getting certificates, and there's still value in getting those certificates, but to me the learning is the core. So learn the fundamentals, learn about the algorithms, and do projects as well.
When you go for your first data science job, you should have projects, even if you don't have any experience. You're just coming out of school, you're just trying to get your first data science or ML job, so build a practical portfolio. I did four projects before I became a data scientist that were just: here are some ML projects, and here is why a business would care if it were a business use case.
And I'm selling that to the hiring manager too, my ability to sell, which goes back to that communication. So there's lots, and the venue in which you learn can take lots of different forms: courses, articles, and if you can find mentors, that's fantastic.
And as for what to learn, there's so much, I could go on forever. I want to get better at Bayesian statistics; do I use it very often? No, but I'd like that tool in my toolkit. That's just scratching the surface in some sense.
[00:29:56] Justin Grammens: No, no, that's great.
Yeah. So let your heart sort of guide what you learn?
[00:29:59] David Howard: what you, which a large percentage, I mean, still get those fundamentals. Like you, you should know your statistics. I love that when I was first learning out there, someone gave me a. A fun little thing that I think was total fictitious, but a hospital administrator gets up and talks to the nursing staff and says, you know, we're doing a great job here.
80 percent of the people that come in here are gone within two weeks. And the nurses are like, well, we know 95 percent of the hospital here, and they've been here well beyond that. And then the statistician goes up and like, yeah, and both of these things can be completely true. Right. And understanding that immediately from a statistics point of view, I think is really, you know, you should be there.
When you're in a data scientist role. Yeah, for
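(Both claims in the story can hold at once: the administrator counts admissions, while the nurses see a snapshot of occupied beds, which over-represents long stays, a case of length-biased sampling. A tiny Python simulation with invented numbers shows the effect:)

```python
# Invented stay lengths: 80% of admissions stay 5 days, 20% stay 100 days.
stays = [5] * 80 + [100] * 20

# Administrator's claim, measured over admissions:
gone = sum(s <= 14 for s in stays) / len(stays)
print(f"{gone:.0%} of admitted patients are gone within two weeks")  # 80%

# Nurses' view: on a random day, the chance a patient occupies a bed is
# proportional to their length of stay, so the ward is dominated by
# long-stayers even though they are only 20% of admissions.
long_days = sum(s for s in stays if s > 14)
print(f"{long_days / sum(stays):.0%} of occupied bed-days belong to "
      f"patients staying well beyond two weeks")  # 83% with these numbers
```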
[00:30:40] Justin Grammens: Yeah, for sure. There's this whole term around storytelling, right? Storytelling is a thing.
[00:30:46] David Howard: Yep, and work on that. I strongly believe in that, given that change management is so hard at the end of the day.
It's fun to work on all the technical stuff, I get that. We get really excited about, okay, I can make my algorithm better, let's go search for better features, work on my metrics, understand these metrics, build these pretty graphs. But you've got to storytell. It is part of anybody's job, even outside of AI. The better storyteller you are, the better career path I think you're going to have.
[00:31:15] Justin Grammens: That's good. Yeah. And you talked about mentors and whatnot.
Is there a good AI community in Atlanta? Are there people that come out and do conferences and stuff like that? Just curious.
[00:31:27] David Howard: So we've had a couple of things historically. There was, for a while, the Southeastern Data Science conference, and I don't know what's happened with it; I think it's pivoting to a conference that's going to be in the spring, and I forget the name right offhand. I'm going this week to Collide AI, and I've never been to that one, so I'm excited about that. And there are meetups.
But right now, because we have like 50 people that I work with doing data science, a lot of my time is spent with them. And like I said, my venue is Medium articles, often in the morning. And then, you know, I still have my wife and two kids, and that's important; work-life balance is a thing. If I were single, I would probably explore more of that, but there's only so much time. So I think there is a community, and there are people at Home Depot, there are people at Delta, I know that for a fact.
But I could know more, honestly.
[00:32:19] Justin Grammens: Yeah, sure. No, there's only 24 hours in a day for sure. Obviously, I'm sure
[00:32:23] David Howard: you understand that too.
[00:32:24] Justin Grammens: Yeah, you know, I was looking at your profile, and you do this Mundane to Masterful thing, right? I think it's fabulous that you're showing people what's possible.
And I actually just started this thing at my company called our AI Innovators Lab. I basically open the doors for two hours on Mondays and Wednesdays and we go through examples. And it can be very time consuming, as an educator or teacher, to come up with stuff to do, right?
And the last thing I need to do is start offering free classes when I have enough other things going on. So we've been finding open source projects; GitHub's got a ton of great stuff, and we just work through some of those examples. So where do you get your material for some of the stuff that you're doing?
[00:33:06] David Howard: Wherever I can find it! I mean, honestly, right now, in two weeks I need something. I have two other co-hosts, and I had something lined up four weeks out, but the person said they couldn't do it two weeks out. So it's almost completely on the fly. And like you said, there are only 24 hours in a day. So I work with people inside the data science community, inside the senior management structure, and also product.
You know, product is bringing on this gen AI use case: come talk about it, right? From a business standpoint, why did you do it? How is it behaving? What did you care about, things of that nature. And then you can also have the data science people that are working on it discuss the difficulties or the interesting things about it.
We have a computer vision team, which is just truly amazing, that works in a subpart of Cox Automotive called Fusion, and I get a guy who comes on and talks about the latest and greatest in the computer vision space. One time I did a fun thing where I compared Midjourney with Google's image creator.
Facebook has theirs, and we have a version of Stable Diffusion up, along with DALL-E 3. Basically, there was a prompt, they all created images, and then I sent it out to the community at Cox Auto, essentially a Slack channel called #ai. There are like 400 people in it, and I said, please come and vote for the best answer, on which model created the best image based off of the prompt.
So it was a crowdsourcing thing, and then I looked at the results and told everyone, here are the results, here are the best performers. DALL-E 3, by the way, won, and this was like eight or nine months ago, so who knows about now. But there were still interesting things we got out of it.
And no surprise, I'm sure you're aware that in general, words in images aren't the greatest. I think it's getting better, but if I say, please put in this long word, say "illumination" or something like that, it might come out with one L, among other things. So, lots of ideas come from there.
And we have our friend the LLM to ask as well: I need some new ideas. Just this morning I was doing that to try to come up with ideas. And then there's the time to create it, right? If I'm going to do it myself, that's obviously more time than if I can find a speaker.
[00:35:17] Justin Grammens: Yeah. Well, delegation, man. If you can find other people within the company that want to do a specific thing. You know, what's also fun is Hugging Face, just going in there. I think there are 600,000 models hosted in there and you can just experiment. You can play around with not only their models but their data sets as
[00:35:34] David Howard: well.
And that's the place, actually, if you ever go down into the embedding space, right? Like, how good is this embedder? The only place I currently look is the MTEB leaderboard on Hugging Face. Those are the only metrics I trust, because when you're doing embedding, it's kind of hard to see how well the embedder is doing, right? It's difficult to measure.
And I remember talking a while back to one of the people at Zilliz, which is the company behind Milvus, the vector database; I could be getting my names a little wrong here. But the bottom line, I asked him: you're in the embedding space and the RAG architecture space, what do you find is the most important? Because there are so many pieces when you're doing a RAG architecture. And he said the embedder itself is probably the biggest thing, switching out embedders to see performance.
Because I had asked, what if I use a different metric? The mathematician in me was really excited: should I use cosine similarity? Should I use Euclidean distance or a dot product, which are sort of the three under-the-hood options? And he said, yeah, that's like the 15th thing I look at. So yeah, I went off on a little tangent, but I do like Hugging Face.
It's an incredible resource for a variety of things. Yeah, for sure.
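(For reference, the three measures David lists differ only in how they treat vector length; a small NumPy sketch with made-up vectors makes that concrete. For unit-normalized embeddings, which many embedders emit, the choice largely washes out, which fits the "15th thing I look at" remark.)

```python
import numpy as np

# Two made-up embedding vectors; b points in the same direction as a
# but is twice as long.
a = np.array([0.9, 0.1, 0.4])
b = 2 * a

dot = a @ b                                              # scale-sensitive
cosine = dot / (np.linalg.norm(a) * np.linalg.norm(b))   # direction only
euclidean = np.linalg.norm(a - b)                        # absolute distance

print(f"dot product:        {dot:.3f}")        # 1.960
print(f"cosine similarity:  {cosine:.3f}")     # 1.000: identical direction
print(f"euclidean distance: {euclidean:.3f}")  # 0.990

# For unit-length vectors, cosine == dot and ||a - b||^2 == 2 - 2*cosine,
# so all three agree on which neighbors are nearest.
```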
[00:36:43] Justin Grammens: Well, good. This has been great, David. How do people get ahold of you? Do they just find you on LinkedIn? Is that the main way?
[00:36:48] David Howard: LinkedIn, or david.howard@coxautoinc.com to send me an email. I'm always excited to hear about different things. In particular, I had someone reach out when I talked about this LLM Battle Royale, which you can Google and watch; I gave it twice, once at the AI4 conference and once on a GA Insight webcast. For those sorts of things, where you're focused on different experiments you're running in the research community, please reach out to me if you run any of these; I'd love to hear about it.
From a research angle, that is. If you're coming to sell me something, "please buy from our company," I get 10,000 of those a day. But if it's from an educator or professor, I want to hear about it.
[00:37:30] Justin Grammens: Awesome. We have liner notes with all of our episodes, so when people are listening to the podcast, they can flip through not only the transcript, of course, but I'll also have a link off to your email address and LinkedIn for sure.
Well, awesome. Was there anything else, David? I always try to give somebody the chance: was there something else you wanted to talk about?
[00:37:47] David Howard: I mean, I would love to know, when people come on, what they're interested in, what the big things are. I have 8 million different things I'm interested in, but there's only so much time.
If you crowdsourced what the people who have come on are most interested in, that's something I would really like to know about. Putting you on the spot!
[00:38:04] Justin Grammens: Well, I think there are two sides to it. Number one is the guests that come on and what they're interested in, right?
And then there are the listeners. And I would say this podcast actually comes from a nonprofit. So if you didn't know, we run a nonprofit, a 501(c)(3), called Applied AI, and it's broad-based. Basically, our whole focus is educating people on how AI is going to change their lives, and that touches not only business but also personal aspects, across the board, across industries, across expertise, what have you.
So my goal in a lot of this is to make sure we touch on every little piece that we can, so that no matter what you're doing, whether you're eight years old or 80 years old, you're going to listen to this podcast and get something out of it. My overall goal is just providing a platform to allow people to come on here and share what they know.
Now, I will say most people probably fall under this: hey, this is how I'm using it at work, right? So you're probably a pretty good example of somebody who's within a company. Maybe they're a manager, maybe they're a developer, maybe they're in academia. The last episode that just went live was actually with a guy from the Ohio State University.
He had a PhD, has written 30 different papers, all this type of stuff. Really, really good, but super deep. So I would say it kind of runs the gamut with regard to the people that come on. But the questions I most love to hear people answer are: how do you think it's going to change the future of work?
How are you applying it within your organization, or even your personal life? And people share various stories about how they're using it with their kids. I just recorded a podcast earlier today where I shared that, hey, with my 12-year-old, we sit down with ChatGPT and I say, generate me a math word problem for a 12-year-old.
And it generates something for us, and then we collectively work on it. Not that he needs to do more math problems, he already has enough from school, but it actually generates good ones, right? Stuff I wouldn't have thought about. And then it's a thing we can work on together.
So it's cool. I'm just looking for all sorts of little use cases, people that come on and share where they're using it, how it can make our lives better, and how it can change, I guess in some ways, how we live. I really think it's going to have a huge impact on our lives going forward.
[00:40:22] David Howard: I agree. I don't know exactly what that's going to look like, but I do think it's going to have a huge impact.
[00:40:27] Justin Grammens: Yeah. I mean, just imagine if you take some of this stuff to the extreme. What if you had something like the whole idea of an agent, you know, a bot that's with you?
What if it always listened and learned, continuously learning based on the words you said, the way you interacted, the pictures you took, all that type of stuff? And imagine if this agent was with you until you're 80 years old, right? And then imagine if that agent could be passed down to the next generation.
I mean, I think back: I have some memories of my grandparents, right? But I can't go back and talk to them; my grandparents have passed away. So I just think it's going to be interesting to see how it's going to allow us to live on in some ways.
[00:41:15] David Howard: I 100 percent agree with that. I've seen devices where you can keep those day-to-day tasks and potentially your conversations. From a reflection standpoint, that's going to be, I think, so cool. Like, oh, I talked to this person last week, what did we talk about? I know it's on the tip of my tongue.
And then you could potentially bring it back and be like, oh, wow, thank you, because I cannot remember everything with 100 percent accuracy, but the data is there, so it can be pulled back. Yeah. But I did also see, it was a while back, a Netflix thing. Was it called, was it Black Mirror? Oh my gosh, yes, Black Mirror. If you want a negative view on AI, by the way. It was interesting: there was that one episode where they have ocular implants keeping an entire history of what's happened to you, right? And the negative aspects of this: once it's there, it's there forever.
And I do worry about that. But for anybody that likes AI, I would say, don't go watch that show. It's very, very depressing, but it's interesting and thought-provoking.
[00:42:13] Justin Grammens: Awesome, man. Good stuff. I'll be sure to put a link off to Black Mirror.
[00:42:17] David Howard: Or don't! I mean, if you want to be depressed, go watch it.
If you don't want to be depressed, don't go watch it.
[00:42:23] Justin Grammens: Well, David, I appreciate the time today. Thanks again. This was a ton of fun. I look forward to talking to you again in the future.
[00:42:30] AI Announcer: You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization.
You can visit us at AppliedAI.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at AppliedAI.mn if you are interested in participating in a future episode. Thank you for listening.