Conversations on Applied AI
Welcome to the Conversations on Applied AI Podcast where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of Artificial Intelligence and Deep Learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
Matt Nash - The Importance of Authenticity and Reliability in AI
The conversation this week is with Matt Nash. Matt is the CEO at Kairos Technologies, where he and the team are focused on building a better future for healthcare by making technology products, recruiting the best in the industry, and directing them at the gross inefficiency and waste that is the health industry. He's skilled at technical architecture, team delivery paradigms, applying emerging technologies, and leading effective teams that can execute.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can put on future Emerging Technologies North non-profit events!
Resources and Topics Mentioned in this Episode
- Kairos Technologies
- Neural network
- Artificial general intelligence
- Bayesian probability
- Midjourney
- Stable Diffusion Online
- Machine Learning Guide Podcast
- Titan Synthetics
- The Innovator's Dilemma by Clayton Christensen
Enjoy!
Your host,
Justin Grammens
Matt Nash 0:00
The first thing we hear when we talk about generative AI with folks in health insurance, or any risk business, is that they know how hallucination-prone transformer architectures are. And they are highly skeptical until we tell them we don't use it for much. And then they're interested. But they are very sensitive to hallucinations. They're very sensitive to authenticity. Because in a very real sense, these folks are dealing with life and death, and they're handling data that's very sensitive. So ambiguity is not their favorite thing, and I don't blame them at all. So we really try to stay much closer to the authenticity and reliability.
AI Announcer 0:39
Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.
Justin Grammens 1:10
Welcome everyone to the Conversations on Applied AI podcast. Today we're talking with Matt Nash. Matt is the CEO at Kairos Technologies, where he and the team are focused on building a better future for healthcare by making technology products, recruiting the best in the industry, and directing them at the gross inefficiency and waste that is the health industry. He's skilled at technical architecture, team delivery paradigms, applying emerging technologies, and leading effective teams that can execute. Thank you, Matt, so much for being on the program today.
Matt Nash 1:37
Thanks for having me. I'm really excited to chat with you today, and I appreciate that generous introduction. I've got about a decade's worth of experience in software engineering. I bounced all over, from ag tech to fintech; none of it really resonated with me until I eventually found my way into medtech. Because when I was 17, actually, I had a really rare sarcoma called myxoid liposarcoma. I won't go into what that is, but it was very, very not good, right? It's a type of cancer. I learned really quickly what that meant: it meant a lot of debt when I was younger, it meant a lot of strife. And so when I found medtech, I really found something that resonated with me. That's why when we say gross inefficiencies, it's not just an abstraction; it's something personal, something a lot of us have seen, and I'm sure a lot of your listeners have been through it, directly or through someone else. So we're kind of on a mission to change that. And right now, I would say we're starting from the top. That is, we're trying to break down the barriers between experts and data, which right now are a huge, huge problem if you talk to anyone in the health space.
Justin Grammens 2:45
Yeah, and it's probably been a problem for decades. So, I mean, you're going to apply artificial intelligence to kind of help with this issue?
Matt Nash 2:53
Absolutely. Certainly everybody's cognizant of ChatGPT. It's ubiquitous, right? It's all over LinkedIn; experts have cropped up everywhere. I'm proud to say that when we first had this idea, we were the crazy ones, or seen as a little crazy, a few years ago for using generative AI. We're not using the "traditional," and I say that with heavy air quotes, transformer architecture that you see used in GPT. We are using generative adversarial networks. Yes, GANs, that's right. We're channeling those to replicate tabular data that will very closely, authentically in fact, mimic ground truth data, to create a digital twin of healthcare volume that is still private, and that can be propagated to any number of records and moved through our platforms without risk. It's a pretty groundbreaking approach. There are very few competitors taking this particular tack, but it's one that allows that authenticity preservation. It also keeps the data useful for other AI, for example.
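Matt does not spell out Kairos's architecture, so as a rough illustration of the GAN idea he references (two networks trained against each other, one generating synthetic tabular records and one trying to tell them from real ones), here is a minimal sketch in Python with PyTorch. The layer sizes, feature count, and training loop are assumptions for illustration, not Titan's actual design.

```python
# Minimal, illustrative GAN over tabular data (not Kairos's real model).
# Assumes `real_batch` is a preprocessed float tensor of shape (batch, n_features).
import torch
import torch.nn as nn

n_features, latent_dim = 20, 64

generator = nn.Sequential(                  # noise -> synthetic record
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, n_features),
)
discriminator = nn.Sequential(              # record -> real-vs-synthetic logit
    nn.Linear(n_features, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    batch = real_batch.size(0)
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator: separate ground-truth records from synthetic ones.
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce records the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Real tabular GANs also need careful handling of categorical columns and mode collapse, which is part of why Matt calls this hard, but the adversarial loop above is the core idea.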
Justin Grammens 3:55
Yeah, awesome. Well, I was looking up the name of your company. Can you remind us, or our listeners, again what Kairos means?
Matt Nash 4:02
I love to talk about Kairos. Kairos is the name of our lab, and we'll be producing multiple products in this lab; you can think of it as the health insurance world's skunkworks. We're building a lot of things that we've noted from our experience in the industry will help. We saw that health is behind, and to be frank with you, I think we all know why it's behind: the data is really tough to get and tough to use responsibly, and there's a lot of well-deserved regulatory burden between innovators and, you know, their ultimate designs. Kairos means the god of opportunity, to kind of shorten it up and make it as succinct as possible. It's an ancient concept of time: you can sort of hack years and years of progress if you strike at the right time, right when the proverbial iron is hot. And so we see this as that time, as generative AI emerges. We see this as that time when computing is becoming a lot more ubiquitous; we benefited greatly from crypto crashing and the subsequent drop in production cost and expense when it comes to GPU hardware, for example. So we are finding an opportune moment to strike where we can in the healthcare space, to solve the big problems that have, in many cases, remained unsolved for years.
Justin Grammens 5:17
Yeah. And so taking a generative approach, that's a different way to come at the problem. Do you guys have, like, a basic use case where you would see this being applied?
Matt Nash 5:28
Yeah, that's a really good question. And it actually came from our efforts to use systems similar to Pandora or other music recommender systems, which your audience may be familiar with: hey, you like this song, you might like another song. Well, at this time, myself and my colleagues were working at a major health insurer, and someone came along and said, I think it'd be really great if we could predict when people are going to need orthopedic surgery. If we can tell that, we can navigate them to a better doctor that costs everyone less, and then give them a little bit of those savings back. It sounds like a great situation; everybody wins. It's a tough nut to crack, though. And the reason is, people don't trust insurance companies, especially when they're telling you which doctor to go to. Nobody wants to hear that. Most folks don't realize that cost and quality have no correlation whatsoever in health care. We're hoping they do in the future; that's a big part of the transparency play that's underway. However, this created an opportunity for us to predict these episodes as early as we could, so we could navigate people to where they'd be able to get the same level of care for less, and save a little bit of money on the premiums. Well, the incidence for that particular case was something like 0.1% a year; not a very common occurrence, it's rare. But very high dollar when it did occur, often with many comorbidities that drove prices up. So we were faced with the task of building a recommender system for these folks, and we ended up with a recall of around 13%. Which, hey, if the base rate was 0.1%, were we proud of that? Sure, that's 130x. But it wasn't good enough. It wasn't good enough to be actionable. And we learned a valuable lesson: it didn't matter that it was 130 times the background when the background noise is that tiny. So we tried oversampling, we tried numerous other approaches, and ultimately this led to the genesis of Titan, which is our generative data product coming out of the Kairos lab. We said, what if we just synthesized more data? What if we were able to avoid the issues with upsampling, where we might introduce weird bias that we don't understand, and instead were able to authentically replicate the very few knee replacements we had? That use case is currently in production. I'm not saying it doesn't have room for improvement, but I will say that we hear the same kind of struggle all the time: insufficient data. Whether folks are training recommender systems, predicting readmissions, detecting fraud, pretty much anything a machine learning function is trying to do, in both provider and payer institutions, it can be augmented to solve the data bottleneck, as it were, with synthetic data. That's the future we envision.
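To make the numbers Matt cites concrete: at a 0.1% incidence rate, even a classifier that is 130 times better than chance can still miss most cases. The sketch below reproduces that kind of extreme imbalance on toy data and shows SMOTE-style oversampling as the sort of baseline his team tried before turning to generative augmentation. It is not Titan's method; the dataset and classifier are invented for illustration.

```python
# Illustrative only: extreme class imbalance (~0.1% positives) and a
# SMOTE oversampling baseline. Titan's generative approach is not shown here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Toy stand-in for "who will need orthopedic surgery this year".
X, y = make_classification(n_samples=100_000, n_features=20,
                           weights=[0.999, 0.001], flip_y=0.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

baseline = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("recall without augmentation:",
      recall_score(y_te, baseline.predict(X_te)))

# Oversample the rare class before training, the kind of approach Matt says
# wasn't enough, which motivated synthesizing new records instead.
X_rs, y_rs = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
augmented = RandomForestClassifier(random_state=0).fit(X_rs, y_rs)
print("recall with oversampling:",
      recall_score(y_te, augmented.predict(X_te)))
```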
Justin Grammens 8:17
That's awesome. And so when was the company formed? When did you guys come up with this product idea?
Matt Nash 8:22
Sure. So product R&D started some years ago, and it was very rudimentary research and development in the early days. When the Kairos lab was founded, we started over, because there were so many better systems and better approaches we could use. That was early 2022; I think we committed our first code in May 2022, so it's just over a year that we've been at it. We decided to go to market last fall, when we saw really promising results from the product.
Justin Grammens 8:51
Gotcha, gotcha. And so do you guys help companies take their data and integrate it with your product, or is it a hands-off approach, like, here's our stuff, go and have fun?
Matt Nash 9:01
Yeah. So typically what we find, understandably, is that enterprise architecture, security, compliance, and the others are never going to put their data somewhere we can learn from it if it's not on their systems. So our model is to come on premises, right into the protected environment where the data exists, and treat it like a cleanroom. We bring our Titan model in and leave it on your hardware, provisioned to spec; we walk out the door, you transact the data onto it, and we train it. Proverbially, when it's done, it says: this is what happened, your distances and all the probabilities are correct, right? We're constantly cross-referencing synthetic to real, synthetic to real. When that's done, the organization goes in and deletes their ground truth data, and all that's left is a relatively small generative model. You can poke that generative model and say, yep, there's no real data in here. And then we host it, in the cloud or in a private cloud or on prem, wherever it makes sense for the organization, and they can pull any amount of data from it. In fact, I just asked our model to give me 50,000 novel records, and it did it in just a few minutes. I've got this pulled up; I wanted to do that before we hopped on the chat, to see how long it took. It doesn't take long.
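The on-demand sampling Matt demonstrates is simple once only the trained model remains: draw noise, run it through the generator, and write out as many rows as you like. Below is a small sketch of that step, with an untrained generator standing in for a real trained one and invented column names.

```python
# Illustrative sampling from a (stand-in) trained generative model.
# No ground-truth data is involved at this point.
import time
import torch
import torch.nn as nn
import pandas as pd

latent_dim, n_features = 64, 20
generator = nn.Sequential(                  # stand-in for a trained generator
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, n_features),
)

n_records = 50_000
start = time.perf_counter()
with torch.no_grad():
    synthetic = generator(torch.randn(n_records, latent_dim)).numpy()
elapsed = time.perf_counter() - start

df = pd.DataFrame(synthetic,
                  columns=[f"feature_{i}" for i in range(n_features)])
df.to_csv("synthetic_records.csv", index=False)
print(f"Drew {len(df):,} synthetic records in {elapsed:.2f}s")
```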
Justin Grammens 10:13
No, it doesn't sound like it at all. Well, one of the questions I like to ask people that are on this program is, you know, what do you consider artificial intelligence? Like, how would you describe AI to somebody who maybe isn't in this industry?
Matt Nash 10:25
I try to demystify it as much as I can. My biggest point is that it's been expert systems in some form or other all along; even an abacus, to me, is a little bit of AI, right? So let's not act like this is something entirely new. What is new is our ability to teach machines in a more complex way, for them to see connections and correlations we don't see. And obviously, that's more of a question of what is a neural network than what is artificial intelligence. I know a lot of folks want to differentiate and say artificial intelligence is only artificial general intelligence, and everything else is some form of machine learning. I flirt a little bit the other way, saying sort of everything is some proxy for our intelligence; we've codified our intelligence in almost every piece of technology we've ever built. What's different about this current generation is that it's able to learn alongside us in ways that we don't even quite fully understand yet, and that's what's so exciting about this new era. Artificial general intelligence is another matter, and I usually don't wax too much on that. I think it's an interesting topic, one of much debate. I might side with the Microsoft researchers who published the paper saying, well, perhaps GPT is sentient, or then again, maybe you're just a large language model. I thought that was kind of a charming take on the whole question, and one that I can identify with.
Justin Grammens 11:50
Good, good. Well, yeah, it sounds like you're viewing this as a complementary technology, right? It's not this idea that humans are kind of done. And so in the healthcare space, do you see your product being used alongside physicians, for example, or other healthcare providers? Where in the sort of healthcare food chain, I guess, do you see yourself living?
Matt Nash 12:12
Yeah. So for this product in particular, when we're talking about synthetic data, we imagine living in the data science departments of payer organizations and the research wings of provider organizations, but our products live all over the place. From an AI standpoint, I think the larger question is: what's the replacement going to look like once AI moves in and these knowledge workers are displaced? I think, just in our products alone, there will be a new class of labor, mental labor, whose job is to understand how to parameterize and interpret the output from models. Today, when I'm talking to organizations about how to use our product, we're getting a lot of blank stares. Where are the folks who have the intersection of business knowledge, enough understanding of generative AI to parameterize it effectively, and the ability to think abstractly enough about the output to maximize its use? I think I just described three different jobs that will probably get specialized out. So the person who interprets the complex health contract and answers questions on the phone today will probably reskill to understanding how to parameterize questions to a large language model, and others will specialize in other AI-adjacent fields. I think the great job loss will largely be a slow migration. Of course we'll have some folks move along, but we're going to see that anyway with the boomer generation retiring, so there will be some confounding factors. I think the generation of folks getting into the workplace right now has little to worry about as far as job security into the future when it comes to AI. I understand the concern, and I just know that humans are great at finding ways to add value.
Justin Grammens 13:55
Yeah. And so I guess for our listeners, when you say parameterized questions, maybe we can drill a little deeper into that. What's an example of that?
Matt Nash 14:04
Yeah, very happy to dig into that. Our model in particular, which I can speak to with some intimate level of knowledge, has 57 million different parameters. Frankly, we don't know what all of those are, and if you ask OpenAI, they'll tell you the same thing about ChatGPT with its, I believe, 2 trillion parameters. So imagine the vast corpus of knowledge that represents; beyond every word and how they interact, it's the structure of sentences and overall sentiment and so on. It's a lot less complicated for us in the abstract. We're interested in things like how diabetes and socioeconomic status, with zip code as a correlate, might be related; that might be one parameter for us. Another parameter might be how one diagnosis relates to another diagnosis. If somebody is overweight, they may also have diabetes; if somebody experiences X, they may experience Y. These are just basic Bayesian principles, but we've codified them into parameters. We have a lot of those, and the ability to ask our model for the right thing will be its own skill set. For example, we might have a general model which creates a general population of folks, which can be very useful if you want to test your system endpoints. But if you're interested in creating a detailed simulation of what the next year might look like in claim volume, you're going to need to think in the abstract about how to parameterize our model. For instance, next year may be less of a COVID year than even this year is, and based on that hypothesis we might sample from 2019 to build a really good corpus for generating synthetic patient encounters for 2024. There might be other things that we liked about '22 and '23, cancer treatments, for example, that we still want to see represented in '24. So we could sample from there, and then conditionalize and say, just don't give me any COVID stuff, give me all the new stuff; it's just transfer learning. That way we can create a realistic simulation of what the volume would look like. That's going to be quite an advanced skill, not technical in the usual sense, but it will require complex abstract reasoning, detailed business knowledge, and an ability to think in the abstract that I think is going to be in high demand.
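One common way to make a generative model parameterizable in the sense Matt describes is conditional generation: the sampler takes a condition vector alongside the noise, so a user can ask for, say, a 2019-like mix with COVID-era patterns excluded but newer cancer treatments kept. The sketch below is a guess at the shape of that mechanism; the condition fields are invented for illustration and are not Titan's actual parameters.

```python
# Illustrative conditional generator: noise + condition vector -> record.
import torch
import torch.nn as nn

latent_dim, cond_dim, n_features = 64, 4, 20

cond_generator = nn.Sequential(
    nn.Linear(latent_dim + cond_dim, 128), nn.ReLU(),
    nn.Linear(128, n_features),
)

def sample(n: int, *, like_2019=1.0, include_covid=0.0,
           include_new_cancer_tx=1.0, high_comorbidity=0.0) -> torch.Tensor:
    # Each keyword is one hypothetical "parameter" a business user might set.
    cond = torch.tensor([[like_2019, include_covid,
                          include_new_cancer_tx, high_comorbidity]]).repeat(n, 1)
    noise = torch.randn(n, latent_dim)
    with torch.no_grad():
        return cond_generator(torch.cat([noise, cond], dim=1))

# e.g. simulate next year's claim volume: 2019-like, no COVID, keep new treatments.
records_2024 = sample(10_000)
print(records_2024.shape)  # torch.Size([10000, 20])
```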
Justin Grammens 16:28
Yeah. You mentioned digital twin, I think that's the word you used, and that's actually been a buzzword for quite some time. But at the end of the day, it really feels to me like it's a model that you can play with, so you can run a lot of what-if scenarios on it, right? It's sort of a representation of something. My background is largely in the Internet of Things, so we talked a lot about IoT, physical devices out there that you could start playing around with parameters on, but it really applies anywhere. And I think that's the power of what you're talking about: what if we were to try this?
Matt Nash 17:01
Yeah, that's insightful, right? Digital twins first stood up as a concept for physical analogs. You could test the physics of a thing with a digital twin, put it through conditions you'd never put it through outside a simulation. But imagine if we weren't talking about the physical characteristics of something, but the data characteristics of a tabular data volume. That's what generative synthetic data will be capable of in the future. There aren't a lot of folks with commercial off-the-shelf products that can offer that to you today. Be skeptical if they say they can, because the university folks we talk to are telling us it's just as hard for them, and these are the people published on it, so they're the authority. It's hard work, and it's difficult. Once we solve it, and we're in the midst of it, we're going to see a class of business arise that is simulation driven. Capital markets, fast food chains, all of these will be using digital twins of their customer base to hypothesize new markets, to test out new products, and the like. I think it's going to be a pretty remarkable age.
Justin Grammens 18:05
Yeah, for sure. And when you were talking about these parameters, and kind of having enough knowledge to be able to tune them, in a lot of ways, you know, the big buzzword right now is prompt engineering. I've done a couple of different talks on that. Everyone's looking at ChatGPT saying, well, how am I going to ask the right question and get the right answer? That's a huge part of what we're dealing with here, and I feel like that's what you're also saying: in your sector, people are going to have to know enough to ask the right questions.
Matt Nash 18:31
That is absolutely right. We are still torn on whether or not we're going to use natural language prompts the way that GPT or, for example, Midjourney or Stable Diffusion do. Those seem to work really well for unstructured things, but we suspect our general user is very used to filtering data, and how different is filtering data from parameterizing generation? We think we might find something good in saying there is no difference. Folks are used to applying search filters; even casual users often do this. So I imagine, at least for our tabular data generation, that's where we'll start, and we'll have quite a high burden of proof to move to natural language, prompt-engineering kinds of things, because it is so hard to get reliable results that way. That's one area where we're going to need to be markedly different from the transformer architecture: we're going to have to have predictable, reliable, authentic results that are actionable in a business setting, as opposed to a sort of fun toy.
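Matt's point that filtering and parameterizing generation may be no different suggests an interface where a generation request looks like a familiar search filter rather than a free-text prompt. Here is a tiny hypothetical sketch of what such a spec could look like; the field names and the rendering helper are assumptions, not Titan's interface.

```python
# Hypothetical filter-style generation spec, illustrating the mental model only.
from dataclasses import dataclass

@dataclass
class GenerationFilter:
    """A search-filter-style spec that doubles as generation parameters."""
    year_like: int = 2019        # sample from a distribution resembling this year
    exclude_covid: bool = True   # drop pandemic-era encounter patterns
    min_age: int = 18
    n_records: int = 10_000

def describe(spec: GenerationFilter) -> str:
    # In a real system this spec would condition the generative model;
    # here it is only rendered, to show how close it is to a query filter.
    return (f"{spec.n_records:,} records, {spec.year_like}-like mix, "
            f"age >= {spec.min_age}, COVID excluded: {spec.exclude_covid}")

print(describe(GenerationFilter()))
```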
Justin Grammens 19:29
Yeah, yeah. It feels like, you know, kind of know your user. And these kinds of people, like you said, are people within the organization who probably already have some science background. They're probably used to, like you said, filtering and running queries or whatever, rather than an end user that's maybe just using GPT.
Matt Nash 19:48
Absolutely. And I'll tell you, the first thing we hear when we talk about generative AI with folks in health insurance, or any risk business, is that they know how hallucination-prone transformer architectures are, and they are highly skeptical, until we tell them we don't use it for much, and then they're interested. But they are very sensitive to hallucinations. They're very sensitive to authenticity. Because in a very real sense, these folks are dealing with life and death, and they're handling data that's very sensitive. So ambiguity is not their favorite thing, and I don't blame them at all. So we really try to stay much closer to the authenticity and reliability side of the world. We play with GPT too; we have fun with it, not to disparage it, we enjoy it. I use it for recreational purposes, for tabletop gaming and everything like that. But I would prefer the type I mentioned if my doctor were going to use something grounded in research.
Justin Grammens 20:43
Yeah. In fact, I was just talking to a guy earlier this afternoon whose whole company is around, you know, kind of squashing hallucinations, right? They're trying to create a new product that is essentially hallucination-free; that's the thing they're bringing to the market. But I think some of this dovetails into AGI, because I feel like we're at a time right now where we are building AI systems that are very narrow for certain areas, and they're really good at that. And it's almost like, do we actually need AGI, in a lot of ways? Talking to a doctor and getting, you know, medical advice is a lot different than me asking somebody, can you write me a poem? Is it so bad for us to be tuning these models for optimal performance?
Matt Nash 21:24
You know, I will say that it really isn't. And I don't think that AGI would present such a big benefit to society, or be that big of a revelation once it arrived, because I think a lot of that utility will already have been achieved by augmented human beings, using advanced generative models, using causal AI to understand their world even better. So when AGI arises, and I won't say if, I'll say when, I think it will be a bit of an anticlimax, right? If you're looking forward from today's contemporary perspective, I don't know, 50 or 100 years, by that time it'll seem sort of a footnote as opposed to revolutionary. The effect on society from a philosophical standpoint will certainly be immense, in so many ways. But in the way that we live our lives day to day, I don't think it will have quite the impact we're expecting it to, beyond the things we'll have already achieved through other means.
Justin Grammens 22:22
Sure, sure. Well, one of the things I like to ask people on the show is, if you were to rewind back in your career, maybe just coming out of school, for example, what classes would you suggest people take, or books they read, or conferences they attend, or stuff like that, if they're just getting into this area?
Matt Nash 22:39
You know, thank you for asking that. I think the most effective resource for me early on, this is when I was a software engineer working in ag tech, was Tyler Renelle's podcast, Machine Learning Guide; you might be familiar with it. Very, very helpful. It wasn't just sort of a primer. I don't think he does it anymore, but it was really great for going from software engineer to nascent ML practitioner, and it gives some good examples of doing that; that was his target audience. If you're of a business nature, there's a great book called Prediction Machines, which you may be familiar with, and which frames this really as a change in the cost of inference, as opposed to anything else. Just as we can frame the computing revolution as a change in the cost of computation, look at all the implications of what might happen if inference becomes free. That's a pretty astounding book to read if you really absorb it. I suggest all leaders, not just technology leaders, all leaders, because many of you are leading data companies and don't know it, read that book and think about what inference tasks your company does that are essentially going to become free or near free in the next decade, and how that changes your business. So that's highly recommended. For new engineers, and I shouldn't say young engineers, new engineers to the field, there are a ton of great resources out there, open source tools. And I'll tell you, even at a company like ours, where we're going to market and we've got national-tier interest and customers, it's still some version of open source libraries that we've heavily developed on. There's no cost to the thing; you can pull those libraries in at home and start tinkering with them, and they're surprisingly well documented, many of them, and really easy to get started with. So I usually recommend Python and some open source libraries. And there are some great books; Data Science from Scratch is the one I started with when I started getting a little more practical.
Justin Grammens 24:44
That's great. And yeah, I mean, you're right, that's the beauty of the internet; there are just so many tools out there. We will definitely include links to all the things you talked about in the liner notes for this, so people will be able to click on them and check them out. Prediction Machines is interesting because, and it's been a year or so since I read it, it's written by economists, I think these guys are. So they're taking that sort of near-zero cost and applying it across the industry. The book is getting a little older, I think, but it is still 100% spot on. I just love it.
Matt Nash 25:18
Yeah, I think it does a better job explaining the practical effects of machine learning, and more broadly AI, than most other speculators today. I think it's easy for us right now to see all the great effects and get very excited and misty-eyed, but I think they had quite a sober take. I want to say it's visionary and profound, and it is, but it's also very sober, so it can be used in a practical sense.
Justin Grammens 25:45
For sure, for sure. Well, Matt, I guess, how do people get a hold of you?
Matt Nash 25:50
So you can reach out to me at matt at Titan, T-I-T-A-N, Synthetics dot AI. Our name is that because I hope there will be a whole industry of competitors in the synthetics space working on other types of problems. You know, we stand in the medical space primarily, and may dabble in other verticals, but I imagine a world where there are synthetic data vendors across all different industries, and every one of your listeners will be creating the next one.
Justin Grammens 26:20
Yeah, that's great. We'll put a link off to your website and all that stuff as well. Are you guys hiring?
Matt Nash 26:25
You know, growth is volatile. I think my next hire will be in the next week, and I don't know after that. So we're very small but always hiring, always wanting to meet folks and answer questions from prospective employees. Because at the very least, we're all engineers who have been through it too; happy to hear from you, happy to learn about you and what you're passionate about.
Justin Grammens 26:48
That's good, that's good. Yeah, I'll put a link up to your Contact Us page and people can reach out to you directly. Is there anything else maybe that I didn't touch on, anything you would like to put a little bit of an exclamation point on from the topics we covered, or anything new?
Matt Nash 27:03
Yeah, you know, the one thing. So I was up in your neck of the woods, like I had mentioned, just a couple of weeks ago, and the slide I ended on, in front of a bunch of insurance executives, all it said was: remember your purpose. What I was trying to drive home there was not some, you know, glib, goofy thing that has nothing to do with practical implications. It was instead: don't try to make this a tool to reduce your FTEs. If I get to talk to the executives who are making the buy in the AI space today, I know the pressure, I know the temptation you're dealing with, I understand how the opportunity gets scoped and translated to dollars. Resist that temptation as much as you can; think of it as expanding your capabilities. Because if you don't: A, you're going to miss a bunch of new product offerings that you could be bringing to the table; B, you may lose your current market to someone sitting in their living room who can generate the same services and infrastructure you've got for a fraction of the cost with generative AI, and make one person seem like an army of a thousand. That's slightly hyperbolic, but not by much. And so I invite executives, I invite folks working in established industry today, to really take a minute, step back, and think about this technological revolution. This is not the internet. This is the printing press. That's the level of technological evolution we're at right now. It's going to have that much of an upsetting effect on everything we do, and on the nature of the data on the internet. So if you fail to take advantage of it as a change agent, as a way to reimagine yourself and the way that you fulfill your mission to your customers, somebody else will certainly do it. We're in the middle of a major inflection point; the reason you can't see forward is because we're in the singularity right now, right? Nobody knows what will happen tomorrow. And that actually puts you at an advantage if you're in an established position in industry, to say: okay, well, Google might know a lot, but they don't know my industry; Amazon might know a lot, but they don't know my industry. Why not us? Why not us reinvent ourselves, why not us take the next step, creating AI tools now? It can be tough to get the talent, I know, but you've got to be thinking in those terms, and if you're thinking in those terms, you'll be all right; you'll weather it. But the arrogance of "hey, we're an established industry, we'll always be here," do not count on that. Because this is the printing press all over again, and like I was saying, there's a reason certain religions just died out right around that time: they were the ones whose towns didn't have printing presses, right? It's an enormous magnifying effect on emerging ideas and existing ideas, and it democratizes things in a way I think industry is not ready for. And so it's not just folks like me who have to get it. You've got to get it as an industry leader, and you've got to take the lead in your company, be that voice. If they're listening to this podcast, they're probably not just dollars-and-cents people; they're probably abstract thinkers, probably often faced with folks who think that way. You're going to have to keep fighting that fight to keep your company above water, to keep it relevant in the future, because otherwise it'll lose the plot.
Justin Grammens 30:22
Absolutely, yeah. You know, as you were talking, I was thinking about the book The Innovator's Dilemma, by Clayton Christensen. I remember reading that quite some years back, and it goes through a series of these companies that, you know, stuck to their guns and did the same thing over and over again. And there were all these little companies where, at the time, they're like, oh, these are the ankle biters, they're not doing much. No, no, no: they completely disrupted the entire business, and the incumbents were out of business within a decade. So yeah, we are at an interesting time. And I think a lot of it's also going to be, you're right, people just trying to figure it out. I mean, I think it's so early right now that a lot of executives, I feel, are just kind of scared, I guess. They don't really know what's coming.
Matt Nash 31:03
Let's use that fear, right? It's not the worst thing if it catalyzes us to
Justin Grammens 31:07
action. Right, right. Yeah, good things can come out of that.
Matt Nash 31:11
Right. And so I think, let's be smart about it, and let's not be too reactive. You can find partners in the space who have expertise, who are willing to speak with you about it; whether they have a product to sell you or not is irrelevant, right? Most of the folks in this space now who actually have a product to show you, us included, were in it before it was cool, before it was very profitable. So we're passionate about the topic, and you've still got a great resource there; most of us will chat with you just to hear where you're at and what you're worried about. And the existential question I try to pose to these folks is: imagine what kind of impact you think AI will have on your industry. Then I like to follow it up with: okay, now imagine somebody who's already on the other side of that impact; could you compete with them? If the answer to that question is no, and it's probably no, you have work to do, right? And we, you know, we and others can help you do that work, or at least strategize and position yourself for how you want to handle it. So I recommend that you reach out and get to know somebody in the space; you're probably going to have to partner up, and there are a lot of great partners out there that can help you get to that next point.
Justin Grammens 32:25
For sure, for sure. Well, great, Matt, I appreciate your time today. Thank you for all the work that you do. It sounds like quite an awesome product you guys are bringing to market, trying to revamp healthcare, which, like you said at the beginning, is in desperate need; it's way out of date right now. A lot of people are coming at it from different angles, and I respect all the work you guys are doing at your business and wish you nothing but the best.
AI Announcer 32:51
You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to justin@appliedai.mn if you are interested in participating in a future episode. Thank you for listening.