Conversations on Applied AI

Josh Cutler - Building Expert Systems Using Artificial Intelligence

July 20, 2020 | Justin Grammens | Season 1, Episode 5

In this episode, we are joined by Josh Cutler, VP & Sr. Distinguished Engineer for AI Platforms & Transformation Team at Optum. Josh shared with us his 15+ years of experience in applying data and intelligence to solve hard business problems. I really liked Josh's definition of the application of AI being, "How can we help computers make good decisions?" This cuts to the core of why we are building these systems and what the outcome should be.

Besides the applications Josh is building with his team today, he shared with us how he got into the field, the startups he founded, and how Microsoft was actually ahead of their time in this space with their Live Labs initiative and so much more. Enjoy!

Finally, if you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future Applied AI Monthly meetup and help support us so we can put on future Emerging Technologies North non-profit events!


Here are just a few of the many fun and interesting topics discussed during this podcast:

Enjoy!
Your host,
Justin Grammens

Josh Cutler :

So I am an absolute skeptic that we are going to replace the vast majority of people with machines anytime soon. I think that right now we are very, very good at automating tasks. We're not necessarily good at automating jobs, nor should we even necessarily be trying to.

AI Announcer :

Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.

Justin Grammens :

Alright, welcome everybody to the Conversations on Applied AI podcast. Today we have Josh Cutler, who leads the AI Platforms and Transformation team at Optum. If you aren't aware, Optum is a health services and innovation company on a mission to help people live healthier lives and make the health system work better for everyone by connecting systems across 150 countries. Josh is also a serial entrepreneur and co-founder of multiple startups, like Deep Machine and Ramble. And Josh, I also just recently learned that you attended Duke University, pursuing a PhD in political science focused on machine learning and social science. That sounds fascinating. I'm curious to know more about how those studies have had an impact on your thinking related to COVID-19, obviously, in the current climate we're in, so we'll cover that later in the podcast. So thanks, Josh. Appreciate you being here.

Josh Cutler :

Yeah, yeah.

Justin Grammens :

I'm excited to talk about it. Awesome. Well, cool. I mean, I gave a little bit of an intro there about you, but I'm curious to know, maybe, if you could fill in a little bit of the trajectory of your career, and basically, most recently, I guess, what you've been up to in the field of artificial intelligence.

Josh Cutler :

Sure, sure. So like you said, I've been at Optum, and a lot of the work that I focus on right now is: how do we scale up adoption of AI at a company our size? You have people doing it in myriad different ways, right? People who are focusing on, you know, clinical innovations or claims innovations, just all of the various things that take place in the healthcare space. And what my team focuses on is saying, all right, well, how can we scale this up with unified tooling, make it easy for people to access the data, and use the latest and greatest? This space is so dynamic and moving so quickly, particularly in the document processing world. So that's everything from computer vision to look at things like faxes, to natural language processing to make sense of clinical notes. We're really, really moving quickly there, and my team's mission is to make it easy for us as a company to take advantage of those innovations as they happen.

Justin Grammens :

Nice, nice. So that fits very much with applying all these new technologies to real-world business use cases.

Josh Cutler :

Yeah, that's exactly it. A lot of my experience in the past, with my startups, has been saying, all right, we have these cool innovations technically; how is this actually useful? And that last step has been the death blow for so many AI technology companies, and frankly, just AI technologies, even if you're not building a company out of it. It's very easy to make that novel toy use case, but when it comes to taking things and actually making them valuable and useful at scale, that's been the trick, I think, for everybody.

Justin Grammens :

How would you define AI? I guess, do you have a short synopsis that you use when somebody outside the field asks you?

Josh Cutler :

So I guess I have two definitions. One of them is kind of tongue in cheek, but probably more accurate, and then one of them is the one I probably use more often. So one: I think AI has a somewhat malleable definition, and it typically encompasses things that we currently think of as hard to do with computers. Anything that falls into that bucket we've historically called AI, whether it was, you know, playing games, which we've now completely solved, automatic speech recognition, where we're making huge strides, or things like image recognition, which used to be a very difficult problem and which in many cases we now see as a solved problem. But two, in kind of layman's terms, when I'm talking to my business friends who are trying to figure out, all right, how do I make sense of how to use something like this? It's: how can we help computers make good decisions, right? In the simplest case, our thermostat is kind of a stupid form of AI, right? It's going to take the current temperature and then make a decision about whether or not to turn on our HVAC. So anytime we are building systems where computers are going to make decisions, that is, you know, at some level, a form of artificial intelligence, in my opinion. And there are tons of different approaches. Some of them are very data-driven, some of them are very rules-driven (you would call those expert systems), and so there's kind of a full spectrum of approaches for how you would tackle that sort of thing.
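
To make that thermostat example concrete, here is a minimal Python sketch contrasting a rules-driven decision with a data-driven one; the thresholds, feature names, and toy data are illustrative assumptions, not anything from the conversation or from Optum.

```python
# A minimal sketch of the two flavors contrasted above: a rules-driven
# "expert system" decision versus a data-driven one. All numbers here are
# made up for illustration.
from sklearn.linear_model import LogisticRegression

# Rules-driven: an explicit, hand-written decision rule (the thermostat).
def thermostat_decision(current_temp_f: float, setpoint_f: float = 68.0) -> bool:
    """Return True if the heat should turn on."""
    return current_temp_f < setpoint_f

# Data-driven: learn the decision boundary from labeled examples instead.
# Features: [temperature, humidity]; label: 1 = occupants turned the heat on.
X = [[58, 0.40], [61, 0.35], [66, 0.50], [70, 0.45], [73, 0.30], [64, 0.55]]
y = [1, 1, 1, 0, 0, 1]
model = LogisticRegression().fit(X, y)

print(thermostat_decision(63.0))          # the rule says: turn the heat on
print(model.predict([[63.0, 0.42]])[0])   # the learned model makes the same call
```

Either way, the system is "helping the computer make a good decision"; the difference is whether the decision logic is written by hand or learned from data.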

Justin Grammens :

Nice. Cool. Very good. You mentioned largely solved problems; I'd like to dig a little bit deeper on that. Why do you think some of these problems are becoming largely solved?

Josh Cutler :

So I think it's a couple of things, right? Some of them are very amenable to some of the things that have happened recently, both from an algorithmic perspective and from the availability of cheap and scalable compute. And then just datasets: many of the problems that we're still wrestling with may be solvable when we have better and larger datasets; they just don't exist, and there haven't been business reasons to create them. But if you think about what's happened in the past five to ten years, a number of deep learning techniques are now really feasible in ways that they weren't, both from a data availability and a compute availability perspective. And that's where we've seen, I think, the most dramatic gains on problems that we used to think of as somewhat intractable.

Justin Grammens :

Nice. All right, yeah, thanks. That makes a ton of sense. So how did you get into the field? You did a PhD at Duke and stuff like that, so you were already sort of in this world of deep learning and machine learning going back years. Is that a fair assessment?

Josh Cutler :

Kind of. My path has been very meandering. So as an undergraduate, I studied math and computer science. I'd been programming my whole life; I started as a kid, like so many, wanting to make video games. So I got into it. I think the first things I built were: I would take those Choose Your Own Adventure books, and I would type them in in BASIC and just use GOTO statements, so I could do it that way instead of literally turning pages. But where that manifested for me was being really interested in gaming. I was just, all right, how do you make games work? It's so cool to render graphics on the screen, but if you don't have opponents that are smart, then you don't really have a game. And so in undergraduate I spent a lot of time just thinking about algorithmic approaches, you know, pathfinding, A*, that type of stuff, just so I could make games. But I'd always had an interest in politics, so right out of undergraduate I went to Duke and was pursuing a PhD in political science. And what is interesting about political science at the graduate level is that it can become very quantitative, which is different from most people's undergraduate political science experience. That's where I really got into statistical learning and statistical inference and started thinking about, well, what are predictable phenomena? And this is a whole discipline; it's what most of my peers who are now professors are doing, whether it's predicting election outcomes or interstate conflict, these types of things.

Justin Grammens :

Yeah, I was thinking of Nate Silver. I guess you've probably read a lot of his stuff.

Josh Cutler :

Yeah, precisely. So Nate Silver is probably the most famous example of someone who uses data and statistics to predict political phenomena, although there are many, many others who just don't have quite the notoriety. And so I did that for a couple of years, then I quit, ABD, and went to Microsoft. So I moved to Seattle and worked in a research group out there called Live Labs. Our function was to try and commercialize the work that was coming out of Microsoft Research. So there I was typically taking things like entity recognition modules that had been developed by others and saying, all right, how do we put these into products? We built some prototypes of things that died, but they would look like OneNote on the internet: what if you could automatically annotate a bunch of content that you'd put in there using natural language recognition? And then that kind of started my startup journey, where I did not do a ton of data work, but...

Justin Grammens :

What was the timeframe you're talking about here?

Josh Cutler :

So when I got there, we had just released Vista, so it would have been right around then. I was out there for about three years.

Justin Grammens :

So Microsoft was, I mean, in some ways they were kind of... would you say they were ahead of it? Were they kind of ahead of Google in some of these aspects, do you think?

Josh Cutler :

So Microsoft Research as a research institution is one of the foremost, and has been historically, just in terms of publications and doing this type of work. I think where we struggled at the time, and still to some extent do, is getting those into the products, right? There were so many cool demos that died because they just don't make sense to put into a product, and I think that's just really hard, right? Some of the things did turn into amazing products, like Kinect, which is all computer vision, but many didn't.

Justin Grammens :

Yeah, I have to think about the PowerPoint paperclip or whatever, right? Those are things that people would really laugh at: why is this paperclip thing talking to me? You know, and all the work that went into building it.

Josh Cutler :

But the thing is, on paper, if you do it right, you could identify when people were getting frustrated trying to do things. It sounds like a good idea, but when the rubber hit the road, it was not, for sure.

Justin Grammens :

So you're in Seattle, and it's very interesting that there's this research group. It almost feels like that bridge between academia and industry; it might have felt similar to doing research in academia, I guess, in some ways, but then you actually have the means to apply it to a product. Sounds like an interesting role.

Josh Cutler :

Yeah, and that was explicitly our mandate, to try and do this, because so much had died on the vine in the research world prior to that. Ultimately, our lab was shut down, and so I ended up moving back to the Twin Cities at that point, got into the Rails community, and started understanding what it meant to build web apps, because that really wasn't what we were doing at Microsoft. And I built a startup, actually with my wife; it was a company called Sisal. It ended up being one of the largest sports card communities online. What that was, A, was just scratching an itch when my parents gave me my baseball cards back and I had no idea how to organize and catalog them. But B, it was a playground where we collected a lot of data. All of our data was assembled by our users, right? They were scanning images of their cards, they were correcting the metadata around them, because the reality is that for things people stopped manufacturing in 1890, there is not necessarily good data out there. And that was an opportunity to start applying some of these data skills again. So one of the things we did was build predictive pricing algorithms. In the collectibles space, people really want to know: what is my collection worth, what are these things worth? And for rare items that don't have a lot of volume at auction, you have to build statistical models; you can't just say, oh yeah, ten of those sold yesterday and that's what they sold for. So we invested a bunch of time in that. And then that's really where I got my feet wet with computer vision, starting to say, all right, here's a picture of a Joe Mauer baseball card. Do I really have to have someone come in and tag this as being Minnesota Twins, or can we just recognize the logo and say, this is a Minnesota Twins card?
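
As a rough sketch of the kind of auto-tagging described here, the snippet below trains a classifier to assign a team label to a card image. It is not the system Josh built: a real pipeline would run a convolutional network over the scans, while this sketch fakes pre-extracted image feature vectors with random numbers purely so it runs end to end, and the team names are placeholders.

```python
# Hypothetical sketch: predict which team's logo a card scan shows so a human
# doesn't have to tag it. Features are synthetic stand-ins for what you would
# normally extract from the images with a pretrained vision model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
teams = ["Twins", "Yankees", "Cubs"]

# Each team gets a slightly different cluster of 64-dimensional "features".
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(200, 64)) for i in range(len(teams))])
y = np.repeat(teams, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("predicted team for one scan:", clf.predict(X_test[:1])[0])
```

The point of the sketch is the workflow: once a model can tag the easy cases automatically, human effort can be reserved for the rare cards where the data really is thin.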

Justin Grammens :

Sweet, huh? Very cool, very cool. So, yeah, the thought came to me: I wonder if you ever did the same thing with comics. It's the same thing; I've got shelves and shelves of comics that I'm trying to evaluate at some point. And you're right about the human cost of somebody going through and analyzing all these things. Why not have a machine do it? I think back to what you said earlier, you know, how can we have machines do things and teach them to do things? Because they're really good at scanning through thousands of images, whereas humans aren't, right? We're tired, we get bored quickly, and we oftentimes make mistakes because of it. So cool. So yeah, you did your first startup; where did that end up going?

Josh Cutler :

We ended up being acquired by one of the larger players in the space, Beckett Media.

Justin Grammens :

Cool. So then you're looking at, okay, you've got this itch for image recognition and data. Is that sort of where Deep Machine came in?

Josh Cutler :

Yes, exactly. So I was actually at Optum; I had been there about nine months, and I met up with my friend Dan Grigsby, who was kind of mulling this space as well and saw an opportunity here. We actually met at a book club, talking about, you know, what the impact of AI was going to be, and decided, all right, let's take a swing at this, because it was that weird moment where a lot of companies knew that there was something there for them if they could unlock the value of their data. They'd been hearing "data is the new oil" forever, so they were building data lakes, but they weren't getting anything out of them; they were just storing things. And so what we wanted to do was say, all right, well, we can help you think about how to get value out of there, and then if you don't have the people, we can do it for you as well. And it turned into kind of a machine learning consultancy.

Justin Grammens :

You were talking about people hearing about the power of AI. I was reading a book recently, I forget the title, but she was talking about AI going through a series of winters, right, basically an AI winter. People were talking about some of this stuff in the late 60s and early 70s, and, you know, I don't know what your perspective is, but her feeling was that every ten years or so we got into this winter where people would pull back because they're like, I haven't seen the value in this yet. In a lot of ways I've seen this with the Internet of Things, where I've done a lot of work; in IoT we're in an explosive growth time right now. Do you see another winter on the horizon, or some pullback? Or has it been proven enough now that we've sort of crossed the chasm, I guess, into companies being willing to invest more and more in it?

Josh Cutler :

It's tough to assess whether or not you'll see another full-blown winter like what people describe, where people just kind of completely sour on the idea. But so much of that, to me, comes down to what expectations are set around what machine learning and AI can do for you. My experience thus far has been that people are a little bit more tempered in their expectations. They know that there's something there, but they don't think it's going to completely change their business, despite what you might hear from some of the vendors' commercials. You know, there are absolutely vendors out there saying, oh, we're going to revolutionize your business, blah, blah, blah; that's probably not going to happen. Most of the folks that I've talked to, though, know that there are some operational savings that could be achieved, right? Even if it just makes the existing folks they have more productive. And maybe there are some new experiences they could deliver now that they can think about, something that would have just been way too expensive if you had to go hire 100,000 people to look at images, but now they don't have to. My gut says that while we may reach a plateau, where we're not necessarily advancing as quickly as we have over the past five or six years, I don't think people will be as put off by that, because they seem to have more realistic expectations this time around.

Justin Grammens :

Yeah, for sure, for sure. I remember seeing commercials in the early 2000s, when the internet really took off, late 90s, early 2000s, where it was pretty much the solution for everything, and they were kind of pushing the envelope. I mean, they didn't call it 5G, but they were saying, you know, 100-gig downloads and this and that. It's like, come on, really? So a lot of times what gets sold in the marketplace is still futuristic, and you're trying to temper that, I guess, bring it down to reality. But I think all of us are probably seeing some real-world applications. I guess I'll get to the brass tacks here, because there are a lot of questions around how this will replace humans. Do you have any perspective on that? If we had people reading, watching, looking at thousands of images, you know, what are they going to do instead? And maybe that's not a good example, but it feels like as we're getting higher up the evolutionary chain, these computers are going to be doing much more than reading images and analyzing them. They might even be able to, in the case of cancer and doctors, spot things that a human would. What's your feeling on that, with regards to AI and the future of people's employment?

Josh Cutler :

So I am an absolute skeptic that we are going to replace the vast majority of people with machines anytime soon. I think that right now we are very, very good at automating tasks, right? We're not necessarily good at automating jobs, nor should we even necessarily be trying to. So what that means is that what your job looks like might change a little bit, because some of the more boring and mundane tasks you had to do get automated, but there are very few cases I've seen where, realistically, we're even close to automating away a job. So to some extent we may just be eliminating some of the busy work for people, making them more productive so that they can focus on the things that are uniquely human. Because when we think about what's rewarding about our jobs, it is very rarely the things that computers are good at anyway; the things that we are good at as humans are the things that we enjoy doing. So I'm somewhat optimistic that it actually makes the workplace better for a lot of us.

Justin Grammens :

I think a lot of people have that view, that it's a complementary technology, I guess; it's not a full pick-up-and-replace. So it sounds like you're sort of in that camp.

Josh Cutler :

That's right. Yeah.

Justin Grammens :

So you had this consultancy, and then you moved on to another startup with Mitch Coopet, a fellow friend of both you and myself. You guys were doing something in speech, right?

Josh Cutler :

Yes, that's right. So I had a second interlude where I went back to Duke, same program, and was there for a few more years. I still didn't finish, so I'm ABD, but it did manifest as a textbook, which I co-authored with a friend; it just came out in May. But yeah, I started a startup with Mitch Coopet and Ryan Deiss; it was called Ramble. What we were trying to do was take advantage of the breakthroughs in natural language understanding and conversational AI and say, what could we do to make people more effective? We knew that the opportunity was to eliminate note-taking, because so much of so many jobs, more than I realized before I went into it, was writing down things that had happened, whether that was a doctor writing clinical notes or a sales rep just documenting the conversations that they had. And this is part of why I really do believe that we're going to automate away the boring tasks: salespeople and doctors hate writing down notes; they love talking to people. So to the extent that you could automate that note-taking, you allowed them to do the things they enjoyed, which was talking to customers and talking to patients. So what we built with Ramble was a smart phone system for salespeople. We built out the telephony infrastructure, and we built out a number of different things for analyzing conversations so that you could understand how well you were doing and how well you were adhering to your sales playbook, and just give sales reps the tools to really quickly take what was interesting about their conversation and get it into their CRM.

Justin Grammens :

That's perfect, that's perfect. My dad's a doctor, retired now, but whenever I bring up these things about, hey, look, they're automating your job away... he was a cancer doctor, so it's like, they're spotting cancer cells better than you could, Dad, because they have all this information. And his response back to me was, yeah, but they can never have that human touch, right? It can never really touch the patient, never really sympathize with them, and that's a piece that will never be replaced. And I think sales could be the same thing, too, right? It's all about the relationship, isn't it?

Josh Cutler :

That's right, absolutely. And to some extent it may cause us to introspect a little bit about what the valuable parts of our jobs are. So in, you know, that doctor's case, is it really staring at MRIs that makes them a doctor? Or is it communicating those things with their patient?

Justin Grammens :

Yep, exactly. So bring us up to the current day then. You went to Optum at some point?

Josh Cutler :

That's right, yeah. So we ended up winding down Ramble after about a two-and-a-half-year run, and I made it to Optum. Well, I guess I'm in the same organization that I was in last time, some new faces though. I was pretty excited about what they are trying to do, and just some of the change I'd seen since I was there last time, and how hard they really leaned into the fact that when you have the scale that they've got, and the data that they've got, we could really make health care better, right, by being smarter. And so the opportunity was just way too interesting. And honestly, there are a lot of areas that have good data, but I knew I didn't want to sell ads, right? You can go work at Facebook or go work at Google, but ultimately you're selling ads; you're not making people healthier. So that was a big component for me.

Justin Grammens :

Sure. Well, it seems like Optum has picked up a lot of great talent here over the past couple of years, and I believe part of it is the mission, right? Like, you're not just doing AI for the sake of AI; it actually touches and impacts human lives. So it's a great thing to do.

Josh Cutler :

Yeah, that's exactly it for me, because there are a ton of places you can apply it. I mean, when I started, I wanted to go build video games, and nothing against building video games; I hope to do that as well, and I do it in my spare time a little bit. But that's probably, at least for me, not the area where I'm going to feel like I'm having the most impact, given what it feels like the world needs right now.

Justin Grammens :

Yeah. So what's a day in the life for you? What's a typical day, if you could share that?

Josh Cutler :

Yeah, so for me it's probably not going to be what would be typical for somebody coming up, because I'm managing teams now. But I guess I can give a day in the life of both. So for me right now, it's very much two things. One, trying to collate all of the various needs of people across our organization, because UnitedHealth Group is huge. We have, you know, massive clinical networks, we have pharmacies, we have a bank; we have all of these various people who have different data requirements. And when you're building platforms that are going to serve them all, you really have to spend a lot of time with your customers; you can't just say, well, this is how I built models at Deep Machine, you should do it this way. So that's a big part of what I do, and just building consensus, right, because all of the various business leaders could choose not to use the platforms that I build. That's one of the things I really appreciate: the stuff I've worked on is not delivered via fiat or decree; I actually have to convince everyone that it's valuable. So I spend most of my day doing that. Now, for folks that use our platforms and train models, I would say it's very much the same story you're going to hear everywhere else. Probably 80% of their time is talking to the people who have generated the data, whether it is claims data or clinical data or IoT data, to understand what they're looking at: what were the sensors reading, or, you know, what was used to translate this fax into text, so they can understand what the bias in the data might be, what it is they're looking at, and whether or not their dependent variables actually represent what they think. And then once they've got a handle on that, they really need to understand what problem they're setting out to solve, right? Because frequently, when you work with product or business teams that haven't done a lot of AI, they're not going to give you a "predict this variable"; that's never how the problem is presented. But ultimately, if you're going to turn it into a machine learning problem, you need to translate the business problem into: okay, well, if I could predict this, would you be able to solve your business problem better? So there's a lot of that; it's a lot of just understanding the businesses themselves, and when I say businesses, I mean what's happening, you know, with that unit in the pharmacy or wherever, to understand what things that are predictable would help them. And then there's probably 20% of your time doing those predictions, which is the fun part.

Justin Grammens :

Right, like you just said, a lot of it is gathering requirements and data cleaning, in some ways, and then the fun part is really getting to the output, right?

Josh Cutler :

For some it is; I think it depends, right? I actually really enjoy helping people understand what machine learning even is, and helping them formulate that: okay, well, if you could predict this, would you be able to run your business more efficiently? But maybe that's just because I love learning about how businesses operate. I feel like with every one of these engagements it's, oh man, I didn't realize that's how that worked at all. Okay, tell me more.

Justin Grammens :

Yeah, it's interesting to see, especially when you can apply technology that people haven't seen before, just to see their eyes light up when it actually works. There's a lot of satisfaction there. Do you sometimes run into things where it's like, yeah, this is great, and then you start doing the analysis or trying to pull the data in, and you just have to say, we can't go any farther?

Josh Cutler :

Absolutely, yeah. And this is something I learned the hard way at Deep Machine, but I think you should always structure your projects around a very cheap, what I call a signal project. Basically: is there signal in the data? Can you predict the thing that you would like to predict given the data you have on hand? Because if the answer is no, then there's no reason for us to invest a ton of time and energy here. That, to me, is the MVP of a data project. We like to think about MVPs in the software world all the time: what is the minimal viable thing to test that there's actually some there there? We frequently skip that step when doing data analysis, and I think it's super important, because otherwise you may invest a ton of time when the data you have may just not be correlated to the outcome that you care about, whether because it's truly not, or because there was some problem with data collection, or for whatever reason.
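
As an illustration of what such a "signal project" could look like in practice, here is a minimal Python sketch: fit a cheap model and a naive baseline on whatever data you already have and compare them with cross-validation. The synthetic dataset and model choices below are assumptions for the sake of a runnable example, not a description of how Josh's team does it.

```python
# A minimal "is there signal?" check: does a cheap model beat a naive baseline
# on the data you already have? If not, think twice before investing more.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for your real features and outcome column.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)

baseline = cross_val_score(DummyClassifier(strategy="prior"), X, y, cv=5, scoring="roc_auc")
cheap_model = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=5, scoring="roc_auc")

print(f"naive baseline AUC: {baseline.mean():.2f}")   # ~0.5 means no skill
print(f"cheap model AUC:    {cheap_model.mean():.2f}")
# If the cheap model can't meaningfully beat the baseline, there may be no
# signal worth building a bigger project around.
```

A day or two spent on a check like this is the data-project equivalent of a software MVP: a small bet that tells you whether the bigger bet is worth making.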

Justin Grammens :

Yeah, yeah, for sure. Have you seen things that have been shelved for a while, where you come back some time later and pick them back up again, like, hey, let's take a second stab at that? Or do these things sometimes just die; they're done?

Josh Cutler :

Yeah, so I don't know that I have seen an example of that yet, but I've only been at Optum for about a year. I do have an example of one that I've actually bumped into time and time again. It seems obvious, I think, to a lot of people, that sentiment on phone calls should be predictive of all sorts of things, and I was convinced for sure that it would be, especially in the sales world. I was like, oh yeah, if sentiment is high, you must be more likely to be closing a sale. And after spending quite a bit of time with it, it's just not particularly well correlated. People are grumpy or happy, and it has more to do with whether or not they were stuck in traffic and way less to do with whether or not they're going to close a deal, especially in the B2B world; it's not the consumer buying world. It took me quite a while to convince myself that that was true, and now I've seen it in a couple of different instances.
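
A quick way to pressure-test that kind of intuition is a simple correlation check between the sentiment score and the outcome. The snippet below is a hypothetical sketch with random stand-in data (which is why the correlation comes out near zero); with real call sentiment scores and deal outcomes, the same test tells you whether the relationship is actually there.

```python
# Sanity-check: is per-call sentiment correlated with whether the deal closed?
# The arrays below are random placeholders; swap in real scores and outcomes.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(1)
sentiment = rng.uniform(-1, 1, size=500)   # per-call sentiment score
closed = rng.integers(0, 2, size=500)      # 1 = deal closed (independent here)

r, p = pointbiserialr(closed, sentiment)
print(f"point-biserial r = {r:.3f}, p = {p:.3f}")
# An r near zero is the kind of result that argues sentiment is not, by
# itself, predictive of closing in this setting.
```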

Justin Grammens :

That's interesting. Yeah, no, I mean, sometimes we humans come to the problem with bias already, you know what I mean? So as a human, I come in and say, of course these things will line up. And that's fascinating. It's also sort of a mind-bend for people to finally accept the data for what it is, because we have emotions and we just think, of course. But, like you say, it can take some time to finally convince yourself that these things aren't correlated.

Josh Cutler :

Yeah, exactly.

Justin Grammens :

So do you have any advice, or, you know, classes that people could take, or conferences? I guess if I were a student, or changing careers, for example, and I find this stuff fascinating, do you have any recommendations that somebody might want to explore?

Josh Cutler :

Yeah, so I think there are kind of two backgrounds I typically bump into. There's the "I'm a software developer and want to learn about machine learning" background, and if that's the case, there are a number of good resources. I really think that for folks who want to get into deep learning, the fast.ai courses are probably the right place to start; there's just a ton of great content out there and their tooling is good, so I highly encourage most folks to check that out. For people who do not have a programming background, I actually recommend that they start there, with the programming. You absolutely can go do a data science boot camp and learn to run regression models and things, but if they want to get into this field professionally, I think they're just setting themselves up to run into some blockers down the road, where they realize, oh, there are still some computer science things I need to understand.

Justin Grammens :

Yeah. I guess, since you're overseeing a team there at Optum, where do you pull your talent from? Are you looking at master's-level people? Do you hire people across the board, bring in interns?

Josh Cutler :

It's a full spectrum, right? There are needs for people ranging from the PhD level all the way down to undergraduates. It really depends on what you've studied. There are now data science programs offered at the undergraduate level, and if you were a statistics major, some of the words are different, but the concepts are the exact same segueing into machine learning. So it really just depends. I think that most teams benefit from having a full spectrum of folks, ranging from the senior person who's been doing it for a while down to people who are fresh out of school, because they probably are up to date on the latest and greatest in many ways that people who've been doing it for a long time may not be.

Justin Grammens :

Yeah, yeah, good point. And I'm wondering a little bit more about the soft skills of the job. You hear about this concept of storytelling and things like that. It seems to me like there are the hardcore statistics people, and then there's a range out to people who maybe didn't major in a STEM-related thing, but they're just really good at writing, or they're very articulate and can talk to the C-suite about this. It seems like there are a bunch of different skill sets involved in this whole area.

Josh Cutler :

Absolutely. I actually think that the person you're describing there is the rarest one, thus far in my experience in industry: someone who can communicate, to both technical and non-technical audiences, the opportunities that machine learning in particular, and AI generally, present. If you are dangerous enough that you can actually speak to a technical audience, but can also translate that, I think now is your moment career-wise.

Justin Grammens :

For sure, for sure, embrace it. Well, how would people reach out and connect with you?

Josh Cutler :

Yeah. So I'm on Twitter, Josh underscore Cutler, and you can find me on LinkedIn as well. I'm happy to chat; I've worked with a number of folks who just reached out and said, hey, how can I get started? The course I taught at Duke was for social scientists who wanted to get into programming and machine learning, so we've got a bunch of content that I can share as well with anybody who's interested.

Justin Grammens :

Cool. Now, I mentioned COVID at the beginning, and I've alluded to that topic since, but I am curious, with the current state we're in right now, how are you looking at it? How is healthcare in general, from your perspective, looking at this? How is AI being applied? I see a lot of articles in the news, of course, but how are you approaching it as a health data company?

Josh Cutler :

It's a great question. This is one of those instances where you really need to say, okay, what are things that I could predict that would allow me to have any sort of agency to help? So things that you could imagine would be really useful are knowing where the next outbreaks are going to be, where the high-risk areas are. Now, that's only useful if you can do something about it, right? So you say, okay, well, if we think there are going to be outbreaks in areas X, Y, and Z, can we make sure that they're prepared from a ventilator perspective, these types of things. I can't speak to work that is being done using AI in the vaccine generation space, but I know that that is a fertile area for pharmaceutical companies, just doing modeling of these things without actually having to create them. So I imagine that's probably a very hot area as well.

Justin Grammens :

Yeah, I have seen some websites; obviously there are a bunch of sites that have different models of this, that, and the other thing. I don't know which one it was in particular, but there were like a dozen different sliders, and the moment you adjusted one thing or another, the R value changed from this to that, you know, so it's more contagious, or there's a higher population, and the models just completely changed. It feels like there are just so many variables here. And that's almost what they're doing, right? They're basically rerunning the models on a daily basis because you have so many variables going on. Is that fair to say?

Josh Cutler :

Well, to some extent you have to, right? A lot of those websites you see with sliders are saying, well, if this is true, then you'll see this outcome. Well, we don't need to think about what all of the possible outcomes are, because we actually have COVID, and we can measure what the R value actually is. We should be updating it and saying, okay, this is what's actually true. Now, we may want to model, you know, what would happen if contagion levels went up, things like that, so that we can make sure we're taking proper precautions, but you do kind of have to update these things with high frequency, because the truth on the ground is changing.
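
To give a feel for what those "slider" models are doing, here is a minimal sketch of a basic SIR simulation where the R0 parameter can be varied and the projected outbreak changes with it. The parameters and population are illustrative assumptions only, not calibrated to any real data or to anything discussed in the episode.

```python
# Minimal SIR simulation: move the "R0 slider" and see how the projected
# outbreak changes. Purely illustrative; not calibrated to real data.
def sir(r0: float, days: int = 180, pop: int = 1_000_000,
        infectious_days: float = 7.0, initial_infected: int = 10):
    gamma = 1.0 / infectious_days          # recovery rate
    beta = r0 * gamma                      # transmission rate implied by R0
    s, i, r = pop - initial_infected, initial_infected, 0
    peak = i
    for _ in range(days):                  # simple daily Euler steps
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r                         # peak active infections, recovered so far

for r0 in (1.1, 1.5, 2.5):                 # "moving the slider"
    peak, recovered = sir(r0)
    print(f"R0={r0}: peak infections ~{peak:,.0f}, recovered by day 180 ~{recovered:,.0f}")
```

The point Josh makes is that once real measurements of R exist, you plug them in and re-run frequently, rather than only exploring hypothetical slider settings.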

Justin Grammens :

Yeah. What's interesting about this, and maybe it'll play out over time: if we end up with a full curve of the way COVID behaves, do you think we could use that come the next wave, or the next virus? The feeling right now is that we're in a state we've just never seen before, so it's hard for us to pull in historical data. Do you think that's true? Or is there actually data out there that we can point to and say, well, if we take a big enough guess here, we're relatively confident that, even though it's not COVID, it typically behaves this way? Is that the kind of work you're doing?

Josh Cutler :

So I'm not an epidemiologist, so I am hesitant to speculate on how much this looks like previous things. Now, this is an area where I'm actually very critical of our discipline, and by that I mean statisticians and machine learning folks, who wade into things that are actually really well-studied areas and go, aha, but you don't know about deep learning; you need to know about deep learning. And I'm pretty sensitive to that, because being trained in the social sciences, you frequently will have, like, some physicist who says elections are trivial to model, you just need to use, you know, this set of differential equations that I've imagined. Well, people have been studying elections for hundreds of years at this point; it's not quite that simple. If it was, they would have already done it.

Justin Grammens :

Right, right, for sure. Well, great. Are there any other topics that you want to share with our listeners?

Josh Cutler :

I don't think so. Thanks for having me. I'm always happy to talk about how I got where I am and what I think is happening, because I think we're at a pretty exciting moment. A lot has happened in the past, I'd say, five to ten years, just because of the adoption of deep learning, and I think we're right now on the cusp of actually getting these things into the hands of people, manifested in products. So I think it's pretty exciting.

Justin Grammens :

Very cool. Well, thanks, Josh. I appreciate the time, and, as always, I guess, continue to help and shape healthcare for all of us. I'm sure we'll be in contact in the future.

Josh Cutler :

All right, thank you.

AI Announcer :

You've listened to another episode of the Conversations on Applied AI podcast. We hope you're eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn if you are interested in participating in a future episode. Thank you for listening.