Conversations on Applied AI - Stories from Experts in Artificial Intelligence

Mike Hugo - Using AI Technologies for Biomedical Analysis

April 21, 2021 Justin Grammens Season 1 Episode 18

One of the areas in which Artificial Intelligence is making a huge impact on our lives is the area of biomedical analysis. In this episode, I'm joined by Mike Hugo - CTO and Co-founder at Vyasa Analytics. We discuss all of the latest and greatest trends going on in this amazing field and cover everything from transformer models to his favorite books and what a Pteranodon is! I hope you enjoy this conversation, and huge thanks to Mike for taking the time to break this complex subject down and explain it in clear and simple terms.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to run future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

Enjoy!
Your host,
Justin Grammens

Mike Hugo  0:00  
The key point for me is that it's not all hype. There's actually some rubber that's meeting the road here. There are some really cool things that are happening that are going to change the way that computers are able to analyze large amounts of data and come back with really good answers to things.

AI Announcer  0:19  
Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.

Justin Grammens  0:50  
Welcome to the Conversations on Applied AI podcast. Today we have Mike Hugo. Mike is a technical leader with experience in Agile software development and a focus on getting products to market quickly. He enjoys building quality software and applying cutting-edge solutions to complex problems. He also has experience in many different industries, including life sciences, retail, healthcare, insurance, commercial real estate, medical devices, and finance, just to name a few. He also works with companies both large and small. Currently, Mike is the CTO and co-founder of Vyasa Analytics, where he and the team use AI technologies for biomedical analysis. This is an extremely fascinating area of applications that use AI, so I'm very excited that Mike has taken the time to be on the program. Thanks, Mike. 

Mike Hugo  1:29  
Hey, thanks for having me. 

Justin Grammens  1:30  
Awesome. Sweet. Well, so yeah, I gave a little bit of a background here in terms of what you're doing. You know, maybe you want to give a little bit more info, I guess, in regards to the trajectory of your career, kind of how you started and how you got to where you're at?

Mike Hugo  1:42  
Sure, absolutely. So I started out doing web application development a long time back. I worked doing traveling consulting for a while and then landed at some local companies like Wells Fargo and Medtronic. I did a bunch of independent contracting for a while and then worked at some startups along the way as well. Most of my experience is in Java web development, but recently, you know, we've expanded out into programming in many different languages and working on many different problems for different industries. So I really enjoy solving different problems for different industries. You know, it just kind of so happened that my path has gone a meandering way across many different companies. I landed at a startup called Entagen. At Entagen we were building technologies, web applications for life science companies, and that's kind of how I got started in the life sciences space. And at Entagen we were using RDF and Semantic Web technologies to build applications to help companies kind of connect silos of data together. And also, we were trying to build applications that would let a user ask, like, a natural language question and get actual answers back. We actually ended up selling Entagen to Thomson Reuters in 2013, and so that technology continues on there with the Thomson Reuters team. A few of us stayed around for a while, and then we eventually kind of got the band back together and started a new company. So here we are at Vyasa, where again we're solving problems in the life sciences space, but now we're using newer technologies to do it. RDF turned out to be a very rigid and structured way of doing things, which was great for connecting silos of data together, but it's nice to have a more flexible architecture to be able to do dynamic things. And that's where we're starting to apply these deep learning and AI technologies to solve all sorts of problems in the life sciences and healthcare space. 
So we've identified kind of three areas that we're working on. The first is dealing with small compounds, so like chemical structures for drug manufacturers. The second is image analytics, and doing things like image classification. But we're not identifying cats and dogs and cars and bikes, we're doing things like identifying whether a tissue sample that has breast cancer is malignant or benign. And then the third thing we're doing is working on text analytics, and using some of the newer transformer based models to let users do things like deep learning based named entity recognition, or natural language question answering and letting a computer really kind of understand text to be able to do downstream tasks with it.

Justin Grammens  4:21  
Excellent, cool. Sounds like you're applying AI in the hot areas of classification. And you said text. You know, those are two areas where, I guess in the case of the image classification, a surgeon would be doing a lot of this work. Where does the analytics come in, I guess, for the text side?

Mike Hugo  4:36  
Yeah. So on the text front, we started using BERT-based models, so transformer-based models. And what's really exciting about the text analytics space is that for a long time, image analytics has had the capability to do transfer learning. So you get lots and lots of images and you train a base model that kind of understands, generally speaking, how to understand images, and then you can take that and extend that model to apply it to a more fine-tuned task. So if you're going to do an image classification task, you don't have to start from scratch and need, you know, hundreds of millions of images. All you really have to do is find a couple thousand images, maybe, that are representative of the classification task you want to do, and you can leverage the larger model to do the classification based on what it's learned over time. So with transformer-based models for text, we're finally able to do that on a text basis instead of on an image basis. And that has opened up a world of possibilities, because now we can use models that have been trained by larger organizations. Like, Google has published BERT models, and many others have pre-trained models that are available for different downstream tasks. And then we can fine-tune them on a more specific area that we're interested in. So we can fine-tune models that are specific for analyzing the text that's in clinical notes, or for mining literature abstracts in PubMed, you know, publicly available literature articles. So we're able to take basically the big model, the base model, and then tweak it so that it's very targeted and very specific and very good at doing something that's more what we'd like to apply it to. And so that's what we've done. In a lot of these cases, we've built models that work for the life sciences industry. 
Like, we have named entity models that can tag things like drugs, chemical names, diseases. You know, the standard named entity models that you see tag a person, place, company, you know, organization, those sorts of things, which are also very useful. But oftentimes people want to tag something that's outside of that broader domain and find something that's very specific that they're interested in. So that's what we've been working on. And then the extension to that is not only the named entity stuff, but being able to do natural language question answering. So we've built a system that allows users to ask an actual question of the text and let the computer read millions of articles and come back not just with the articles that we might be interested in, but the actual answer to the question that you asked. So if I want to know what gene is involved with a particular disease, the list of answers that I get back is the actual thing that I'm looking for, not just a bunch of papers that I could then go and read. It's an order of magnitude faster. And it's very good at reading the text and pulling out the actual answer that you're interested in.
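[Editor's note: the extractive question-answering idea Mike describes, returning the answer itself rather than a list of matching documents, can be illustrated with a deliberately simplified sketch. A real system like the one discussed would use a fine-tuned BERT-style model to locate answer spans; this toy version just scores candidate sentences by keyword overlap with the question, and the example articles are invented.]

```python
# Toy stand-in for extractive question answering: instead of returning whole
# documents, return the single sentence most likely to contain the answer.
# A production system would score spans with a fine-tuned transformer model;
# here the "model" is simple keyword overlap between question and sentence.

def best_answer(question: str, articles: list[str]) -> str:
    q_words = {w.lower().strip("?.,") for w in question.split()}
    best, best_score = "", -1
    for article in articles:
        for sentence in article.split("."):
            if not sentence.strip():
                continue
            s_words = {w.lower().strip("?.,") for w in sentence.split()}
            score = len(q_words & s_words)  # count of shared keywords
            if score > best_score:
                best, best_score = sentence.strip(), score
    return best

# Hypothetical mini-corpus; a real deployment would read millions of articles.
articles = [
    "BRCA1 is a gene associated with breast cancer. It was mapped in 1990.",
    "Aspirin is a common analgesic. It inhibits COX enzymes.",
]
print(best_answer("Which gene is associated with breast cancer?", articles))
# → BRCA1 is a gene associated with breast cancer
```

The point of the sketch is the shape of the workflow, question in, answer span out, not the scoring function, which a transformer replaces entirely.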

Justin Grammens  7:27  
Nice, cool. When you talk about pulling out the actual answer, I've noticed Google does that now. I mean, they've probably been doing it for a while, but I'll search for something and it will actually drive me right to the article and actually, you know, highlight, okay, here's the answer to the stuff that you're looking for.

Mike Hugo  7:40  
Yeah, absolutely. It's, it's really exciting stuff. And it's neat that these transformer based models are out there, because it allows anybody to train a model like this, right? They don't have to do the whole thing where they train the base model on all of the text in Wikipedia, and all of the websites that they can find and everything else, right, they just find the targeted set of information that they can then use to fine tune the model to do a more specific task.

Justin Grammens  8:04  
Cool. A couple episodes ago, I had a guy on that has a project called AI Dungeon. And he talked about how they use GPT-2 and GPT-3. And so that's more of a model, I think, where it's actually writing responses. But, you know, I don't know, are you guys touching on some of those areas as well?

Mike Hugo  8:20  
Yeah, we're looking at all sorts of different things like that. Like, we've implemented some chatbot sorts of algorithms that do that sort of thing. Primarily, though, we're focused on helping users find the data and pull back the answers. You know, the other really interesting part about this is that when you get into working with applied AI, it's not only about the algorithm, it's also about the data and where the data is and how you get access to the data. So when we actually started Vyasa, we built a whole bunch of algorithms that could do some really cool stuff in the small compound analytics space. And we started showing people these algorithms. We were really excited, and people were excited about it, too. But at the end of the day, they said, that's great, but I don't even know where my data is to begin with. I can't even give you the list of things that I would want to do the analysis on. So that's the other thing that we've really been working on: we've built an architecture called the data fabric that allows us to connect different silos of data together in a simple-to-use web user interface. And that gives us the ability to let the users find the data that they want to use in the algorithms, or, you know, that they want to be able to analyze the data with. That's another key component of this area: not just building the algorithms, but having a mechanism to be able to find the data that you want to use to be able to leverage the really cool technology.

Justin Grammens  9:37  
Cool. Yeah. You mentioned RDF, and the thing that popped into my head was kind of like a GraphQL sort of interface. Is that kind of what you guys are doing? Not sure if it's modeled off of that, but that's very sort of open-ended, like, hey, here are the parameters I want to have come back.

Mike Hugo  9:50  
We've built some dynamic knowledge graphs, which are kind of similar to that. So you can ask a question, we'll give you the answers back in a view that looks like a graph, and then you can kind of start to navigate from that and build out a knowledge graph. You know, in RDF that's very much done upfront. It's structured data that's used to build the graph. And what we're able to do now is do it more on the fly. So as you click on the individual nodes in the graph, you can ask things to further extend the knowledge that you're working with in that view.
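[Editor's note: the on-the-fly graph expansion Mike contrasts with upfront RDF can be sketched in a few lines. Here the neighbor lookup table stands in for whatever backend actually answers "what relates to this node"; the entities and relations are invented for illustration.]

```python
# Toy dynamic knowledge graph: rather than loading a complete, pre-built RDF
# graph up front, neighbors are fetched on demand as the user expands a node.

NEIGHBORS = {  # hypothetical relation store queried per click
    "BRCA1": ["breast cancer", "DNA repair"],
    "breast cancer": ["BRCA1", "tamoxifen"],
    "tamoxifen": ["breast cancer"],
}

def expand(graph: dict, node: str) -> dict:
    """Add `node`'s neighbors to the in-view graph (simulating a click)."""
    graph.setdefault(node, set())
    for neighbor in NEIGHBORS.get(node, []):
        graph[node].add(neighbor)
        graph.setdefault(neighbor, set()).add(node)  # keep edges symmetric
    return graph

view = expand({}, "BRCA1")            # the initial question seeds the view
view = expand(view, "breast cancer")  # clicking a node extends the graph
print(sorted(view))
```

Each click only touches the node being expanded, which is what makes the view feel dynamic rather than a pre-materialized structure.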

Justin Grammens  10:19  
Yeah, cool. Sounds very powerful. And so this is part of the technology that your team has built?

Mike Hugo  10:24  
That's right, yep. We've built a bunch of user interfaces. And we have a core API that we use that is basically the interface into the data and then the interface into the algorithms. And then on top of that, we've built half a dozen different user interfaces for different types of use cases. Maybe, you know, one is really good at question answering, one is really good at analyzing data in more of a spreadsheet format, one is, you know, more targeted toward the image analytics. Under the covers, they're all using the same API that we've built, but it allows people to have a more targeted approach to the use case that they're trying to solve.

Justin Grammens  11:01  
Cool. You mentioned life sciences as a broad term. It sounds like it's medical focused right now, but, yeah, maybe you could define what life sciences means.

Mike Hugo  11:10  
Yeah, so we've seen applicability to this sort of thing across healthcare, so the medical field, and also we've been working a lot with pharmaceutical companies. So there's kind of a broad range there of areas where this technology is applicable, and different things that we've built for specific use cases within that field. So I'll give you another example. In the image analytics space, we work with a pharma company that has a bunch of images from the drug manufacturing process. So when you're actually creating a tablet that's going to be used to deliver a medicine, it starts as a liquid, and then there's a crystallization process. And the crystals that are formed in that process are a certain shape and size. And there's a quality control step, and there's also different properties that can come out of that depending on how the crystals are forming. So oftentimes there's a whole slew of scientists who are looking at these things under microscopes, making sure that the crystals are the correct size, the correct shape, exactly what they want for the medicine to be delivered properly. And we were able to train an algorithm that can look at those slides and say, oh, that's this sort of crystal, and it's about this length, and here's a different example, and classify the crystals that it's seeing in those images. So it's an image classification problem, but it's more of a, you know, directed, targeted sort of application of it. It's not just taking images off the web and trying to assign a simple label to them.

Justin Grammens
Okay, cool. And in most of these cases, these have all been labeled, I guess, or tagged by somebody within the organization, or wherever you guys are finding them from?

Mike Hugo
Exactly right. Oftentimes what we find is there's a manual process that's currently in place that's being used to do the quality control of these images, or the process; really, the images are just a step in the process. That's what we can use to train the algorithm. We take that information, do the standard separation into training, test, and validation sets, and then run it through the algorithm, train a model, and then see where the accuracy comes out.
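[Editor's note: the standard separation into training, validation, and test sets that Mike mentions can be sketched in a few lines. The 80/10/10 ratio below is a common convention, not necessarily what Vyasa uses, and the file names are placeholders.]

```python
import random

# Shuffle labeled examples, then carve out training / validation / test
# splits. A fixed seed keeps the split reproducible between runs.
def split_dataset(examples, train_frac=0.8, val_frac=0.1, seed=42):
    rng = random.Random(seed)
    shuffled = examples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Placeholder labeled data: (image file, class label) pairs.
examples = [(f"crystal_{i}.png", i % 2) for i in range(100)]
train, val, test = split_dataset(examples)
print(len(train), len(val), len(test))  # → 80 10 10
```

The model is fit on `train`, tuned against `val`, and the held-out `test` set gives the accuracy number Mike refers to at the end.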

Justin Grammens  13:15  
Good. The people that have to do this, are they highly experienced? Or are you able to, you know, Mechanical Turk it out to various people? I mean, it probably depends on the domain expertise, I guess.

Mike Hugo  13:26  
Yeah. And it also depends a little bit on the data, because oftentimes in this space that information is considered very proprietary, and so you can't let it leave the firewall, right? It has to be internal to the company that you're working with. But there are other areas where there's a lot of public data. For instance, ClinicalTrials.gov publishes information about public trials happening in the United States. And they include protocol documents, large PDF documents that explain the statistical analysis plan and how they're going to run the study and what the outcomes are and what the patient population is expected to be, those sorts of things. We've been able to do some analysis of that public information and then leverage that to train a downstream model to do things like extracting information from PDF documents. So there are certain areas where we can leverage something like a Mechanical Turk, or be able to use more publicly available data, to facilitate the training of a model that's used on something else, maybe on data that's hidden away behind a firewall.

Justin Grammens  14:29  
Yeah, awesome. Well, one of the questions I like to ask people is, how do you define AI? It's such a broad topic. It's got a lot of meanings. I guess, you know, everyone's got their own perspective on it. Do you have a short, simple definition if people ask you what you do?

Mike Hugo  14:43  
It's a great question. It's something that everybody has a different kind of perspective on. The way we think of it is just kind of an evolution of machine learning, right? Like, when we started doing machine learning and data science, you're identifying features and you're telling the computer, these are the kinds of things I'm interested in, and these are the things to look for. That gets you to a point where you could classify that image as X or Y. The evolution that we have with deep learning is that now a human doesn't necessarily have to tell the computer what those features are. We can now basically just give the computer a bunch of examples and say, this is X, this is Y, you go figure out what's different about them and what defines what they are. That means it's a lot more efficient, because we don't have to figure out what those things are up front; we can let the computer do the heavy lifting there. And one of the reasons that this is now possible is also because of the GPUs available for training like this, right? The compute power has just exploded. And that capability now is what is giving us this ability to let the computer figure it out for us. So I think that's one of the main things we talk about. I think, you know, the other area there is that there's also a bunch of open source tools that are now available, like TensorFlow or PyTorch. There are frameworks in place that give people a jumpstart. So we have the data available; we went through the Big Data phase, so now we know how to deal with lots and lots of data. Now there's the GPU capability to do the compute. 
And then the frameworks that allow people to kind of figure that stuff out, without having to build a data science model from scratch, are really what has made deep learning and AI kind of more possible. And it's really just another tool. We're not working towards generalized AI, right? What we're doing is we're finding places where these algorithms are very, very good at doing something that a human used to do. But it's a task, something that maybe was a manual process before, and people are very good at it. That usually means the deep learning stuff can also be very good at it. But people get tired, you know. Computers are really good at reading, really good at analyzing lots and lots of images, without getting tired. And then that lets the people focus more on, you know, maybe the edge cases, or the things that take more synthesis or more generalized analysis to pull out an answer on the other side.

Justin Grammens  17:07  
Yeah, for sure. Are you feeling or getting any sense of pushback from people, like, "you're taking my job," when you guys are building these solutions out there?

Mike Hugo  17:17  
We haven't seen that, necessarily. I think that's because oftentimes the things that we're building these algorithms for are not the fun part of somebody's job. It's reading hundreds of pages of PDF documents and extracting one little piece of information, or looking at thousands of images to identify what shape a crystal is. And those are things that we can do, but they're not the fun parts of anybody's job, necessarily. So I think, you know, people are excited about the efficiency. They're excited about automating things that can be automated so that they can do the more exciting and interesting things.

Justin Grammens  17:52  
Yeah, for sure. It probably depends, of course, on the applications, and maybe the level at which you're talking to people. I guess if people have the ability to now sort of level up, I guess, use their domain expertise in other areas within the organization, you're right, there's a huge value that can come out. But if they don't... you know, you could go back to the self-driving truck example. It's funny, I've heard from other people that even truck drivers don't want to be the ones driving the truck. Like, they would love to just push the button and then actually be more interacting with the customers, or working on actually doing the manual delivery at the end. And so there is always that fear that, oh my gosh, the machines are coming to take my job in some shape or form.

Mike Hugo  18:30  
Yeah, totally.

Justin Grammens  18:32  
So I'm curious what a day in the life is of, you know, a person in your role. Obviously, you guys are a pretty small company. You're the CTO and co-founder. But, yeah, can you let us know? Kind of, how deep into the code are you? Are you thinking more strategically? You're probably doing everything, I don't know. Just kind of curious if you could share that with our listeners.

Mike Hugo  18:49  
Yeah, absolutely. This is one of the fun things that I really love about working at small companies: you get to wear many different hats. And every day is different, every week is different. We have kind of general goals that we're working towards, and things we want to build the software to do, but we're constantly being responsive to our users and trying to figure out what use cases our software is good at solving and pushing that out. So, you know, my morning usually starts with diving into some code while it's a little quiet. It could be something in the data science world, or it could be something just in our RESTful API that's, you know, returning different properties or something like that. And then, you know, we start talking to members of the team and clients, and the day quickly evolves as things come up. But that's what I love. I mean, I think there's a certain amount of excitement, a certain ability to be responsive quickly and to build cool things, that makes working at a startup more fun than working in a larger organization. So yeah, every day is different. Luckily, I do get to spend some time in code. And I get to spend some time talking with clients, and I get to spend some time working with the team. I think all three of those things are really important to me and a really fun part of my job.

Justin Grammens  20:04  
Excellent, very cool. When it comes to your guys's technology, obviously, you know, things are changing and evolving quickly. Where do you see this going in 10 years? 20 years? Are you guys even thinking that far out? But, I mean, you know, push it to the extreme: any feeling in regards to where this AI roller coaster will take us all?

Mike Hugo  20:24  
That's a very big unknown, you know. Like, who knows what somebody would have said 10 years ago about where we are now. But I do think that there are some really exciting advancements, especially in the text analytics space, for computers to be able to read and understand the context of the text. You know, this is the other big advancement with the transformer-based models: before, you could have two sentences where one said there was bark on the tree, and you had another sentence that said that the dog barked at the tree, and the computer would tag "bark" in both sentences as the same thing, right? Because it didn't have the context. But now with the new models, we're able to understand more of the context around a given thing in the text. And so I think what's going to happen is we're going to see an explosion of text-based analytics algorithms, like we did in the image space, and that's going to enable all sorts of new things, whether that be better document summarization, or like the GPT stuff, where they're actually generating whole news articles. The deepfake stuff is kind of interesting, you know, seeing the videos that people are making that look really real, but they're not. So, I don't know, I think there's a lot of stuff happening in this space right now. And I think the key point for me is that it's not all hype. There's actually some rubber that's meeting the road here. And there are some really cool things that are happening that are going to change the way that computers are able to analyze large amounts of data and come back with really good answers to things.

Justin Grammens  22:01  
Yeah, you made me think about the Turing test. When will I be able to have a conversation with somebody and not realize that it's a human, or that it's not a human, you know, one way or the other? And we're still a little far off from that in regards to generalized AI, but it's changing every day, right? I feel like it's getting pushed closer and closer to that every day.

Mike Hugo  22:19  
Yeah, absolutely.

Justin Grammens  22:20  
We talked a lot about AI and your career and what you're doing professionally. What are some things that you enjoy doing just in your personal life? You know, do you have any hobbies, other stuff you're interested in?

Mike Hugo  22:31  
Yeah, absolutely. So I try to get outside as much as I can, going fishing and getting up to the lake, getting outside and enjoying the water, or ice skating in the winter. We also have a lot of reading happening in my house right now to pass the time during the pandemic. I do some technical reading, but I also try to get outside of that and just read something that's not related to work at all, and set some time aside for that. Otherwise, I have a tendency to dive in so deep that I'm constantly just, you know, thinking about a problem. And it's good to step aside from that every once in a while and just take a break. Let the subconscious work on it.

Justin Grammens  23:09  
Yeah. Do you have a favorite book?

Mike Hugo  23:11  
Right now I'm reading a book that I've read many, many times over. It used to be on my dad's shelf. It's called The Eagle Has Landed, by Jack Higgins. It's kind of a World War II historical-fiction-ish book. And it's just something that's easy to read, because I've read it before, and it kind of takes you away to a faraway place in the past. And it's nice to have something that's not related to computers at all, to let your mind kind of simmer on something else.

Justin Grammens  23:36  
Yeah, thanks for sharing that. There will be liner notes with this episode, so people can find that book, and we'll link it off to that along with all your other information. And one thing on a personal note, too: I mean, if there was anybody, or any superhero, or any character you could be for one day, do you have any feedback on that?

Mike Hugo  23:54  
Oh, that's a great question. I'm not so much into the superheroes. My kids really are into dinosaurs. So in particular, they're fans of the Pteranodon. The Pteranodon used to fly around, and so I think that would be kind of fun to do. Being a big dinosaur that could fly around the world might be something fun.

Justin Grammens  24:13  
Very good. Well, so you've talked a little bit about your path, how you've gone from programmer to, now, deep into data. Was there something along the way that made you more and more curious about data? I mean, it's certainly a hot thing these days; you probably got in maybe earlier than some of the other people have. But did you always sort of see yourself being interested in the space, or did it kind of evolve over time?

Mike Hugo  24:35  
Oh, it's definitely been an evolution. You know, I think starting out as a computer programmer, particularly in the dot-com era, building websites, it was really fun to build something that you could share with the world very easily, right? And so I've always enjoyed doing that sort of thing. And data is just an evolution of that. So now we're not just building an HTML form that maybe does something when you hit a submit button; now we're actually able to do really cool things with the data. I think the other thing is that when I started doing more data analytics, with things like GitHub and, you know, online courses, it's really easy to get your feet wet and just try something. So I remember when we were first doing image analytics, I found a bunch of images on the web of counterfeit drugs, like tablets that were counterfeit, or boxes that had counterfeit markings on them. And I found a little tutorial on how to train an image classification model in TensorFlow. And, you know, in an afternoon, I was able to train a classifier to look at images off of a website and determine whether something was a counterfeit box or a real box from the manufacturer. And that's just cool. Like, it's just a random idea that you come up with, and you're able to find example code or somebody who's played around with it before and do something fun with it that could be practical. That's really exciting. Yeah, very cool.

Justin Grammens  26:05  
Yeah. You know, you've talked about all these tools. I mean, you talked about GPUs, you talked about getting a lot of data, and you talked about, I think, probably leveling up, right? So we're not essentially writing what would be termed as, you know, assembly programming anymore, right? Now you can just use TensorFlow; as long as you have the data, you can sort of just throw it in. Obviously, there are a lot of knobs and tweaks and all those sorts of dials you can turn a little bit to make it better. But in general, it's become easier and easier for anybody to jump in. And my background has been a lot, historically, in the Internet of Things, right? So I've been working with Arduino and Raspberry Pi for the past decade, you know, doing stuff like that. It's getting to a point now where, yeah, there's so much stuff that you can just get off the shelf: off-the-shelf hardware sensors, you can put them out anywhere, and you can start getting data. And this is the beauty of where my head is at now. You know, getting the data, I have the capabilities to do that, but the data analysis is where the true business value is at the end of the day. And where my head is sort of going with this is, you know, is there a point where it's almost, you know, too easy? Is it becoming commoditized in a lot of ways? I don't know. Because anybody can do it, does it become trivial? Are you seeing that at all?

Mike Hugo  27:16  
Well, I see that to a certain extent. I think, you know, the democratization of software is a good thing. I think the ability for lots of people to be able to train these things is a good thing. I think what happens then is that you're able to find the problem that you want to solve with them, right? And that's not necessarily going to be something that's really broadly applicable. It might be an itch that you want to scratch. And I think that's a really cool thing there: you don't have to have a super huge common problem that you want to solve. With the ability for anyone to train something like this, you can come up with something that's valuable for whatever you want to do and build a solution for that.

Justin Grammens  28:01  
Yeah, for sure. As I was thinking back on my question and your response, the other thing that came to mind was, you mentioned building something in an afternoon, and it's the same thing for me: I can put a sensor out and get data readings pretty quickly. But then there's the whole productionization of that, if that's even a word. It's one thing to do something in a weekend, but to bring it to what you guys are doing today, that's a whole new level.

Mike Hugo  28:25  
Yeah, you're absolutely right, because you can take somebody's example off GitHub, hit a button, run it in Colab, and say, hey, I got this really cool result. But that isn't an application yet. That's maybe the core piece of something that will become an application, but there's some other glue that has to happen to make that a reality. So, you know, finding a way to build a model, finding a way to serve a model, putting it into something like TensorFlow Serving so that you can do real-time serving. Or maybe you do it in batch. It really depends on what your use case is. But finding ways of making those things accessible to an application, so that you can do something with it in a mobile app or on a website, becomes another aspect of it. It's not just the model, right? It's the glue that holds it all together. And that whole piece of it also makes it valuable.
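As a small concrete example of the "glue" Mike is describing: TensorFlow Serving watches a base directory containing numbered version subdirectories (`my_model/1/`, `my_model/2/`, ...) and by default serves the highest-numbered version. This stdlib-only sketch mimics that selection rule for local tooling; the directory names are illustrative.

```python
from pathlib import Path

def latest_model_version(model_dir: str) -> int:
    """Return the highest numeric version subdirectory under model_dir.

    TensorFlow Serving lays models out as <base>/<version>/ and serves
    the largest version number by default; this helper applies the same
    rule, ignoring non-numeric entries like 'checkpoint'.
    """
    versions = [int(p.name) for p in Path(model_dir).iterdir()
                if p.is_dir() and p.name.isdigit()]
    if not versions:
        raise FileNotFoundError(f"no numeric version dirs in {model_dir}")
    return max(versions)
```

For example, a directory containing `1/`, `3/`, `2/`, and a stray `checkpoint/` folder resolves to version 3, matching what a serving process would pick up.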

Justin Grammens  29:15  
Yeah. You mentioned TensorFlow Serving. I actually wrote a blog post about that a couple of weeks ago on our Lab651 website. Are you guys using that internally, if you don't mind me asking?

Mike Hugo  29:25  
Yeah, we use TensorFlow Serving. We also use some really cool tools from Nvidia. They have something called the Triton Inference Server, which is optimized for using multiple GPUs; it's a variation on the TensorFlow Serving idea. They also have some really good algorithms for doing things like conversational AI, and they have tools for training models in a bunch of different frameworks, like PyTorch or TensorFlow, that are optimized to work on Nvidia GPUs, of course. We're an Nvidia Inception program partner, so we get early access to some of those things. For example, they've built a model called BioMegatron, which is a spin on Megatron that's more specific to the biomedical space. At any rate, there are some really good tools that Nvidia puts out there to help with that sort of thing. Or you can just go with plain vanilla TensorFlow Serving, depending on your needs and performance requirements.

Justin Grammens  30:20  
Yeah. I mean, my blog post kind of started off with the idea that, you know, you and I are software developers, and it's really APIs and services that are the foundation of the internet. For me, when I started looking at Colab notebooks and running them, I thought, that's cool, fine, but how do I get this thing on the internet? I want to make a call to it. So the beauty of TensorFlow Serving was, hey, there's a REST API for you to use. But again, you need to drop the model in a certain directory, you can version it, stuff like that, and that's kind of left up to you. You have to do all the infrastructure on the back end. So they've given you a pretty cool piece of software, but how to get it into production and manage these models is still up to you, right?
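For readers who haven't seen it, the REST call Justin is describing looks roughly like this. TensorFlow Serving exposes a predict endpoint on port 8501 by default; the model name and input values below are made up for illustration, and this sketch only builds the request rather than sending it.

```python
import json

def predict_request(host, model, instances, version=None):
    """Build the URL and JSON body for a TensorFlow Serving :predict call.

    TF Serving's REST API accepts POSTs to
    /v1/models/<name>[/versions/<n>]:predict with a body of the form
    {"instances": [...]}. Omitting the version targets the latest one.
    """
    version_part = f"/versions/{version}" if version is not None else ""
    url = f"http://{host}:8501/v1/models/{model}{version_part}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = predict_request("localhost", "sentiment", [[0.1, 0.2]], version=3)
# url  -> http://localhost:8501/v1/models/sentiment/versions/3:predict
# body -> {"instances": [[0.1, 0.2]]}
```

Sending `body` as a POST to `url` (with any HTTP client) is all a mobile app or website needs to do to call the model, which is the "get it on the internet" piece Justin is after.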

Mike Hugo  31:01  
Absolutely. And then there's more you can do with it downstream. Like, if you integrate an active learning loop, or something that takes feedback from a user, you present them with two options, they pick yes or no, and then you capture that information. That's gold, right? Because now you have not only the data that you used to train the model in the beginning, but user feedback that you can incorporate back into the model and retrain it. But you're right, you have to build that; you have to make that part of your application. And it's just another aspect of, like we said before, the deep learning and AI stuff is just a tool. You still have to build the things around that tool to make it valuable to someone.
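The feedback-capture step Mike describes can be as simple as appending each user verdict to a log that later gets merged into the training set. This is a minimal stdlib sketch; the file format and field names are illustrative rather than from any particular framework.

```python
import json
from datetime import datetime, timezone

def record_feedback(path, model_input, prediction, accepted):
    """Append one user yes/no judgment to a JSONL file.

    Each line pairs the model's input and prediction with the user's
    verdict, so the file can later be folded back into the training
    data when retraining the model.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "input": model_input,
        "prediction": prediction,
        "accepted": accepted,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

A real application would call this from the yes/no handler in its UI; the append-only JSONL format makes it cheap to collect feedback continuously and batch-process it at retraining time.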

Justin Grammens  31:43  
Yeah. Are you a fan of TensorFlow? PyTorch? Both?

Mike Hugo  31:48  
Yeah, we use both. We have a lot of stuff in TensorFlow, but we also use PyTorch, so we can do things with either one, or Keras. There are so many, right? And really, what it comes down to is, if you find an example, maybe in an open source project, that's written one way or the other, you can leverage that and have it be the starting point of something. I think we've been leaning towards TensorFlow lately, but we have stuff that runs the gamut.

Justin Grammens  32:16  
Yeah, usually it's the community that drives adoption. Once you get that critical mass going, everyone says, okay, we'll go this way. We're all sheep in some ways. But like you said, it's kind of the path of least resistance, right? If I can use an open source library that's written one way or the other, why reinvent it? I'll just use what's out there. When it comes to learning and knowledge, it seems like you've picked up a lot of this stuff along the way. Are there any suggestions on books at all? Or courses? Conferences used to be a thing before COVID, who knows? Are there certain places you would suggest people explore or take a look at if they want to get into this field?

Mike Hugo  32:54  
Yeah, for me, I'm a hands-on person, so I really like to try something. Finding an example of something that I want to build, whether it's an open source project on GitHub or a Colab notebook that somebody has written, and just trying to spin it up and run it, that's kind of the bent for me; it's the way that I've learned a lot of this stuff along the way. I know there are some online courses you can take, and sure, there are some great books out there, but I really enjoy actually diving in and trying things out. And there are so many resources to do that, so I think that's an excellent way to get started: find something that you want to build. Somebody has done it before, guaranteed. Search GitHub for it, you'll find an example. Check out the code and try to run it, it won't work, and then you have to figure it out. And as you do that, you'll really learn the ins and outs of how to do something.

Justin Grammens  33:50  
Yeah, very good. Good idea. So how do people reach out and connect with you?

Mike Hugo  33:53  
Yeah, so you can find me on LinkedIn; we put my email out there, too, that's fine. You can visit our [email protected]. Those are some pretty good ways to track me down.

Justin Grammens  34:03  
All right, well, cool. Is there anything else that I might have missed that you wanted to touch on, any topics related to artificial intelligence or machine learning, Mike?

Mike Hugo  34:12  
No, I think, you know, it's an exciting area of development. It's becoming something that anybody can pick up and learn, and I'd encourage people, especially in the data science field, or even just software programmers, to take a look at it and see if it might be something that could help with what they're building, because I think there's going to be a lot of this stuff in the near-term future, and it's useful to wrap your head around how it works and what it does, for sure.

Justin Grammens  34:38  
Yeah. You know, that's the one thing about why I call this the Applied AI podcast: it's really not about the technology. Like you said, it's just a tool, and it's the applications that matter, and some businesses aren't really seeing that yet. I feel the same way about Internet of Things; people don't really understand that you can put a sensor on a piece of whatever it is out in the field and start getting more information on it. A lot of it is just that technology sometimes takes a decade to finally go through the hype cycle adoption curve, if you've ever seen that one from Gartner. So while there are a lot of expectations around AI, you're right, we're seeing some real meat coming out of it, some really awesome use cases, and businesses just need to be aware and sometimes kind of be led to water, I guess, with regard to what's coming out.

Mike Hugo  35:25  
Absolutely.

Justin Grammens  35:27  
Cool. Well, great, Mike, thank you again for your time. I appreciate it and look forward to catching up with you later.

Mike Hugo  35:32  
Alright, thanks for having me on.

AI Announcer  35:34  
You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at applied ai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at applied ai.mn if you're interested in participating in a future episode. Thank you for listening.

Transcribed by https://otter.ai