Conversations on Applied AI - Stories from Experts in Artificial Intelligence

Jacob Solawetz - DataFlows and Deep Learning Applications

February 15, 2022 Justin Grammens Season 2 Episode 2

The conversation this week is with Jacob Solawetz. Jacob is a Machine Learning Lead at Roboflow, where he's developing and scaling computer vision using AutoML model training and deployment. Prior to Roboflow he was a Senior Quantitative Associate Analyst at Travelers. Jacob holds a bachelor's degree from Washington University in St. Louis, where he majored in Mathematics, Economics, Philosophy, and Computer Science. That's quite a range of things to be studying. I love it! Welcome, Jacob, and thank you for being on the program.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

Enjoy!
Your host,
Justin Grammens

Jacob Solawetz  0:00  
But I think there's like some kind of super general approach that we could start taking. Where, like, you could imagine, like, right now you have transformer blocks in computer vision and in NLP, a network architecture that's seemingly able to model everything. You can see a world in which, you know, research gets better at solving, like, the catastrophic forgetting problem, and you can start to store all this stuff up in one shared set of weights that can be modulated for different use cases and stuff. And so then it just gets more and more general. But I think there's always people who are, like, applying it. I think that's one thing that's different about these GPTs and stuff, is that it's still very much an input-output thing, with a developer integrating it with something if it's actually getting used. And I think that's just going to be how it always is, where there's developers that are integrating dataflows. I just don't see something that's, like, coding itself, you know, performing actions on its own initiative.

AI Announcer  0:59  
Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.

Justin Grammens  1:30  
Welcome everyone to the Conversations on Applied AI Podcast. Today we're talking with Jacob Solawetz. Jacob is a machine learning lead at Roboflow, where he's developing and scaling computer vision using AutoML model training and deployment. Prior to Roboflow, he was a senior quantitative associate analyst at Travelers. He also holds a bachelor's degree from Washington University in St. Louis, where he majored in mathematics, economics, philosophy and computer science. And that's quite a range of things to be studying. So very impressive. I love it. Welcome, Jacob, and thank you for being on the program. Yeah, thanks, Justin. I'm really excited to be here. Yeah, pretty wide range of academic interests, but I promise they've all been relevant today. Good. Yeah, I went to Augsburg College, a liberal arts college here in the Twin Cities, and so I love just sort of this full range of education, because, yeah, there's definitely a lot of overlap. And today, I'm super excited as well, because I'm very interested in computer vision and its applications in all sorts of areas in the real world. And I have a startup that I co-founded. It's called Captivation, and we're using audio and video to help people become better communicators using online presentation tools, right? So getting a chance to see not only how people's facial expressions work, but also how their voice and speech work, to improve their presentation skills. So I'm very interested in what you've been doing at Roboflow, you know, specifically — I know you guys are deep into this space — but also in your career. So maybe give a little bit of short background on how you got to where you are today.

Jacob Solawetz  2:54  
Yeah, certainly. Like Justin said, I studied math and econ at WashU, and I really got interested in quant trading. I started to write some algorithms for trading in the markets, so I was across various asset classes, and bonds and stocks were kind of where I started to focus and kind of where I started my career off. And then through the process of learning quantitative finance, I really got interested in the machine learning side of things. So by the end of my college career, I knew I kind of wanted to shoot towards an AI PhD. As I was working on quant strategies, I was also doing AI research on my nights and weekends. That then led me into an NLP PhD, where I was working on chatbot research and designing conversational dialogue systems. You know, it was actually pretty similar to what Roboflow is today, where we had some kind of platform with ensembles of neural networks underneath, and then you could kind of load in custom data, train from there, and kind of fine-tune into a particular domain. So the way that it works with chatbots is the exact same way that it works with computer vision models. And so I got some good exposure there, and had an opportunity to join two of my friends early on in the Roboflow venture, and I took the opportunity. Ever since, the rest has been history, and I've been focusing on computer vision models now.

Justin Grammens  4:13  
That's awesome. 

Jacob Solawetz  4:14  
Yeah, and if you check out Roboflow — maybe you can give us a little bit more details on exactly what you guys are doing, but it feels like, like you said, this ensemble sort of training machine, right? You can feed it a bunch of images in particular and sort of classify them, and then it will start using all that data to start making predictions as new images are fed to it. Yeah, that's right. So I guess a quick summary of what the Roboflow platform is: it's a place where you can load in image data and annotate that data. Traditionally, the way that we find most users getting value from that is object detection — so just kind of labeling bounding boxes around the objects that you want to detect. And then you can create dataset versions from those annotations, and you can try different things like different augmentations and preprocessing, which edit the way that your dataset is formulated — the way that images are formed before you pass them into training. And then you can kick off training on the backend and get inference back on a hosted API. So you can kind of go through this whole process again and again easily as you're adding images. And so the biggest kind of philosophy of Roboflow, that I think is true in the AI landscape at large today, is that datasets are mutable. If you're doing anything in production, if you want to be debugging your machine learning models, you're going to be iterating on your dataset and continuing to publish new versions of your model, really fine-tuning it to the domain that your model is working on. So that seems to have played out as a need developers and machine learning scientists have. And then the other side of things is that Roboflow has been expanding the horizon of people who can actually use computer vision and deploy it into their applications.

So you no longer need to have a PhD in computer vision to start putting together some of these models, because you can just use a service like ours to sit on the shoulders of giants and use some of the best open source research that is out there.
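The dataset-version idea Jacob describes — snapshot your annotated images with preprocessing and augmentations baked in, retrain, repeat — can be sketched in miniature like this. Everything here is illustrative (the function names and the tiny list-of-lists "images" are invented for the example), not Roboflow's actual API:

```python
# Toy sketch of a dataset "version": preprocess every image, optionally add
# augmented copies, and treat the result as a frozen snapshot you train on.

def resize(image, size):
    """Nearest-neighbor resize of an image given as a list of pixel rows."""
    h, w = len(image), len(image[0])
    return [[image[int(r * h / size)][int(c * w / size)]
             for c in range(size)] for r in range(size)]

def hflip(image):
    """Horizontal-flip augmentation."""
    return [list(reversed(row)) for row in image]

def make_version(images, size=4, augment=True):
    """Snapshot a dataset: resize everything, then append flipped copies."""
    version = [resize(img, size) for img in images]
    if augment:
        version += [hflip(img) for img in version]
    return version

raw = [[[1, 2], [3, 4]]]            # one tiny 2x2 "image"
v1 = make_version(raw, size=2)      # version 1 of the dataset
print(len(v1))                      # 2: the original plus its flipped copy
```

Each call to `make_version` with different settings would yield a new, reproducible version — which is what makes iterating on the dataset (rather than only on the model) practical.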

Justin Grammens  6:02  
That's awesome. Well, I think I saw that you were employee number one, I guess, when you mentioned early days, yeah.

Jacob Solawetz  6:07  
So I was employee number one. Technically, I don't know what my title was, but I think it was like technical generalist or something. I did a lot of things, like marketing and customer success, as well as, you know, engineering and kind of working on the product side of things, which is now my primary focus. So now I'm abstracted a few layers away — I still go on sales calls and things, but yeah, a little bit deeper down with the leads. Sure, you've got to pitch in everywhere you can. I've started a number of different companies, and yeah, people wear all sorts of different hats. You had mentioned working on, like, a PhD in AI, right? So what sort of got you away from, I guess, the research side into a business? Yeah, definitely. So I mean, when I was shooting towards an AI PhD, I always had the idea of starting an AI company afterwards. And so when I started my PhD, it was at Michigan, and I was working on a company called Plink at the same time. So I always kind of had one foot in the startup world, if that makes sense. And then, you know, when Coronavirus hit, my lab went entirely virtual — I actually don't know if they've met up since — and a good friend from home, who had gotten accepted into Y Combinator, was wondering if I wanted to move in and start working on this area together. It seemed like a better option, at least in kind of the micro lens of my life; you know, in a macro lens, I was still headed towards a PhD in AI research. Gotcha. Is it something you think you might go back to someday? Or do you feel like you're getting so much information and so much knowledge right now just being out and submerged in it? Yeah, definitely. I kind of think of my career a little bit as, like, a gradient descent towards AGI in some sense — you know, you kind of take each step that gets you closer, and I think the work keeps getting more general.
But there's a lot of things about this industry now that I'm finding a ton of value in. One is keeping the problems real, and then also this kind of feeling of having a user base that's pulling things out of you, and kind of trying to think about ideas larger than your own head. The other thing is there's a lot of problems with the academic research world as it stands in AI, and I'd be happy to get into that. But yeah, I mean, I think places like Roboflow are a very interesting place to get exposure to a career in AI. But also there's things like OpenAI, too, where there's these kinds of really cool initiatives forming where you can get alternative exposure and build. There's really not that many professorships out there, either — that's the other thing, which people listening to this podcast, I'm sure, will be thinking about as well. Sure. So yeah, you have all these images. I guess when I think about it, it's the tagging and labeling of them that can be a monumental task for some of these companies. Is that true? Like, what are some of the barriers as people start using Roboflow? Yeah, labeling a really large dataset can be a really slow thing. There are different approaches you can take on the labeling side. There's automated labeling: when you train a model, you can use the model to label and then just kind of edit from there, which lets you get kind of more intimate with your model, because you have to watch it and correct it — but it also speeds up labeling. And then there's other automated solutions like Amazon SageMaker, and you can use things like Scale AI, which is a really good mass-labeling platform. And the solution there, too, for us in most cases, is if we don't do something extremely well, we just integrate with people — so you can load Scale annotations into Roboflow.
The other side of it is a lot of just, like, problem scoping. A lot of problems now are tractable with pretty small datasets. So if you can scope your problem, chunk it down, and kind of limit the domain that your model has to span, you can limit dataset sizes. But it's always different for every problem, and more data is always better. Sure.
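The model-assisted labeling loop Jacob mentions — use a trained model to pre-label, accept the confident predictions, and route the rest to a human — can be sketched like this. The `fake_predict` stub, its hard-coded outputs, and the 0.8 threshold are all made-up stand-ins, not any platform's real API:

```python
# Sketch of model-assisted labeling: keep high-confidence predictions as
# pre-labels, flag low-confidence images for human review.

def fake_predict(image_id):
    """Stand-in for a trained detector returning (label, confidence) pairs."""
    preds = {"img1": [("dog", 0.92)], "img2": [("cat", 0.41)]}
    return preds.get(image_id, [])

def assisted_labels(image_ids, threshold=0.8):
    auto, review = {}, []
    for image_id in image_ids:
        confident = [(lbl, c) for lbl, c in fake_predict(image_id)
                     if c >= threshold]
        if confident:
            auto[image_id] = confident   # accept as a pre-label to be edited
        else:
            review.append(image_id)      # send to a human annotator
    return auto, review

auto, review = assisted_labels(["img1", "img2"])
print(sorted(auto), review)  # ['img1'] ['img2']
```

The payoff is exactly what's described above: the human spends time correcting the model's weak spots rather than drawing every box from scratch.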

Justin Grammens  9:46  
Do you guys do anything with, like, transfer learning? Like, can I bring in a model from something else and apply it to a different business use case?

Yeah. So this is actually kind of a cool transition to talk a little bit about maybe Roboflow Universe as well. But yes, checkpoint training is supported. So if you have a version that you started with, you can take that version and start a new thing. So maybe if you wanted to detect people, it might be good to start from the COCO checkpoint, which is a big open source dataset that Microsoft released — it has a lot of people in it, so you already start from a pretty good base if you start from that model. Now, you can't necessarily import your model in from elsewhere, because, as you can imagine from our side, it'd be very difficult to get all these frameworks and different checkpoints to play with each other. So it kind of has to be training within Roboflow to do the checkpointing. But now, the other interesting thing that we're working on is this Roboflow Universe concept, which is a public repository of datasets. So now you can come into our application, and you can make a dataset public and use a lot of our features for free if you're willing to make your data public. And we found kind of a nice bifurcation between the communities that use our platform: there's a lot of students and hobbyists who are completely open to that idea, and then enterprise clients need their data private, and so they've pretty much self-selected themselves into that private area. But the cool thing about that is that we're accumulating a massive, massive dataset of datasets — you know, it's got over 20,000 different datasets, and millions of images or something like that now. So you can start from all those public checkpoints, too. Also, another one of our theses is that a lot of these models are going to be redundant, so you won't even have to make your own model.
So there's, like, a playing cards model on Roboflow Universe where people are inferring against it who didn't even train it. And that's kind of the end goal we want people to get to, where developers can just pick up an API that they want to use. If it's detecting playing cards, then maybe they can just pick that API up — they don't even have to build a pipeline anymore; you just abstract the whole thing away. I think that's really going to help advance, I guess, the power of AI. And it's kind of the beauty of open source just in general, right? Everyone's been able to, as you said, stand on the shoulders of giants, and people build these models using data that I'd never have access to, for example. Then we can do some really cool things together.
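Picking up a hosted detection API like the playing-cards example usually means consuming a JSON response of predictions. The response shape below is a generic, invented detection format for illustration — not necessarily what Roboflow Universe actually returns:

```python
import json

# A hypothetical detection-API response for one image, and a helper that
# extracts the detected classes above a confidence cutoff.
response_text = json.dumps({
    "predictions": [
        {"class": "ace_of_spades", "confidence": 0.91,
         "x": 120, "y": 80, "width": 40, "height": 60},
        {"class": "king_of_hearts", "confidence": 0.33,
         "x": 300, "y": 90, "width": 42, "height": 61},
    ]
})

def confident_classes(text, min_conf=0.5):
    """Parse the response and keep classes above the confidence threshold."""
    payload = json.loads(text)
    return [p["class"] for p in payload["predictions"]
            if p["confidence"] >= min_conf]

print(confident_classes(response_text))  # ['ace_of_spades']
```

The point is that, from the developer's side, consuming such a model is just an HTTP call plus a few lines of parsing — no training pipeline at all.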

And, you know, there's a lot of compute that goes into building these models, too, right? Like, why would you have to, or want to — there's just a lot of energy, actually just electricity, that goes into making these models. So why reinvent the wheel on that, right?

Jacob Solawetz  12:19  
Yeah, certainly, no need to spin GPUs unnecessarily. Have you heard of Teachable Machine? Okay, it is a sort of thing where you can actually train on pictures of various things. And, you know, you can send in pictures of cats and pictures of dogs, and then you can show it a picture of a cat or a dog and it will go ahead and show you what's there. And you can actually export your model then, into, like, a TensorFlow model that you can apply somewhere else. Are you guys able to — again, kind of diving a little bit more into Roboflow, but we can get on to other things, I'm just more or less curious — can you export your model at all out of this stuff? Yeah. So Teachable Machine seems pretty similar to what we're doing; we do image classification too. Exporting is kind of always a little bit of a sticky topic — or a long story. If you allow people to export their model, they can sometimes just kind of run away from the platform, or they figure out how to train a similar one and then they just run away. And we're trying to kind of sell it more as a bundled package. So a lot of the deployment offerings are actually abstracted — they're wrapped in Docker containers that you can stand up. Because the thing there, too, is then you have all the installs as well, and you can have just a little server that spins up that runs your model for you. And so you get completely abstracted away. Because if you just get a weights file, you still need to build TensorFlow, you still need to deploy a server, you maybe need to make your own Docker service surrounding it. And so we try to just kind of give you that whole thing, and then you can just have a full server that you license to run your model.
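The "bundled server" idea — shipping a small service that owns both weight loading and inference, rather than a raw weights file — can be sketched as a plain class you'd put behind any HTTP framework inside a container. Everything here is hypothetical: the toy dot-product "model" stands in for a real framework build plus weight load:

```python
# Sketch of a bundled inference server: the container owns loading the
# weights and exposing one inference entry point, so the consumer never
# touches the framework or the weights file directly.

class ModelServer:
    def __init__(self, weights):
        # In a real container this step would install the framework and
        # load the trained checkpoint; here, "weights" is just a list.
        self.weights = weights

    def infer(self, pixels):
        # Toy "model": a dot product of the input with the loaded weights.
        score = sum(w * p for w, p in zip(self.weights, pixels))
        return {"label": "object" if score > 0 else "background",
                "score": score}

server = ModelServer(weights=[0.5, -0.25, 1.0])
print(server.infer([2, 4, 1]))  # score 0.5*2 - 0.25*4 + 1.0*1 = 1.0 -> "object"
```

Handing users this whole unit, instead of a weights file, is exactly the trade-off described above: less portability, but zero setup on the consumer's side.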

Justin Grammens  13:47  
Sure, sure. And like you said, these models need to iterate; they need to grow over time. And Teachable Machine is really something, I think, for people to see the power of what you can do with just a few images. That's just what's really cool. And I'm actually reading a book right now. It's called A Thousand Brains, and it's by the guy Jeff Hawkins, I think is his name. He basically was the guy that founded Palm and started working on the Palm Pilot. And what's really interesting about the book is he talks a lot about how, as a human, there's a lot of predictive things that your brain automatically does for you, right? If you were to reach over and grab a coffee cup, there's a certain sense or a certain feeling that you expect to have happen, and when that doesn't happen, it's like, oh, I need to move my hand, or this doesn't feel right, or it's hot, or it's cold, or whatever it is. And so there's this whole idea of prediction and different types of ways that our brains work, and it's all biochemical-type stuff, right, that's basically going on. There are neurons and chemical reactions happening in our brain very similar to what would happen in a neural network. And it's been really, really cool to see the research that's been happening on that. But where's my brain going with this? Well, it's really kind of going back to, I think, this idea that you can't just create a model once and then be done with it. And so I think you guys provide a lot of those tools, right, to say, if I have more data to feed into this system, let's add some more to it and make it better.

Jacob Solawetz  15:04  
Yeah, definitely. And just really focusing on the dataset — just put the dataset first. You know, that's kind of the thing; the way I think about these things is usually to think of the dataset first. Unless you're working on some really, really general technology, most of the time it's good to just kind of find a modeling approach that makes sense, and then focus on the data. Yeah, for sure. How would you define artificial intelligence? That's a good question. I guess I would define artificial intelligence as teaching machines concepts that humans can understand — and then sometimes maybe even exceeding human-level intelligence at the thing that you're teaching it. And I did kind of a YouTube video recently that has some of these ideas in it, because people have been asking me at Roboflow what the difference between machine learning and AI is, and then I also hear computer vision — sometimes I think all three of these are used interchangeably. But the way my mind has it right now is you have AI, which is a very broad umbrella, and machine learning, which is also very broad but includes things like computer vision and NLP — so images and text — and also things like ASR, so, like, speech waves, and then all kinds of other data inputs that you could be putting in. Images and text are kind of the primary focuses right now, but there's definitely other data pools that are getting processed. So yeah, machine learning and AI span both images and text, but then for NLP and computer vision, you have those specializations. And then the difference between machine learning and AI is that AI tends to mean things that are a little more general. Things that are AI now will be machine learning in a few years, and then as the horizons grow, AI will kind of be this term that keeps occupying the horizon of things.
So I guess what that means in application now is, like, you can use AI technologies when you're using more general models — like if you're using BERT, or if you're using CLIP, which is a new one in vision that connects images and text. So if you're doing anything zero-shot, or you're predicting with, like, no training at all, then in my mind you're using AI. And then if you're fine-tuning something with kind of, like, narrow, domain-specific datasets, that's machine learning. And so machine learning would be like object detection, image classification, slot-value pairing, you know, text classification, question answering — that kind of stuff, where you feed in a custom dataset and the pipes are already kind of running. That's machine learning and, like, fine-tuning. I guess the other difference, too, is, yeah, unsupervised and supervised. Unsupervised is AI, but obviously, you know, that's kind of a tricky term, because even these unsupervised models are supervised in a way — they just get to say unsupervised because they use a huge dataset that they don't need to annotate, but the model training is still supervised. So I kind of went on a tangent there. Well, you did get me thinking about artificial general intelligence, and I like to talk to people a little bit about it. Do you think that's possible? Yeah, I certainly do. I think the definition is a little bit difficult — what exactly is it? I don't think there's something that just, like, runs itself; I think the intent stays with different humans. But I think there's, like, some kind of super general approach that we can start taking, where, like, you could imagine, like, right now you have, like, transformer blocks in computer vision and in NLP — a kind of network architecture that's seemingly able to model everything. And so you can see a world in which research gets better at solving, like, the catastrophic forgetting problem.
And you can start to store all this stuff up in one shared set of weights that can be modulated for different use cases and stuff. And so then it just gets more and more general. But I think there's always people who are, like, applying it. And I think that's one thing that's different about these, like, GPTs and stuff, is that it's still very much an input-output thing, with developers integrating it into something if it's actually getting used. And I think that's just going to have to be how it always is, where there's developers that are integrating dataflows. I just don't see something that's, like, coding itself anytime soon — I don't know, you know, performing actions of its own initiative. Sam Altman, founder of OpenAI — his definition when asked, what is AGI, what's the threshold for you — his definition is: it occupies greater than one half of the world economy; more than half the world economy is maintained by this thing. Which, I think, from that standpoint, yes, that is going to happen. But is it a thing that's, like, autonomous, like the Terminator? That's probably less likely. Interesting. Yeah, it is pretty broad. I mean, what do you mean by occupies half the world economy? And in some ways, my mind goes to, well, what sort of job roles are we talking about here? Because there is a fear that humans are being essentially outsourced by AI and computers. And I think to some extent they are.
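The zero-shot idea behind models like CLIP, mentioned above, boils down to comparing an image's embedding against the embeddings of candidate text prompts and picking the closest one. The vectors below are invented for illustration — a real model would produce them — but the selection mechanism (cosine similarity, then argmax) is the actual recipe:

```python
import math

# Toy zero-shot classification in the style of CLIP: no training on the
# target classes, just nearest text prompt in a shared embedding space.

def cosine(a, b):
    """Cosine similarity between two vectors given as lists."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

image_embedding = [0.9, 0.1, 0.0]          # made-up "image" vector
text_embeddings = {                        # made-up "prompt" vectors
    "a photo of a cat": [0.8, 0.2, 0.1],
    "a photo of a dog": [0.1, 0.9, 0.3],
}

best = max(text_embeddings,
           key=lambda t: cosine(image_embedding, text_embeddings[t]))
print(best)  # a photo of a cat
```

This is what makes it "AI" in the sense Jacob describes: swapping in new class names means writing new prompts, not collecting a new dataset.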

Justin Grammens  19:59  
But what's your thought on that? I mean, is this something that people need to worry about? Like, how does this impact the future of our jobs and our work?

Jacob Solawetz  20:07  
Yeah, I mean, unemployment is at, like, an all-time low — or it was, you know, pre-COVID and everything, and COVID really didn't have anything to do with AI. At the same time, there were all kinds of machine learning technologies being rolled out, and maybe we haven't really seen the full scope of those. But I don't know — there's people developing on Roboflow making titles for themselves called computer vision engineer. The other thing that I think is also a very real factor is there are sometimes maybe some frictions in the economy where people move between things, but, like, human demand is infinite. AI will be deployed by human intents. I don't see a world where, like, you automate things away and then people are like, no, actually, we're done — we don't want any more things, or we don't want more performances or whatever. And then there's whether that wealth is equal, especially if you pair that with, like, crypto and stuff where, you know, things aren't moving around liquidly. I don't know. Yeah, I think the fear for some people is, is my job being outsourced to a computer, right? We always sort of thought, oh, musicians would be safe, right? They could always perform and make music. And now all of a sudden it's like, oh, AI is doing that instead. So it's always just an interesting question to ponder, and to more or less think about the human race — like, so, what is our value? Because, you know, you think about even just 15 or 20 years ago, there were a lot of people that would be cashiers at a grocery store, for example, right? And now it's just, you know, self-checkout, and the next wave of that is just walking out of the store, right? You don't even need to check anything out anymore; you just walk out of the store. So there's a whole number of job titles here that have been eliminated just because of technology. And, you know, some people are worried about that, I guess. Yeah, that makes sense.
And, you know, in some cases, with, like, a centralized AI that generates an insane amount of value for society, I definitely believe in, like, UBI. I was at some of the Andrew Yang meetups in St. Paul and that kind of stuff. Gotcha. I'm not saying we shouldn't hedge against it, but it's probably going to shake itself out.

Justin Grammens  22:10  
Sure. Have you read his book, The War on Normal People?

Jacob Solawetz  22:13  
I think I did start it. I don't know if I finished it. 

Justin Grammens  22:15  
But yeah, he paints a pretty dire picture of what's going to happen and why it's different this time around, right? Because people are like, well, the Industrial Revolution came, and, you know, a lot of people moved from an agricultural economy to more of an industrial world. So there was some retraining involved, but there was a place for them to land. He lays out some reasons as to why, yeah, we need to have this minimum income for people. But outside of the politics side, it is very interesting. And, you know, my mind kind of goes back and forth — every day I see something really cool. And I guess this is a question for you, too: what's a cool AI project, maybe, that you've seen happen recently? Because I'll see something cool happen, and I'm like, oh wow, you know, the human race is doomed. And then I dig in a little bit more, and it's like, no, we're still really far away, because the model is specifically trained on one little segment, right? And so the moment it veers off from that, then, you know, a two-year-old kid could pick it out better than this computer could, right? So there's still a long way to go. But I don't know — have you seen any interesting projects, anything that you're dabbling in, or that you saw in the news that you thought was interesting going on today?

Jacob Solawetz  23:18  
Yeah. So if we're talking, like, Roboflow user projects, I could talk for days. One user made a flamethrower weed killer. Oh, nice. So you've got this flamethrower on this bot, and it goes across his lawn detecting weeds to burn. That's awesome — so that was a flamethrower on a robot? For sure. Well, yeah, my background is in IoT, so anytime it involves a product that's using AI and IoT together, it's just, like, perfectly in my sweet spot. So yeah. Do you read a lot? Do you have any books or conferences or topics — any sort of stuff you're learning about today, either AI or non-AI related? Oh, yeah, I'm trying to read a good amount. I've got various books that I'm working on. AI-related, I've got this one right here — this one's called I Am a Strange Loop by Douglas Hofstadter, the same author as Gödel, Escher, Bach, which is a really good AI book, but it's like 900 pages long. The summary here is that he thinks consciousness emerges from loops in the brain — self-referential loops. And so that's kind of where the Escher stuff comes in, where Escher has a lot of these loops in his artwork, right? You also see that in a lot of mathematics, like Gödel's incompleteness theorems, where certain things in math could never stand by themselves — at some point there's some kind of self-referential loop in the logic. It's cool stuff if you want to get into it. I don't know what tangible things I got out of that book, but I certainly had a good time reading it, and that's also important. Yeah, sometimes you just read for enjoyment. Yeah. On the softer side of things, I'm reading De Profundis right now, which is by Oscar Wilde.

He's in prison, and he has a lover, Lord Alfred, who's just taken him on such a journey in life. And he's ready to air all his grievances, but also his continued love for Lord Alfred. So it's kind of an interesting read from, like, the romantic period. Well, I'll put links to all this stuff as we publish, for sure. One of the things I like to talk to people about and ask them is, you know, you've been able to be successful here as you've been working through your career — any advice on classes people should take? Like, how would somebody new get into this industry, this field? Yeah, certainly, I do feel somewhat qualified to answer, because I got into machine learning and AI from more or less bootstrapping it. Some classes I found useful: Andrew Ng has a great Coursera class on machine learning — many people may have heard of it, but if you haven't, you actually do things in, like, MATLAB, I think, and you really get used to how to manipulate the matrices that make up neural networks. That's a really good way to kind of start learning things. If you want to move a little faster and you don't think you need that, you can start to abstract a little bit — there's the fast.ai courses, which are really good, to kind of start getting your hands on building models in Python, using some libraries that are very particularly tailored for people to get a low floor and then a high ceiling later. And then, yeah, just learning, like, PyTorch is another great thing to do. But the biggest thing, which is always the best advice for learning how to code anything, is to start a project — you know, kind of get involved in something that's going to force you to leverage different technologies you wouldn't normally be leveraging. Yeah, for sure. You know, one of the things that I tell people is try and find a meetup, try and find a group.
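The "manipulate the matrices that make up neural networks" exercise from that kind of intro course, in miniature, is just a hand-written forward pass: a weight matrix times an input vector, plus a bias, through a sigmoid. The weights and inputs below are made up for the example:

```python
import math

# One-layer neural network forward pass written by hand:
# y = sigmoid(W @ x + b), with W as a list of rows.

def forward(x, W, b):
    z = [sum(w * xi for w, xi in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    return [1 / (1 + math.exp(-zi)) for zi in z]

W = [[1.0, -1.0],   # weights into the first output unit
     [0.5, 0.5]]    # weights into the second output unit
b = [0.0, -0.5]
print(forward([2.0, 1.0], W, b))  # both pre-activations are 1.0, so both ~0.73
```

Once this feels mechanical, frameworks like PyTorch are easier to read, because their layers are doing exactly this (at scale, with autograd on top).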
And they can be in the Twin Cities, like the Applied AI group that we have that meets the first Thursday of the month, but it can be anywhere now, right? Everything's gone virtual. So there's a great opportunity, and I look forward to having you, and I think a coworker as well, present at a future meetup here at the Applied AI group. Yeah, definitely looking forward to that. And yeah, that is a great point — a group you can find can kind of push you forward. That's huge. Well, yeah, is there anything else that you want to talk about or share with listeners that maybe we didn't go over? Yeah, I guess I'd just be curious to cover a little bit of your experience with vision. I know you said you work in IoT, so embedded devices — we could get into, like, deployment things a little bit, too. I guess, what kind of ways have you leveraged computer vision in some of the work that you've done, and things to be careful about there? Yeah, well, we work with a company that was doing inspection on leather. So we worked with them — and again, a lot of these cases are human cases, where humans are looking at particular images, and it's very time-consuming, it's very boring, and it takes some skill to understand where there are holes in leather. The technology is one thing, but it's also just, like, what is the use case, what's the application around it, right? So if you're making leather, for example, it's okay to have, you know, holes or pockmarks in the leather if it's on the inside of a shoe, for example — someplace you don't see, right? But if you're going to be making a car seat or a couch, you need, like, wide, expansive areas that maybe don't have issues in them. And that could be veins, it could be, you know, ticks that are sort of blemishes in the leather.
And so right now, there are people at, you know, places where they're taking leather and grading it A through F, and it takes a human eye to look at it as they move these leather pieces around. And it's like, wow, how could we actually have a computer do that instead? Because it's not the best job in the world — I mean, imagine just going there and carrying these chunks of leather around and figuring out where they should be deposited. So long story short, yes, in the industrial space, you know, I've worked on, essentially, trained material-defect detection. Right. So that's one area. You know, the other area is really the sentiment analysis that I talked about to begin with, where, you know, someone's presenting at a conference, for example — but now everyone's online, right? Everyone's using Zoom, everyone's using Teams, whatever it is. You know, how can you become a better presenter by making good eye contact, by using good hand gestures, for example, by smiling? A lot of these companies, you're right, are adding, like, plugins and stuff. And ours can be done in real time, but can also be used as a practice tool, so people could use it in either case. But, you know, there's so many other places — other people I've interviewed on the program here, you know, everything from medical imaging, right, to, you know, finding cancer and whatnot. Those are sort of the ones that everyone's heard of or reads about. Then there are images as people fly over in agriculture, you know, drones and stuff like that, picking out areas that are either overwatered or underwatered. There's just a lot of stuff you can get out of these images today — images that were either not available maybe 10 years ago or more, or were very, very costly to get. Yeah, totally. And then also, with the way that AI research has evolved, the horizon of tasks that you can now solve is just so much more vast than it used to be.
So yeah — I think we're only getting better.

Transcribed by https://otter.ai