Conversations on Applied AI

Seth Clark - The Future of MLOps and AI at Scale

April 25, 2023 | Justin Grammens | Season 3, Episode 8

The conversation this week is with Seth Clark. Seth is an MIT-trained engineer who just likes building things. He's the Head of Product for Modzy, where he works with a team of brilliant designers, developers, and data scientists building the future of MLOps and AI at scale. When he's not in the office, he's probably soldering something in his makerspace, checking in on one of his beehives, or fixated on a bad product experience that he wishes he could fix.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can put on future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

Enjoy!

Your host,
Justin Grammens

Justin Grammens  0:00  

Greetings, Applied AI Podcast listeners. This is Justin Grammens, your host of the Conversations on Applied AI Podcast. Just dropping in to let you know about a very special event we have coming up on Friday, May 12. It's the Spring 2023 Applied AI Conference. You can learn more by going to AppliedAIConf.com. This full-day, in-person conference is the only and largest artificial intelligence conference held in the Upper Midwest. It will be in Minneapolis, Minnesota on May 12. We will have more than 20 speakers with two tracks covering everything from AI business applications, ChatGPT, computer vision, machine learning, and so much more. For being a listener to this podcast, use the promo code PODCAST when purchasing your ticket for a 50% discount. So here are the details: go to AppliedAIConf.com, that's AppliedAIConf.com, to see the full schedule and register for the only and largest artificial intelligence conference in the Upper Midwest on May 12. And don't forget to use the promo code PODCAST when checking out to receive a 50% discount. We look forward to seeing you there, and thank you so much for listening. And now on with this episode.


Seth Clark  1:07  

So when you're developing DevOps solutions, there's a certain amount of consistency that you get, because code will just keep running; it'll keep doing its job. Whereas machine learning models don't have that same luxury, because the world changes around us. So when you take a machine learning model that you've trained on historical data and you expect it to do something, especially if that data has something to do with how humans interact with one another, it turns out people are strange and unpredictable, and we change, and the patterns that we've created in the world change as time goes on. So typically, those machine learning models get worse.


AI Announcer  1:40  

Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.


Justin Grammens  2:11  

Welcome everyone to the Conversations on Applied AI Podcast. Today on the program we have Seth Clark from Modzy. Seth is an MIT-trained engineer who just likes building things. He's the Head of Product and works with a team of brilliant designers, developers, and data scientists building the future of MLOps and AI at scale. When he's not in the office, he's probably soldering something in his makerspace, checking in on one of his beehives, or fixated on a bad product experience that he wishes he could fix. So thank you, Seth, so much for being on the program today.


Seth Clark  2:37  

My pleasure. Thanks for having me.


Justin Grammens  2:38  

Well, I'm always curious to kind of learn the origins of companies, especially startups, because I think you're a relatively new company, and to sort of understand how you all came together. I think there's four co-founders, if my memory serves me right. So maybe you could tell our listeners a little bit about, you know, the origins of Modzy and how you all came together.


Seth Clark  2:55  

Yeah, happy to do so. I'll start with my background, which is a bit non-traditional, and then kind of tell you how we all found ourselves together. So my background is actually in ocean engineering and naval architecture. I have a master's degree in yacht design, which I got in England at the University of Southampton after going to MIT for my undergrad. I thought all this time I'd be designing luxury sailing yachts and high-performance sailboats for the America's Cup. But when I got out of grad school with kind of a focus on computational fluid dynamics and naval architecture, the luxury goods industry took a big hit, because I think I graduated three days after Lehman Brothers collapsed. It was an interesting time to try and get into that field. Well, what I found is that all the work I'd done doing computational fluid dynamics, basically tricking computers into doing hard math problems, turned into a whole new field, which became data science. So over the course of, you know, the past 10-15 years, I found kind of a new, interesting path for myself, taking a lot of those same skills and then transitioning into the development, primarily, of products for data scientists, data analysts, and then software developers working with data to build new products, solve problems, and better understand data. So through that journey, I ended up actually working as a consultant with a large consulting company, leading a team of developers and data scientists. That's actually where I met all my co-founders; many of them had worked at a lot of other small, medium, and large-sized corporations. At that consultancy, we worked with government agencies, Fortune 500 companies, and nonprofits, primarily doing machine learning, data science, and engineering projects. And through that work, we met one another and kind of identified this common set of problems we kept running into around what I call the last-mile problem: getting a model off of your laptop and into your production system. So we beat our heads against that problem a number of times and decided, you know what, let's try and do something about it. So that was the genesis of Modzy.


Justin Grammens  4:46  

Awesome. So when was the company founded? I guess, officially, when did you leave this other consulting company and form this?


Seth Clark  4:51  

Yeah, so our official founding is December 2021. So we've only been, you know, out and around for a little over a year. We had been building the product a little bit, kind of in-house, before then. But yeah, 2021 was our official launch out into the world.


Justin Grammens  5:04  

And I don't know if you mentioned it along the way, but, you know, you're talking about data, helping data scientists become better and stuff like that, that last-mile problem. I mean, this whole idea of operationalizing things, you know, the term DevOps has been around for quite some time, but it feels like MLOps is that piece for machine learning. And I think that's what your product sort of tries to help people solve. How do you guys do that?


Seth Clark  5:23  

You know, the parallel to DevOps is very apt; it's the exact same concept. What is unique about machine learning is that, whereas software, once you build it, will basically keep doing its job until something changes in the system that you've built, machine learning doesn't. So when you're developing DevOps solutions, there's a certain amount of consistency that you get, because code will just keep running; it will keep doing its job. Whereas machine learning models don't have that same luxury, because the world changes around us. So when you take a machine learning model that you've trained on historical data and you expect it to do something, especially if that data has something to do with how humans interact with one another, it turns out people are strange and unpredictable, and we change, and the patterns that we've created in the world change as time goes on. So typically, those machine learning models get worse. I've never seen a scenario where they get better over time; I'm sure it's possible, but typically they get worse over time. So what makes MLOps a little bit distinct from DevOps, and why it requires some additional solutions, is that you're constantly racing against the clock, recognizing the fact that the models you've built yesterday aren't going to be good enough for tomorrow. So we help our customers solve that problem by creating tools to make it a lot easier to take the models that you've trained in this moment and then put them into a production environment, make it really simple to connect those machine learning models to other software, and then, when you inevitably need to improve these models and re-iterate on them, you can do so really quickly, and those connections are still in place. So that's, you know, in a nutshell, at a high level, what we focus on, and really why MLOps is, you know, kind of almost a subset of DevOps.
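[Editor's note: the degradation Seth describes is often checked in practice with a simple distribution comparison between training data and what the model sees in production. The snippet below is a minimal illustrative sketch, not Modzy's product; the feature values, the 1% p-value threshold, and the choice of a two-sample Kolmogorov-Smirnov test are assumptions for illustration.]

```python
# Minimal sketch of a drift check: does today's production data still look
# like the data the model was trained on?
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values: np.ndarray,
                    live_values: np.ndarray,
                    p_threshold: float = 0.01) -> bool:
    """Two-sample KS test: a small p-value suggests the production
    distribution no longer matches the training distribution."""
    result = ks_2samp(train_values, live_values)
    return result.pvalue < p_threshold

# Hypothetical usage: training data centered at 0, but the world has shifted.
rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.6, scale=1.0, size=1_000)
if feature_drifted(train, live):
    print("Drift detected: time to retrain and redeploy this model.")
```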


Justin Grammens  7:03  

Gotcha, for sure. Yeah, that makes a lot of sense. And in general, I tell people software is never done, because there are always new features, always new things that need to be added. But especially around data, of course, there's new data that's going to be coming into the system. And so, do you guys train the models in the cloud?


Seth Clark  7:19  

Yeah, so our platform provides no training whatsoever. The way we looked at the problem was that the challenge really isn't in creating models. You know, between sort of academia and some of the largest tech companies in the world producing these frameworks to build just awesome models, that part is at a pretty robust and pretty, you know, enterprise-ready state. We didn't want to come in and try and disrupt the workflows that data scientists have already developed to create a model, to take data, to transform that data, to turn it into a representation of our universe that you can use to make a prediction. That whole ecosystem is actually in pretty good shape. So as far as creating new models, there's not as much to be done there. What we wanted to focus on instead is to say, okay, if you're a data scientist and you have a set of tools and a way you like to turn data into predictions, that's great. What we're going to do is take the output of what you've done, this model asset, in really technical terms, you know, the weights files and the Python scripts you've written, and we're going to turn those into scalable services that can actually work in conjunction with a larger piece of software. So really, the handoff from what a data scientist does well, which is modeling data, to what an IT architect and a software developer do well, which is build apps, we connect those two pieces, those two worlds, together, ideally in a seamless way.
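[Editor's note: the "weights file plus Python script into a scalable service" handoff Seth describes looks roughly like the sketch below. This is an illustrative example, not Modzy's API; the file name, input schema, and framework choice (FastAPI serving a scikit-learn artifact) are assumptions.]

```python
# inference_service.py - a minimal sketch of wrapping a trained model
# artifact in a network service that other software can call.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact from the data scientist

class PredictRequest(BaseModel):
    features: list[float]  # one row of numeric features

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # Reshape the single row into the 2-D array scikit-learn expects.
    x = np.asarray(req.features).reshape(1, -1)
    prediction = model.predict(x)
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn inference_service:app --port 8080
```

In practice a platform would run many replicas of a service like this behind a load balancer and version the model artifact, which is the part an MLOps product automates.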


Justin Grammens  8:37  

Nice. Yeah. Well, we'll have links off to your website and stuff like that in the liner notes. Absolutely. But the other thing I think I saw on your website, as I was looking through your product offering, was, I mean, you guys are also focused on sort of getting the smarts to the edge. Is that true?


Seth Clark  8:50  

Yeah, that's absolutely right. The term edge is an interesting term, because it means a lot of things to a lot of people. And one of the opportunities we saw was helping technology teams essentially just choose, the way I like to think about it, whatever computer they want to use to actually run their machine learning model; in the simplest terms possible, they can choose that computer. What that means in a big enterprise is often that you might have one or more cloud providers that you use. So business unit A and business unit B may have different cloud providers for different uses. Some teams actually have a need to run stuff on premise, inside their own in-house lab. You know, believe it or not, we actually have a lot of customers who have their own in-house labs, because the data that they're working with for certain use cases is really sensitive, really proprietary, and they don't want to put it in the cloud. And an even more traditional interpretation of edge is that you have a small device somewhere else in the world that either doesn't have good network access or just doesn't have a lot of hardware processing power, where you can't reliably send data to and from that device. So in all those scenarios, we really wanted to help our customers be able to take the model that they've trained, and then choose where it runs, and then manage all of that in one spot. That's how we look at edge. And it really encapsulates kind of any scenario where you have a concern about data security, model latency, meaning how fast it works, and network connectivity.


Justin Grammens  10:13  

Absolutely. Absolutely. Yeah. You know, before we started recording this, you and I were sort of talking about TinyML, and, you know, your background is with the makerspace you mentioned. My background has been, I've done a lot of stuff in the Internet of Things, and that's kind of how I got into machine learning in a lot of ways, because I helped companies capture a lot of data using microcontrollers, Arduinos, you know, Raspberry Pis, sensors, all that sort of stuff. And then companies started coming to me and saying, well, now we want to do something with it. What are some use cases, I guess, that you guys are looking at? Whether it's IoT or not, but what are some companies that you're working with, and how are they utilizing your technology?


Seth Clark  10:46  

So there's a number of ways to use what you might call edge AI/ML. One really good example is the use of computer vision in manufacturing. So there's a lot of scenarios in a big production facility where you want to use computer vision to improve a production process. One application is quality assurance, essentially using a computer to take a look at parts as they're coming off of a production line and say, yeah, those are looking great, or no, those aren't looking so good. That's a pretty well-worn use case; there's actually some companies that exclusively focus on taking imagery and video from a production facility and then saying, all these parts are great, but this one needs to be replaced. But there's other scenarios where the use of machine learning and computer vision can just improve operations. So things like predicting whether or not you're going to have enough component parts at the beginning of the day to finish all the things that you're supposed to build. If you need to build 100 widgets, you might need 10,000 component pieces in your facility ready to go, or you won't have all of them built by the end of the day. So the use of machine learning to predict the availability of parts. Even using machine learning to do things like having a camera stare at a big rack of material that gives you a real-time count of how much rebar you have left, you know, how much raw stock is available. So these are a number of scenarios where, inside of a manufacturing facility, you have a real need to run machine learning models in real time on your real-time data, and particularly for a high-production facility, you need that to happen really fast. So it's much more convenient to run those models right inside the factory, rather than sending all of that data to the cloud and then getting the results back. That's one scenario. In a totally different world, in the telecommunications world, telecom providers have a lot of hardware infrastructure; they've got big servers basically at the base of every giant cell tower around us. Certain kinds of mobile applications use machine learning to do a lot of complicated math in GPS navigation, route recalculation when you miss your exit. To do that really effectively, it's actually helpful to run those machine learning models right at the cell tower. Believe it or not, even just the additional time it takes to go from the cell tower, through all these fiber optic cables, out to us-east-1, do the math there, and send it back to you, that amount of time can be enough to make a bad user experience within just a normal mobile app, or in some cases, it might not be safe to use the application. So in the telecommunications space, we're working with customers to help them take machine learning models and send them out to the base of a cell tower, or in some cases they actually install equipment inside their customers' buildings and facilities. So it's another scenario where it's really nice to have those models running really nearby. So those are two examples of a couple of places where these edge applications are really driving business.
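[Editor's note: the quality-assurance example above comes down to running an image model locally, next to the camera, instead of round-tripping every frame to the cloud. Below is a minimal illustrative sketch under assumed details; the model file, input size, and class index are hypothetical, and the ONNX Runtime/OpenCV stack is just one common way to do this.]

```python
# Minimal sketch: score camera frames with a locally stored defect model.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("defect_classifier.onnx")  # hypothetical edge model
input_name = session.get_inputs()[0].name

def defect_score(frame: np.ndarray) -> float:
    """Return the model's defect probability for one BGR camera frame."""
    resized = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    batch = np.transpose(resized, (2, 0, 1))[np.newaxis, ...]  # NCHW layout
    (scores,) = session.run(None, {input_name: batch})  # assumes one output
    return float(scores[0][1])  # assume index 1 is the "defective" class

cap = cv2.VideoCapture(0)  # camera watching the production line
ok, frame = cap.read()
if ok and defect_score(frame) > 0.8:
    print("Flag this part for inspection.")
cap.release()
```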


Justin Grammens  13:34  

Yeah, for sure. I actually teach a class on machine learning and the Internet of Things; it's a graduate class at a college here in St. Paul. But, you know, I sort of tell the students there are sort of three reasons as to why you're going to want to run these things at the edge, especially around TinyML. I mean, one is, sometimes these things are run on battery power, right? So you actually don't want to use the extra battery power to send it to the cloud, to always have a ping back to the cloud. And the second is, you know, now you can run stuff on much smaller processors, right? You can get cheap, low-cost processors that can go ahead and run this stuff at the edge. And then the third, which you kind of touched on a little bit, is really security. If you don't send it to the cloud, it's inherently more secure, right? Because you're not sending data back and forth, so it's always like, only send what you're going to use. So in a lot of these cases, I think you're sort of touching on one more than the other, you know, but it really all sort of comes together. And I think there's an interesting future ahead, where we can sort of marry these low-cost, low-power, secure devices that will be sort of out and about and all around us.


Seth Clark  14:31  

Yeah, I completely agree with the way you characterized it. The one other category we sometimes talk about is just network availability. When you live in a city, it seems like there's always internet, forever and always, and you can always count on it. But if you're in a remote oilfield, if you have a drone that you're using for, you know, building scanning, if you want to run a machine learning model on an autonomous underwater drone, there's literally no way to actually talk to some of these devices. Certainly not Wi-Fi, probably not even Bluetooth, and, you know, systems like LoRaWAN are still developing; you can't always talk to every device, depending on where you are in the world. So having the ability to say, alright, if I can't send all this data back, can I get the information I need in the moment, even if the network grid collapses all around me? It's pretty cool to be able to work in scenarios like that as well.


Justin Grammens  15:20  

Absolutely. And you even think about just the self-driving car, right? It's going to be going into areas that are remote, and there's no way it's going to be able to connect. And that's, of course, when latency comes in, right? You don't want the car to be like, is that a pedestrian ahead? Let's ask the cloud, right?


Seth Clark  15:36  

Yeah, you don't want buffering to happen.


Justin Grammens  15:38  

That's it, yeah, exactly. By the time that's happened, you've already run over the person. So very, very cool. But that also brings up an interesting scenario where, you're right, you're running an underwater drone, for example, kind of continuing on that example, and now you've collected all this data, but yet you want to build it back into your model, right? So now you come up for air, now you actually are connected or are able to be connected, and now you need to retrain. Is that a scenario where you guys would come into play?


Seth Clark  16:05  

Yeah, absolutely. So one of the things we built into our product is essentially a data store of all of your predicted results and the live data that came in from your production system. So, you know, we see ourselves as being a conduit to improving models over time, this continuous improvement process. So once you've used your model for some period of time, through a number of techniques that data scientists are really familiar with, you can measure things like model drift to see if it's still doing its job. Basically, is this model predicting things as well today as it did a week ago? When that's not the case, you can get access to all of the predictions that you've made and the data that those predictions were made on, and then connect those back to another data labeling tool. So we help our customers, through a variety of integrations, basically connect their production data back into the model development process. So you really can kind of get this 360 cycle of building a model once, getting it out there, you know, knowing that it's not going to be perfect to start; you're likely going to be testing on some maybe open source or public data to get up and running. But over time, you'll be able to keep cycling and improving your model, because you'll be doing it in the world in which it's going to be operating. So in that, you know, underwater drone example, our edge offering provides the ability to sample the data and store locally, you know, a subset of the real data that you're getting. As soon as you have network connectivity, you can then get access to that production prediction data, say that ten times fast, right from your central hub, and then keep iterating on those models.
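[Editor's note: the "sample locally, sync when connected" pattern Seth describes can be sketched in a few lines. This is illustrative only, not Modzy's edge offering; the sampling rate, buffer path, and upload endpoint are hypothetical.]

```python
# Minimal sketch: buffer a sample of production inputs and predictions on the
# edge device, then push them to a central hub when connectivity returns.
import json
import random
import urllib.request
from pathlib import Path

BUFFER = Path("/var/lib/edge_model/samples.jsonl")  # hypothetical local store
SAMPLE_RATE = 0.05  # keep roughly 5% of traffic for later retraining

def maybe_record(inputs: dict, prediction: dict) -> None:
    """Append a sampled (input, prediction) pair to the local buffer."""
    if random.random() < SAMPLE_RATE:
        BUFFER.parent.mkdir(parents=True, exist_ok=True)
        with BUFFER.open("a") as f:
            f.write(json.dumps({"inputs": inputs, "prediction": prediction}) + "\n")

def flush_when_connected(hub_url: str) -> None:
    """When the network is back, ship buffered samples to the central hub."""
    if not BUFFER.exists():
        return
    req = urllib.request.Request(hub_url, data=BUFFER.read_bytes(),
                                 headers={"Content-Type": "application/jsonl"})
    urllib.request.urlopen(req, timeout=30)  # raises if still offline
    BUFFER.unlink()  # clear the buffer only after a successful upload
```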


Justin Grammens  17:33  

Cool. How do you guys monetize this? Is there a free trial offering, or what's going to be the experience if I go there and sign up?


Seth Clark  17:41  

We're really focused today on enterprise customers, as that's where our product has found the most traction and the most value: larger enterprises, particularly ones that have complex environments where they have a lot of systems, a lot of cloud providers, a lot of devices. And we essentially price based on just the number of models you're running simultaneously. So if you only have a couple of models in production, you know, the price scales down for a small lab pretty efficiently, and then it scales up the more models you're running simultaneously, live in the real world. And it works for edge devices, cloud providers, on premise; we can essentially deploy and run anywhere. And we provide, you know, access to demos, and some of our customers will do trials to try it out. Absolutely.


Justin Grammens  18:23  

Nice. Okay, cool. You guys are still a pretty new company. Where do you see yourselves in the next, sort of, three to five years? What are some of the new things coming out in the MLOps, you know, ecosystem? Where do you guys think this whole area is sort of headed?


Seth Clark  18:37  

That's a really good question. You know, if I could predict this accurately, I think there are a lot of other jobs I could have out there if any of these predictions were right. But, you know, one area at a macro level that we look at kind of all day every day is this idea of running AI anywhere. So we have a lot of techniques and tools to help our customers run their machine learning models on a lot of different, disparate systems, and connect them and redirect the models and redirect the data. But we want to continue to add to that ecosystem of places. Basically, every kind of computer that exists, we want to get to a point where you can run a machine learning model on that computer, and we'll help you do that. So that might include expansion to things like FPGAs and other hardware systems that we don't have support for today. We have support for Arm and x86 CPUs and GPUs, which covers a lot of territory, but there's some exotic hardware that does really well in certain AI applications that we want to expand to. We'd love to add more support for even smaller edge devices; we're not quite at the point of running on microcontrollers, so that's an area that we're interested in pursuing. You know, I think another area that's really hot right now, that we see starting to move into the enterprise space pretty quickly, is the use of large language models, LLMs like GPT-3 and GPT-4, for enterprise use cases: for the development and management of employee satisfaction systems, the use of those tools to help generate content for big companies to do their marketing campaigns. Our expectation is that more and more customers are going to be trying to operationalize these giant models and then tailor them to their own needs. So that's another space that we're looking at. So I think those are two of the areas that we're pretty bullish on.


Justin Grammens  20:20  

That's cool. Very, very cool. If I would have predicted three or five years ago where I'd be at today, you know, I wouldn't have been right. No matter where I think I'm going to land, there's always something new that sort of comes up that's interesting. Sort of, you know, you just kind of watch the market. You guys raised, I think, a Series A, is that right?


Seth Clark  20:36  

We raised a seed round right out of the gate in 2021, and we're still working well against that. We've got a number of enterprise customers, so we just continue to, you know, build up our traction, really build as many relationships and help as many customers as we can. And, you know, we'll probably be looking for some additional fundraising in the future here, but today we're doing great just with the fundraising we've done so far.


Justin Grammens  20:58  

That's good. Well, I was going to kind of ask, was it a time-consuming process? Did it take you guys a while? I mean, again, this is a hot space. And it seems like, and I'm not exactly sure where your company is based, but you seem very virtual, I think, right?


Seth Clark  21:09  

Yeah, that's absolutely right. We're actually a 100% virtual company. So we've taken advantage of the fact that so much work in the tech world doesn't have to be co-located. There are certainly things to be lost and missed about that, but for a small company like ours that's really focused on just driving our product, making it better, improving the quality of it, we can do all of that work remotely. And so we've taken advantage of that to try and keep our costs low and really just focus on our customers. So that's been our priority and our kind of main mission for the past year plus, and that's been great. You know, as far as the fundraising ecosystem, yeah, our seed round went very smoothly, but economics since then, probably February of last year, are certainly different. I think it has been hard for a lot of other startups going through that fundraising round. I think, you know, based on what I've read on TechCrunch and Forbes, everyone's expecting another tough 2023 in the VC fundraising space. But we're less focused on that; those macroeconomic challenges do exist, but again, we focus more on just making sure that our product is connecting with our customers and our users. And if we're delivering value, we feel confident we're going to be in good shape.


Justin Grammens  22:18  

This is sort of off the cuff, but I mean, moving from a services, you know, model, because that's what you were doing, right? You were working for a services company, going in and helping companies as consultants, helping them build their stuff, to now, now you're offering a product. You became more of a, I guess, a product owner, product manager, and now you get a chance to sort of hold the reins a little bit. I mean, how has that been a little bit of a mental shift for you?


Seth Clark  22:38  

Yes. You know, within the consulting company we were in, I had actually been doing product management, essentially doing product management for data science tools that other consultants would use for customer and client delivery. So the product management piece was not new, necessarily; that's something I've been doing for quite a while. But doing this work outside of the context of a services engagement is different. You know, I have a natural tendency to want to be very intimately involved in helping our customers get value, be successful, you know, find ways to take our product and really kind of get the most they can out of it. And when you're building a software company, you have to focus on scale. So it means taking a lot of that energy and then focusing it back into, okay, what can we put back into the product, the tools that someone can use to get that value and goodness just on their own, without having someone hold their hand, which I think is a good direction for that energy. But yes, my consulting background certainly means that sometimes I want to get my hands dirty and actually help customers in ways that you don't necessarily have time for when you're building a product versus when you're in consulting.


Justin Grammens  23:44  

Yeah, fair enough. You know, one of the things I like to ask people about too is, you know, if you were just coming out of school these days, what are some tips or tricks for people that are getting into this field? What areas do you think they should study, or classes they should take, or books they should read, or anything like that?


Seth Clark  23:58  

I think the field of data science has largely demonstrated that to be really talented, it doesn't require the traditional makings of a four-year engineering degree. If you are willing to put the work in, there's a lot of tools out there to teach yourself the combination of Python and linear algebra you need to be successful, standing on the kind of shoulders of the giants that have built out a lot of this technology, from Facebook and from Google. There's a lot of resources available if you're willing to take the time and really learn it. You know, I think the sky's the limit if you're coming to the space and you see something that looks interesting and exciting. I don't think there's the same kind of, you know, academic boundary that there might have been five to 10 years ago to get involved in this space. Where before, if you wanted to do computer vision before the advent of deep neural networks through AlexNet, you were going to be spending five years on your PhD writing some, you know, image processing code in C++; that's a pretty big hurdle to really understand the nuances of how to make that happen. That's not the case anymore. I think what's more important, more valuable, is really being able to communicate things well. And I remember, when I was in engineering school, my engineering professors telling me that communication and writing skills were my most important asset, and just being flabbergasted by that concept. It's like, why am I doing all this differential equation work and calculus and structural analysis, what's all of this for? But now I really believe in that. Being able to take a complex idea, break it down, communicate it in a simple way, take simple ideas, build them out into complex technology, and go back and forth, that's probably the thing I would recommend investing the most time in, because if you can do that well, you can solve all kinds of interesting problems, because you're going to be able to bring people along with you.


Justin Grammens  25:43  

That's great. Yeah, very, very well said. I think, you know, one of the reasons that I enjoy at least the topic of artificial intelligence is because I feel like it can apply to just about anything, right? I feel like if this podcast were focused on Python, yeah, it would kind of run out of content in a little while; it's like, we've already talked about Python. So this sort of broad category around artificial intelligence, I think, is nice. That's just my perspective with regards to why I really enjoy talking to a lot of different people that are in this field. And I think maybe you didn't touch on that, but you were kind of alluding to it: be more of a generalist in a lot of ways, right? If you can communicate and you can understand concepts at a large level, AI is not the last thing, and 10 years from now we're going to be talking about something totally different; there's going to be another technology blitz happening, and you're going to want to be able to get on board with that, too.


Seth Clark  26:28  

Yeah, I completely agree. The way I see it going is, deep neural networks are not the end. This is one super cool technique that a lot of very talented, smart, hardworking people have developed and poured their time and thoughts into to create value from data, which is awesome. It's an exciting place to be right now. But clearly it doesn't solve everyone's problems. What I think is exciting now is that the excitement, the energy, the momentum, the technology, the open source projects that have been built up around deep neural networks have generated kind of a mindset shift, where no matter what department you're in, what company you're in, what business you're in, what part of the organization you work for, you can use data to improve the way that part of your organization does its job. And that's something new; that was not always the case. Just the idea that data could be used to improve things is kind of a novel concept, and it's taken a while for it to propagate across an entire company. But that's the part that I think will be evergreen and is going to last even as new technology approaches come and go: coming back to that fact of saying, alright, before we make a decision just based on our gut and making educated guesses, we can actually use sort of a mental process, you know, a way of thinking, to apply whatever tools of the trade are hot to help us make things better. And that's the part that I think is really neat.


Justin Grammens  27:49  

Yeah. And this is maybe an open-ended question, things to think about, but I mean, as you're going into organizations, or you're working with customers, do you get any sense that people are worried about artificial intelligence sort of taking their job, or, I don't know, having a huge impact on their day-to-day life? You know, how is AI going to affect the future of work?


Seth Clark  28:07  

I think there was more of that, I'd say, like five years ago. I found and I heard more of that type of concern five years ago. I think over the past five years, there have been enough swings and misses at trying to replicate the complex stuff that humans do that a lot of those concerns have been allayed, by and large. Now, automation and machine learning will disrupt certain jobs, certain industries. But I think the, you know, underlying value that humans provide isn't really at risk of being disrupted anytime soon, and I think folks are starting to see that. The one area, I will say, though, that is, you know, becoming a new area of question and concern is that these large language models are pretty darn good at creating maybe a first cut of content, whatever that content might be, whether it's imagery or text, increasingly audio, even 3D models. I think there is some question as to, will this end certain careers or certain jobs? I think there is a little bit of concern over that. I also personally think that's likely overblown. My expectation is that those technologies will be more like tools where one person can get more done. But at the end of the day, you still want a content writer, a copywriter, you still want to have a visual artist, you still want to have a 3D modeler doing work for you in a way where they're going to be able to craft and customize the things that you need. But with these additional tools, maybe one writer can do the job of what 10 writers used to be able to do. So probably some reduction in staff, headcount reduction, in a lot of industries, but I don't think the wholesale loss of major fields is likely. That's my personal guess.


Justin Grammens  29:49  

Yeah, no, I 100% agree. And I think that's even more the reason why people need to understand this technology and use it to their advantage. If right now you're going to have one worker being able to generate, you know, what 10 workers used to do, you'd better start to learn about it. You know, ChatGPT is sort of like the hot thing these days, right? And, you know, you're talking about music and art, you know, I mean, it's just crazy, the artwork that's being done. I've even had it write, you know, simple programs in Python; you can basically say, write me a function to do this in Python, and it will literally write it out. And I've been very impressed with it. Obviously, it's just code it's pulled off the internet that somebody else had written. And that's where the understanding, and in some ways the truth, comes in: you're going to need to, like, look at it and be like, is this really the right way to do it? Because it's not saying it's the right way; it's just saying it did it. It's not saying it's the best, right?


Seth Clark  30:37  

It's saying, this is the statistically most likely answer according to the data I had available as of January 2021. It's a statistical engine, and it's really good at that job. But yeah, there's a part of it that's the interpretation, the understanding of, should I do this, even if it does work? Is this going to ultimately solve a problem that is worth solving? Some of those are questions that, you know, we're eons away from figuring out how to teach a computer to understand, sort of real intent. So that's why, I guess, I'm not so worried about them completely sort of dissolving the fabric of civilization.


Justin Grammens  31:12  

Yeah, yeah, I totally agree. And it kind of remains to be seen, I guess; like any tool, it could be used to your advantage or, possibly, disadvantage. Well, how do people contact you? What's the URL, the website? Yeah, yeah, absolutely.


Seth Clark  31:23  

So we're at modzy.com; we've got a number of resources that you can check out there. We also have a channel on YouTube, where, I'd say every two or three weeks, we'll do tech talks, where we talk about sort of different aspects of machine learning, machine learning operations, reviews of different open source tools, techniques, and things you can do if you're trying to, again, use machine learning in a real application. So definitely check us out there. You know, I post occasionally on LinkedIn as well. So we would love to talk with anyone who's interested in this space, really anyone who likes to take advantage of data to build interesting projects or tools. I have my own home lab makerspace where I'm always tinkering on something, so we'd love to chat with other fellow tinkerers out there who build things for fun in their free time.


Justin Grammens  32:07  

Oh, man, yeah, you and I have got to talk after this podcast, for sure. My basement is littered with hundreds of IoT projects that I've just sort of started.


Seth Clark  32:15  

You can see I'm here in my home lab; I got kicked out of the formal office within about a month of the pandemic. So I've been working out of my basement, with my soldering iron and bandsaw and electronics components back here behind me.


Justin Grammens  32:27  

Fun stuff, fun stuff. Yeah, that's one of the things that I've always been, you know, fascinated with, especially sensors, that you can get a lot of data pretty quickly. Now, whether or not it's data you want, you know, remains to be seen. But that's the beauty of sort of the physical world, getting all this information, all this data, because a lot of times people are stuck with a problem set that they're not able to get the data for.


Seth Clark  32:46  

Yeah, absolutely right. There's no shortage of being able to collect data now, which is awesome. And then that's when the art of data science becomes even more useful, of being able to decipher what you actually need. In a lot of cases, you know, there's a lot of noise, and then you can focus on what signal you need, and then use machine learning to help you distill that down into something you can do amazing things with. It's an exciting time to be alive and to, you know, plug a sensor into a Raspberry Pi or an Arduino board and all of a sudden you've got this kind of magical device that, you know, would have been a dream 10 years ago.


Justin Grammens  33:18  

Yeah, yeah. And now we've got the models to make it smart, to make these devices more and more intelligent. Well, great. Seth, thank you again so much for being on the program. I wish you guys nothing but the best; you guys are in a very hard space, so I'll be curious to see what we can talk about here in the coming years. But definitely MLOps is a fast-growing space because, like you said, you can generate a model on your laptop, but that doesn't do any good; it really needs to get out into production, and crossing that chasm, I think there's a lot of companies that are looking to do that. So very, very happy for you guys. Glad you guys have been able to raise some money here, and we'll continue to keep in touch.


Seth Clark  33:52  

Will do, Justin, and thanks so much. And thanks again for the invite. I love what you're doing with the podcast. You consistently bring just awesome topics that I think are worthy of conversation, and great speakers; I feel lucky to be among them. So thanks again for the time and the conversation. This has been fun.


Justin Grammens  34:09  

All right. Take care. Have a good day. You too. Take care.


AI Announcer  34:13  

You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn if you are interested in participating in a future episode. Thank you for listening.