Conversations on Applied AI

Greg Hayes - Applying Design Thinking and Machine Learning to Solve Business Challenges

July 27, 2021 Justin Grammens Season 1 Episode 23

Wow! Where to start. What an amazing interview and opportunity to have Greg Hayes on our program. We had a great conversation on applying Design Thinking and Machine Learning to solve business challenges. We talk through how to bring together business stakeholders, R&D teams, and experimentation to apply MLOps at scale with an iterative approach.

Greg is a technical leader in Machine Learning Engineering and Advanced Analytics with a strong interest in open source platforms written in Python. He has more than 20 years of experience leading global multi-disciplinary technology teams, and collaborating with global stakeholders to identify and align on new opportunities.  He is currently a Data Science Director at Ecolab and is responsible for leading the selection and deployment of technology platforms to create and operationalize Data Science products at scale.

Greg served for many years as a team mentor as part of the FIRST Robotics Competition, with his passion being to encourage interest in science and technology and to show kids that science and technology are both fun and rewarding. That’s awesome. I thank you, Greg, for giving back to help kids in that way and for being on the program today!

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future Emerging Technologies North non-profit events! 

Resources and Topics Mentioned in this Episode

Enjoy!
Your host,
Justin Grammens

Greg Hayes  0:00  
Several years ago, I had the opportunity to go out to California and study design thinking as a process. And I became a huge proponent of that for problem solving, and we try to leverage that into our process. So as part of the design thinking process, you again have a hypothesis and you want to iterate very quickly, ideally with customers in the wild, as opposed to setting up contrived workspaces or experiments. And that ethnographic research really drives value quickly in that front end of the domain space, in the problem solving process.

AI Announcer  0:36  
Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at appliedai.mn. Enjoy.

Justin Grammens  1:07  
Welcome everyone to the Conversations on Applied AI podcast. Today on the show, we have Greg Hayes. Greg is a technical leader in machine learning engineering and advanced analytics with a strong interest in open source platforms written in Python. He has more than 20 years of experience leading global multidisciplinary technology teams and collaborating with global stakeholders to identify and align on new opportunities. He is currently a Data Science Director at Ecolab, and is responsible for leading the selection and deployment of technology platforms to create and operationalize data science products at scale. Also, Greg served for many years as a team mentor as part of the FIRST Robotics Competition, with his passion being to encourage the interest in science and technology, and to show kids that science and technology are both fun and rewarding. That's awesome. Greg, thank you for giving back to help kids in that way, and also for being on the program today. 

Greg Hayes  1:56  
Thank you so much. It's great to be here.

Justin Grammens  1:57  
Awesome. Well, great. I gave a little bit of background about you. I know you sort of started off as a chemist; maybe you could talk a little bit about how you got from that into where you are today in data. 

Greg Hayes  2:09  
Yeah, I think a lot of people in the data science space come at it from a variety of different backgrounds; I don't encounter a lot of people that come in directly through data science. My background, as you mentioned, is in chemistry; I have an advanced degree in synthetic organic chemistry. And I spent a number of years in the industrial coatings space doing everything from short term development to long term R&D, as well as working with global teams. My entry into data science really comes from the Design for Lean Six Sigma space. I had done a lot of work in computers and mathematics in undergrad, and I always had a passion for it. And our organization started pursuing Design for Lean Six Sigma to try and accelerate the development process and take more of a statistically driven approach to development. And it was great, but one of the things that I came to realize during that process was we had a huge amount of organizational knowledge that was being effectively ignored. So I started looking for ways to leverage that institutionalized knowledge that maybe wasn't quite as formalized, but was valuable nonetheless. And that took me toward machine learning. And then that interest grew, and it became a passion that I have just continued to explore.

Justin Grammens
Cool. So you kind of came at it through Six Sigma? Would that be a fair way to say it?

Greg Hayes
Oh, absolutely.

Justin Grammens
Those are large problems, you know, that people need to solve. And I know, as you and I have sort of been talking about, I think this might be the general theme around this conversation: how do you solve these large scale problems, but obviously maybe starting out at a small scale? How do you think teams can come at approaching big problems around data, big problems around process, but yet sort of maybe start out with an MVP proof of concept? How do you sort of think about that as you organize your teams?

Greg Hayes
Well, first of all, I mean, you have to be able to identify the problem. If you think about the challenges in machine learning today, the two biggest issues are, first, what is the right problem to solve? And then, assuming you can get data and build a model that will add business value, how do you operationalize that model at scale? The ends of that problem are two of the biggest opportunities, from my point of view at least. So if we assume that we've identified a problem that might have business value, we really encourage our teams to start by finding data that will help drive the decision making process and really work with in-memory-sized data sets. I think the question you're getting at is, do you go after huge amounts of data first, or do you start with kind of small, fairly well curated data sets? And we encourage the teams to work with their business partners to generate a business hypothesis, or a hypothesis around the problem, and then find data that may help us confirm or refute the hypothesis, with really an eye toward in-memory data, because it's just much easier to work with, it's much simpler for the teams to deal with and iterate quickly. And when you're in that phase of the problem solving process, you really want to iterate quickly.
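
To make the in-memory, hypothesis-first loop Greg describes concrete, here is a minimal sketch in Python (the file name, column names, and threshold are hypothetical, not Ecolab's actual data or workflow): pull a small, curated extract with a business partner, state the hypothesis, and run a quick check to confirm or refute it before investing in anything bigger.

```python
import pandas as pd
from scipy import stats

# Hypothetical small, well-curated extract -- sized to fit comfortably in memory.
df = pd.read_csv("curated_sample.csv")

# Illustrative hypothesis: sites running above a 60 C setpoint use more energy.
high = df.loc[df["temperature_c"] > 60, "energy_kwh"]
low = df.loc[df["temperature_c"] <= 60, "energy_kwh"]

# Quick confirm/refute check to decide whether the idea deserves another iteration.
t_stat, p_value = stats.ttest_ind(high, low, equal_var=False)
print(f"mean high={high.mean():.1f}, mean low={low.mean():.1f}, p={p_value:.3f}")
```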

Justin Grammens  5:22  
One of the things you mentioned was, like, the right problem to solve. So how do you know you're on the right track? Is this a very collaborative type perspective? Like, okay, we're gonna be bringing in all these people that actually understand the certain domain that we're working in? How do you approach that?

Greg Hayes  5:35  
Several years ago, I had the opportunity to go out to California and study design thinking as a process for a short period of time. And I became a huge proponent of that for problem solving, and we try to leverage that into our process. As part of the design thinking process, you again have a hypothesis, and you want to iterate very quickly, ideally with customers, and really observe the customers in the wild, as opposed to setting up contrived workspaces or experiments. And that ethnographic research really drives value quickly in that front end of the domain space, kind of the problem solving process.

Justin Grammens
Absolutely. I've heard of design thinking; like you said, there are whole courses on it, whole methodologies around it. You touched a little bit on it, maybe you could elaborate a little bit more for people that maybe haven't heard of it.

Greg Hayes
So at a high level, design thinking is really about identifying a problem that you believe may add customer value, or may add value to your business, or just needs to be solved, finding the people that are actually dealing with that challenge, and working with them collaboratively, observing them in their own spaces, seeing how they come at solving the problem, and then developing solutions around that. It really follows the agile mindset, when we talk about software engineering or software development, where you iterate very quickly and really see the customer as they work, getting their feedback directly. As opposed to a waterfall process, where you may kind of go off into a lab and build something interesting, you think you've got exactly what the customer wants, you put it in front of them, and they say, that really wasn't what I was trying to do, right? So it's actually interacting with them, collaborating with the customers, getting their continuous feedback as part of the development process.

Justin Grammens  7:24  
Yeah, cool. I did do a design thinking session, and it was with a corporation here in town, and they brought in a bunch of people; it was like a full-day-long event. And it was basically customer interviews, right? What problems are you having? And it was really focused around consumer electronics, things going on within your home. So we interviewed this person, asked them a bunch of stuff, they left, and then we used, like, duct tape and boxes and fake sensors, quote unquote, you know, and kind of scattered them around and sort of made this mock kitchen for them. And then they came back the next day, and we sort of said, well, what do you think about this, right? So it was really fun to kind of take somebody's concept and then essentially use off-the-shelf pieces of equipment, quote unquote (like I say, it was all tape and boxes and stuff like that), but we were able to sort of create this environment for them and then ask them, so what do you think about this, right? And it was very interactive; I thought it was cool. And I'm sure maybe that's what you were doing when you were there. Because at the end of the day, you kind of want to provide something, an MVP of some sort; it doesn't need to have electronics inside it yet, but at least they can get a feel for sort of what it's going to look like, or feel like.

Greg Hayes  8:30  
The real goal of the design thinking process is to get the customer feedback quickly. In the most ideal state, you're actually watching customers or potential consumers of the product that you're trying to build as they would interact with it, the ethnographic research piece. That's pretty tough to do. But if you can get customers engaged in the process, even if it's bringing them into interviews, build something, and it needs to be low fidelity. The idea is to do that very difficult iteration early in the process, keep it low fidelity, get their feedback, and then incorporate their feedback into your design process as early as possible.

Justin Grammens  9:06  
Sure. So now, taking that to the next level in my head: as I'm thinking about it, you know, I'm all about the Internet of Things, right, and putting sensors in people's home appliances and taking it to the extreme. So at the end of the day, I start thinking about that, okay, let's start putting sensors in so you're actually collecting real data, because you can only go so far, I believe, and maybe correct me if I'm wrong, with regards to just questions; people will oftentimes answer what they think you want to hear, rather than you actually getting real world data. So the more we can sort of connect these things around us, then as an organization, as a company, as a product owner, as an engineer, you can start making all these things, you know, better.

Greg Hayes  9:43  
Absolutely. And I think you made a comment there about how people, if you interview them and ask questions, will answer one way, but when you actually observe them in the wild, you oftentimes find they do something completely different. And that's really the heart of design thinking, seeing how they actually interact with the product as opposed to how they tell you they interact with the product, for sure.

Justin Grammens  10:01  
And that's kind of what I keep telling companies: you've got to get more visibility into what's going on with your products out in the field. So you talked about two things at the beginning, right? You talked about defining the problem as a real problem. Okay, we think we have a real problem here, we're gonna do an experiment on it. But sometimes you can't get the data, right? Do you have situations where you haven't been able to get the data, and you've said, we'd have to go down a journey to try and get the data? Or maybe we'll get the data one day, but we'll kind of put that concept or that experiment on the shelf until we can actually get the data. How have you seen this sort of play out in organizations that you've worked at?

Greg Hayes  10:35  
I think what you're asking is, if you can't get the data to test the hypothesis, how can you formulate or restructure the problem so that you may be able to solve it a different way? Is that right?

Justin Grammens  10:47  
Yeah, or at least move forward. Or have you gotten to points where you just said it's a dead end, there would be so many barriers for us to get this data, and we're just gonna set it aside? But I guess maybe the first question is a little bit more realistic: how would you move forward if you're like, we really think that this is valuable, but we don't have the data yet today? How can we play that out?

Greg Hayes  11:07  
Yeah, so in that scenario, and we've dealt with those many times, of course, we actually go into the lab and we generate the data ourselves. So we've done that with some very low fidelity experiments: we have a hypothesis, we think we can solve a problem, we think it might be pretty valuable for the business, but we're really way on the front end of the problem, it's very, very conceptual. We've actually done work where we'll generate low fidelity prototypes, put a sensor in something or use a sensor that exists in another device and repurpose that to generate laboratory data. It's very idealized, we know that, but we have a hypothesis, and we can still test the hypothesis, even if it is not ideal. And then we can take that and go to our business partners, stakeholders that we think might benefit from this, and test that idea with them and see what their feedback is. That in and of itself is an iterative process. And we've done that process a couple of times now, where we'll iterate back and forth with stakeholders on the idea, maybe not the customers directly initially, to get their feedback. Sometimes the stakeholders will say, you know, not really valuable for us, and that's okay, right, we can set that aside. Other times, you know, we shop that idea around and stakeholders say, you know what, that's pretty cool. But oftentimes, what they'll do is they'll say, what if you did X instead, right? So then we go back to the drawing board, into the laboratory, low fidelity experiments, make some adjustments, test that hypothesis again, and iterate back and forth with our stakeholders until the stakeholders go, you know what, now I think maybe we should talk to a customer about this, or we should go talk to one of our business partners. You know, if we're working with an R&D team, we should go talk to our marketing department, get their feedback. And then we iterate with those stakeholders. And it just kind of grows, and we've seen that play out a few times now.

Justin Grammens  12:49  
Yeah, is there anything you can talk about at all publicly? The program here is about applied AI, so I just wonder if there were any specific applications that you can speak to.

Greg Hayes  12:58  
We have a couple of examples. I think a lot of people would be surprised to learn that as an organization, Ecolab specifically in this instance, we have a lot of sensors out in the field, because we go and work with our customers. So if you think about Ecolab's business on the industrial side, we do a lot of work around water quality, maintenance of equipment, water conservation, and we really integrate with our customers to try and help them achieve their goals of reducing their energy footprint, preserving or improving the maintenance of their systems. So it helps them achieve their goals, and we work a lot in that space.

Justin Grammens  13:40  
Yeah, for sure. Well, I know Ecolab is a huge sponsor at Target Field, and I think there's millions of gallons of water that's conserved or reused, I guess, as it flows through the field and stuff like that, which is a really cool story. And I'm assuming that's a pretty modern stadium, so I'm assuming there's a fair amount of sensors and a lot of industrial activity going on in that location specifically, I guess. Yep. The only thing I think about with Ecolab, obviously, is soap, right? I feel like everywhere I go to wash my hands, there's always an Ecolab dispenser there. And I know you guys have done some experiments and stuff like that in that area, right? Yeah. Okay, so you've got these experiments, you're working, you're onto something here, you've actually improved it over a number of iterations, and you're dealing with stuff that's in memory, so you've got some sort of confined data set. But I wanted to get into talking a little bit more about, okay, how do we see the difference between, say, a Raspberry Pi and data that you're pulling off of a couple sensors in a lab? And even if it's not contrived, even if there's only four or five units out there in the field, for example, you're getting real data. Then how do you actually move that into something that can be used? You know, I know you talk a lot about MLOps; maybe you want to define a little bit what MLOps is, and we can start talking a little bit more about how it's applied.

Greg Hayes  14:49  
So I think about MLOps as a process to iteratively develop machine learning models and deploy them to production at scale. So again, I think a common theme through all of these different topics is the iterative component, as opposed to, you know, the waterfall process that's historically been used by a lot of organizations. So when we think about MLOps, the ideal scenario, from my point of view, is we want to take our data science teams, once we move from prototyping, where we're working in memory and writing functions, and be able to migrate the code that the data scientists are writing more or less directly into production. Because if we're going to develop iteratively, we may start with low fidelity models, we may start with simple models, and evolve those to add business value, eventually with more sophisticated models, more sophisticated feature engineering, those types of things. We don't want to be in a process where we build, rewrite, deploy, or even worse, have no connection to what's going to go into production. So in the ideal world, when we start bringing in live data, we're working with that data as it arrives. But we're working with it, again, starting in memory, and then slowly refactoring to distributed compute if and only if necessary.
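
One way to picture the "prototype code becomes production code" idea Greg outlines is to keep feature engineering as plain, importable Python functions over pandas DataFrames, so the notebook and the pipeline share the same code. This is a sketch under that assumption; the column names are made up and it is not Ecolab's codebase.

```python
import pandas as pd

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    """Feature engineering written once; imported by both the prototype notebook and the pipeline."""
    out = df.copy()
    out["flow_rate_7d_avg"] = out["flow_rate"].rolling(7, min_periods=1).mean()
    out["temp_delta"] = out["supply_temp_c"] - out["return_temp_c"]
    return out

# Prototyping: call it on a small in-memory sample. In production, the same function runs
# inside the batch or scoring pipeline, and only gets refactored to distributed compute
# if data volume forces it.
sample = pd.DataFrame({
    "flow_rate": [1.0, 2.0, 3.0],
    "supply_temp_c": [60.0, 61.0, 62.0],
    "return_temp_c": [50.0, 52.0, 54.0],
})
print(add_features(sample))
```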

Justin Grammens  16:04  
Gotcha. Okay. So based on, like you said, being very iterative with regards to data that's coming in, are you retraining models on the fly in this mechanism?

Greg Hayes  16:15  
It depends. So what we want to do is, we have our data scientists, they write their Python functions, and we want those functions to effectively be what gets deployed into production. So they write their functions for feature engineering, they do model building and concept development, and eventually they serialize those models and then deploy them. They may serialize those models and load them at runtime for batch inference. They may also serialize those models and then expose them through an API endpoint. We're leveraging pipelines to do this, and as part of these pipelines, we also have to monitor those models in production, we have to be able to train and retrain those models leveraging automation, and we have to be able to document what the metrics are that are important to drive whether we do swapping of models. So we're really trying to drive automation in the model development, deployment and execution phases.
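
The serialize-then-serve pattern Greg mentions (load a serialized model for batch inference, or expose it behind an API endpoint) can be sketched roughly as below. The model file, payload shape, and choice of joblib plus FastAPI are assumptions for illustration; the team's actual serving runs on their own platform rather than this toy app.

```python
import joblib
import pandas as pd
from fastapi import FastAPI

# A previously trained estimator, serialized earlier in the pipeline with
# joblib.dump(model, "model.joblib"). The filename here is hypothetical.
model = joblib.load("model.joblib")
app = FastAPI()

@app.post("/predict")
def predict(record: dict):
    # Build a one-row frame from the request payload and score it with the loaded model.
    features = pd.DataFrame([record])
    prediction = model.predict(features)[0]
    return {"prediction": float(prediction)}
```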

Justin Grammens  17:05  
How do you know when you need to retrain the model? Are you kind of continually running some test data against it to know that it's not performing as well?

Greg Hayes  17:13  
Yeah, absolutely. As part of our management process, we deploy the models to production, and we have metrics, as well as tolerances, that we want to establish around the models. So we monitor the performance of those models against new data and log those metrics, and then we monitor the metrics. In the most sophisticated cases, we will set up hot swapping mechanisms. So if we see a model whose performance begins to degrade relative to its metrics, based on some threshold, we'll retrain the model, evaluate the performance of the newly trained model against the model that's currently in operation, and swap them out as needed.
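
A stripped-down version of the monitor-and-hot-swap logic Greg describes might look like the following. The tolerance value, function names, and use of mean absolute error are illustrative assumptions; the real mechanism would live inside their pipeline tooling.

```python
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

MAE_TOLERANCE = 1.15  # illustrative: retrain if error drifts more than 15% above the agreed baseline

def check_and_maybe_swap(current_model, train_fn, X_recent, y_recent, baseline_mae):
    """Monitor the deployed model on fresh labeled data and hot-swap it if it degrades."""
    X_fit, X_eval, y_fit, y_eval = train_test_split(X_recent, y_recent, test_size=0.3, random_state=0)

    live_mae = mean_absolute_error(y_eval, current_model.predict(X_eval))
    if live_mae <= baseline_mae * MAE_TOLERANCE:
        return current_model  # still within tolerance; keep the incumbent model

    challenger = train_fn(X_fit, y_fit)  # automated retraining on recent data
    challenger_mae = mean_absolute_error(y_eval, challenger.predict(X_eval))
    # Promote the challenger only if it actually beats the degraded incumbent.
    return challenger if challenger_mae < live_mae else current_model
```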

Justin Grammens  17:49  
That's awesome. And you say "we", so I'm assuming there's some human in the loop going on in this case, but maybe not. So go ahead, let me know.

Greg Hayes  17:56  
Well, when I say we, I mean, collectively, that is the strategy that we take. In some cases, you know, we have a human in the loop, right, monitoring that model's performance, and we have the opportunity to step in and allow a human to be part of the decision making process, if it seems appropriate. But what we don't want is to have abstract criteria around model selection. As we go through the development process, we want to work with our business partners and the product owners to define what kind of metrics we believe will add value to optimize around when we build a model, and we want those to be well agreed upon. We don't want to have different people having different ideas about what constitutes acceptable model performance; you know, we want that to be defined in collaboration with the business.

Justin Grammens  18:47  
Yeah, I guess where my mind was going was, could you see a world at some point where you don't have the human in the loop anymore, because the computer and the AI actually make a better decision than a human, and you just let it happen automatically?

Greg Hayes  19:00  
Yeah, absolutely. In many cases, that's where you want to be. But it depends a little bit on the use case, I think, and the comfort level of the teams, as well as the complexity of the models.

Justin Grammens  19:11  
Well, some of this sort of begs the question that I like to ask people: you know, what do you think about the future of work for humans in some of these cases? Do you feel like our jobs are being automated away, and that there are going to be cases where these models could obviously retrain themselves, they could watch the data that comes in, they could redeploy new versions, everything like that? So yeah, I don't know what your perspective is on that.

Greg Hayes  19:36  
So I think there are probably two separate questions in there. One is, do I think that, you know, we could have models that retrain themselves and swap themselves as needed, that sort of thing? And the short answer to that is yes. But you also asked, do I believe that we're, you know, automating our jobs away? And I don't think that's the case. I think that we're changing the nature of the kinds of jobs that people might need to do. I think AI and machine learning allow people to focus on what people do best, and enable them to do those types of things.

Justin Grammens  20:10  
Yeah, I mean, as you were talking, I was thinking about the whole design thinking and just the whole creativity side of that, when you do these sessions like I was involved in, and you've done many of them before. Those aren't things that are going to be automated anytime soon, right? Now, obviously, the data that comes out of that, and all the crunching of numbers and all the machine learning, that type of stuff a computer can take over. But I think the creativity, the curiousness, the interviewing, all those types of things, and taking a look at how a person is using a system and having to best understand them, I'm with you, that's not something that's going to be replaced. So our jobs are moving away from sort of the nuts and bolts down in there; doing a lot of the monotonous stuff, a computer can take that over, but it's gonna have a very, very tough time running a design thinking session, right?

Greg Hayes  20:59  
Yeah, I actually talk with my kids about this, the role of AI and the future of automation. If you went back 150 or 200 years, nails were made by blacksmiths. And eventually, you know, blacksmiths stopped making nails because that process could be done by a machine. And I think that's just a natural part of, well, evolution isn't the right word, but it's the natural process.

Justin Grammens  21:22  
For sure. Well, one of the things I like to ask people that are on the program is, how do you define AI or machine learning? Do you have a short one-sentence or two-sentence answer, or even when somebody asks you, hey, what do you do, what do you tell them that you do in your job?

Greg Hayes  21:36  
I think of machine learning as allowing software to learn from data. And AI is a more generalized form of that; it's much more abstract. And I think the definition of AI tends to move over time. Five years ago, you know, we wouldn't have considered something like an Alexa to even really be possible. Now, talking at something and having it talk back and respond in a reasonable manner, and having these things spread throughout so many homes across the world, that's just commonplace, you know, we don't even think about it. So there's a conceptual component to AI as well.

Justin Grammens  22:13  
Yeah, for sure. So yeah, allowing software to learn from data, I think that's really good, because we're in this state now where we can get so much data, and all of a sudden these things can start to just learn a lot more.

Greg Hayes  22:25  
Yeah, they can. It's interesting, because, you know, we've been having conversations lately in parts of our team about how much data you really need to train a model, how much information, you know, at what point does the amount of data that you feed a model begin to have diminishing returns. And that varies a little bit based on the problem domain you're working in, the kind of estimator that you're dealing with, and then the outcome that you're looking at. In some cases, we see that just because you have more data doesn't mean that your estimator actually learns more from it. You're not adding value by feeding more data to something that isn't actually becoming more generalizable, or more accurate, or whatever the metric that you're working against is, just by feeding it more data.
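
The diminishing-returns question Greg raises is often checked with a learning curve: train on progressively larger slices of the data and watch whether the validation score keeps improving. The snippet below is a self-contained illustration on synthetic data, not one of Ecolab's models.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve

# Synthetic stand-in data, just to show the shape of the curve.
X, y = make_regression(n_samples=5000, n_features=20, noise=10.0, random_state=0)

sizes, _, val_scores = learning_curve(
    RandomForestRegressor(n_estimators=50, random_state=0),
    X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=3, scoring="r2",
)

# If the validation score plateaus, more rows are no longer buying much.
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:>5d} training rows -> mean validation R^2 = {score:.3f}")
```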

Justin Grammens  23:11  
Sure. What is a day in the life of somebody who is a data science director?

Greg Hayes  23:16  
I spend a lot of time in my role working with Python, and I focus on the technology stack. So I work a lot with our team members, trying to enable them to be successful, as well as with other capability owners in the product development process. You know, I get engaged heavily in solution architecture discussions with other people across the organization, as well as with my peers. So if I look at the organization that I am part of, we have capability owners on the technology side, as well as portfolio owners on the business side, and people in those roles collaborate to make sure that the portfolio is aligned to the business requirements, and that the technology strategy then supports the business. And then, across functional areas, all of the capability owners are collaborating to build solution architecture, as well as team structures, that will support our customer outcomes.

Justin Grammens  24:13  
Well, you mentioned Python; what are some other tools that you guys use internally within your team?

Greg Hayes  24:18  
Well, as a language we leverage Python, and we do a lot with the Azure stack and cloud computing. We collaborate a lot with the engineering teams on not just what kinds of sensors we will use, but also how the data from those sensors will land within the platform. If we look at some of the other tools that we use, we use Kubeflow. So we have a technology platform that leverages Kubernetes and all of the components that come with that. So Nuclio as a serverless platform, we're writing Python on that, Kubeflow Pipelines for pipeline development, and we're actually starting to look at feature stores as well.
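
For readers who have not seen Kubeflow Pipelines, a minimal pipeline definition with the kfp v2 SDK looks roughly like this. The step names, paths, and two-step structure are invented for illustration and say nothing about what Ecolab's actual pipelines contain.

```python
from kfp import compiler, dsl

@dsl.component
def engineer_features(raw_path: str) -> str:
    # Placeholder: a real step would read raw data, apply the shared feature
    # functions, and write the result somewhere; here we just pass a path along.
    return raw_path.replace("raw", "features")

@dsl.component
def train_model(features_path: str) -> str:
    # Placeholder training step; a real one would fit and serialize an estimator.
    return features_path.replace("features", "model")

@dsl.pipeline(name="illustrative-training-pipeline")
def training_pipeline(raw_path: str = "data/raw"):
    features = engineer_features(raw_path=raw_path)
    train_model(features_path=features.output)

if __name__ == "__main__":
    # Compile to a pipeline spec that can be uploaded to a Kubeflow Pipelines instance.
    compiler.Compiler().compile(pipeline_func=training_pipeline, package_path="training_pipeline.yaml")
```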

Justin Grammens  24:57  
Very, very cool. How many years have you been there at Ecolab?

Greg Hayes
Four years, actually, today.

Justin Grammens
Oh, wow. Congratulations. That's cool. I'm assuming you've just seen, over the past four years, a lot of newer technologies coming out with regards to this whole pipeline automation stack. Where do you see it going in the future? I mean, are we kind of getting to a point where you're like, I don't feel like there's much more we can do? Or, I guess, what's the next phase of this?

Greg Hayes  25:24  
Right now, if you look across the functional area of data science, the definition of MLOps means different things to different people; we're still trying to define what constitutes a good or idealized MLOps process. And the idea of applying DevOps to machine learning is relatively new. If you look at something like Kubeflow, it was developed with that in mind, but there are a number of other platforms out there that have slightly different approaches. So I think that's going to take some time to really work itself out, because not all DevOps concepts apply directly to machine learning. A lot of data scientists are focused primarily on training an estimator, but if you think about building a data science product, it's not just about the estimator. It's also about the feature engineering that you're doing, and validating that the code performs as expected and that it's robust to changes in the data. There's a lot of software engineering that has to happen there. So what the balance is between how much of that the data scientist writes, how much of that the data engineer writes, how much of that the machine learning engineer writes, and how those things get deployed through a CI/CD process, I think is still evolving. And I think it's going to be several years before the best practices really get discovered.
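
Greg's point about validating that the code is "robust to changes in the data" is often handled with a fail-fast check at the top of the pipeline. The sketch below reuses the hypothetical columns from the earlier feature example; the expected schema and ranges are assumptions, not a real Ecolab data contract.

```python
import pandas as pd

# Hypothetical data contract agreed with the upstream data engineers.
EXPECTED_DTYPES = {"flow_rate": "float64", "supply_temp_c": "float64", "return_temp_c": "float64"}

def validate_batch(df: pd.DataFrame) -> None:
    """Fail fast if an incoming batch drifts away from what the model was built against."""
    missing = set(EXPECTED_DTYPES) - set(df.columns)
    assert not missing, f"missing columns: {missing}"
    for col, dtype in EXPECTED_DTYPES.items():
        assert str(df[col].dtype) == dtype, f"{col} has dtype {df[col].dtype}, expected {dtype}"
    # Simple range check; a CI job or pipeline step would run this before scoring.
    assert df["flow_rate"].between(0, 10_000).all(), "flow_rate outside plausible range"
```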

Justin Grammens  26:42  
Yeah, for sure. My background has been in software for my entire career, and I just take a look at, you know, all the application stacks that I've developed. And you know, we have somebody who's a master at the front end, somebody who maybe knows the middle tier, people that know back end logic, you know, business logic, but then you can get into the database layer and tuning databases. So you've got all these specialized skill sets along the way. And I think that's kind of what you're saying, right? You've got people that are data science experts, like they know Python or they know Jupyter Notebooks inside and out, but that's just one small piece of the entire puzzle; these things need to get deployed with APIs and scale up and down and all that type of stuff. So there's just an entire infrastructure that needs to go around this. It's similar, but it's different, right?

Greg Hayes  27:23  
Yeah, and I think that's why, you know, I find myself in a lot of discussions about solution architecture and data engineering, landing data. But once it gets into the platform, then how do you transform it into something that a machine learning model can learn from and/or score off of? So it's not just about the model, it's about the entire product that you're building, and all of the components of the product have to work together, from the data ingestion piece to whatever components the model or models that you're building may be part of, and then front end visualization and iterative development, right? You're building a product; the machine learning models are one component of the product that is going to be deployed to meet some customer need.

Justin Grammens  28:04  
Well said. You got into this field, like you said, sort of through Six Sigma, but for other people that are up and coming, maybe they're coming out of college or whatever, and they're interested in getting more into it, what do you suggest? Are there interesting books that you have read or things you've seen online, or conferences, if they ever happen again, or I guess I should say when they happen again, after we get out of this COVID stuff? But yeah, I don't know if there's any advice that you would give. Or, to be honest, if you're hiring on your team, what are the types of skill sets you're looking for in people that come in?

Greg Hayes  28:32  
In terms of skills, what sorts of skills do I personally consider valuable for a data scientist, above and beyond people that know scikit-learn, maybe TensorFlow, I mean, some of the frameworks that are commonly used? Perhaps things that oftentimes aren't as well thought of: do they know how to use source control? Can they leverage source control? Can they collaborate with others in software development? Can they write unit tests? Those are the sorts of things that I think are going to be more important as time goes on. And also, can you move from in-memory to distributed compute? Because when we talk about scaling models and engaging in a collaborative development process, if we acknowledge that machine learning models are a component of a broader product, then those are all capabilities that you have to bring to bear to be able to iteratively develop and add value to your customers. And those, if you go take classes on Coursera or other things like that, oftentimes aren't really the point of focus. For sure.
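
As one concrete example of the unit-testing habit Greg calls out, a pytest test for the hypothetical add_features function sketched earlier might look like this (the features module and its columns are assumptions carried over from that sketch, not real project code):

```python
import pandas as pd

from features import add_features  # hypothetical module holding the shared feature code

def test_add_features_computes_temp_delta():
    df = pd.DataFrame({
        "flow_rate": [1.0, 2.0, 3.0],
        "supply_temp_c": [60.0, 61.0, 62.0],
        "return_temp_c": [50.0, 52.0, 54.0],
    })

    out = add_features(df)

    # The engineered column matches a hand-computed expectation.
    assert list(out["temp_delta"]) == [10.0, 9.0, 8.0]
    # Feature engineering must not silently drop or duplicate rows.
    assert len(out) == len(df)
```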

Justin Grammens  29:36  
It feels like it's some of those soft things, I guess, those soft skills, that are equally important along with the technical ones and that will help people along the way. Well, how do people reach out and contact you, Greg, if they're curious to know more? Are you on LinkedIn?

Greg Hayes  29:51  
Yeah, I'm on LinkedIn, so you can find me; I think it's Gregory B. Hayes. That's a great place to reach out. I tend to respond pretty quickly, and I'm always glad to hear from people and make connections with other folks that are interested in this space.

Justin Grammens  30:06  
Awesome. Well, on a more personal note, I mentioned at the beginning about FIRST Robotics. I know we're on here talking about applied AI, but I'm just gonna kind of go off script a little bit. Could you talk a little bit about that, how your experience was, and maybe why somebody might want to get involved with it?

Greg Hayes  30:20  
Yeah, so we actually moved to Minnesota when my kids were really just entering their teenage years. They heard about FIRST Robotics, got kind of interested in it, went to a couple of the initial meetings, and just absolutely got sucked in. And by virtue of them being involved in it, I got interested in it. I probably can't say enough good things about what FIRST provides for kids that have a passion or find an interest in robotics, in computer programming, in mechanical engineering, electrical engineering. It was a great experience for my kids, and a good experience by extension for me as a mentor in the program. Going to a FIRST Robotics Competition, the first time I went to one, I was stunned, because it was like going to a college basketball game: lots of excitement, kids just absolutely having a ball. The great thing about FIRST is it gives kids who have an interest in that sort of thing a place to really explore the boundaries of science and technology and what it can do for them in the future. There aren't a lot of places as a kid where you can go and collaborate with others on a scientific endeavor over a long timeframe, and at the same time be faced with the pressures of a product development process. At least when my kids were in it, and I'm assuming it hasn't changed because I'm not in it anymore, the challenge was announced and you had a limited timeframe to do the build. Once that time was up, you had to bag the robot and you were done; you couldn't continue building, right. So you have the time component as well, as part of what it's like being in the real world. I probably can't say enough good things about FIRST Robotics. It was really a great experience for me, working with kids, and also a great experience for my kids, just in terms of really developing a sense of what they enjoyed as it relates to science and technology.

Justin Grammens  32:17  
That's great. Yeah, I will definitely be sure to include some links to it. I'm assuming anybody can just search around; FIRST Robotics is all over the country, maybe even all over the world.

Greg Hayes  32:25  
Absolutely, all over the world. If you want to see something that's kind of fun and exciting, go onto YouTube and just search for FIRST Robotics competitions. What these kids accomplish is just amazing.

Justin Grammens  32:36  
That's cool. Were you a part of the Donkey Car work? I recall some of that, that Dan McCreary was doing.

Greg Hayes  32:42  
Yeah, we were at an AI meetup, and there was an interest in self-driving cars. I had actually asked for a Donkey Car kit as a birthday gift, so I built a Donkey Car and started kind of playing around with it. And at this AI meetup, we were talking about self-driving cars, it came up, and I shared my experience. I think there were a number of people in the local meetup that got pretty interested in it, and they took it and have shared some of that with local kids' groups as well.

Justin Grammens  33:12  
Yeah, I know. There's an AI Racing League that Dan has kind of spearheaded, and they've been doing some very interesting stuff. So it's kind of that mix of physical hardware, a little robotic car, and then, you know, bringing together some AI and machine learning, trying to drive these cars around the tracks. I'll be sure to put some links to that fun project. We built one at Lab651 as well, so yeah, I remember us doing some testing, setting up a little track at our office and playing around with it. So, fun stuff. And yeah, I know that even the Donkey Car stuff is all open source, and there's sort of a worldwide movement to get more and more people involved in just building stuff and making things smarter.

Yep, I think the basic idea is build your way forward. Well, that's good. Well, cool, Greg, is there anything else you wanted to mention before we sign off here?

Greg Hayes  34:00  
No, just thanks for having me on. I appreciate the opportunity to chat with you. And thanks again.

Justin Grammens
Yeah, for sure, no problem. All right, well, take care, and we'll definitely be in touch soon.

AI Announcer  34:11  
You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn if you are interested in participating in a future episode. Thank you for listening.

Transcribed by https://otter.ai