Conversations on Applied AI

Sabina Stanescu - Building Better Products Using Responsible AI and MLOps

June 01, 2021 Justin Grammens Season 1 Episode 21

How does one ensure that data models are performing the right tasks at the right time? We've heard a lot about MLOps and, in particular, Responsible AI. In this episode, I speak with Sabina Stanescu about a number of applications she has worked on in product management that address many of these points using MLOps tools and techniques. Besides being the Director of Product Management and Data Analytics at Altair, she enjoys being a lead instructor of Data Analytics at BrainStation, has generously volunteered her time to mentor at a number of coding hackathons and Ladies Learning Code events, and speaks regularly at multiple conferences and meetups.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can keep producing future events through the Emerging Technologies North non-profit!

Resources and Topics Mentioned in this Episode

Enjoy!
Your host,
Justin Grammens



Sabina Stanescu  0:00  
Whatever we build has to be reproducible and traceable. Especially in a world of, you know, responsible AI, MLOps has to help with those areas. We have to be able to see why we made certain decisions and recreate those decisions perfectly. That's one of the goals of MLOps, for sure.

AI Announcer  0:20  
Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at appliedai.mn. Enjoy.

Justin Grammens  0:51  
Welcome everyone to the Conversations on Applied AI podcast. Today we have Sabina Stanescu, Director of Product Management and Data Analytics at Altair. Prior to joining Altair, she was a lead product manager of machine learning at Points, where she was identifying areas of the business where machine learning could improve business outcomes and devising actionable plans for engineering to make these solutions a reality. In her time outside of work, she enjoys being a lead instructor of data analytics at BrainStation, has generously volunteered her time to mentor at a number of coding hackathons and Ladies Learning Code events, and speaks regularly at multiple conferences and meetups. Thanks, Sabina, for taking the time to be on the program today.

Sabina Stanescu  1:27  
No, thanks for having me on. 

Justin Grammens  1:29  
Excellent, very cool. So, you know, I gave a little bit of information with regards to sort of where you're at today. I don't know if you wanted to share a little bit about what got you to where you're at and how your career has sort of unfolded.

Sabina Stanescu  1:40  
Sure, it's kind of an interesting journey, because I started out in academia. My background is actually in biology and science, and I did my master's degree in basic science. So I've gone a long way from basic research to applied work in industry. I started out my journey at a company called Angoss Software, basically working with a product team. After a year, I moved into data science, because I thought, you know, I really wanted to work with data. I worked with data during my master's, learned a lot of stats and programming in R. So I did data science for a few years as a practitioner, and then moved into product at Points. As you mentioned, I was the lead product manager for machine learning there. And I really started falling in love with product management, just being hands-on and delivering value and doing anything possible to bring that to reality. That's sort of the motto of a product manager, being like a mini CEO of your own product. Then I went back into data science at Altair, being a practitioner and managing a team of data scientists there. And Altair started doing some interesting work with MLOps, basically building out a new platform for data science and machine learning, and I moved again into product. So I've been going back and forth between practicing data science and working on a product that works with machine learning and delivers value. There's different areas of value that can be delivered in different ways, and it turns out I really enjoy sort of the product viewpoint on that.

Justin Grammens

Gotcha. No, I'm similar in my career, in that I like sort of taking a high-level approach and looking at, you know, what the forest looks like overall, but then going on a deep dive for a period of time and getting deep into the technology. You mentioned, you know, sort of getting into data science. Was that a big term? What was the timeline, I guess, around then?

Sabina Stanescu

Yes. So when I was doing my master's, a Harvard Business Review article came out calling data scientist the sexiest job of the 21st century. At the time, it wasn't really something that was taught in school. Statistics was around, data mining was around, but the term data science was sort of just coming into its own. And it was a stats prof that brought up this article to the class I was in for statistics with R. When I saw some of the applications for statistics, I thought, yeah, this is the place to be, this sounds like a lot of fun, and I want to do this in my life, in my career. And basically, that's where it started. It started out with an idea of this new job title, and I was lucky enough to be there at the very beginning and sort of get into it before the huge rush. Nowadays, it's taught everywhere, in school and in courses online. There's a plethora of resources out there that anyone can read to get into it, but at the time it was a little bit more up in the air.

Justin Grammens  4:36  
Sure. You mentioned that your background is in biology. Do you think having experience in the life sciences or physical sciences has contributed to what you work on, giving you kind of a different perspective than maybe some other people in your field?

Sabina Stanescu  4:50  
I think it has helped in terms of the systems thinking approach, at least. The field that I was in is all about, basically, you could call it systems ecology: how do large natural systems work together? They're very complex, and you can try to model their behavior. And that applies in businesses as well. These intertwined systems, you know, whether they're natural or man-made, are very, very complex. So I think that's helped me. I also have a background in psychology from my undergraduate. That has helped me more in the sense of personal relationships and thinking about how people interact together, what some biases are, perhaps, or trying to address biases, for example. So it's interesting, and you don't have to have a computer science background or hard math background or statistics background to succeed in a role like this.

Justin Grammens  5:43  
That's great. No, I think about the product manager obviously having to deal with different people's personalities. I guess there might be a little bit more politicking going on, just depending on the size of the company. So I could totally see that sort of psychology background playing a role in this.

Sabina Stanescu  5:57  
And I do have a few colleagues, actually, that have a bio background that work with me. So I'm actually not the only one, which is quite cool and interesting.

Justin Grammens  6:06  
Yeah, I played drums, and I was a musician early on. And it's kind of funny how I got into software, because most people think of software, and math, which is what I graduated in, as very sort of rigid. But actually, and I think you'll find this too with data science and product management, it's more of an art than a very fixed skill.

Sabina Stanescu  6:26  
Yeah, I find there's a lot of creativity that's needed, and a lot of flexibility in how you think and how you decide on goals and how to get there. It's not a linear path that you can take to solve problems. So it does help to have more of an artistic leaning, in a sense, to get to your destination. It's not all just math.

Justin Grammens  6:44  
Yes, for sure. So you mentioned MLOps. I guess, maybe for those that are new to the program, or maybe don't know what that is, could you define that a little bit and sort of give your perspective on it?

Sabina Stanescu  6:54  
Yeah, so MLOps is all about, basically, and I've seen a really great quote on this, going away from artisanal model development, machine learning model development, and going into more of, like, factory production of machine learning models. What does that mean? It's basically taking models and operationalizing them at high quantities, making a reproducible system where you can always reproduce any model, you can always find the data that created that model, you can trace any of the inputs, and you can validate all of the outputs. You're basically creating a system that's resistant to stress, resistant to attacks. So it's making a robust process and system to get your models out there in the world to make some sort of decision and to make a difference in some way. Whether that's, you know, improving a bottom line by selling more product, or preventing breakdown of machines on the factory floor, whatever it is, it has to work in a reliable way. So it can't just be in a notebook on your computer, because that's not going to make a difference anywhere.

Justin Grammens

Excellent. Yeah, it sounds like it's a hot term these days; I seem to see MLOps showing up in a lot of things. So my background is in software, like I said, and I've been getting into data science, machine learning, and AI more and more over the past couple of years. And I was like, oh, it's cool, I can basically run a Google Colab thing or a Jupyter Notebook. But tying that into an API, like something that a software engineer or other programs would use, seems like there's a big gap there. So it sounds like it's part of that, but then also productionizing this thing, because there's a big difference between a rapid proof of concept and something that will evolve and learn over time.

Sabina Stanescu

Yeah, there's a big difference, and the field is evolving. When I worked at Points, we built up our own MLOps system and set of processes and tools, basically borrowing from DevOps. And you'll probably see articles like this, you know: how did we get from DevOps to MLOps? What works and what doesn't? What can we borrow, and what new tools do we need? It started out before the term MLOps actually existed, which is, you just have to embed your model into the business somehow. And the business itself has a set of requirements and SLAs. So if you have an e-commerce website, your website hopefully doesn't go down from, you know, spikes in traffic, and your model, because it's embedded in that system, has to follow the same rules. And if there are errors, you have to be able to recover gracefully from those errors. Or if there are attacks, again, you have to be able to deal with those attacks; the same way you might want to not be affected by a SQL injection on your website, your model has to follow the same sort of security ideas as any kind of system that's out there in the world. So I think that sort of thinking has been evolving, and, you know, people have been building their own tools, and there's more and more open source now that's available to help organizations start to cobble those pieces together instead of building them from scratch.

Justin Grammens

Yeah. And what are some of the tools that you're using day to day? Are you more on the open source side or proprietary, or a little bit of both, building your own in house? What's your team's perspective on a lot of this?

Sabina Stanescu

So in the past, as I've said, when I worked at Points, we built our own in-house tools. We made our own pipelines in GitLab, and we built our own sort of traffic routers and monitoring and reporting. Now, in my current role at Altair, we're building tools on top of open source. We're cobbling together all of the different elements of the MLOps lifecycle into, like, one platform sitting on top of open source. There's just a lot out there, and it's really good. I guess the challenge is cobbling it, or gluing it together, in a nice way that helps you get to your goal. So one of the tools we're working on integrating right now is Seldon Core, for example. It's a great tool for having production-quality deployments of models on Kubernetes. We're also working with MLflow, which is great for model management, so having multiple registries and repositories, tracking versions, things like that. So there are lots of great tools out there, and it's just a matter of making sure you have all the tools for each part of that MLOps lifecycle.

Justin Grammens

Well, how big is the team? Are you finding challenges with regards to larger or smaller teams? Or do you think that this sort of spans no matter what size your organization is?

Sabina Stanescu

So I think it depends on the team size. Right now in product management, I have a team of two product managers specifically focusing on building MLOps, and we have quite a large development team that takes on different feature development. So from an engineering perspective, it is quite a large team. In terms of our actual end users that would use such tools, you know, you can have very small teams using the same tools as very large teams, and they would have to follow the same sort of lifecycle to get their models in production. The difference between big teams and small teams is how many hats one person has to wear. In a small team, you might have a data scientist that does everything from gathering the data, to training the model, to basically getting the model in a staging environment and testing it there, and then there's just one other step of someone putting that model in production. In some teams, there's a lot more stratification: different people work on the data pipelines, other people work on the models, other people work on approving the models, other people work on getting those models into a deployment stage, and so on. So I think all the steps need to be done in whichever organization you're in; it's just how many people do the steps and how many handoffs there are that differs between small and large teams.

Justin Grammens

Yeah, that makes a lot of sense. I've worked at large organizations where you just have a very focused role; you do the piece that you do, and the people around you do theirs. And now I'm in a small company. Yeah, you're right, you just have to do whatever you can do, just to sort of keep things going, which makes it interesting and exciting, but also can be challenging, with a lot of context switching and wanting to pick up new things.
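
As an aside for readers: the kind of model management Sabina mentions with MLflow can be sketched in a few lines. This is a minimal, hypothetical example, not Altair's actual setup; the experiment name, model, and data are placeholders.

```python
# Minimal sketch: logging and registering a model with MLflow so the run
# is reproducible and traceable later. Names and data are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("credit-risk-demo")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the model artifact itself.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="credit-risk-classifier",
    )
```
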

Sabina Stanescu

Yeah, I think my job today, even though I'm on a pretty large team, is a lot of context switching and sort of zooming in and out in different areas of the product, different areas of work and research.

Justin Grammens

That leads me to my next question, I guess: what is a day in the life of somebody in your role?

Sabina Stanescu

A day in the life can be anything from working on the vision, so, as I said, MLOps is my area of focus, and I have up to three or four years' worth of vision in terms of what I'd like to do, and then starting to go down from there on planning, like themes, and then planning individual features and epics that we want to take on and develop within our team; to attending meetings about releases, looking at backlog statuses, and looking at where there are blockers for the team; to basically doing more research. So it goes from tactical to strategic very quickly, from meeting to meeting.

Justin Grammens

Wow. Yeah, it sounds like an exciting role to be in. What are some applications, I guess, kind of in the real world, where some of these models are actually deployed and in use? What are some customer use cases, maybe, that Altair is involved in?

Sabina Stanescu

Yes, so there's a lot of different use cases, depending on the industry. In the credit industry, for example, you might have a credit risk model that's running continuously. So anytime someone is applying for a credit card, say an Amex application online, you put in your information, and if you get auto-approved, for example, there's a model in the background that's basically scoring you and determining, based on your income and your job and your credit score and a whole bunch of things, whether to approve you. And they want to automate that as much as possible; they don't want a human looking at the application, they just want, you know, yes or no, and how much credit you're going to get. Other applications can be for engineering. There's some interesting work going on now with looking at simulation data. There's a huge lifecycle in designing products, and a lot of time is spent on rerunning simulations with different parameters, which can take a long time from an engineer's perspective. But if you have a machine learning model in the background, embedded in the product, it can actually create predicted simulation results based on past runs, so you don't rerun the simulations every time you make a single change. So you can embed it in products; you can embed it in services, like getting a credit card; you can embed it in systems, like pharma, for example, or any kind of factory work where you're doing predictive maintenance. So there are lots of different applications, depending on the industry.
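
For readers curious about the simulation use case above: this pattern is often called surrogate modeling. Here is a minimal sketch, with synthetic data standing in for real solver runs, and no claim about how any particular product implements it.

```python
# Minimal sketch of a surrogate model for simulation results: train a
# regressor on past (design parameters -> simulation output) runs, then
# predict outcomes for new designs without re-running the solver.
# The data here is a synthetic stand-in for real solver output.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
params = rng.uniform(size=(500, 3))   # e.g. thickness, load, temperature
results = (params ** 2).sum(axis=1) + rng.normal(scale=0.05, size=500)

surrogate = GradientBoostingRegressor().fit(params, results)

new_design = np.array([[0.4, 0.7, 0.1]])
print(surrogate.predict(new_design))  # instant estimate, no solver run
```
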

Justin Grammens  15:53  
Yeah, very cool. I think that's the beauty of machine learning and AI, this whole field: as long as you can get the data and train the models, it feels like the world is your oyster. There's just so many different applications and places you can go.

Sabina Stanescu  16:06  
And one of the challenges is figuring out, if you have the data, what can you do with it? Do you have the right question that you want to ask of the data? And once you figure that out, if you have the right data to ask the question, it's all about doing the rest of the work, I suppose.

Justin Grammens  16:21  
Yeah, I guess that's what I was going to ask: what are some of the challenges that you find sometimes? Like, how do you attack this beast? Do you ask the questions first, from the business side, like, what are some things that you want to answer, and then go on a data hunt, and maybe realize you don't have the data to answer that question, so then go back? Or do you get a bunch of data first and then sort of cobble together the questions you can answer from it? What's been your approach?

Sabina Stanescu  16:43  
I think it goes both ways. Sometimes you have nice, pleasant discoveries as you're looking through data that you didn't know you had, and you're like, oh, this could actually help with this particular use case in our business; I didn't even know we could do that. But more often, the business has a certain need, and you go and look to see whether you have the data to build a model that would help improve that. And then there's the whole process: you start modeling on a bit of sample data, you back-test it, and you look at, you know, if we had a model in production that did this, would it actually have made a difference? So there's an important step of actually looking at whether or not it would make a difference before you put in the huge investment to get such a model into production. And sometimes what you want to do is test that against simple business logic, because if you test it against simple business logic and you get the same or even better results with the business logic, there's no point in investing in building out a model. Maybe you don't have enough data, in some cases, or, you know, you just have good heuristics. So I've done that in the past, where basically you just need to find a way to test business rules versus an actual model. And many organizations do this even before they go into production with a model: they'll see, you know, how well are our business rules doing? And if we back-test the business rules versus a model, would we have done better?
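
Here is a minimal sketch of the back-test Sabina describes, comparing a simple business rule against a model on held-out historical data. The file, column names, and rule threshold are hypothetical.

```python
# Minimal sketch of back-testing a business rule against a model on
# historical data. The file, columns, and threshold are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("historical_transactions.csv")   # hypothetical data
X, y = df[["amount", "num_prior_orders"]], df["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline heuristic: flag any transaction over a fixed amount.
rule_preds = (X_test["amount"] > 500).astype(int)

model = LogisticRegression().fit(X_train, y_train)
model_preds = model.predict(X_test)

print("rule F1: ", f1_score(y_test, rule_preds))
print("model F1:", f1_score(y_test, model_preds))
# Only invest in productionizing the model if it clearly beats the rule.
```
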

Justin Grammens  18:08  
Sure. And when this testing happens, is it financially based? You know, hey, I've been able to remove this amount of human labor, for example. I'm assuming there's a quality side to it too, right? If all these credit card applications are being flagged and you're rejecting everyone, then obviously something's wrong there too, because that's going to cause bad customer experiences. What are some of the areas with regards to how you're validating whether you should move forward?

Sabina Stanescu  18:31  
Yeah, it depends, of course, on the use case. But let's say we have a fraud example, and we have some simple rules to try to catch fraudulent transactions that are coming through. One of the things you might look at is, if you're flagging a lot of transactions for manual review, how many people do you actually have on your team to manually review them? If this model is flagging everything, can you actually keep up with your SLA to turn around transactions in a certain amount of time? But you might also want to ask, okay, are we more interested in catching a high number of smaller transactions that might be fraudulent, or are we really focused on catching big transactions? Different models might have different outcomes. So it's all about what your most important metrics are around that, and in machine learning it might be: what's more important, fewer false positives or fewer false negatives? In some cases, one is more costly than the other.
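
One way to make that false-positive versus false-negative trade-off concrete is to weight a confusion matrix by business costs. A minimal sketch, with made-up cost figures:

```python
# Minimal sketch of weighing false positives against false negatives
# with business costs. The cost figures are made up for illustration.
from sklearn.metrics import confusion_matrix

def expected_cost(y_true, y_pred, fp_cost=5.0, fn_cost=200.0):
    """fp_cost: manual-review cost per wrongly flagged transaction;
    fn_cost: average loss per missed fraudulent transaction."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return fp * fp_cost + fn * fn_cost

# Compare two candidate models by total expected cost, not raw accuracy:
# cost_a = expected_cost(y_test, model_a.predict(X_test))
# cost_b = expected_cost(y_test, model_b.predict(X_test))
```
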

Justin Grammens  19:29  
Yeah, sure. So is that all sort of plugged into a formula? Do you have a lot of bean counters trying to figure all this stuff out and say, yes, this does look like it's financially worth moving forward? Or are you just kind of going with more gut instinct, I guess, in some ways?

Sabina Stanescu  19:43  
In this case, you basically just want to have good reporting. So it's all about agreeing on your metrics ahead of time, before you actually go into production, and being able to track those metrics, especially if you can track them in near real time. It's nice to see, okay, this is how we're doing right now. Let's say we're tracking percent of actual fraud caught, or something like that, or maybe it's total dollars of fraud caught. But we're also tracking, let's say, another metric like the load on our team. So, depending on what the department's goals are, you look at all of those metrics and decide if it's doing well or not. But it's all about being able to see what's happening, and then you can make decisions based on that.

Justin Grammens  20:25  
Yeah. So do you have a lot of cases where you're, I don't know, for lack of a better term, kind of doing A/B model testing, where you'll put something in production? And maybe this is looping back to the whole MLOps thing, right? So you're testing something, logging it, and, you know, tweaking dials just a little bit to see how it performs. Is that something that happens kind of on a day-to-day basis? When you're dealing with millions of credit card transactions, it sounds kind of risky, but, you know, maybe that's the way to do it.

Sabina Stanescu  20:48  
Yeah, A/B testing is very common, and I've used it a lot in my career as a practicing data scientist. And you typically don't change it fast. If you're doing A/B testing, which is, let's say, we have two models in production, half the traffic goes to one model and half the traffic goes to the other model, you generally just don't change that super fast. It's meant to be, usually, a long-running test, but you're monitoring it in real time. It's a little bit of a set-and-forget, unless you have some sort of triggers with alarms for something going terribly wrong. You wait until you gather enough data to be able to do some statistical testing and see if one model is significantly better or worse than the other, based on the metrics you're interested in. In some cases where you do want to switch and react very quickly, you use something we call multi-armed bandit testing. This is where you can have the traffic automatically adjust and send more traffic to the best-performing model almost in real time. Multi-armed bandits work on the basis of a reward, and you can set the reward to be something like conversion. Say you have an e-commerce website, and this model is giving this discount or this offer, and that model is giving a slightly different offer based on your profile; which one is leading to the best conversion on that offer? Once that feedback data comes in (obviously, you're not going to get it quite in real time; there's a little bit of a delay between seeing the offer and deciding whether or not to buy), the data loops back, and then the bandit can make a decision to say, okay, this is the best-performing model, let's send most of the traffic there. Let's say we decide 90% of the traffic goes to the best model, and the remaining 10% we save for testing multiple models. So when you're doing something rapidly changing like that, you can go with that approach instead of A/B testing, which is more static.
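
Here is a minimal sketch of the multi-armed bandit idea Sabina describes, using a simple epsilon-greedy router over two models; rewards (for example, conversions) are fed back as they arrive. This is a generic illustration, not any particular production router.

```python
# Minimal sketch of an epsilon-greedy traffic router for N models:
# most traffic goes to the current best performer, while a small
# fraction keeps exploring the others. Rewards arrive after the fact.
import random

class EpsilonGreedyRouter:
    def __init__(self, n_models, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_models      # requests routed to each model
        self.rewards = [0.0] * n_models   # cumulative reward per model

    def route(self):
        if random.random() < self.epsilon:           # explore
            return random.randrange(len(self.counts))
        means = [r / c if c else 0.0                 # exploit best mean
                 for r, c in zip(self.rewards, self.counts)]
        return max(range(len(means)), key=means.__getitem__)

    def record(self, model_idx, reward):
        self.counts[model_idx] += 1
        self.rewards[model_idx] += reward

router = EpsilonGreedyRouter(n_models=2)
chosen = router.route()              # which model serves this request
router.record(chosen, reward=1.0)    # e.g. the offer converted
```
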

Justin Grammens  22:37  
That's fascinating. Yeah, that's cool. I feel like, you know, to orchestrate all of that would take quite a lot of work, right? You've got load balancers and stuff like that. You've got e-commerce systems where, based on purchases that have actually happened, you obviously want to take a look at the big purchases, right? It's all these big purchases people are making; well, how do we get people to buy bigger things? So just the entire orchestration, and, I guess from my world, from a software engineering standpoint, the number of people that need to be involved to make all that happen. It's awesome, but it sounds like it takes some thoughtful planning, getting everybody on the right page and everyone working together.

Sabina Stanescu  23:11  
Exactly. And, you know, the open source tool that I mentioned, Seldon Core, for example, has multi-armed bandit options available. They've got two routing options, epsilon-greedy and Thompson sampling, I believe. So these are two ways to basically figure out how you determine a winner. In my previous role at Points, and this is years ago, we built a simple epsilon-greedy router from scratch, essentially. Before those tools existed, you had to actually code those traffic routers yourself and do all of the steps to manage that. So it is quite a lot of work and planning. And then building out the data pipelines to calculate the reward is also crucial, and a lot of work to wire together.
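
For contrast with the epsilon-greedy sketch above, here is a minimal Thompson sampling router for binary rewards. This is a generic illustration of the technique, not Seldon Core's implementation.

```python
# Minimal sketch of Thompson sampling for binary rewards (e.g. offer
# converted or not): keep a Beta posterior per model, sample from each,
# and route the request to the model with the highest sample.
import random

class ThompsonSamplingRouter:
    def __init__(self, n_models):
        # Beta(1, 1) prior: one pseudo-success and one pseudo-failure each.
        self.successes = [1] * n_models
        self.failures = [1] * n_models

    def route(self):
        samples = [random.betavariate(s, f)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=samples.__getitem__)

    def record(self, model_idx, converted):
        if converted:
            self.successes[model_idx] += 1
        else:
            self.failures[model_idx] += 1
```
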

Justin Grammens  23:54  
Yeah. So where do you see MLOps going, then, in the next, you know, five to ten years? Is it just going to be layers upon layers, where things that, like you said, you had to build from scratch in your previous role are now built into open source? Do you see it getting easier and easier, so that any sort of generalist will be able to build on top of all these systems that are being created today?

Sabina Stanescu  24:15  
Yeah, I think it's definitely going to get a lot easier. We'll have more open source tools and more platforms that bring those tools together and sort of make it, you know, click here to set your traffic, click here to add a data pipeline. That's what I'm aiming for: to build something that's generalizable and usable without having to build everything from scratch. It'll be a similar journey, I think, to machine learning, where you get more and more tools that make things easier to work with. Notebooks make coding easier; packages like Keras make deep learning easier to work with. I think the same will happen with MLOps.

Justin Grammens  24:53  
Excellent. You just sort of see the advancement of all these tools keep building, and that's the beauty of open source: anyone can contribute.

Sabina Stanescu  25:00  
Yeah, and there are also a lot of startups. At least in 2020, there were a lot of startups that received funding around MLOps. And one of the predictions, actually from Forbes, I believe, for this upcoming year for MLOps is a lot of consolidation of different startups. It's interesting; there's a lot of interest in MLOps as a field, a lot of open source tools, conferences. It's becoming a very hot topic. Gartner cares about this for their data science and machine learning platform evaluations. So it's definitely going to be very much needed in the future, and it will become standard. Well, maybe not standard, but it will become something everyone needs to do well.

Justin Grammens  25:38  
Yeah. And I assume, you know, it's the size of the data sets and all the complexity. These problems that we're trying to solve, these challenges in the world of machine learning, they're not getting easier anytime soon. So you need to build smarter systems to keep up with them.

Sabina Stanescu  25:53  
Yeah, and not only that: whatever we build has to be reproducible and traceable, especially in a world of responsible AI. MLOps has to help with those areas. Basically, you know, we have to be able to see why we made certain decisions and recreate those decisions perfectly. That's one of the goals of MLOps, for sure.

Justin Grammens  26:15  
That whole sort of ethical side of it, and being able to, like you say, do a lot of logging and understand a little bit more, so it's not just such a black box.

Sabina Stanescu  26:21  
For sure. It sort of intersects with explainable AI, but there's also this reproducibility and being able to trace: we did these kinds of transformations on the data; this is the pipeline that led to this particular data point; it was used in this model that was created in this environment with these package versions. And then, if I were to just redo that, I would get the same prediction every single time. So it's not a fluke, and I can prove it. And, you know, if you layer on explainable AI, you can start to explain not just what decision was made, but why.
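
A minimal sketch of the traceability Sabina describes: recording a hash of the training data and the exact package versions next to the model artifact, so a prediction can later be reproduced. The file and package names are placeholders.

```python
# Minimal sketch of capturing lineage alongside a model so a prediction
# can be traced and reproduced later: a hash of the training data plus
# the exact package versions of the environment.
import hashlib
import json
import sys
from importlib.metadata import version

def lineage_record(train_csv_path, package_names):
    with open(train_csv_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "training_data_sha256": data_hash,
        "python": sys.version,
        "packages": {name: version(name) for name in package_names},
    }

# Stored next to the model artifact; re-running with the same data hash
# and versions should reproduce the same predictions.
record = lineage_record("train.csv", ["numpy", "pandas", "scikit-learn"])
print(json.dumps(record, indent=2))
```
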

Justin Grammens  26:56  
Nice. Probably the science and the proof side of data science, making sure that it can be explained, like you talked about.

Sabina Stanescu  27:03  
And it's especially important if these decisions affect people's lives. Credit is one of those very key examples, and the credit risk industry has been explaining their decisions for a long time, so they're sort of pros at this. Expanding that to any kind of decision making would be a really great goal to have.

Justin Grammens  27:23  
Yeah, you know, when you were talking about, I guess, making decisions, I was thinking back to the movie Moneyball. I'm not sure if you've seen it; it's really around baseball and what happened with some small-market teams being able to actually apply data. A lot of the scouts at that time were looking at players physically and saying, that person is going to be good, let's put them on the team. Or, you know, oh, this person drinks a certain type of juice, for example, they're going to be good. There was all this stuff, and there was no real data science going on back in the early days of baseball; they didn't really think about it. Once they started putting actual hard stats on people, then all of a sudden people started waking up. But it was just funny how they would sign contracts for people based on all these other attributes. Yeah, gut feeling, exactly. Well, so, you know, you mentioned affecting people's lives. I just wanted to touch a little bit on this, because one of the things I like to ask people that come on the show is: what is your perspective on the future of work? Not so much for the consumer who's going to be applying for a credit card, but let's say for those people that maybe were doing the manual credit validation process in the past. There's a lot of conversation around, as this technology gets better and better, are people going to be losing jobs over it? Is our society going to be able to shift and transform? It's a big question, but I'm just kind of curious, since you're kind of in the thick of it, creating some really cool products that I think are, you know, sort of changing the world. What's your philosophy, what's your thinking, as we're sort of replacing humans with a lot of these algorithms?

Sabina Stanescu  28:48  
I think in a lot of cases, we may not be replacing humans, but just augmenting their jobs, so sort of taking away the boring, repeatable tasks that a machine can easily do. And when I say machine, I also mean machine learning. Models are really good at fitting narrow problems and generalizing from the data that they've seen to create predictions on new data, and they do that really well. But in areas where cross-domain knowledge is needed, at least currently, we don't have a lot of good solutions from a data science and machine learning point of view. There are some cases where we're automating, maybe, factory floors or warehouses, where we have many robots picking items in a warehouse and packing them in boxes, and we're automating that away. So I do think those jobs will be affected and lost. We'll still have jobs around managing, let's say, that warehouse, so more managerial roles, more maintenance roles, like actually fixing up the robots, or developing the software, obviously, for those machines, and maintaining the warehouse itself. So that will still be needed, but a lot of unskilled labor in those cases will be lost. I think in the short term there will be job losses. It will happen slowly, as adoption of these technologies is not easy, and it's quite slow, and any kind of transformation is going to take a while. I guess the question is whether that transformation is slow enough to keep up with changes in education that would prepare people for the jobs of the future. And that's what I don't really know at this moment: just how slow or fast either of those can happen.

Justin Grammens  30:27  
Sure. Yeah, you know, I think a lot of people look at AI and machine learning, and they think it's kind of all been done. They think that there's this sort of generalized learning going on. In reality, from my standpoint, these models are tuned very specifically to very narrow areas, and we're nowhere near what a human can do. A human can look at a painting and find things in there that a machine still has no way of finding. So it feels to me like there's this feeling in the general public, just a lot of scare out there, that these machines can do everything all the time for everyone. It's like, that's definitely not true. And sort of, you know, the general media maybe plays to some of that stuff, too. You see all sorts of futuristic articles and stuff like that, talking about things that are really decades away, but they're pitched like they're sort of here now.

Sabina Stanescu  31:12  
That's right. As you said, the areas of application where machine learning really succeeds are very narrow, and they have to be very well defined. For anything outside of that, you have to have humans actually looking into it, connecting the dots, and in some cases providing more value on top of what those models do. A great example I heard talked about at a session at Rotman, at U of T, was, you know, at the time the word processor came out, people thought, well, secretary jobs are going to go away. But it turned out that they didn't, and that's because word processors took care of only one aspect of their job. There were still other aspects that word processors couldn't do, maybe setting up meetings or coordinating events and all kinds of other things. So software or machine learning can be really good at solving small bits of someone's job, but not the whole job, in most cases.

Justin Grammens  32:04  
Yeah, for sure. We talked about, I guess, the next generation of people, basically the people that we're training up to get into this field. I mentioned during the intro some of the mentoring and some of the groups you've been involved with, which I think is phenomenal. I really have a lot of respect for people that mentor and are a part of hackathons; that's part of my core as well. Is that something that you see as valuable? I mean, obviously you're doing it, but do you see it as something that goes above and beyond what maybe normal schooling is providing to these students?

Sabina Stanescu  32:36  
I think it's very important for students to see people actually in the field coming in and helping them and teaching them. It's not just your teacher that you see every day for your classes; you have someone that's actually doing coding for their job, for example. And I really appreciate things like Hour of Code, where I've gone into grade four and grade five classes, and there's a lot of great programming exercises for their age. It's just making sure that they feel included, and they feel like it's a normal part of learning and thinking about a job. So getting that started at a very early age is very important. But at the same time, it's not just that. Ladies Learning Code, for example, is more geared towards adults, adults that want to get into coding and understand coding. So both are important: one is prepping for the future, and one is helping people that are already in the workforce. It's important, at least for me, to give back to the community, be involved in those kinds of endeavors, and just help. And, of course, it is not for everyone; not everyone is interested in taking those kinds of courses. But if there are people that are willing to learn and get into the field, I definitely want to support that.

Justin Grammens  33:51  
Yeah, that's really, really cool. Sabina, are there courses at all, like online ones, that you've seen, or that people within your organization have taken? Or books? I guess, anything that you might recommend people start with if they want to get into this field?

Sabina Stanescu  34:04  
Oh, that's a great question. There's so much out there, from Coursera to Udemy to Khan Academy. If you're starting really from the beginning, just understanding basic statistical concepts, maybe you would start at Khan Academy and start learning about, you know, how statistical testing works and how machine learning works at a very, very high level. And then there are specialization courses on Coursera or Udemy, for example, where you can start to deep dive into particular topics. For product management, it's a little bit less clear how to get started if you're going that route. Actually, one of my team members was asking me about the educational background of a product manager, you know, how do you succeed in product management, and it's a little bit less defined. There are people from all kinds of backgrounds, from MBAs to biology, and it's more about practicing. There are APM programs available, Associate Product Manager programs, for example. I believe we have one in Toronto and one in Montreal, where you can get placed within a company, within a product team, and sort of observe and learn from a real product team how to be a product manager. So it's less about courses, and more about getting hands-on in that area.

Justin Grammens  35:24  
Excellent. Well, you know, I'll be sure to include links to all these things you're talking about in the liner notes as well, so people can get involved with some of these classes or some of these organizations. You know, Hour of Code is actually a worldwide organization, to my knowledge. So really, really great stuff. How do people reach out and connect with you? I know you're on LinkedIn; I'll be sure to put a link there. Is that the best place for people to find you and connect with you?

Sabina Stanescu  35:48  
LinkedIn would be the best place. I check it often.

Justin Grammens  35:51  
Good. Are there any other things you wanted to talk about with regards to general artificial intelligence and machine learning? You know, other projects that you've seen that you found fascinating, or topics that you might want to talk to listeners about?

Sabina Stanescu  36:03  
I think one of the things I've been thinking about is what's making a big impact on our lives, or has the potential to make a big impact on our lives. And, you know, the work from Google with AlphaFold on predicting protein folding, I think, is absolutely fascinating. Trying to better understand our own biology, and better understand genetics and how they lead to certain phenotypes, or how they lead to certain diseases or conditions, or just understanding our bodies better. So I'm really fascinated by that work, and I'm really looking forward to seeing how that technology evolves and helps us understand, basically, ourselves better.

Justin Grammens  36:46  
Wow. What are some of the challenges right now? Is it largely a data problem? Are the algorithms not there yet? Maybe, what are some of the details around the project that you're talking about?

Sabina Stanescu  36:55  
So I think it's just been a very hard problem to solve in the past. I don't know as many details about the algorithm itself, but I have seen that it's had some success, and it's led to some open source adaptations and collaboration. It's sort of like opening up doors, and that's what's very exciting about it: it's created this chain reaction of additional research and additional improvements. That's what's most interesting to me. I've been a little bit more disconnected from basic research, being in industry. So looking back at that and seeing where we're going next, and by next I mean in sort of the mid to long term, is interesting to me as well.

Justin Grammens  37:38  
Well, that's really, really cool. Thank you so much for being on the program. It's been great to sort of hear your background. I think in some ways you're an inspiration, with regards to all the mentoring that you're doing, helping students, and obviously being very fluid in your career, jumping deep into data and now doing product management. Not sure what the future is gonna bring. Maybe with this new biology research thing going on at Google, who knows, maybe you'll end up there someday in the future.

Sabina Stanescu  38:04  
Yeah, yeah. It's been an interesting journey, and I never know in advance where I'm gonna end up. It's always a surprise when it happens. Wherever the river of life takes me, I tend to go.

Justin Grammens  38:14  
That sounds good. But yeah, keep following your passions, I guess, is what I tell people. Make what you do every day something that you truly have fun with, and you'll be happy in life for sure.

Sabina Stanescu

I totally agree with that.

Justin Grammens

All right, Sabina. Well, thank you again. I appreciate your time and look forward to keeping in touch with you.

Sabina Stanescu

Thanks for having me on.

AI Announcer  38:33  
You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn if you are interested in participating in a future episode. Thank you for listening.

Transcribed by https://otter.ai