In this episode, we speak with Jake Mason, a Senior Data Scientist at Carrot Health. Prior to his current role, he was at UnitedHealth Group, where he developed and led teams building real-time machine learning applications. Outside of his professional work, Jake has been a co-founder of two awesome meetup groups here in the Twin Cities. The first is "Analyze This," a group that focused on creating data science challenges in which teams of data scientists from the community help nonprofits and other organizations achieve what they would not have been able to do themselves. Likewise, his next group, StarEightyTwo, was on a mission to help individuals learn analytics while creating positive change for nonprofits, businesses, and the public sector.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future Applied AI Monthly meetup, and help support us so we can put on future Emerging Technologies North nonprofit events!
Resources and Topics Mentioned in this Episode
Jake Mason 0:01
I think at a core level, I mean, a lot of these algorithms, you know, you're just finding a somewhat complex, potentially complex, hopefully accurate mapping. You're trying to model a process, right? So you're finding a mapping between an input and an output.
AI Announcer 0:21
Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.
Justin Grammens 0:51
Welcome to the Conversations on Applied AI podcast. Today on the program we have Jake Mason. Jake is a Senior Data Scientist at Carrot Health, where he's doing some fascinating work in areas of predictive modeling. Prior to his current role, he was at UnitedHealth Group, where he developed and led teams building real-time machine learning applications. Outside of his professional work, Jake has been a co-founder of two awesome meetup groups here in the Twin Cities. The first is Analyze This, a group that focused on creating data science challenges in which teams of data scientists from the community help nonprofits and other organizations achieve what they would not have been able to do themselves. Likewise, his next group, StarEightyTwo, was on a mission to help individuals learn analytics while creating positive change for nonprofits, businesses, and the public sector. Jake is also a graduate of the University of St. Thomas, a fellow Tommie like myself, where he majored in economics and has a minor in applied statistics. Thank you for joining me today, Jake.
Jake Mason 1:43
Pleasure to be here. Thanks for having me, Justin. Awesome.
Justin Grammens 1:45
Cool, man. Well, so I gave a little brief introduction of you yourself. I guess I'm a little curious to find out, what's the trajectory of your career been?
Jake Mason 1:53
Yeah. So it's kind of an interesting way of getting into analytics, data science, AI, whatever you want to call it. So you mentioned the degree: I got economics and statistics, put them together, econometrics, basically. And really where I came from with that was, I'm from Omaha, Nebraska, and I had an internship with a local predictive analytics consulting company there after freshman year. I was a finance major going into school, like, I'm sold, I'm going to do the banking deal, and blah, blah, blah. And then I built my first, you know, econometric model, and I was like, okay, well, when I go back next September, I am changing my major as quickly as I can while trying to stay on the graduation path. But yeah, that experience was really foundational; it was amazing, too. I think the model we built was predicting quarterly, like, the change in US net exports or something like that. But it really grabbed me how amazing it was; we just did a little bit of math, and it was just a linear regression model, but the potential of that really dawned on me. So I went back to school, got the degree, and of course interned at Optum, UnitedHealthcare, through their technology development program, you know, one of many such programs here in the Twin Cities for some of these bigger corporations developing new graduates. I had some great experiences there, one of those being in the IT operations machine learning group. We did more projects than I can count related to IT infrastructure, just basically helping support the uptime of our several hundred services. So a lot of unique, challenging projects with that, and stuff related to predicting the likelihood that a change to a production service causes an outage.
And, you know, passing that on to the appropriate reviewers, making sure they know which teams are kind of trending downwards in terms of their success rates, etc., whose changes are generally more likely to cause outages.
Justin Grammens 3:47
So is this more along the lines of, like, DevOps ML?

Jake Mason 3:49
Yeah, so there were a variety of use cases we took upon, anything related to, like, blue screens of death, which on Windows machines often happens, yeah, no kidding. It ran the gamut. That was a long year, a lot of late hours. But then I worked at the health plan quality organization in, you know, healthcare for a while. And now I'm doing a very similar set of work at Carrot, where we provide a software-as-a-service platform for managing the growth of your health plans, the quality as it relates to the CMS Star measures associated with your plans, and then the health of your membership. So kind of a three-module suite there.
Justin Grammens 4:24
Cool. When you say Star measures, can you define that? I guess I don't know a whole lot about this, of course.
Jake Mason 4:29
Yeah, so Star measures are kind of a suite of measures related to health plan quality. There are tens of them that go by various acronyms; it's a world of acronyms, basically. I can name a few here. So one is the proportion of your members who have diabetes who control their A1C and have it under a certain threshold. So the higher that rate, (a) your members are more likely to be healthy, and (b) the plan will be reimbursed more by CMS, the government, because they want to incentivize the proper management of those conditions. Another is medication adherence. So, you know, if you have hypertension, how often are you taking your hypertension medications? Are you adhering to that regimen? And if not, well, a lot of what we do is we build predictive models at an individual level to say, this person has been chronically non-adherent, or they're always taking their meds, and we want to appropriately identify those individuals and make sure we're getting in contact with them at the right time.
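The individual-level adherence models Jake describes can be sketched in a few lines. This is a toy illustration on synthetic data, not Carrot Health's actual pipeline: the feature names and thresholds are made up, and the point is only the shape of the workflow, i.e., fit a classifier, score every member, and sort descending so outreach starts with the highest-risk people.

```python
# Toy sketch: rank members by predicted risk of medication non-adherence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features, e.g. refill gap (standardized) and prior lapses.
X = rng.normal(size=(500, 2))
# Synthetic ground truth: bigger gaps / more lapses -> more likely non-adherent.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score every member, then sort descending: highest-risk members bubble to the top.
risk = model.predict_proba(X)[:, 1]
priority = np.argsort(-risk)

print("top 5 highest-risk member indices:", priority[:5])
```

In a real setting the model would of course be validated on held-out data, and the ranked list handed to the outreach team rather than printed.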
Justin Grammens 5:35
Okay, cool. You know, I've had a number of these podcasts here that have been really in the area of healthcare, and maybe it's just the fact that we're in Minnesota and there are so many healthcare companies around, but it feels like there's just a ton of companies that are applying machine learning in this healthcare space. And based on your background, obviously, you've worked for two healthcare companies, right?
Jake Mason 5:58
Yeah, it's kind of serendipity that I ended up in healthcare. I was actually talking with someone yesterday about this career trajectory, and I mentioned that, you know, UnitedHealth Group was the company I interviewed with for my internship, and they offered me a job, and now here we are, just, in healthcare. But yeah, like you said, it's the Land of 10,000 Lakes, land of seemingly 10,000 healthcare companies.
Justin Grammens 6:22
For sure, everything from startups, right? So Carrot is a pretty small group in comparison to something like Optum or UHG.
Jake Mason 6:28
Yeah, I mean, going to Carrot was almost a 100% decrease in, you know, the number of people working there. It's a massive, massive difference.
Justin Grammens 6:39
Sure, sure. You know, you mentioned having a background in economics. I'm just curious, has that helped you in this role? When you guys are starting to look at maybe some economic pieces behind healthcare, do you think it's giving you a little bit more knowledge in the space that maybe just a straight computer scientist, you know, wouldn't have?
Jake Mason 6:55
Depending on who you ask, you know, there are so many different answers you'll get; people will tell you about what's the best major, what's the best realm of academia for data science. I would argue economics is a great starting point. I think it helps in a couple of ways. One, thinking about, and this is Economics 101 stuff, the marginal value of incorporating something new into a system, right? I don't know if I would have gotten that otherwise; granted, I wasn't a CS major, so I can't exactly speak to that. But that is just drilled into you over and over and over throughout your economics courses, so it's kind of just something that's always with you. Well, is it really worth it to incorporate features X, Y, and Z into this model that are expensive in a certain way to get? Also, I think it gives you a good foundation. It's like another form of statistics, in a sense; it's applying statistics in the economics realm. It was a pretty rigorous course load in terms of the types of models you're building, really from the ground up. It's really a good foundation, I think, for starting to think about some of these other problems. And it helps to think about, you know, the cost-benefits of what you're building when you're doing this kind of work.
Justin Grammens 8:06
Yeah, no, usually if there's not a return on investment, most companies aren't really interested in putting in a lot of effort, or at least if there's not a return on investment in some period of time, right? So companies are willing to sort of take the hit when they build out a lot of these projections and stuff like that. Do you have a definition of artificial intelligence at all? To put you on the spot, I guess, like an elevator pitch, or even just, you know, this whole area of data science. How would you define it?
Jake Mason 8:33
Yeah, yeah. So, like, data science is an inherently multidisciplinary field, right? Think of the Venn diagrams that people have put out there: statistics and math, computer science, business knowledge. And I think data science really is the intermingling of those three. And, you know, it's rare that you would find an individual with all three of those skills, but having two of them is a really good start; they call the ones that have all three unicorns. But really, data science is applying that scientific method, which is kind of a nebulous thing in the first place, but it's really just being rigorous, having a constant curiosity, sort of an unrelenting scientific approach to every question you have as it relates to your business, provided you have data for it or could obtain data for it. What's the saying? "In God we trust; all others must bring data," right? Something like that. And then AI, I guess I don't work with a lot of AI technologies directly, in the sense that there's such a wide realm. But I saw this tweet by François Chollet, the guy who created Keras, the deep learning framework, and his opinion on it is that it's not quite artificial intelligence; it's cognitive automation. So it's basically the encoding and operationalization of human-generated abstractions, behaviors, and skills. And I think at a core level, I mean, with a lot of these algorithms, you're just finding a somewhat complex, potentially complex, hopefully accurate mapping; you're trying to model a process, right? So you're finding a mapping between an input and an output. That's what it kind of boils down to. But I also think there might be a lot of emphasis on the hot words, "artificial intelligence." I think the degree to which certain technologies are artificially intelligent certainly differs.
But at the end of the day, I think you'd be hard pressed to say, you know, that these things are not intelligent at all, even though they might just be simple logistic regression models. It's just a different form of it, you know?
Justin Grammens 10:41
Sure, sure. Thinking back to the cost savings, you know, businesses, like I said, will invest in these new technologies if there is a cost savings to them. In the projects you've worked on, or things that you're seeing currently in the space, are you guys working on, I guess, automating away jobs in some ways? I'm not sure if the projects that you're doing are related to that, but that's sort of a fear, I guess, that people who are building these systems, especially in the business space, are like, oh great, now a machine can do all these jobs that hundreds of people used to be able to do. And obviously, with this data that you're working with in the healthcare field, there could have been a lot of people that would just pore over it manually; now we can do a lot of these predictions. Do you have any sense of how this might impact future workers coming out of school and coming up through the ranks like you have done?
Jake Mason 11:35
Yeah, outstanding question. In a lot of the stuff I've dealt with, I don't see a lot of displacement of work. You'll see a lot of people in this space talking about how these are tools for enhancement, right? A lot of models, you know, kind of boil down to some sort of rank ordering you're trying to do. You're trying to basically sort through whatever records you're dealing with, whether it's members or IT incident tickets, whatever; you're trying to prioritize them according to some measure, right? So in the case of Star measures, for example, depending on the measure, of course, we're basically trying to identify people who are going to be non-compliant for that measure. And health plans already reach out to a number of people as it relates to these measures, and in some cases those approaches aren't as targeted or as effective as they could be with the use of a predictive model. So what the model does is it bubbles up to the top those individuals who are likely to meet your criteria, right, those people you really want to be in contact with. So I don't know if it's as much of a, hey, we can just get rid of people X, Y, and Z. Well, now you're giving this tool to those individuals who are likely very well acquainted with their domain, and you're giving them, it's like a superpower, to use kind of a cliche term, but you're giving them this sort of advanced mechanism to sort through their data and make those right decisions. So it's kind of an enablement, I think.
Justin Grammens 13:12
Cool. Yeah. I'm not sure if you've heard of or read the book; it's called Reprogramming the American Dream, and it's by, I think his name's Kevin Scott; I think he's the CTO at Microsoft. And it's a pretty interesting book, because he's got a deep background in machine learning and deep learning; I'm trying to remember what he graduated in, he's got a PhD, I think, in a data-related field. But long story short, and I haven't finished the book, but I'm well into it, it's really about a complementary skill set, like you're talking about. And against all the fear mongering about how it's going to basically get rid of all these jobs, he has a much more positive spin on it. So I highly recommend, if people are listening to this, to check out that book. It's called Reprogramming the American Dream, and I'll add some of that stuff in the liner notes when I end up publishing this podcast. I'm not sure if you've read any books in particular in this field that you would maybe recommend to people that are looking to get into the field at all, Jake?
Jake Mason 14:05
I'll just add the one you just mentioned to my litany of books that I've meant to read this quarantine but haven't yet gotten around to. I'm looking at a stack of Nassim Taleb; he's, you know, a former trader who wrote a lot of books on statistics and things of that nature. I haven't opened them yet, but I know those are pretty well recommended, and I'm eagerly awaiting opening and reading those. Some other ones: Daniel Kahneman's Thinking, Fast and Slow, that was sort of an economics book; The Signal and the Noise by Nate Silver, I mean, that's great. You can tell I kind of lean towards the applied use cases, economic points of view, things of that nature. So those are all sort of foundational books that I like. And if you can find them, there are old textbooks by, kind of, the statistics greats, basically, as it relates to data visualization or whatever. I got my hands on a book from, like, the '70s related to data visualization, and it's dusty, but it's nice, kind of like a forerunner's book. They're foundational tools, I think, these books.
Justin Grammens 15:19
Yeah, yeah, some of that stuff never changes, right? So if it's done well, it will sort of stand the test of time.
Jake Mason 15:25
Yeah. And oftentimes the people writing those books have, you know, if it's a later edition, they've had ten goes at it, right? They've had ten times they've published it or whatever, and you're getting, like, their tenth attempt at, okay, this is really what it is. And at that point, you can really be assured that it's probably solid information, for sure.
Justin Grammens 15:44
Well, what does a day in the life of somebody who is a senior data scientist at a healthcare startup company look like?
Jake Mason 15:51
You know, one of the luxuries of the job, and this is kind of a cliche thing to say, is that no day is the same, really. One day, depending on where I'm at in a certain project, I will be heads down, you know, I've got my text editor on one side and the command line on the other, and I'm just editing, running things, building models, looking through the performance of models, validating them six ways from Sunday, etc., etc. Those are days I love. Personally, I just like having that heads-down time and being able to work in silence through projects. Other days, which are also usually good days, I'm working with stakeholders to really flesh out their use cases. We're taking on a lot of new endeavors, new product offerings. So as a part of that, we've got to work out, well, what's the vision of the product entirely? The model is not the only thing fitting into the product; the offering is basically a suite of dashboards and different analytic tools. And so the models are definitely a part of that, but you've got to understand where the models are going to fit, what are the limitations of the data available, etc. Yeah, no day is the same, but I'm always learning something new. And as I said before, I don't know if you could succeed if you didn't have a relentless curiosity and weren't always willing to ask questions. Because as a data scientist, or a statistician, whatever you call yourself, you're brand new to some sort of domain, right? Ultimately, you don't have to speak at the same level as the people who own the product, but you need to be able to translate what they're saying into a model, and then what the model looks like back into what they want to hear. You're always on your toes. So.
Justin Grammens 17:36
Yeah, I like that idea of always learning something new. In fact, just to share a little personal story here: just yesterday, my eight-year-old asked me, if you could have any job in the world, what would it be? And I thought about it for a while, and I'm like, you know, I don't know if I could give you an actual occupation, because it's so varied. But I know, whatever I do, I want to make sure that I'm learning something new, right? So I think that's sort of a core tenet of anybody who's curious in life. So that's great to hear that you've got that passion as well. And you're right, I think it's sort of a foundational thing that you have to have when you're a scientist; I mean, you're a scientist in the area of data, but you're always exploring and trying new stuff and learning stuff along the way. I was curious what sort of toolset you guys are using there. You know, you talked about having an editor and stuff like that; what sort of tools do you recommend people start to learn?
Jake Mason 18:29
Yeah, I am a plain-Jane sort of programmer; I like just Visual Studio Code in one window and then the terminal in the other. Others really prefer things like Jupyter notebooks or RStudio. We're primarily an R shop, which was a change coming from United, where I used Python quite a bit, although we are trying to enable more of a language-agnostic platform, to a certain degree, of course. So yeah, a lot of folks prefer RStudio. I don't know, I like the simplicity of the editor and the command line. Of course, it's really difficult to visualize things in a command line; it's hard to visualize some sort of ROC curve. I don't know if there's software for that quite yet, to visualize right from the command line, but it is a simple way of doing things. And so for people looking to get into the field, I usually recommend knowing a scripting language; Python or R is paramount, 90-plus percent of shops probably use those. Knowing the command line is, I would say, almost just as important; I probably spend more time there, just using different Unix tools. The typical software engineering sort of skills are becoming increasingly more important, so having that programming knowledge is definitely needed.
Justin Grammens 19:49
Sure, and those are all skills you sort of picked up, right? You came out of school with economics and, I guess, a minor in applied statistics, if I remember correctly, so you kind of picked up the computer programming along the way.
Jake Mason 20:01
Yeah, I only took one formal CS course, and everything else was pretty much self-taught. I mean, I learned a lot of it on the job, but also a lot from asking questions, and actually answering questions, on Stack Overflow. That was a really good way of learning; there's no better way of figuring out whether your answer is right than posting it on the internet for people to see and correct. So doing that really helped me grow quite a bit.
Justin Grammens 20:25
Yeah, no, I'm pretty much self-taught as well. My undergraduate was in applied math and physics, actually. So, you know, I started programming when I was a little kid on an Apple II, but I didn't actually go for the CS degree and just ended up going more of a liberal arts track, but fell in love with programming and saw a lot of applications once I got out of school. So, outside of your professional career, I mentioned these other meetup groups that you helped to co-found. That goes back, boy, man, that was probably what, 2014, '15? I can't remember when Analyze This started. But what was your thinking around that, getting involved in a community group like this, when obviously you were just getting out of school and starting your career?
Jake Mason 21:07
Yeah, it's fair, now that I think about it; it's kind of serendipity that I got involved with the original Analyze This. My friend ran the student alumni group at St. Thomas, and he goes, okay, you get first pick of who you want to talk to. I said, oh, this is for business school kids, mostly; just pair me with someone who looks like they're in technology. And lo and behold, within five minutes we're talking shop, and it was like, oh my gosh, this is uncanny. And so the next day, I think, we went down to a bar in Northeast Minneapolis, and, you know, that was the original founders of that group, and I was very happy and honored to be invited along. So yeah, with Analyze This we did a variety, probably five different analytics challenges with local Twin Cities nonprofits. We were basically serving two needs there. One, providing a platform for raising the level of analytics and statistics talent in the Twin Cities, and providing a platform for people wanting to learn that skill set to (a) connect and (b) work on actual projects. But then also working with a nonprofit partner to identify some need they have that's vital to their business, assess the quality of the data they have around it and speak to it, and see if we can come up with some sort of analytic solution, often in the form of some sort of predictive model. They're trying to identify people they want to prioritize in terms of donors or renewals, like the Science Museum example. So yeah, we worked with, when you were involved with that, the Science Museum of Minnesota, a variety of organizations.
Justin Grammens 22:43
And do you want to talk about that project? It was more around the marketing dollars, right? Basically, who should we market to, because they seem like the ones most likely to want to become a member?
Jake Mason 22:54
Yep, yeah, absolutely. So we got to work with the Science Museum on two occasions. I think one was trying to identify who is likely to be the biggest-spend donors and prioritize kind of the bigger fish. And then a follow-up project: we built models to identify who's even likely to donate a second time versus who's going to be a one-and-done. And then, putting those together, and again this is getting at the economics education, we were thinking, okay, we have these two predictive models. One predicts a continuous outcome, the amount of dollars: given that you've donated before, we're predicting what you're going to donate again. The other one is a probability between zero and one: what's the likelihood you donate again? So we were thinking about how to put these together, and we actually just ended up multiplying them to get sort of an expected value of a second donation, right? Because we're predicting what that second donation would be, with a margin of error, of course, times the likelihood to donate. And that was actually pretty effective in terms of sorting descending and going through the people at the top of the list there and giving them the appropriate level of touch. They have different levels of prioritization; they do the sort of spa treatment versus the mailings that are cheaper. So that's what that was used for. Actually, if I remember right, I just described Twin Cities Habitat for Humanity; I was doing several projects and mixed them up. But that was Habitat. With the Science Museum, we did a very similar project as well.
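The combination Jake describes is just expected value: probability of a second gift times the predicted gift size. A minimal sketch with made-up donor numbers (the names and values here are purely illustrative, not real model outputs):

```python
# Combine two model outputs per donor: P(second gift) and predicted amount.
# Their product is the expected value of the second donation.
donors = [
    {"name": "A", "p_second_gift": 0.80, "pred_amount": 50.0},
    {"name": "B", "p_second_gift": 0.10, "pred_amount": 500.0},
    {"name": "C", "p_second_gift": 0.55, "pred_amount": 120.0},
]

for d in donors:
    d["expected_value"] = d["p_second_gift"] * d["pred_amount"]

# Sort descending: highest expected value gets the "spa treatment",
# the tail gets the cheaper mailings.
ranked = sorted(donors, key=lambda d: d["expected_value"], reverse=True)
print([(d["name"], round(d["expected_value"], 2)) for d in ranked])
# -> [('C', 66.0), ('B', 50.0), ('A', 40.0)]
```

Note how donor B's big predicted gift gets discounted by the low likelihood of a second donation, which is exactly why multiplying the two models beats ranking on either one alone.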
Justin Grammens 24:28
Yeah, so Analyze This, it was basically a quarterly challenge, right? We'd have these groups come together, and people could sort of use whatever they wanted, right, whichever technique. And again, I'm just kind of taxing my memory a little bit; you were much more involved. But were some groups using deep learning, and what would now be known as machine learning and deep learning techniques, and other groups were sort of using the traditional logistic regression, right?
Jake Mason 24:51
Yeah, absolutely. The membership of that group spanned the gamut from complete beginners to, you know, seasoned professionals who just really wanted to kind of flex their muscles and give back. So yeah, a lot of the models were more simple, like logistic or multiple regression models. But we did have several, I think as we got later on into the challenges, several people using more deep-learning-type approaches. There might have been one challenge, if I remember right, where word embeddings were used as part of a text data set that we had available. But I remember one: Kevin and I, Kevin Church and I, did this baseball challenge with Inside Edge, a local sports analytics company, and we were trying to predict, I don't remember what it was, like fantasy outcomes or something, trying to basically see if you could bet from a fantasy baseball perspective. And Kevin and I, we worked hard, I mean, just day in and day out, trying to build a model by hand. And it was like, logistic or linear regression was fine. And then one of the competitors, he's a PhD from the U of M, now works at Microsoft, he comes in the day we're presenting, and he presents this XGBoost model that just blows everyone out of the water. And we were talking to him afterwards; we said, Jared, how did you do this? How much time did you spend on this? And he goes, I just did it last night; it took no time at all. And that really opened my eyes up to the potential of more sophisticated ensemble models like that, more machine learning, getting away from the more traditional statistics. Of course, there's a time and a place for both, but that was like a Kaggle competition, so when you're optimizing for a certain performance metric, that was really beneficial.
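The "blows everyone out of the water" moment is easy to reproduce in miniature. This sketch uses scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost (same family of boosted-tree ensembles) on a synthetic target with an interaction and a nonlinearity, exactly the kind of structure a hand-built linear model misses:

```python
# Compare a plain linear model against a boosted-tree ensemble
# on a deliberately nonlinear synthetic target.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(-2, 2, size=(1000, 3))
# Interaction term plus a sine wave: nearly invisible to linear regression.
y = X[:, 0] * X[:, 1] + np.sin(3 * X[:, 2]) + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

r2_linear = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
r2_boosted = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)

print(f"linear R^2:  {r2_linear:.2f}")
print(f"boosted R^2: {r2_boosted:.2f}")
```

On data like this the linear model's held-out R² sits near zero while the default, untuned ensemble captures most of the signal, which mirrors the "I just did it last night" experience: boosted trees discover interactions and nonlinearities automatically that take real effort to hand-engineer.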
Justin Grammens 26:34
You know, I recall, and this was in the StarEightyTwo days then, I mean, did you do a project where you were trying to detect birds? I guess, kind of getting into image recognition?
Jake Mason 26:44
Oh, yeah. Well, you have a better memory than I do. Yeah, so at StarEightyTwo we ran a lot of, more like, pet projects, but I think we did work with a few local organizations. One of them was Mapping Prejudice, which you might actually have seen in the news quite a bit lately, for sure. They run a volunteer-driven platform trying to identify racially restrictive housing deeds in Minneapolis, so it's a really interesting crowdsourced project. We did something related to rating the reliability of the raters and making sure people are agreeing with some sort of base standard, to help better educate some of the volunteers. But yeah, we also ran an image recognition project. This was sort of a project in which we wanted to really enable people to leave work and get their own pet projects off the ground, and I had really wanted to get into image recognition. So I worked with some colleagues of mine, and we ended up deploying one of the YOLO models, like pjreddie's YOLO, You Only Look Once, from the Drake song, pre-trained weights and everything, and we just deployed that onto a Raspberry Pi. It's so funny, though; I think, if I remember right, the way that model was demoed was off of, like, small toys, basically, little figures, so if it's working really well, okay, that's the use case for it. Actually, he showed the example of it being used holding up a toy horse, and that's kind of how we used it, too. It was a good challenge deploying that kind of model on a Raspberry Pi and starting to work with a smaller device.
Justin Grammens 28:25
Oh, yeah. Were there any other projects, I guess, around that that you want to share?
Jake Mason 28:30
Yeah, I think we've covered most of the ones we talked about previously. I mean, the rater reliability one was really fascinating. Looking at metrics like Cohen's kappa, which is a measure of rater reliability. It was a unique project, at least for me, right? It's not really a predictive model in any sense. But you're looking at, here's the set of deeds someone has tagged, whether they found that language in the deed or not. And here's what one of the leaders of the project, who is an expert, labeled it as. And, you know, you can pretty well assume that she's probably getting it right. And so it's fascinating to be able to measure, across several thousand volunteers, in this multi-dimensional space, the degree to which they agreed with that expert rater. So, continuously being involved in the community, running these meetups and participating in other meetups, really hints at that continuous learning, right? Whether it's running the project yourself, where you have to manage who you're working with and divide up some of the work, to actually doing the work yourself and consulting with organizations that don't have these kinds of capabilities built in. That's one of the biggest takeaways, I think: that continuous learning, and really engaging with these groups to begin with. We had connections to some of them; Kevin had worked with some of them in the past. But oftentimes, as with the Mapping Prejudice example, I just sent an email and said, here's who we are, here's what we do, and here's why I think this could be valuable to you. And we worked with them to define a use case that really has value to them. You know, the worst someone can do is say no in these types of situations, right?
And in a lot of cases, they'll be pretty receptive to it. Not in every case, but they'll be receptive to it. It's sort of expert advice for the cheap price of pizza, you know.
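For the curious, the Cohen's kappa statistic Jake mentions can be computed in a few lines: it measures agreement between two raters, corrected for the agreement you would expect by chance given each rater's label frequencies. The volunteer and expert labels below are invented for illustration, not drawn from the Mapping Prejudice data.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two raters over the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each rater's
    marginal label frequencies.
    """
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_observed = float(np.mean(a == b))
    p_expected = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# A volunteer vs. the expert on ten deeds (1 = restrictive language found).
expert    = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
volunteer = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(expert, volunteer), 3))  # -> 0.8
```

Here the two raters agree on 9 of 10 deeds (90% raw agreement), but since chance agreement from the marginals is 50%, kappa comes out at 0.8, which is why kappa is a more honest reliability measure than raw agreement alone.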
Justin Grammens 30:29
Yeah, yeah. And like you said, it's sort of a win-win-win type situation, right? So even the people that are getting into this, when I participated, you know, I just learned a lot. I'm by no means an expert in this field, and so all of a sudden I'm working with people that do this on a day-to-day basis, and I'm helping out a nonprofit. And they're benefiting by having a staff of people working on a project, a problem that they definitely don't have the resources for internally. So it's kind of like everybody wins across the entire thing. I guess, having been one who sort of started a number of community groups here in town, you know, I tell people: if there's any sort of project you're working on in technology, or I mean it could even be outside of technology, quite frankly, if you have a passion in something, chances are there's somebody down the street that has that exact same passion as you. And at the end of the day, if you can run a flag up the flagpole and say, hey, you know, I'm working on this problem, do you want to work on it with me? Or do you want to meet and talk about artificial intelligence and its applications, which is what we're doing right now with this new Applied AI group that we're running here, started earlier this year. You'll be surprised, people will respond. And it will be like a chain reaction, you know, you can be the nucleus to sort of start this, but then so many other fun things happen, and people that you never would have thought would have crossed paths, cross paths in these groups. And whether it's a monthly meetup or a conference, it's just really exciting to see.
Jake Mason 31:57
Absolutely, yeah, that's a great summary. And you're putting together people from, I mean, you think about how data science and how analytics is practiced, you've got mostly people who are statisticians, generally. And so, as a function of that, they're working in so many different industries, with so many different individuals, and you put them all in a room together, and you're bound to find people who agree with you and, more importantly, people who disagree with you, and you're able to learn from them and get a perspective. And the networking is also really great, too. I mean, there's no substitute for getting out in the community and sort of making a name for yourself.
Justin Grammens 32:38
Cool. Well, we talked a little bit about, you know, books, and we talked a little bit about classes and stuff that people should take in college. I mean, if you're somebody coming out of school today, are there conferences, or what sort of resources do you think would be good for somebody sort of coming up in the ranks today?
Jake Mason 32:57
Absolutely. So conferences are great. In the Twin Cities specifically, you know, MinneAnalytics is a great group for that. Of course, we're not putting on any major in-person conferences for the foreseeable future, but we're still doing things online, different ways to engage with that and learn from others. I've been to conferences in Boston before that have been really great as well, like the Data Science Conference was a really great one that I went to out there. Other than that, I spend a lot of time on different online forums, mostly Hacker News, trawling through that, you know, Y Combinator's news aggregator. Technology, obviously, is kind of the focus of that, but also as it relates to machine learning and everything. Really, I think a lot of new frameworks, new technologies, new papers, if they're worth reading, will often be posted on there, and you'll get into the discussion in the comments there. Oftentimes, and I know this is probably bad, I don't read the article, not even just the headline or anything like that. I just like to read the comments, because you'll almost immediately get a reaction. Like I said earlier, people will say, well, this is why this is debunked, or blah blah blah. And of course, you know, I go back and read the article afterwards, but it's always interesting to get someone's perspective beforehand. You know, beyond the books I mentioned, I would highlight the idea of helping others. I mentioned doing some work on Stack Overflow to help my programming skills. But really, I don't know if there's a better way to learn than to have to explain something to someone, sort of like rubber duck debugging, in a sense. Especially explaining it to, like, your mother or a non-technical person: taking something like that and being able to distill it down, check your assumptions, and define all your acronyms so they understand everything.
I mean, that's a really great way to get an amazing amount of growth in a short time, I think.
Justin Grammens 34:56
Yeah, very good. Very good. Cool. Nothing forces you to learn it like having to be the teacher, right?
Jake Mason 35:02
Yeah, I know everything already. I don't need to do anything.
Justin Grammens 35:09
Cool. Well, you know, as we start to sort of wind the conversation down here, what are the best ways for people to get a hold of you?
Jake Mason 35:15
Yeah, LinkedIn is really my main connection. I'm not really on Twitter or anything like that. So just find me there; I'm pretty easy to find on Google.
Justin Grammens 35:24
Cool. Again, like I mentioned earlier, I'll have links to your LinkedIn page in the podcast notes. Is there anything else you wanted to talk about or share that I might have overlooked today?
Jake Mason 35:37
Not really. I mean, the one piece of advice I would blast out into the universe, as it relates to this profession, is kind of humility, I guess, and sort of a lack of pride: learning how to ask an effective question. That was, I think, far and away the biggest step in terms of where I've come, and I've got a long way to go, of course, but that was a huge step change in the way I work with others. Learning how to say, okay, I have a question about something, then doing my research, looking down all the avenues I can, to a reasonable extent, right? You don't want to waste an entire day trying to figure out something that someone willing to help could have answered in a few minutes. But really doing your homework, setting the person who you're going to ask the question to up for success, you know, giving them the full context. And really having that humility, to not be worried about how they're going to react. There really are no dumb questions. I mean, there could be, but don't worry about that, right? If you're a beginner at something, it's much better to ask a bad question, or a potentially dumb question, at the beginning of an engagement with someone, versus six months in, when they already thought you knew that, and that changes the scope of everything you're doing. So ask early and often, I guess, would be my final advice.
Justin Grammens 36:58
That's definitely a great piece of advice. Well, Jake, I'm looking forward to tracking you as you continue on in your career, and I appreciate again the time sharing your knowledge and experience here with the Conversations on Applied AI podcast community. I wish you nothing but the best. Look forward to talking to you soon.
Jake Mason 37:15
Yeah. Thanks, Justin. Really appreciate the invite, and it's been a pleasure to see you and talk with you. It's been a while, but always a pleasure. So thanks. Thank you.
AI Announcer 37:26
You've listened to another episode of the conversations on applied AI podcast. We hope you're eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at applied ai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at applied ai.mn if you are interested in participating in a future episode. Thank you for listening