Conversations on Applied AI
Welcome to the Conversations on Applied AI Podcast where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of Artificial Intelligence and Deep Learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
Robert Parker - Building Intelligent Environments Using IoT and AI
The conversation this week is with Robert Parker. Robert has engineered some of today's biggest life-changing innovations, from Amazon Alexa to Fire TV and Prime Music. Robert has disrupted everyday consumer life with voice assistants and entertainment technology. After serving as a Director of Engineering at Amazon for five years, Robert became the CTO of SmartThings, leading the company's product and engineering teams while building the proprietary platform. From SmartThings, Robert teamed up with Alex to develop the newest transformative platform, BrightAI. As a co-founder, Robert develops the technology that makes industry disruption and extraordinary growth opportunities possible. Prior to creating groundbreaking technologies, Robert spent 18 years as a general manager at Microsoft. An industry award winner and recognized technical leader, he holds more than 20 patents of his own.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup, and help support us so we can host future Emerging Technologies North non-profit events!
Resources and Topics Mentioned in this Episode
- SmartThings
- BrightAI
- Kevin Ashton
- CSC
- MIT AI Course
- Khan Academy Machine Learning Course
- Stanford Artificial Intelligence Laboratory
- AI for Everyone by Andrew Ng
Enjoy!
Your host,
Justin Grammens
Robert Parker 0:00
A lot of the old school IoT was, I'm really going to connect this, let's call it moderately good sensor, maybe, or slightly accurate sensor, to the cloud. And maybe I can control it backwards. Instead of trying to say, you know, the goal that we would have now is to create an intelligent environment. People have talked about situations where they'd like that local control. Well, what happens if the internet goes down, or some of these other things? I want to be able to continue to operate in that environment independently. You know, that's when you start to actually have an intelligent environment. There you go, wow, I don't have all these failure cases; I have something where I can actually perform a task.
AI Announcer 0:36
Welcome to the Conversations on Applied AI Podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at AppliedAI.MN. Enjoy.
Justin Grammens 1:07
Welcome everyone to the Conversations on Applied AI Podcast. Today we're talking with Robert Parker. Robert has engineered some of today's biggest life-changing innovations, from Amazon Alexa to Fire TV and Prime Music. Robert has disrupted everyday consumer life with voice assistants and entertainment technology. After serving as a Director of Engineering at Amazon for five years, Robert became the CTO of SmartThings, leading the company's product and engineering teams while building the proprietary platform. From SmartThings, Robert teamed up with Alex to develop the newest transformative platform, BrightAI. As a co-founder, Robert develops the technology that makes industry disruption and extraordinary growth opportunities possible. Prior to creating groundbreaking technologies, Robert spent 18 years as a general manager at Microsoft. An industry award winner and recognized technical leader, he holds more than 20 patents of his own. Thanks, Robert, for being on the Conversations on Applied AI Podcast today.
Robert Parker 1:59
Great, I'm so glad to be here.
Justin Grammens 2:01
Awesome. It sounds like you've worked in large industry, small industries, startups, sort of everything across the board. You know, I touched a little bit on the trajectory of your career; I guess, you know, maybe you want to fill in some of the dots with regards to sort of how you got into artificial intelligence and worked your way into this space.
Robert Parker 2:17
I've been in AI for a long time, back when I was at Amazon and we were starting to use it for some of the emerging use cases. At that time, it was fairly situational. One of the things that had just happened in 2008 was the credit crunch; the overall market had gone lower. So you had a bunch of these businesses at Amazon that didn't know what to do, and one of them was the credit card offers they made to people. It used to be you'd just get a $5 credit, that kind of situation. This was an obvious case where an industry was ready for something new, because credit card companies didn't want just masses of people who were all the same, they wanted particular cohorts. And they weren't particularly interested in offering a value or even a good deal, because they really wanted sets of customers. This goes back to some of the things that worked before, which is making offers to people right as they graduated, where they could have that customer for a long time. This was custom-made for AI on that side of it. And this is one of the insights I've carried through to today on the other side too, which is that the customer was actually getting a pretty lousy offer: you get a $5 credit on a secured credit card in exchange. But what if someone said to you, no, we'll pay for half your Prime membership? Or what if someone said to you, I can give you 5% back, because your spending habits really allow us to give you better offers? All those things have happened in the interim. So basically, there are better offers now, and people have really great credit cards they can use. But this was part of that initial use case.
And we had a team who started to do some of these things and then moved into some of the more complicated things that you sort of mentioned, like understanding someone's taste in music, and whether they liked a song, or if you wanted to hear a song that's classic rock, well, classic rock means different things to different people. These start to be much more interesting decision processes. And so one of the things that's been really great was the opportunity to start, you know, 14 years ago in this and really grow with the space. Independent of that, one of the things that's always been exciting in the AI space is really taking it outside of recommendations, where it started. Basically, a lot of this was used for recommendations or really basic sorts of understanding. Beyond that basic understanding, with some of the stuff in natural language or on the voice side, as you really got into these more interesting tasks, that's where you really got to feel and appreciate some of the things that were changing rapidly in the technology and, I think, are really accelerating today.
Justin Grammens 4:44
So doing credit card recommendations, or even listening to music and stuff, is interesting. How did you then get into IoT?
Robert Parker 4:52
It was about the same time. So what had happened was, we were looking at things at Amazon; we really started with this hypothesis that we wanted people to listen to more music, and as you're interacting with a device like Alexa, music is certainly an important use case. However, one of the other things going on is everything else in that environment. One of the things that was great about Alexa was that it's a shared-use device, and it's in a space. And as you start to think about spaces, obviously IoT was a really exciting part of that. One of the things I was a little bit frustrated about was that IoT really wanted to attach to a big device ecosystem that could really try to move things for people and have people participate. And that's why one of the things that was really interesting, that we were all excited to do, was to get the community involved; we really had this vision that an open, connected community could really bring this technology into people's lives in really tangible ways. That was really exciting. But it missed a couple of things, which is where we ended up at BrightAI, which is that consumers weren't actually the best place to adopt all of this. We could go into, you know, a lot of reasons for that. But overall, we really saw that there's this huge opportunity in enterprises. And really, one of the things that we focus on at BrightAI, which was an even bigger opportunity, is these larger, more traditional companies which have fleets of people and machines, because the real complexity starts to come in when you have real tasks. Let's go back to what AI is, which is an ability to perform some of these complex tasks; those usually will involve people and machines in various ways. And then there's obviously software that will be a component of that as well.
But as you look at a complex task, it starts to span things, and the businesses doing most of those are, you know, some of these things like large route-based businesses, or, we have, you know, things like someone who's installing a pool. All of these companies are, you know, a huge part of the economy. And a lot of what they do feels a lot more like large, complex physical workflows, and a lot less like something that you might have seen in consumer IoT. Yeah, for
Justin Grammens 6:53
sure. And I mean, I feel like in general, consumers are pretty fickle. They don't really spend a whole lot of money, right? People are, like, worried about, you know, spending 99 cents on an app, which is ridiculous, because they spent $6 on a latte. And in some ways consumers don't see the capability, like the possibilities, of what a smart home brings to that, right? So
Robert Parker 7:11
that was part of it. But the other part of it, which I think is equally true, and I think will come into the solution, was actually that the solutions didn't compete that well with what they had. So you have something which I like to call sub-millisecond, you know, latency, super reliability. And then you compare it to, so, you know, I'll share a slightly embarrassing story of what my wife said about the first prototype for Alexa. When I brought it home, she said, Robert, why on earth would I want a talking trash can in our house? Luckily, over time, we proved that wrong as we got updates; that's the more exciting part. But overall, a lot of the connected solutions had a lot of trade-offs. They weren't as resilient, weren't as responsive. And that trade-off, I think, also had an impact. If you had something that was sort of unambiguously better, then all of a sudden it makes it a lot easier. Now the other problem that you have is, just like you said, the replacement cycles, and the incentives are not well aligned as well. So if we instead compare to a business or an enterprise, there you have a lot more levers that you can pull: you can create something that might do a revenue lift, which might take you some time to realize; you can do some things that affect cost, which might be immediate in terms of their trade-off; you can do a mixture of both of those. And that really allows you to make some of these better decisions. Because, like I said, at the end, what's different is that, from a technology perspective, we're finally having some of these solutions that can really do things that couldn't be done before.
Now, I'll just give a couple of examples of what I think is really exciting there. If you look at an old school motion detector, a so-called motion detector really wasn't a very good signal as to whether something's going on in a space, and you would generally tend to want to reason a little bit better, where you might have had an open/close sensor on the door. So you could sort of say, well, if there's been motion in this space, or the door's been opened or closed recently, I might want to do something. But what I really wanted to know is whether there's meaningful motion or not. And as you found out, the places we live in are pretty complex; you have things like pets running through, and I want to ignore that pet. As we get better sensors, which actually have a signal where you can start to look at these things and recognize and exclude things like pets, and understand people, and understand particular people and identify those people, then you can start to build real intelligence into the space. And this is something which, you know, if we look on the industrial side, is a lot easier, because you'll tend to have a sensor that has this higher degree of both fidelity and an ability to recognize, you know, those types of higher-order events. And that's what brings value, because otherwise, you know, with any of these routines, it was funny, what I would see in the old school IoT is that the routine was the opposite of intelligence, frequently doing the wrong thing. My favorite examples were things like sitting in front of the TV and all the lights go out halfway through a great movie, right at the dark spot, and all of these types of things. But it comes down to the situational awareness not really being there; the signal was not good. And that really is one of the things that motivated us at Bright to look at this, and why we believe really strongly in, and I'll talk more about, the intelligent edge and that piece of it.
But we say, here, we have a set of better-than-human sensing. We have a set of sensors that we ourselves made, and we of course support sensors that other people make. But those things give you better signals; they're usually multimodal, so they have more than one way to determine whether something's happening. For example, we have a camera that's IR and visual band, so it might be triggered by a heat signature, might be triggered visually, or might be doing both. But we have a way to bring these multiple signals together, and usually the resolution is much better than a human being. And similarly, in terms of measuring the space, we have a set of LIDAR units that can do, you know, millimeter or sub-millimeter measurements. And this gives you an ability to then say, hey, my intelligence doesn't have to be quite as acute, because the information I'm giving it is the right information; it can very easily test these hypotheses. And at that point, it can start to do the task better than, or at least as well as, a human being. And that's sort of been the gap: a lot of what would happen was you had worse-than-human sensing combined with worse-than-human pattern recognition, and you end up with a really poor result.
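As a loose illustration of the multimodal triggering Robert describes, here is a minimal Python sketch of fusing an IR and a visual-band detection before declaring an event, with a size gate to exclude pets. All of the names, thresholds, and the size heuristic are hypothetical; this is not BrightAI's actual pipeline, just one way the idea could be expressed:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One frame's worth of detections from a single modality."""
    confidence: float    # 0.0-1.0 detector confidence that something is present
    est_height_m: float  # rough size estimate of the detected object

def fuse(ir: Reading, visual: Reading,
         agree_thresh: float = 0.5,
         solo_thresh: float = 0.9,
         min_person_height_m: float = 1.0) -> str:
    """Combine IR and visual-band detections into a single event label.

    Declares a person only when both modalities moderately agree, or one
    is extremely confident on its own; small warm objects are treated as pets.
    """
    both_agree = ir.confidence >= agree_thresh and visual.confidence >= agree_thresh
    one_certain = max(ir.confidence, visual.confidence) >= solo_thresh
    if not (both_agree or one_certain):
        return "no_event"
    # Size gate: exclude pets and other small heat sources.
    size = max(ir.est_height_m, visual.est_height_m)
    if size < min_person_height_m:
        return "pet_ignored"
    return "person_detected"

print(fuse(Reading(0.7, 1.7), Reading(0.6, 1.6)))  # person_detected
print(fuse(Reading(0.8, 0.3), Reading(0.6, 0.3)))  # pet_ignored
print(fuse(Reading(0.2, 1.7), Reading(0.3, 1.6)))  # no_event
```

The "solo" escape hatch mirrors the point that either the heat signature or the visual detection alone can trigger when it is confident enough, while weaker signals must corroborate each other.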
Justin Grammens 11:34
You did use the phrase old school IoT there? Yes, yes, I love that, because I think that is, traditionally, what has happened with the Internet of Things. It's been out for, I don't know, the term was coined, you know, way back in 1999 by Kevin Ashton, but, you know, it really started glomming on and getting a lot of interest maybe 2014, 2015 or so. And it felt like it was just a bunch of sensors being thrown together in a room, and then kind of like, we'll figure out what's going to happen. And I think maybe what you're saying is that what happened wasn't so good. The user experience, at least in the consumer space, really wasn't up to par with what people were going to be willing to pay for. You know, if you look at the sensors, people were really saying, you know, what's the most straightforward way for me to get a signal there. An example of some of the things that were slightly deficient: you might have a thermostat, and you might have an external temperature sensor, and I have no idea where that external temperature sensor is, right? So I don't really have a good signal there. Like if I went back to physics class and really tried to do a measurement here, you know, with everything that was measured in IoT, that professor would immediately have said,
Robert Parker 12:50
You did not measure this well, it's not repeatable, it's not accurate, and you probably don't even know when you did it. And so this is where, if we restart that process and say, okay, I want a process where I would like to be able to reason against this, then sensing is this important part, and I'm going to take more of a scientific method, a more structured method, there; I'm then going to apply to that perception some learning, some reasoning and problem solving. Then you get to a really different place. Because a lot of the old school IoT was, like you said, I'm really going to connect this, let's call it moderately good sensor, maybe, or slightly accurate sensor, to the cloud. And maybe I can control it backwards; best case, I have some control flow in the other direction. Instead of trying to say, you know, the goal that we would have now is to create an intelligent environment. So you sit there and go, and, like, I've seen some of your other podcasts, and people have talked about situations where they'd like that local control. Say, well, what happens if the internet goes down, or some of these other things? I want to be able to continue to operate in that environment independently, and not require these things. You know, that's when you start to actually have intelligent environments. There you go, wow, I don't have all these failure cases; I have something where I can actually perform a task. The simplest of those, just to take one example, is to say, hey, you want to be able to enter and exit your house, and you might have a lock as part of that. You sit there and go, I need that entry and exit to work 100% of the time. It's not okay if, when the power goes out, I can't get into my house; that's a problem. So we have to design for that as we define these spaces, and it's especially true when you start to move this to a commercial or industrial space, because the stakes go up.
But with that, that's where I think it's really exciting, because we now have the ability to have edge computing. We have a BrightAI hub, which tends to work as a local controller in these spaces. We have choices there for our customers, but anywhere from a couple of teraflops up to 80 to 100 teraflops of processing capability for AI gives you a lot of opportunity to sort of say what and how you're going to do it, and I think that's really exciting. Yeah, for sure. So
Justin Grammens 14:59
as we move out of the old school IoT into the new school IoT, you guys are focused a lot more, like you said, on sort of compute at the edge, or machine learning at the edge. Why do you think now is the best time for that? Are we seeing, you know, I'm big into TinyML, so we're seeing these embedded processors now that can run on very, very low power that are able to do that. Is the cost of the sensors coming down? Now, you know, we're
Robert Parker 15:23
learning not only the cost, but what you can do. So, like, I'll give you examples of some of the fairly interesting sensors we have, because we somewhat model the human, like I said before, but usually it's human-plus. I mentioned before we have a combined visual and IR camera that can also see UV. When you can do that, you can really sense a lot of the visual spectrum. But we also have something nanotube-based that we don't have a great name for; we call it a digital nose right now. What it's able to do is detect substances down to a couple parts per billion. So you've moved from a couple parts per million to a couple parts per billion, and at that point, you can really find different ways to test a lot of these hypotheses. Before, it would have been, wow, I might only be able to detect one or two substances, maybe I can grab carbon monoxide, and that's it. At that point, you know, the hardware really becomes the limiting factor. Then, similarly, like you said, the other challenge was that everything had to go to the cloud previously, because that was the only place that you could apply significant computing resources to things and really realize it. Now you have amazing capabilities locally. And so with that, combined with a lot of other things, like better radios that play into this, you have a fabric locally that really can be resilient, can be fault tolerant, has this great degree of precision, so what you can deliver is totally different from what was possible previously. And the second thing that plays into that space is one of the things which was unsolved in what I call AI 1.0, and we're sort of moving to, I don't know, people are probably up to 4.0 now. What was a large challenge is that most of what AI was applied to was these large, relatively heterogeneous problems.
And so if you compare that to the types of companies that we work with, and many others, you have a smaller set of companies where it's really AI for small data; lots of other people are thinking about this too. One of our clients, as an example, is CSC, which is one of the largest commercial laundry companies. They're at Disney, at Marriott, at most of your apartment buildings; they're at about 200,000 locations across the US. Starbucks has 5,500 or something like that, to give you a comparison. So that's a lot of locations, a lot of different places. At one of those locations, they might see 30 loads of laundry a month; at a big location, they might see a couple hundred. If you're training a recognizer and just trying to determine certain things about that, you're just not going to get, you know, billions of loads of laundry. This is very typical; we often see this. And what's really nice, then, when you have that amount of computing power there, is you can start to actually have algorithms that can be custom-tailored to these smaller datasets and manage them appropriately in a cost-efficient way. For example, it can do some local training, train itself up, so it knows how to recognize things in that space. And so all of a sudden this becomes possible and feasible where it wasn't before. The second problem was the other half of that, which is, normally, before, you had to transfer all this data up to the cloud, and that's real money. The thing that's interesting in AI is, you're better off the more data you have; and the more data you have, of course, this can cause a lot of cost, unless you can process it locally. So it allows you to sort of double-optimize: you can optimize for that variability and be locally optimized, plus you're greatly reducing the amount of data that has to move from x to y.
So with those trends, you start to say, wow, I can apply this to way more problems, instead of, you know, a relatively small set of problems that are, as I said, generally a little more heterogeneous.
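To make the "AI for small data at the edge" idea concrete, here is a minimal sketch, assuming a laundry-machine deployment like the CSC example: a monitor learns normal cycle durations locally from a few dozen samples, flags outliers on site, and only ever ships a tiny aggregate summary upstream. The class, thresholds, and summary fields are all invented for illustration, not an actual BrightAI interface:

```python
import statistics

class EdgeCycleMonitor:
    """Learns normal machine-cycle duration locally from a small sample,
    flags anomalies on site, and uploads only a small summary to the cloud."""

    def __init__(self, warmup: int = 20, tolerance: float = 3.0):
        self.warmup = warmup        # cycles observed before flagging anything
        self.tolerance = tolerance  # how many std-devs counts as anomalous
        self.durations: list[float] = []
        self.anomalies = 0

    def observe(self, duration_s: float) -> bool:
        """Record one cycle; return True if it looks anomalous."""
        if len(self.durations) >= self.warmup:
            mean = statistics.fmean(self.durations)
            stdev = statistics.stdev(self.durations)
            if stdev > 0 and abs(duration_s - mean) > self.tolerance * stdev:
                self.anomalies += 1
                return True  # anomalous cycles are not folded into the model
        self.durations.append(duration_s)
        return False

    def summary(self) -> dict:
        """The only thing that ever leaves the site: a few aggregate numbers."""
        return {
            "cycles": len(self.durations),
            "mean_s": round(statistics.fmean(self.durations), 1),
            "anomalies": self.anomalies,
        }

monitor = EdgeCycleMonitor()
for d in [1800, 1820, 1790, 1810] * 6:   # 24 normal ~30-minute cycles
    monitor.observe(d)
print(monitor.observe(60))               # True: a 1-minute "cycle" is anomalous
print(monitor.summary()["cycles"])       # 24: the outlier was not folded in
```

The double optimization Robert describes shows up here: the model adapts to the variability of this one site, and only `summary()` output would cross the network, not the raw event stream.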
Justin Grammens 19:02
Yeah, I love the concept of distributing the compute out to all these little things. It just makes a lot more sense than sending all the data in centrally. Yes, you could have, you know, GPUs running in the cloud and all that sort of stuff, but you're going to quickly get overwhelmed. And in this particular case, maybe there's not a lot of data, but if and when there does become a lot of data, it feels to me very inefficient to have to keep sending that data to the cloud.
Robert Parker 19:26
One of the things that's true as AI is really evolving is that signals like visual are becoming big. A lot of what we do is with visual-based systems, and there's just a huge amount of data there. Even if you're only up to a couple of cameras, you're automatically at that point, so we do local recognition, at 50 frames per second, on site. We're finding people, machines, vibration, this kind of stuff. And the alternative is just staggering, even at, you know, the first level. Then, like you said, the part that's amazing is that I can go to that next level of recognition. So one of the things that we do is start to understand, sort of, recognizing people and which machine they're interacting with; we get to understand whether they're loading and unloading the machine; we actually understand whether they're staking out the machine by throwing a sock in one and preparing to do something else. And this is the part that's amazing: if you're doing it locally, then it's really easy to pop on those other recognizers. I'll give you an example. One of the most fascinating things we found with this was that the hypothesis was that the reason machines were getting unplugged in these environments was because people were plugging in their cell phones; that was sort of the going theory, bought into by the business teams, which makes sense. It turns out, not actually the case. The part that was interesting was, one common case was the cleaning staff going in and unplugging the machine, like, to plug in the vacuum cleaner or whatever. But the most common case that happened was that people had actually figured out they didn't like what was going on with the machine, and they were rebooting it. Customers were really doing that. So this is where you get these kinds of insights. So we had a tamper detector, which basically started to go off.
And with that, you were able to really start to understand the space. If you ended up going up to the cloud, it would really take you a long time to get those types of insights, because, you know, you're just looking at too large a set of data across too many triggers, and the complexity just goes way up.
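The "pop on another recognizer locally" pattern might look roughly like this: a base detector emits a power-loss event, and a cheap second-stage classifier on the same edge hub turns it into one of the tamper categories from the story. The event fields and rules here are hypothetical stand-ins for what would really be learned models:

```python
def classify_tamper(event: dict) -> str:
    """Second-stage classifier for a 'machine lost power' event.

    Uses local context signals (hypothetical fields) to label the likely
    cause; a real system would use trained recognizers, not fixed rules.
    """
    if event.get("phone_seen_in_outlet"):
        return "phone_charging"          # the original business hypothesis
    if event.get("cleaning_cart_detected"):
        return "cleaning_staff_unplug"   # vacuum needed the outlet
    if event.get("restored_within_s", float("inf")) < 120:
        return "customer_reboot"         # unplug/replug to reset the machine
    return "unknown_tamper"

# Running it over a day's local events yields the breakdown that
# overturned the cell-phone hypothesis in the story above.
events = [
    {"restored_within_s": 45},
    {"cleaning_cart_detected": True},
    {"restored_within_s": 60},
    {"phone_seen_in_outlet": True},
]
labels = [classify_tamper(e) for e in events]
print(labels.count("customer_reboot"))  # 2
```

Because classification happens on site, adding a new category is just another local rule or model head, rather than a new cloud pipeline over the raw event firehose.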
Justin Grammens 21:20
I love that, I love a story like that. Because, yeah, the thing that I talk to companies a lot about in this space is, you have no idea how customers are using your products, just generally, across the board, right? You might have a survey, you might have a hypothesis, like you're talking about. But until you sensorize the thing, you really don't know. This is probably the
Robert Parker 21:39
best thing that we saw, because this has applied across every single one of our customers. Like installing pools: one of our customers is one of the biggest pool installers in the US. They had a great business, and they'd been installing more and more pools. And during that time, they also had a whole set of hypotheses, and most of what, you know, their field staff thought, most of what their installers thought, was largely correct. And it's just because, like anything else, they just know what they run into, and then these things, they're just sure of them. But the part that's amazing about this is, you go and talk to field operations, and they're like, we've known this for a long time, we know. And then what's amazing is, when you give them the insights back from the system, they never question it, because really, it actually makes sense to them. The tricky part is, and this is again where I would differentiate sort of AIoT from what we're calling old school IoT: old school IoT will tell you what happened, if you're lucky. But let's assume for the moment you were lucky, and it told you what happened. It doesn't tell you why. So you have no insight; you generally cannot diagnose or fix the problem. But with these more complex signals, you're able to understand both the what and the why; you're able to diagnose the issue. So when I can provide some of that data back, it was very much like our sample use case. It's not just, I'm telling you there was tampering, because that alone would have been just like before. I can tell you, oh, these are the types of tampering, and give you the breakdown. Then they're like, oh, that totally makes sense. I now totally understand why I perceived it was just the unplugged-cell-phone case, because that was the only thing that was left there at the end, the one thing I could count, so, oh, I got that one,
but there were actually four other kinds of tampering that I missed. Then you can really understand it in that way. And so this is where it gives you that next level of insight, where you're saying, oh, I understand some of the causality and the correlation in there, and I can really get to that insight level.
Justin Grammens 23:35
Yeah. Well, so you brought up the term AIoT which, for people that are listening who maybe haven't heard of it, is the Artificial Intelligence of Things, right? It's sort of this next wave of how the Internet of Things and AI are overlapping. Could you speak a little bit to that? And I think the other aspect, too, that maybe we haven't touched on yet, is really, what's the value from a security standpoint, from a security model, of having smarter things at the edge, not having to send data back? So
Robert Parker 24:02
actually, I'm going to answer your second question first, because it actually feeds into this. One of the things that we look at is, as you're getting this better sensing going on, what we call better-than-human sensing at the edge, you really care about the privacy and security of that data. An example is, people always care about visual data, but beyond that, they cared about some of the other sensor data that was coming back; they're sensitive about it. One of the things that's great is, as you make the imaging more sensitive, what we have, for example, is our cameras recognize your head and blur your face, and those features are in hardware, in firmware, on the camera. And this is pretty good, because it guards the privacy of the individuals; we generally try to recognize you as something that looks a little bit more like a Minecraft figure. But, you know, that helps you a little bit, and then you say, what about this: what happened almost immediately is we started to have, you know, thefts and some other incidents that we had to manage. Luckily, we had put in technology where we sat there and said, oh yeah, well, we have that key stashed, so we can actually undo that privacy filter. And you can have that chain of trust: we have a situation where, unless we give this to law enforcement, they cannot undo it, and you can put in the right safeguards; you can decide, you know, who and how you're going to manage these things. But this is something where, at that point, what we have is hardware trust established directly, all the way from the device out to that data. At the same time, you still have that breadth of use cases, so we could say, oh, I don't have to have a separate security camera, and I can really manage the whole system.
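The reversible privacy filter described here, blur applied on-camera with a stashed key that lets an authorized party undo it, can be sketched in miniature. This toy uses a one-time pad from Python's `secrets` module; a production system would use proper authenticated encryption and real key custody, and both function names are invented for illustration:

```python
import secrets

def redact(face_bytes: bytes) -> tuple[bytes, bytes]:
    """Replace a face region with noise, keeping pad material in escrow.

    XOR with a random one-time pad: viewers of the frame see only noise,
    but whoever holds the escrowed pad can reverse the redaction exactly.
    """
    pad = secrets.token_bytes(len(face_bytes))  # the "stashed key"
    noise = bytes(a ^ b for a, b in zip(face_bytes, pad))
    return noise, pad

def unredact(noise: bytes, pad: bytes) -> bytes:
    """Authorized recovery: XOR again with the escrowed pad."""
    return bytes(a ^ b for a, b in zip(noise, pad))

face = bytes(range(16))             # stand-in for face-region pixel data
blurred, escrow_key = redact(face)
print(unredact(blurred, escrow_key) == face)  # True: key holder recovers pixels
```

The chain-of-trust point maps onto who holds `escrow_key`: without it, the redacted frame carries no recoverable face data, which is why the safeguards live in key management rather than in the image itself.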
But you need that in order to be able to handle the complexity of being in an environment where you'd say, hey, security and privacy are really important considerations. We take that very seriously for the customers, and the customers of the customers, of the system; we want to protect them as well as we can. At the same time, we also want to be able to manage some of these situations, because, for example, in the security case I mentioned, the end customers cared about theft as much as anything else, because it might have been their, you know, objects inside that room which were impacted. And so this is something where you say, wow, that really requires the edge to up-level a little bit. You can't have these not-as-intelligent pieces there, because you really want to up-level what's going on. The second thing that is really key in sort of AIoT is to say, a lot of the IoT frameworks started with just that connection problem, just moving data from x to y. If you started instead from the premise that you wanted to inject intelligence into the space, then you would do IoT very differently. And so a lot of the premise is that the IoT piece of our system is the thing that gives the eyes and the hands to the AI. And it allows us to do some of the things that, I would say, old school AI systems struggled with: they're just a consumer of some data. They can't try, learn, fix; they can't test different hypotheses. It goes back to some of the problems that I described earlier. If I'm trying to decide what music you like, it's interesting to us to give you a song that you don't like and make sure you don't like it. The negative testing hypothesis matters as much as the positive testing hypothesis. But if I can't reach in there and get you to try something, then, you know, this becomes very hard.
This newer model says that if I give the AI hands, it can actually try things. In operating these machines, we test both positive and negative hypotheses, and we find different ways to measure things. That creates a virtuous cycle where you're really able to interact with the environment, much like the places where AI has been super successful, like playing chess or Go. Those systems wouldn't work if all you said was, you can study previous games but you can never play one. Actually playing a game of chess is the real thing. So what AIoT lets the system do is start playing the game, really interacting with those environments in a meaningful way, which means it's actually able to do what you want AI to do: perform a task at some level of complexity, usually human-like complexity, which is the way we think about it. That really is the missing piece.
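The "play the game, don't just read old games" loop Robert describes can be sketched as a simple explore-and-exploit experiment. Everything here is invented for illustration (the genres, the preference numbers, the 20% exploration rate); the point is that the system deliberately tests negative hypotheses by acting in the environment rather than only consuming past logs:

```python
import random

random.seed(0)

# Hypothetical ground truth: the probability a user enjoys each genre.
TRUE_PREFERENCE = {"jazz": 0.8, "metal": 0.2, "folk": 0.5}

def play_song(genre: str) -> bool:
    """Act in the environment: play a song, observe a thumbs up or down."""
    return random.random() < TRUE_PREFERENCE[genre]

# A passive AI can only re-read old logs; an AIoT system can run experiments.
estimates = {g: [0, 0] for g in TRUE_PREFERENCE}  # genre -> [likes, plays]

for _ in range(3000):
    if random.random() < 0.2:
        # Exploration deliberately tests the *negative* hypothesis too:
        # confirm what the user dislikes, not just what they like.
        genre = random.choice(list(TRUE_PREFERENCE))
    else:
        # Exploitation: play the current best guess.
        genre = max(estimates, key=lambda g: estimates[g][0] / max(estimates[g][1], 1))
    liked = play_song(genre)
    estimates[genre][0] += liked
    estimates[genre][1] += 1

learned = {g: round(likes / max(plays, 1), 2) for g, (likes, plays) in estimates.items()}
print(learned)  # estimates drift toward TRUE_PREFERENCE
```

Because the loop can act, it learns both what the listener likes and what they reliably reject, which is exactly what a log-only system cannot establish.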
Justin Grammens 28:18
Yeah, you've hit the nail on the head a number of times here. In my mind, IoT is sort of a reactive technology: it gathers a bunch of information, and then you have to react to something. Whereas if you're in this AIoT space, it allows you to be more predictive. You talked about the what and the why, and you're right, you're framing the problem from a completely different angle. It's not just, hey, let me throw some sensors out, get some data, and then react to what we find. It's, no, we're actually going to try for a better outcome from the get-go, and how can we do that? And you've touched on it a little bit: it's not only that the technology is getting to a better point now, some ten years into the Internet of Things, which feels like it's taken forever to finally start getting traction. It feels like every year it's, IoT is going to take off next year. In a lot of ways I feel like it's really more us approaching it from a different angle.
Robert Parker 29:12
The other challenge has been integrated execution. The hypothesis early on was that it's actually valuable to have a hundred companies producing a temperature sensor, or whatever. The reality, and this is true across almost all of the IoT sensors and even across the different standards bodies, is that there's essentially no difference between any of the big ones you could get, or any of the Z-Wave ones, or any other flavor you might have; the differentiation is fairly limited. What mattered was, does it actually work in the workflow I want it to work in? As you move to this newer stage, you realize it's actually fairly feasible for whoever owns that workflow to pick sensors that are appropriate and integrate them into the environment, whether that's a commercial environment or a home environment. I'll stick with commercial and industrial environments: you integrate those sensors in a way that is 100% reliable and proper for that environment. Once you do it that way, you say, actually, I didn't need all these other choices; what I needed was the one that worked and had the right cost profile. What's really changing, in my view, is that there's a real opportunity now: it's relatively easy to work with the module providers to get a device that meets those specifications and can be connected to the rest of the system. That's instead of the old world, where the device manufacturer says, to make my economics work I need my own cloud and probably my own app, and by the way, you can't get access to my data. That's about as useful to people's homes as a Nest thermostat, without being mean about it.
They'll sit there and say, I must control all my data, even though it would be really useful for you to know what temperature it's trying to reach in your house. I'm not going to tell you that, because I'm smarter than you. That's the wrong direction for a lot of this, when instead you should say, no, I'm going to try to provide you with as much insight as possible, in a way that can really be leveraged by the other components of the system. To do that, you need the slightly inside-out approach. It turned out it wasn't actually beneficial to people to have a thousand different makers of light bulbs. I think the industry just jumped too far there, pattern-matching on the old analog world that had come before, where it really was beneficial: as people were building houses or revamping spaces, it was important to have something that could fit into a given space, there were trade-offs, and so you wanted to pick the exact right trade-off. That kind of thing doesn't work very well here, and it actually isn't the way larger corporations and organizations work anyway. If you look, they always have some standardization, because integrated execution is, at the highest level, the most important factor.
Justin Grammens 32:04
Yeah, absolutely. We mentioned the cloud: what's your stance at Bright AI on that? You mentioned you'll have sensors and such; maybe you can share a little bit about what you have coming out.
Robert Parker 32:18
I think everyone assumes the cloud is an important integration point that adds value, and in places it does. I'll give you examples. When you want to transact against the system, the cloud has to be involved, minus some offline transactions, because eventually a payment processor has to process that transaction. So there are integration points where you need it, and similarly we have cloud-to-cloud integrations with other companies. So cloud is important, and we have use cases that are part of this. But really, we look at the cloud as augmenting the environment: we start at the environment and work outwards, and we see enormous value there. In the same way, we also see enormous value in our applications. We have a generalized installer app, because the only way we can get into those machines, and into the hands of the field staff who need it, is to have something like that. In all of these things, what we're really trying to say is, I want to make those workflows work really well. So look at installation, for example. One of the first things that typically happens for us is that we monitor other machines; that's a typical workflow for Bright AI. If it's an HVAC machine, heating and air conditioning, just as one example, we'll have one of our boards monitoring the temperature, the pressure, and a whole bunch of other things. Some of the ways it monitors are non-invasive and some are invasive, whatever is the best way. So part of our process is installation, where of course we want to hook up a couple of these things, and what we try to do is make that as simple and trivial as possible, including the parts that in IoT are known to be not so simple and trivial, like joining devices to the network.
Our perspective is that we believe in single-tap onboarding, using some of the new technologies that have come along that really make this possible, so that joining is as easy and as robust as possible for a person to do. Before, I think a lot of people looked at most of those flows from the perspective of a variety of stakeholders who had different, conflicting goals. One of the stakeholders you're always going to talk to is the security and privacy people. They're like, oh, you're going to have to make sure there's a QR code, or maybe some other code here, that you double-check. I remember one of the buildings we had installed at Samsung: we were doing motion detectors at the main campus in Korea, and I think there were 30,000 washrooms that had to be done, or something like that. You had to go scan the QR code and enter the number for every one of those 30,000. And it was all because a security person said, this is really important, because I wouldn't want someone sneaking in and hijacking your washroom sensor.
If you really looked at it from the environment's perspective, you'd say, I could come up with a way to ensure physical access, yes, I can make it trustable, but I could also make it scalable, so I could do those 30,000 devices much more easily than that. That's where we really focus as a company: there will be some mobile, and if you can touch all of those pieces, a little bit of mobile, a little bit of cloud, a little bit on the device side, you can then create the intelligence that does this. And AI can even help you out, because AI can do anomaly detection and other things, where it can say, hey, wait a second, I've seen someone trying to install three washroom sensors I wasn't expecting; maybe there's some reaction I should take. Much like people have done with proximity detection for homes: if the system knows you're on the way home, it doesn't need to challenge you at the front door, but at a time when you're a little less expected, it might issue a challenge or require a second factor. When these pieces work together and you have enough of them, you can really start to build that complex adaptive system and make these things simple, which is really important. Because, as an AI company, one of the other things we're doing, and I think most AI companies are like this, is that we don't tend to eliminate jobs. What we do instead is say, wow, you can support twice as many customers or three times as many workflows with the same number of staff you have today. Or we make each person able to do way more. More importantly right now, and this is something we see in almost all of our customers, they have some people who are very well trained, where it usually takes three to five years to reach the minimum level of training for that job, and at the same time the people with twenty years of experience are retiring out of the workforce.
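The step-up idea Robert sketches, trusting expected installer activity and escalating to a second factor only when something looks anomalous, rather than forcing a blanket code scan for every device, might look roughly like this. The installers, baselines, and thresholds are all made up for illustration:

```python
# Hypothetical learned baseline: expected device joins per installer per hour.
expected_rate = {"installer_a": 5, "installer_b": 2}

def risk_score(installer: str, joins_this_hour: int) -> float:
    """Crude anomaly score: how far above the learned baseline is this activity?"""
    baseline = expected_rate.get(installer, 0)
    if baseline == 0:
        return float("inf")  # unknown installer: maximally suspicious
    return joins_this_hour / baseline

def onboarding_policy(installer: str, joins_this_hour: int) -> str:
    """Step-up authentication instead of a blanket code scan for every device."""
    score = risk_score(installer, joins_this_hour)
    if score <= 1.5:
        return "allow"          # expected activity: single-tap join
    if score <= 3.0:
        return "second_factor"  # unusual: challenge once, then proceed
    return "block"              # wildly unexpected: halt and alert

assert onboarding_policy("installer_a", 5) == "allow"
assert onboarding_policy("installer_b", 5) == "second_factor"
assert onboarding_policy("unknown", 1) == "block"
```

The same shape covers the home example: a known phone approaching the front door scores low and is allowed, while an unexpected access attempt triggers the second factor.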
If I can make it so somebody can learn the job and do it effectively in six months, that's way better for the business, and honestly it's better for the people: they don't have to spend five years attaining that same level of skill. A lot of what we're doing helps with that, because every one of those weird edge cases you take out was tribal knowledge they would otherwise have had to figure out, and that's what added up to those five years. The simplicity that says, I have a system where people can become expert in a relatively short amount of time, and it's relatively robust, requires a lot of these ingredients coming together. That's where I think AIoT really shines, because you can take the business problem, which is, hey, I want to manage my workforce differently, I want the time to becoming an expert to go way down, I want the transferability of those skills to go way up. And the other thing a lot of our customers want is, when you have that person with twenty years of experience, to immortalize a set of that knowledge, much like Richard Feynman did with his course notes, and now physics students everywhere are better for it. You can do that again and again across all these use cases.
Justin Grammens 37:48
Fascinating. That's one thing I do like to ask people about: what is the future of work as these new technologies are adopted? It sounds like you come down on the side of it being a complementary skill set, a complementary tool that humans will be using in the future. It's not out to eliminate anything, and quite frankly, if it does eliminate anything, in some cases, hopefully most cases, it's the busy work you didn't want to do anyway.
Robert Parker 38:12
I think, like you said, it's augmentative. This has really been the way things have gone since the industrial revolution. Instead of looking at it the other way around, asking what's happening to the whole population, you look at a single job and say, wow, if I can make a single person way more productive, they can do more things, and that takes you down a very positive route, which is generally the way technology has gone. Years and years ago, my great-grandfather's generation probably wouldn't have moved more than ten miles from where they worked; today you can be anywhere. That doesn't mean you have to go anywhere, but it means things can move, and that totally changes the economic equations of elasticity a bit. The thing that's going to be really interesting about AI, and different from the previous transitions, is the speed of change. You're seeing industries where it's applicable, and what will happen is that each industry will become more and more applicable as it tips over, very, very quickly. Those things will transition in a couple of years, not a couple of decades, and that's a huge change from pretty much every other technology transition we've seen. Because of that, the challenge for all of us, and something we want to think about, is that people's lives and jobs will change, and we need to be pretty thoughtful about that. It's something you can plan for and think about in this transition. It is positive, but it is a change, and change needs to be managed.
Justin Grammens
Yeah, for sure. Well said. How do people reach out and connect with you, Robert? Obviously they can go to bright.ai, and I'll put all sorts of liner notes and information here for this podcast.

Robert Parker
A great way is just emailing me at robert@bright.ai; that's one of the best ways to get in touch with me. I really enjoy talking to all sorts of people who are trying new things in this area, because that's the thing I think is also really amazing as you start to see people experiment. One of the things we're doing, one of our smaller, more scientifically focused projects, is applying AI to astrophysics, in particular to a lot of the imaging happening in that space. You have really cool stuff happening, from amateur-type people who might have telescopes in their backyard and do some limited astrophotography, up to the hyper-professionals at NASA, and everything in between. What's really nice is that in all these spaces there are these sub-communities, and it's really cool to get dragged in. In this case, what was really interesting is that a lot of the more sophisticated algorithms we've applied elsewhere, for example tracking of objects or identification of objects, hadn't really been applied in that domain. They had mostly been looking at picture correction: I just want to see this picture more sharply, let me eliminate this bright star, or let me adjust for the fuzziness of the atmosphere. As soon as some of that opened up through these connections, people were like, wow, there's all this cool stuff I can start doing. And the part that's really amazing is that millions, actually hundreds of millions, of lines of open source are available, so you can start doing projects in this space with a very low barrier to entry. That's where it becomes really, really cool for newer practitioners.
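The object identification Robert mentions can be reduced, at its most bare-bones, to finding bright local maxima against the sky background. This is a toy sketch with a synthetic image and an arbitrary threshold:

```python
# Synthetic 10x10 "sky image": flat background plus two bright point sources.
image = [[1 for _ in range(10)] for _ in range(10)]
image[2][3] = 50   # star one
image[7][6] = 80   # star two

def detect_sources(img, threshold):
    """Find pixels brighter than threshold that are also local maxima:
    a bare-bones version of point-source identification."""
    found = []
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            value = img[y][x]
            neighbors = [img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            if value > threshold and value > max(neighbors):
                found.append((x, y, value))
    return found

sources = detect_sources(image, threshold=10)
assert sorted(sources) == [(3, 2, 50), (6, 7, 80)]
```

Real pipelines do far more (background estimation, point-spread-function fitting, deblending), and tracking then links detections like these across frames, but the core identification step is this kind of thresholded peak-finding.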
Justin Grammens 41:42
Well, speaking of newer practitioners, people who are just getting into this field: obviously there's lots of hardware they can start buying and playing around with. Is that the best way?
Robert Parker 41:51
The great news is there's a whole bunch of different resources. I'm a fan of some of the courses that MIT and Stanford have, and at the beginning level you have things like Coursera that allow you to be more structured about it. YouTube and Khan Academy are never bad places to start to get some of the more popularized material. From there, you figure out what strikes your interest and then get into a project; I think getting hands-on is really valuable. My daughter had a really interesting idea that I've encouraged her to pursue. She's involved in mock trial and interested in all things legal, and she said, hey, it would be really cool to start applying this to law, because there are a lot of things you could understand from a propositional-calculus perspective. It's a lot like computer-assisted proofs. A lot of people were a little unsatisfied with the four-color theorem proof done this way, because they couldn't fully understand it, verifying it was somewhat difficult, and they felt you didn't necessarily get as much insight as you might have gotten the other way. But it was a really useful tool, and it can actually take you further than that, because once you have the right tools and visualizations, you can work through all of the propositions and realize this is actually even better: I can see all the constituent pieces, I can see exactly how this breaks down, and I can test and verify as I want. I might actually have more insight than the alternative, where the lawyer says you just have to trust me that this is roughly accurate, that this paragraph of a contract between two companies is understood in common and also meets the appropriate laws.
That's where I think this is really exciting: you can take domains that are as far out as that and apply some of this tooling, and the tools mostly exist to take you a couple of steps in almost any domain, even one like law. In this case, she started to look into it and found there were a number of people doing small projects in this area, and it's just a great time to be able to start engaging.
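A tiny taste of the propositional-calculus angle Robert's daughter is exploring: encode contract clauses as formulas and enumerate the truth assignments under which they all hold. The clauses below are invented for illustration, but the check is the real technique: a non-empty model set means the clauses are mutually consistent, and each model is a concrete scenario the contract permits.

```python
from itertools import product

# Hypothetical contract clauses over three atoms:
# D = "delivery on time", P = "payment within 30 days", R = "refund issued".
clauses = [
    lambda D, P, R: (not D) or P,   # if delivered on time, payment is due
    lambda D, P, R: D or R,         # late delivery requires a refund
    lambda D, P, R: not (P and R),  # payment and refund are mutually exclusive
]

def satisfying_assignments(clauses):
    """Enumerate all truth assignments under which every clause holds."""
    return [
        dict(zip("DPR", values))
        for values in product([True, False], repeat=3)
        if all(clause(*values) for clause in clauses)
    ]

models = satisfying_assignments(clauses)
assert models  # non-empty: the contract is internally consistent

# Adding a clause that demands on-time delivery AND a refund contradicts
# the rest: no assignment survives, so the inconsistency is detected.
inconsistent = clauses + [lambda D, P, R: D and R]
assert satisfying_assignments(inconsistent) == []
```

For contracts of realistic size you would hand the formulas to a SAT solver instead of brute-forcing the truth table, but the idea of "show me every scenario these clauses permit" is the same.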
Justin Grammens 43:53
And maybe in a lot of these cases it's just getting access to the data, you know, finding enough of the right kind of data to train these models.
Robert Parker 44:00
The thing that's really cool these days is that there's been a lot of push in the last ten years to make a bunch of these data sources available, both from a legal and regulatory standpoint, with open-information acts that give you access to this information, and from the fact that it's finally getting online at some level. With the combination of those things, people are often able to get access to information sources that would have been very difficult to reach in the past. So it's really there; the barrier to entry is fairly low, and with that you can really get started.
Justin Grammens 44:33
Yeah, very cool. I'm teaching a class this fall. I've taught at the University of St. Thomas here in St. Paul, Minnesota for a number of years on IoT, but I'm changing the course this year to focus on what is basically AIoT. I'm going to have the students not only generate data, getting data from the physical world, which is kind of where the first course ended: get the data, show me you can read something from a physical sensor, take it from the physical to the digital, and that was the capstone project. Now that's just step A. Step B is going to be actually training a machine learning model and then pushing it back down to the microcontroller so it behaves offline, which also lets them generate their own datasets. I think it's going to be a lot of fun, so we'll be exploring that space. When it comes to career opportunities, as we get near the end here, it looks like you guys are hiring like crazy. I'd like to use this platform to let people know, in particular, that you're growing.
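The end-to-end flow Justin describes, train a model on sensor data and run the result offline on a microcontroller, can be sketched at its absolute simplest like this. The readings and the one-parameter threshold "model" are invented for illustration; a real TinyML pipeline would train and quantize a small network instead, but the shape is the same: learn the parameters off-device, then run inference with no cloud round-trip.

```python
# Hypothetical labeled sensor readings: (temperature_c, is_overheating).
data = [(20, 0), (22, 0), (25, 0), (31, 1), (35, 1), (40, 1), (28, 0), (33, 1)]

def train_threshold(samples):
    """Pick the cut point that best separates the two classes:
    the tiniest possible 'model' to flash onto a microcontroller."""
    best_t, best_acc = None, -1.0
    for t in range(0, 50):
        acc = sum((x > t) == bool(y) for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

threshold = train_threshold(data)  # "training" happens off-device

def predict(temperature_c: int) -> int:
    """Offline inference: a single comparison, trivially portable to a MCU."""
    return int(temperature_c > threshold)

assert predict(21) == 0
assert predict(36) == 1
```

On a real microcontroller the learned parameter would be baked into the firmware, so the device keeps classifying even with no network at all.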
Robert Parker 45:32
Absolutely: look at the careers page at bright.ai. We really want all sorts of multidisciplinary people. One of the things that's great, because we as a company serve enterprises and we're sort of full service, is that there are a lot of different ways people can contribute. We definitely have the standard technical roles people would expect, but we also need people who can cross disciplines. In the end, one of the ways we measure ourselves is the value created: either business value or some of those intangible people values we talked about. We really look at how much better the task got, and with that there are a lot of ways to contribute. We've partnered with some universities along these lines; CMU has the same sort of multidisciplinary philosophy that we do, and the view that some of the people best suited to attack this are the ones who can bring a couple of those disciplines together. So I'd say, hey, don't worry if your whole career wasn't in AI, and don't worry if your whole career was as a firmware engineer at the lowest level of these things, because a lot of that has become a bit more commoditized. Now it's more about the applications: we're an applied AI company, and similarly applied IoT, just like in your course, where you're taking people and making them appliers. That's really what's exciting.
Justin Grammens 47:03
That's great. Well, you couldn't have summed up our podcast any better than by saying you're an applied AI company. This is what I love doing: talking to companies that are actually applying artificial intelligence across multiple domains. So I appreciate your time being here today, Robert. Were there any other topics or anything else you'd like to share? I think we covered a lot, but I always want to see if we missed something along the way.
Robert Parker 47:26
I think we had a great introduction to the AIoT domain. The one thing I would end with is that this is something we're passionate about, and I know you're passionate about it as well: helping people understand this emerging field. It's something that really bridges the physical domain with AI in a tangible way, and it's really starting to happen. I think it's the evolution of both AI and IoT, and it's really great that people can call attention to it, because it's much more tangible than either of its feeder technologies; IoT or AI by itself really doesn't have that level of impact. The more we get people to appreciate this difference, the better. It's like moving from DOS text-based apps to a GUI: think of how much more powerful your user base became when you moved to the graphical level. In the same way, think about how much more powerful your IoT can be when intelligence is infused into it, and, similarly, how much more powerful your AI is when it can touch and feel things. That, I think, is really exciting, and I really want to thank you for helping people become more aware of this and for building that momentum.
Justin Grammens 48:50
Absolutely. Cool, man. Yeah, I couldn't have summed it up any better. Robert, I appreciate the time today, and I look forward to all the great things ahead for Bright AI. I'll for sure put links and everything like that up for you, and we'll see how we can help you, and how you can help, as you already are doing, companies deploy this new technology and make our lives better, making the world a better place through your AIoT technology. So thank you.
Robert Parker 49:16
Thank you, Justin.
AI Announcer 49:18
You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at AppliedAI.MN to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at AppliedAI.MN if you are interested in participating in a future episode. Thank you for listening.