Conversations on Applied AI

Tom Doyle - Applying AI to Always On Sensing Devices

August 16, 2022 | Justin Grammens | Season 2, Episode 22

The conversation this week is with Tom Doyle. Tom is a veteran in the semiconductor industry working in many different areas over the last 35 years, including RF and satellites, analog and mixed-signal integrated circuit solutions, IP software, and sales. He holds a BS in Electrical Engineering from West Virginia University and an MBA from California State University. Tom is the CEO of Aspinity. Aspinity is a world leader in the design and development of analog machine learning chips that are revolutionizing the architecture of always-on sensing devices that run on batteries. Aspinity is the second startup Tom founded and is the first focused on a semiconductor chip.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

Enjoy!

Your host,
Justin Grammens

Tom Doyle  0:00  

Here's the bottom line. The biggest challenge, or the biggest metric, for these always-on sensing devices is always-on power. And that's the power that's required to actually know what's going on, doing applied AI, applied machine learning, 100% of the time, at the lowest power level, right? And that's exactly what we're striving for. And we're delivering today, and we'll see a lot more of that. I mean, the world is wide open from that perspective, because we are so inefficient in how we do things today.


AI Announcer  0:33  

Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at appliedai.mn. Enjoy.


Justin Grammens  1:04  

Welcome everyone to the Conversations on Applied AI Podcast. Today we're talking with Tom Doyle. Tom is a veteran in the semiconductor industry, working in many different areas over the last 35 years, including RF and satellites, analog and mixed-signal integrated circuit solutions, IP software, and sales. He holds a BS in electrical engineering from West Virginia University and an MBA from California State University. Tom is the CEO of Aspinity. Aspinity is a world leader in the design and development of analog machine learning chips that are revolutionizing the architecture of always-on sensing devices that run on batteries. Aspinity is the second startup Tom has founded and is the first focused on a semiconductor chip. Thank you, Tom, for being on the program today.


Tom Doyle  1:44  

Thank you. Thank you for having me. It's great to be here.


Justin Grammens  1:47  

Awesome. Well, I said in the intro a little bit about where you are today. Maybe you could tell us a little bit about how you got started in the industry, and then how you came into working with semiconductors and the work you're doing today at Aspinity?


Tom Doyle  1:59  

Yeah, for sure. I think you mentioned 35 years, so we have a lot to talk about, but I'll keep it pretty quick. You know, it's been an interesting time; I've seen a lot happen over those 35 years in the industry, right? I've always been in deep tech, usually in California. It's funny you mention it. Early on, I was in the defense industry; that's where I got some of my RF and early processing skills, if you will. And that was pretty much in the infancy of really rolling out what we like to call, you know, really impressive compute environments. And so from there, learning about that and applying it, I moved into the semiconductor industry and worked there for probably the last 15 to 20 years in Silicon Valley out in California, mostly focused on analog, you know, analog and RF capabilities. And then, within probably the last five, six, seven years, I got involved with my university, West Virginia University, where I'm an alumnus, and I met up with two of my co-founders, really, really smart guys, Dr. David Graham and Dr. Brandon Rumberg. It was kind of perfect timing. They were looking at really, really efficient, you know, today we call it edge, but wireless node sensing-type applications. And I think we've pushed that with regard to understanding what's going on in the industry. I mean, I had that background and had seen a lot of what was going on in technology. And so that's really how we got involved. And over the last five, six, seven years, we've been taking this concept and this technology and moving it into commercialization, for the sheer purpose of really addressing some of the big challenges in the industry. And that big challenge actually is really all about sustainability. How do we deal with all of these products coming to market, all of what we like to call edge products, right, but everybody else just calls them products? It's your phone, it's your earbuds, it's everything that you have, right? In your home, on your body, in the factory, all of these sensors, all of these devices that are collecting data. How do we deal with the sheer amount of information they're generating, and the energy needed to actually use what's at our disposal very efficiently? And so it's been a very interesting 35 years, moving from defense systems, radars, and satellites all the way down to the smallest of the smallest of technologies that are going to be operating mobile and at the edge, right. So it's very fascinating. I love this stuff.


Justin Grammens  4:37  

That's awesome. I would assume maybe cost is one of the biggest changes that you've seen over time. 


Tom Doyle  4:43  

There's a scale, right? So cost versus performance versus size versus capability. Absolutely, right, that's been huge. I mean, cost is very impressive, and that comes with some of the things that we've seen in the industry, but it's surely the amount of capability that we can put into the smallest form factor imaginable, right? So you can look at it a couple of different ways, and that will drive cost efficiencies. The mobile phone is a magic thing to me, right? Knowing where we were 35 years ago, and what we've been able to make happen. And it's continuing to get better, right, as we continue to improve. And so it's quite impressive from that perspective. But yes, cost is a huge driver, but, you know, the performance is just unparalleled.


Justin Grammens  5:26  

Yeah, I just had somebody on the program that I interviewed yesterday, actually, who is the author of a book on TinyML; it's called the TinyML Cookbook. And as you know, TinyML has kind of become this phrase that people are sort of bandying around, but it seems to flow very much into what you guys do, and that's really, you know, machine learning at the edge. And people are talking about these new chips that are coming out for embedded edge devices. You guys are focused on, I guess you use this term, analog ML, taking a look at your site. What's the difference, in your mind?


Tom Doyle  5:57  

Let's definitely talk about that. I think TinyML is a very good descriptor of what we're all focused on, right? So tiny is an interesting word: it implies low cost, it implies small size, and it implies efficient, right? We're part of the TinyML consortium as well, and have been for a while, because, as you noted, we fit within that domain, and it is focused on, you know, the most efficient edge-type computing that you can offer. Now, how do we separate ourselves? How are we a little bit different? So, we are into the analog aspects of that, right, and there are some good reasons for that. We do talk about analog machine learning, which means, yes, we are implementing machine learning models, you know, applied AI models, that operate on fully analog circuitry, on a fully analog chip. And I think that's kind of unparalleled in the industry, very challenging on a few different fronts, but the value is tremendous. And here's the reason: when you think about all of these edge sensing systems, they're collecting data for us to have at our disposal, and that data, that physical world that we want to look at, say it's biomedical signals, say it's audio in the home, say it's a machine that's vibrating that might be going bad, all of this information is natively analog. Right? So we take it, we convert it, and then we do too much with it before we know what's there and what's just noise. And so what we're able to do is actually take that information in its raw, analog form and run machine learning models on it to make determinations as to whether an event happened. Did somebody break my window? Did somebody say a word? Is that machine starting to go bad? I want to know that as soon as it happens. And by the way, I don't want to keep checking, because that's wasteful, right? So I want to be notified when I have something relevant, but otherwise do not bother me. And I think the way to think about that, too, is most digital applications are going to digitize and process a lot of information just to make sure that nothing has happened. Right? Now, they will catch it when it does happen, but think about all the times that you are applying machine learning models to data when there's nothing there. So think about it: I'm going to use a digital processing paradigm to look for somebody breaking my glass. You're taking 100% of the sound, 100% of the time, you're digitizing it, you're looking at it with essentially a higher-power processor, only to find out that nobody broke the glass. Right? So it's a different way of thinking. We can let that downstream component know when we have a good indication there was a glass break, and we can do it in almost near-zero power by using analog machine learning capabilities.
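To make the power math Tom is describing concrete, here is a rough back-of-envelope sketch in Python. All of the current figures, event rates, and battery capacity below are illustrative assumptions for a hypothetical glass-break listener, not Aspinity specifications; the point is only how strongly the duty cycle of the expensive digital stage dominates average power.

```python
# Illustrative back-of-envelope comparison (all numbers are assumptions, not
# Aspinity specifications): average current of an always-on digital audio
# pipeline vs. an analog-first pipeline that only wakes the digital stage
# when a candidate event (e.g., a possible glass break) is detected.

HOURS_PER_YEAR = 24 * 365

DIGITAL_ALWAYS_ON_MA = 5.0    # assumed: ADC + MCU/DSP classifying 100% of the time
ANALOG_DETECT_MA = 0.02       # assumed: always-on analog front end (tens of microamps)
DIGITAL_WAKE_MA = 5.0         # assumed: digital stage current while verifying an event
WAKE_SECONDS_PER_EVENT = 2.0  # assumed verification time per wake-up
EVENTS_PER_DAY = 10           # assumed candidate events (true or false) per day

def average_current_analog_first_ma() -> float:
    """Duty-cycle-weighted average current of the staged (analog-first) approach."""
    awake_fraction = (EVENTS_PER_DAY * WAKE_SECONDS_PER_EVENT) / 86_400.0
    return ANALOG_DETECT_MA + awake_fraction * DIGITAL_WAKE_MA

def battery_life_years(capacity_mah: float, current_ma: float) -> float:
    """Idealized battery life (ignores self-discharge and conversion losses)."""
    return capacity_mah / current_ma / HOURS_PER_YEAR

if __name__ == "__main__":
    coin_cell_mah = 220.0  # roughly a CR2032-class cell
    for name, current in [
        ("always-on digital", DIGITAL_ALWAYS_ON_MA),
        ("analog-first wake", average_current_analog_first_ma()),
    ]:
        print(f"{name:>18}: {current:8.4f} mA avg "
              f"-> ~{battery_life_years(coin_cell_mah, current):.2f} years on a coin cell")
```

Under these made-up numbers, the always-on digital pipeline drains the coin cell in about two days, while the analog-gated version runs for on the order of a year.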


Justin Grammens  8:56  

Interesting. Yeah. And so that's obviously huge power savings, not having to convert it to a digital signal and having to use a different type of architecture, I guess, processing power, to process it. Now, of course, analog can be all over the map, right? There's all sorts of analog signals. There could be, you know, false positives; I guess there's just a lot of variability. I'm assuming you guys have a lot of intellectual property around that, but maybe you can talk about how you address that concern?


Tom Doyle  9:20  

It's a really good question. I mean, a couple things to note beforehand. So in the world we live in today, there is a need to find more efficiency; everybody knows what I just talked about. And there is also the knowledge that analog can be a much better means to achieve efficiency, if we can use it effectively. And that's what we're talking about. Everybody knows: I would rather not have to digitize my data. I would rather look at the analog data; I would rather do things within the analog domain. It's in many cases more accurate, you know, better performance for the power as well. But as you said, there are some challenges, and these are well known, and we understand them very explicitly. All of our backgrounds are actually within analog, you know, analog design, RF design, analog circuit technology, nonlinear analog processing; we have a good background there with our team. And so we've been able to approach the actual situation with knowledge and with the ideas and underpinning IP to be able to solve it. And just to make it easy to understand, the one biggest thing that everybody deals with is: I'm going to build an analog circuit, and because of variability, because of temperature, because of voltage offsets, because of these difficulties, for the same input along the signal chain, my outputs are not consistent. With regards to what we do, we approach that knowing that we designed technology where we're able to take that circuit, whether it's applied for a glass break detector, applied for voice or words, or applied for vibrational information, and, using software techniques and some of our underlying technology, mostly in the form of our own analog memory, we're able to do a few different things to make sure that the input and the output match across millions of chips. We use our analog memory to store values from which, when we load an algorithm, a software algorithm or a machine learning model, onto our core, we're able to make the adjustments. So we can make adjustments for parameterizing the circuits, we can store values such as weights for our analog neural network, but we can also make precision adjustments that allow us to deal with the variability. And I think that's key to our technology; you can't do that in very many environments without, I think, some of the core technology we have, namely our own analog memory capability.
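As a purely numerical illustration of the calibration idea, here is a minimal sketch, assuming a simple linear gain/offset model of chip-to-chip variability. The model, the trim math, and the notion of a one-time per-chip characterization are assumptions for illustration; this is not a description of Aspinity's actual circuits, analog memory, or software flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of chip-to-chip variability: each "chip" applies a slightly
# different gain and offset to whatever weight value we try to program.
N_CHIPS, N_WEIGHTS = 1000, 16
gain_error = rng.normal(1.0, 0.05, size=(N_CHIPS, 1))    # ~5% gain spread
offset_error = rng.normal(0.0, 0.02, size=(N_CHIPS, 1))  # small offsets

target_weights = rng.uniform(-1, 1, size=N_WEIGHTS)

def effective_weights(programmed):
    """What each chip actually realizes for a given programmed value."""
    return gain_error * programmed + offset_error

# Uncalibrated: program the target weights directly on every chip.
uncal = effective_weights(target_weights)

# Calibrated: a one-time measurement characterizes each chip's gain/offset;
# the resulting trim values are stored (conceptually, in on-chip nonvolatile
# memory) and used to pre-distort the values programmed onto that chip.
trim_gain, trim_offset = 1.0 / gain_error, -offset_error / gain_error
cal = effective_weights(trim_gain * target_weights + trim_offset)

print("max error, uncalibrated:", np.abs(uncal - target_weights).max())
print("max error, calibrated:  ", np.abs(cal - target_weights).max())
```

The pre-distortion is exact here only because the toy variability model is linear; the takeaway is just that stored per-chip correction values can make the realized behavior line up across devices.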


Justin Grammens  12:01  

Gotcha. And if we focus on, like, vibration monitoring specifically, you guys have your own chipset, I guess, right? You guys have your own hardware? Yes. And then you have your own software, I guess, right, that runs on it. Do you generate the models? Say I am a company that needs vibration monitoring, right? I have a pump running inside a factory that would cost me a lot of money if it ever goes down. How do you engage with customers like that? And we'll obviously put links off to your website and all that sort of stuff in the liner notes of this, but maybe I could get a little bit more detail on how you engage with your customers?


Tom Doyle  12:39  

Yeah, I think you hit on some of it. It's actually a great question. So first of all, we provide a proprietary silicon solution, our own chip; we call it the AML100. And it is, as I described, a fully analog chip with all the accoutrements of, you know, perhaps an MCU or another processor: a SPI interface for interfacing, inputs, outputs, all the things necessary to integrate it into a system and add value. Now, on top of that, we have a solution stack. So we have an algorithm, maybe you want an algorithm for glass break, or voice, or vibration, that can be loaded onto our chip, and it's compiled on the chip. And then we have firmware that integrates it into a larger system, such as an MCU, a DSP, or something like that. Typically, in the short term, we're building some of those models. So we build our own glass break model, and we sell that as a solution set. In other cases, for vibration, it's very interesting. Vibration is a little bit of a different animal, because all machines are different. We can build a glass break model, a general glass break detector, and be very good at all different kinds of glass breaks, because it's a pretty standard signature. It has its challenges, but it's pretty standard. In vibration, typically what people are looking for, like, for instance, on a motor, as you mentioned, they may be looking at different indicators: it could be a bearing going bad, it could be a race going bad, it could be a shaft that's off balance. Those three indicators would show, let's say, magnitude changes, or some kind of change at a frequency level; bearings have a very specific frequency, for instance, and there are tons of different bearings, right. And what they do today is they take three-axis accelerometer data on that machine, they digitize it, which is very expensive, minimally 14-bit resolution, and they'll essentially then go to a DSP and run an FFT to look at the full spectrum, only to find out that nothing is going bad. In the case of what we do, we try to avoid that. We try to do things earlier, in analog, before you waste a lot of that energy looking for a needle in a haystack, and we'll provide similar indicators to say, hey, in that frequency band we're detecting some changes that would probably indicate something is starting to go bad. So it's about noticing that change, or noticing a certain level of change in energy, or a spike. That may happen in a month, a year, five years; we'll be on all the time, we'll be ready to recognize it, and then we'll wake up the system and run an FFT, or send out a maintenance work order, or do something. So it's a much different paradigm. And the other part of it is, we want to allow these end customers to set the parameters for where to look, right? You don't want to look across the whole spectrum. They know when they put a machine into service that it's most likely going to fail within a couple of different frequency windows. So let's just track those, instead of doing a full FFT every hour, right? It's just a patently wasteful way of doing preventative maintenance on machines.
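Here is a small Python sketch of the "watch a few frequency bands instead of running a full FFT all the time" idea. The sample rate, the watch bands, the threshold ratio, and the simulated signals are all made-up values, and the digital bandpass filters here only stand in for what an analog ML chip would do with tunable analog circuitry.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 10_000  # assumed accelerometer sample rate, Hz

def band_energy(x, lo_hz, hi_hz, fs=FS):
    """RMS energy of x within one frequency band (4th-order Butterworth bandpass)."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    return float(np.sqrt(np.mean(sosfilt(sos, x) ** 2)))

# Hypothetical fault bands for one motor, e.g. derived from bearing geometry
# and shaft speed at commissioning time (values are invented for illustration).
WATCH_BANDS = [(140.0, 160.0), (940.0, 980.0)]

def check_for_anomaly(window, baselines, ratio=3.0):
    """Return the watched bands whose energy exceeds `ratio` x the healthy baseline."""
    flags = []
    for band, base in zip(WATCH_BANDS, baselines):
        energy = band_energy(window, *band)
        if energy > ratio * base:
            flags.append((band, energy))
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(FS) / FS                                  # one second of data
    healthy = 0.02 * rng.standard_normal(FS)                # broadband noise only
    faulty = healthy + 0.1 * np.sin(2 * np.pi * 150 * t)    # tone inside a watch band

    baselines = [band_energy(healthy, *band) for band in WATCH_BANDS]
    print("healthy window:", check_for_anomaly(healthy, baselines))  # expect []
    print("faulty window: ", check_for_anomaly(faulty, baselines))   # expect 140-160 Hz flagged
```

In a real deployment the watch bands would come from the machine's known failure modes at installation, which is exactly the "let the customer set the parameters for where to look" step Tom describes.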


Justin Grammens  15:48  

Yeah, absolutely. It sounds to me, and correct me if I'm wrong, like you're dealing with organizations that maybe have what I would call a dumb motor, and you're applying your technology and maybe an off-the-shelf sensor, or maybe the sensor's already integrated, to essentially make their motor smart. What does the future look like for analog ML, or your company, or even just, I guess, these predictive maintenance solutions? Are we going to come to a world in the next three to five years where this technology is already built in, like I buy the motor and it's already got all this stuff in it? Is that where you guys are sort of headed, and is that where you think the industry is going to go?


Tom Doyle  16:23  

Absolutely, right. So today, we're dealing with adding technology onto the motors, but there are already indications of companies embedding this in, because it's such a critical factor, right? Look, whether it's on a machine or in your home or at work, you need to know when things of that critical nature are happening. And that's exactly the case. In order to do that, today's paradigm makes you do it in a way where you just have to always spend a lot of energy doing it. And that's what we're changing. And by the way, with energy comes size and cost, right? When you save energy to the levels that we're saving it, and again, we're talking micro-amps of power to be always on and detecting, you'll find that you're now able to shrink the battery size, which is one of the bigger cost items, and one of the bigger size items as well, in most of these deployments. Now you become small enough to be embedded, right? Not only embedded once on a motor, but embedded at different points on the motor, so that you actually know what's going on. And you can, from a predictive maintenance standpoint, not worry about that motor all the time and be comfortable that you're going to be notified when there's something of interest happening. And you can apply that to so many products in the world: consumer, industrial, biomedical. It's really a game-changing approach to how we see this always-on sensing happening. And here's the bottom line: the biggest challenge, or the biggest metric, for these always-on sensing devices is always-on power. And that's the power that's required to actually know what's going on, doing applied AI, applied machine learning, 100% of the time, at the lowest power level, right? And that's exactly what we're striving for, and we're delivering today. And we'll see a lot more of that. I mean, the world is wide open from that perspective, because we are so inefficient in how we do things today.


Justin Grammens  18:27  

And I guess maybe the final piece of that is the connectivity out to alert somebody about this, and that's not what you guys do, right? I mean, you guys essentially alert that something happened, but somewhere there's a wireless radio or something like that, right, that actually sends the notification out that this has gone past the thresholds that we set.


Tom Doyle  18:46  

That's correct. And you're absolutely right. So everything is: let's have a staged, highly efficient process for how we do this always-on sensing, right? The first thing is, I want to look at all the data and determine if I have anything relevant. If I think I have something relevant, let's wake up the next stage, which in our world is typically the digital stage. Let's look at it locally again, run some different looks, maybe the FFT as I mentioned, and then let's send stuff out. But if you recognize what's happening, I mean, one of the biggest drivers in the industry is 5G; that's going to open up the door for proliferation in a very big way. Right now we're somewhat limited in our communications, you know, sending stuff locally to a gateway and up to the cloud. 5G is going to open the door and provide the gas, if you will, for the proliferation of just tremendous amounts of these edge products that are always sensing and always collecting data and giving it directly to you. The thing you don't want is: I don't want to know when a leaf blows at my home in another state because my alarm went off. I don't want to know if you think you might have a glass break. I want precise data, and I want it direct to me; I don't want it stored in the cloud somewhere. And so it's going to be a pretty interesting next two decades, right? With the proliferation of 5G, and the movement toward all these battery-operated, always-on systems that we're going to enable as well. So it'll be very interesting.
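As a toy illustration of the staged wake-up flow Tom describes, here is a short Python sketch. The stage names, per-stage energy costs, and the trivial decision functions are invented placeholders; the point is only the structure: each cheap stage gates the next, more expensive one, so the radio, the costliest step, only runs for confirmed events.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    cost_uj: float                    # assumed energy cost per invocation, microjoules
    decide: Callable[[dict], bool]    # True means "escalate to the next stage"

def run_pipeline(sample: dict, stages: list) -> str:
    """Run stages in order, stopping at the first one that finds nothing of interest."""
    spent = 0.0
    for stage in stages:
        spent += stage.cost_uj
        if not stage.decide(sample):
            return f"stopped at '{stage.name}' ({spent:.1f} uJ spent)"
    return f"alert sent ({spent:.1f} uJ spent)"

if __name__ == "__main__":
    pipeline = [
        Stage("analog event detect", 0.1,   lambda s: s["looks_interesting"]),
        Stage("digital verify (FFT)", 50.0, lambda s: s["confirmed"]),
        Stage("radio transmit",      500.0, lambda s: True),
    ]
    cases = [
        ("quiet",       {"looks_interesting": False, "confirmed": False}),
        ("false alarm", {"looks_interesting": True,  "confirmed": False}),
        ("real event",  {"looks_interesting": True,  "confirmed": True}),
    ]
    for name, sample in cases:
        print(f"{name:>11}: {run_pipeline(sample, pipeline)}")
```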


Justin Grammens  20:16  

Yeah, for sure. Cool. Well, one of the questions I like to ask people is, what's a day in the life of somebody in your position, CEO of a company? You guys are doing some cool stuff. What's your day typically filled with?


Tom Doyle  20:27  

Yeah, it's interesting. I mean, in my position, I'm a technology guy; my background is in engineering, but I tend to find myself looking at other things too much and not getting enough time to spend on technology. But technology is our lifeblood. And so, when I'm in the office here in Pittsburgh, where almost all of our team is from a headquarters perspective, I get involved in some of the day-to-day: looking at new technologies, how are we progressing with different aspects of our solutions, are we able to do something in this analog core that no one's been able to do and make it repeatable? There are those aspects, and I love those. And the other aspect is just growing the company. I spend a lot of time looking at financials, right? Always raising money, working with customers as well, to understand how do we bring value, how do we get return on investment for them. And those conversations are great, but it's good to be in this position to be able to delve into the technology and then go back to the financing and the customer side as well.


Justin Grammens  21:28  

Yeah, good. Yeah. I was looking at your site, and it looks like you guys are actually hiring in a number of different places. I'll be sure to put some links off to your careers page. But it looks like you guys are based in Pittsburgh. Are you looking for a lot of talent in that area, or can people be located wherever? How do you guys work, typically?


Tom Doyle  21:45  

I think we, you know, more than doubled our staff during the pandemic, which is great. We're always looking for the most talented people. Pittsburgh is great; Pittsburgh has been pretty much a transformative city, in the sense of moving from steel and those types of jobs to high tech. And so we're able to find very talented, experienced folks from the local universities, you know, Pitt, CMU, and others. But we're always looking, right? Looking in Silicon Valley, looking throughout the US; we're always looking for great talent. And it's a pretty unique job, right? We can offer something very interesting with regards to analog and machine learning. So, always looking for the right people, and, you know, a mix of people, obviously.


Justin Grammens  22:30  

Yeah, for sure. It is one of the things I've noticed as I've been interviewing more and more people: there's sort of the logical, left-brain thinking, but there are also people looking for something more creative, you know, how can we bring people that maybe have more of an artistic, creative background into this field? Because they can provide just as much value, in some ways, as the programmers, because they're the ones who are thinking about the creative solutions, and where this technology can be applied in everyday life.


Tom Doyle  22:59  

And you're absolutely right. I mean, to add to that, we talked a lot about analog, and it is scary for some, right? We make that easy. That's part of what we want to do, and we are doing it, by releasing to the world the toolchain and SDK, so that you don't have to be an expert, but you can get all the value that we talked about from analog. And so you're absolutely right, we want creative people to think about: what's the craziest idea you can think of that you want as an edge product, right? It's always on. I mean, people come up with the darndest ideas, and we want to allow them to implement those, right? We don't want them to have to think about it; we want to make it easy for them to do it.


Justin Grammens  23:36  

Yeah, and I think that's gonna be a huge differentiator in the future: a lot of these low-code, no-code, drag-and-drop type things, where, you know, you might have to get down under the hood to finely tune a lot of things, but a lot of it is just kind of trying things out. And the more iterations, the more reps you can do, the stronger the muscles will be. And I feel like the companies that are going to win are the ones that make it really kind of brain-dead easy for people to start exploring and trying things out. Kudos to you guys for having that same type of mindset. Well, I was thinking about people coming out of school, entering the field, hiring, all that type of stuff: if I was coming out of university today, how would you suggest I get into this field?


Tom Doyle  24:19  

I think people come out of school today with plenty of opportunities if they study electrical engineering, computer science, machine learning. You know, we always strive to point out that we're very different, right? We want to bridge the gap between one end of the spectrum, analog circuitry, and this notion of machine learning, which is typically thought of as digital. And so, people coming out of school, please, look around; there are tremendous opportunities. And I would also encourage people not to get too siloed into everything that they learned, trying to go to a job that fits that perfectly, because the one thing I can guarantee is technology changes. The things that I studied in university a long, long time ago, the basics are there, the core intellect is there, but the application space is just going crazy, right? And so you've got to have a really open eye. And, you know, as I do with my children, I encourage anybody coming out of university to really try to do something different, especially while you're young. I mean, there's tremendous opportunity for, you know, STEM-type, technology-type people. So you have the opportunity to try many different things before you settle on some expertise that you want to gain.


Justin Grammens  25:33  

Yeah, for sure. For sure. Those are wise words. How do people connect with you, Tom? Are you kind of a LinkedIn person, Twitter? If somebody wants to find you?


Tom Doyle  25:42  

LinkedIn is probably the best. I'm kind of old school, so, you know, I can't keep up with everybody else. But yeah, email, LinkedIn. I think most people can find my address out there, and you can reach out through our website; they know how to get me the emails that come in. But yeah, LinkedIn is a great one. I think LinkedIn is a very powerful tool, more geared towards corporate interactions, if you will, and connections. I really enjoy that quite a bit.


Justin Grammens  26:05  

Yeah, that's good. That's good. So like I say, I'll be sure to have links to your website and to your LinkedIn profile as well. As we're getting near the end here, are there any other topics or projects, things that maybe I didn't highlight or bring up, that you would want to talk about?


Tom Doyle  26:23  

You know, one of the bigger things that we tend to focus on at a broader level is what's happening in the world from a sustainability standpoint. I mean, it's a huge problem, and I think it's hidden quite a bit, especially for people like us in the technology fields. As I mentioned, when I look at the way that we do edge computing, I definitely see inefficiencies. I think people are so used to constantly digitizing information, which costs power, and then looking at it before they know what kind of data it is; inherently, it's an inefficient process. And I think we've been hidden from that because of great advancements in our technology, Moore's law being one. We really have to stop thinking, hey, I'm going to charge my phone every night, so therefore I don't have a power problem, I don't have a sustainability problem; I'm going to plug in my electric vehicle, so I don't have a sustainability problem. We actually do, when we think about how much energy we waste. You know, as I mentioned earlier, when I get notified that a change happened at my house in California because a leaf blew, the amount of resources that get used for that false alarm, all the way to the cloud, should bother everybody. And in the bigger scheme of things, as a company, we're really focused on that. If we can use as little energy as possible to notify somebody when there's something of interest, that's a win. And I think it's a big win when you multiply that across the billions of units that are going to be always-on sensing here in the next couple of decades. It's a huge challenge, and something that I think our industry really needs to focus on.


Justin Grammens  28:04  

I love it. I 100% agree with that. I think we as an industry have had this mindset of sort of collect and send all the data all the time, and I think it's probably come out of this idea of cloud, right? There's sort of infinite cloud storage capability, the data center will handle it. And I think you're right, that's sort of proliferated, I guess, down to hardware, and it needs to stop, because, to kind of reuse the word that you were saying, it's not sustainable. You're going to be having to replace batteries pretty much every day, and some of these things are going to be in remote locations where you just don't get to them for years. I'm with you. I'm thinking about the next generation, I'm thinking about my kids; the energy usage that is already going on today needs to be reined in. So I love that that is a core component of what you guys are trying to do as a company. I applaud you for that.


Tom Doyle  28:59  

It is, and you mentioned it. I mean, big data is a little bit bothersome to me, right? Some people speak of it as if it's a great thing that we're able to get a lot of data to the cloud. You know, we've all seen over the last five or ten years that that's inefficient; you don't want to do that. The pipes aren't big enough, and you're using energy you don't need to. But even in the ecosystem of the edge, there's a lot of efficiency to be gained, and we're focused on that. Moving data, digitizing data, is the same as just wasting energy. And so, yeah, I appreciate the fact that you're on that same page, and we should do another one to talk specifically about that. I would love to do that.


Justin Grammens  29:36  

Sweet. Very, very good. Well, on that note, Tom, I appreciate you taking the time to be on the Conversations on Applied AI podcast today, and all the work that you and the team are doing. We'll definitely set something up in the future. I love to talk with people and then come back in the next, you know, six to 12 months or so and see what's changed. You've been doing this for a long time, sort of a veteran in the industry, and I very much appreciate your input today and your perspective on where things are headed. Good luck with the future of Aspinity and all the work you guys are doing around analog ML.


Tom Doyle  30:08  

Thank you very much. Great to be here.


AI Announcer  30:11  

You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn if you are interested in participating in a future episode. Thank you for listening.