Conversations on Applied AI

Gian Marco - Combining AI and Low-Power Devices to Make the World Smarter

August 02, 2022 Justin Grammens Season 2 Episode 21
Conversations on Applied AI
Gian Marco - Combining AI and Low-Power Devices to Make the World Smarter

The conversation this week is with Gian Marco. Gian is a tech lead in the machine learning group at Arm and the author of the TinyML Cookbook, published in April of 2022. At Arm, Gian looks after the ML performance optimizations for the Arm Compute Library, which he co-created in 2017 to get the best performance on Arm Cortex-A CPUs. The Arm Compute Library is currently the most performant library for machine learning on Arm, and it is deployed on billions of devices worldwide, from servers to smartphones. Gian holds an MSc with honors in Electronic Engineering from the University of Pisa in Italy and has several years of experience developing machine learning and computer vision algorithms on edge devices. In 2020, Gian co-founded the TinyML UK Meetup Group to encourage knowledge-sharing and to educate and inspire the next generation of machine learning developers on tiny, power-efficient devices.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can make future Emerging Technologies North non-profit events possible!

Resources and Topics Mentioned in this Episode


Your host,
Justin Grammens

Gian Marco  0:00  

And I like to quote Arlen Hollister from Raspberry Pi. He would say: well, I expect TinyML, in the shorter or longer term, to be the layer on top of the sensors to bring the smartness in the least intrusive way. And that is what I expect too in the future. I expect TinyML to be the default way to bring intelligence to the objects around us, and for engineers to develop smart applications that don't necessarily require an internet connection. That is also what I expect.

AI Announcer  0:33  

Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and that you connect with us to learn more about our organization at AppliedAI. Enjoy.

Justin Grammens  1:04  

Welcome everyone to the Conversations on Applied AI podcast. Today we're talking with Gian Marco. Gian is a tech lead in the machine learning group at Arm and the author of the TinyML Cookbook, published in April of 2022. At Arm, Gian looks after the ML performance optimizations for the Arm Compute Library, which he co-created in 2017 to get the best performance on Arm Cortex-A CPUs. The Arm Compute Library is currently the most performant library for machine learning on Arm, and it's deployed on billions of devices worldwide, from servers to smartphones. So, very impressive. Gian holds an MSc with honors in Electronic Engineering from the University of Pisa in Italy and has several years of experience developing machine learning and computer vision algorithms on edge devices. In 2020, Gian co-founded the TinyML UK Meetup Group to encourage knowledge-sharing and to educate and inspire the next generation of machine learning developers on tiny, power-efficient devices. Thanks, Gian, for being on the program today.

Gian Marco  1:59  

Yeah, thanks a lot for inviting me. Actually, it's a real pleasure to be here, Justin. Real pleasure. Thanks a lot.

Justin Grammens  2:05  

Yeah, sure. Awesome. Well, I gave a little bit of an intro on where you are today at Arm, but maybe you could fill in a little bit of the trajectory of your career and some of the work that you did, you know, leading from university all the way up to your current role today.

Gian Marco  2:18  

Okay, definitely. I can start from when I left the university. So I graduated in hardware-software co-design for developing computer vision and machine learning on embedded devices. And when I joined Arm, I was lucky enough to enter the company when machine learning was actually starting to be deployed on edge devices, and with edge devices I mean primarily smartphones. Actually, my background was a bit different, because when I entered that world, machine learning was a bit different: we were primarily looking for, let's say, specific features in images, like the histogram of oriented gradients. I didn't study, for instance, convolutional neural networks; that was definitely something I learned at home, deploying the first models on edge devices. And I remember when I joined, on the first day, we decided to create this library for machine learning, and the aim was to provide the best performance for the critical operations that we have in machine learning workloads, in particular for convolutional neural networks, on Cortex-A CPUs and Mali GPUs. So definitely I was lucky enough, because I joined when ML was really arriving at the edge, and the problem there was how to make it faster. It wasn't a problem of deployment; we could deploy, because smartphones were already powerful enough, with four gigabytes of RAM, for instance, enough memory. The only problem was performance: how could we make machine learning models run in real time on these devices? That was the big problem that we faced and tried to address at the same time.

Justin Grammens  4:04  

What was the timeframe, I guess, when you were joining?

Gian Marco  4:07  

Okay, I would say it was the first couple of years. So in 2016 we started designing the library, and in 2017 we put out the first release on GitHub, with the first operators for convolutional neural networks. And yes, the optimizations were mainly for Mali GPUs and Cortex-A CPUs, regarding the matrix multiplication routines in particular, and pooling layers. So not the whole set, I mean, not all the operators that we can see right now, but just a subset of those, the critical ones.

Justin Grammens  4:40  

I see. Now, were there both CPUs and GPUs in cell phones at all at that time, or was it just all CPUs, I'm assuming?

Gian Marco  4:47  

Yeah, so on a smartphone you have both. You can run machine learning on both the CPU and the GPU, and actually GPUs are really powerful, and not just powerful in terms of computation capabilities, but also in terms of power efficiency. So both are definitely capable of running machine learning workloads in a very efficient way. And the intent of the library was to deliver this value, right, the value in terms of performance, which also meant trying to reduce the power consumption of the device while running these workloads.

Justin Grammens  5:19  

I see. And, like you said, you're running them, I mean, Arm is sort of the de facto standard, I guess, when it comes to small devices, cell phones and whatnot, right? I mean, you guys have a huge market share in that space today, correct?

Gian Marco  5:33  

Yes, correct. I think over 90% of smartphones are powered by an Arm CPU, for sure. There are also quite a lot of Mali GPUs as well. So yes, definitely, Arm is enabling quite a lot of the smartphones around us.

Justin Grammens  5:48  

That's very cool. It must have been fun to get into such a large company like that, and to be able to have such an impact, I guess, in building this library, and, again, to be able to open-source it and move all of this stuff forward. So really, I guess, a testament to the culture at the company, to allow you to explore and do other things, right, that actually move the open source community forward.

Gian Marco  6:07  

Well, yes, personally it was exciting, because I left the university and joined this group with the intent to build a library for both computer vision and machine learning. In the end it became just machine learning, because computer vision converged to convolutional networks. But it was really exciting for me, because it wasn't just an implementation exercise. It wasn't just implementing operators; it was also a learning curve for me, learning convolutional networks and learning new techniques for optimizing operators on Cortex-A CPUs and Mali GPUs. Because the goal was, and actually still is, because the library is still there, to make the operators really efficient.

Justin Grammens  6:55  

And so this kind of leads into my next point, which I think we're going to talk a fair amount about here: the book that you wrote, the TinyML Cookbook. I think when you and I talked offline, and it even says it in the book, it's really geared for machine learning developers or engineers interested in developing applications on microcontrollers. So as you were getting into this space, you might have been coming at it from the other way, I'm guessing, right? You were an electronics engineer that was working on hardware and embedded work, and then probably really learned the machine learning stuff along the way. Is that kind of true with regards to what came first in your career?

Gian Marco  7:29  

Yes, that is exactly how it went. And here probably I need to thank Arm, because I have been lucky enough to work on ML from the server space down to the very edge devices, like microcontrollers, for example, all Arm-powered. And what I brought to the community was actually the background in electronic engineering, trying to optimize these routines while keeping in mind the hardware capabilities. But definitely what I experienced on the machine learning side of things was a different story compared to what I had faced first with smartphones. Because, as I mentioned earlier, with smartphones the big problem was mainly: how can we make it faster? We could run it. I remember we had AlexNet, VGG-16, Inception v3, I mean, some of the first models, and we could run them on smartphones, probably not in real time, but the device was capable of doing it. Well, with microcontrollers it is completely different. So let me tell you why: if you don't know your hardware well, you actually cannot run anything on microcontrollers. That is the reason, every time, I say there is a reason we don't call it machine learning; we call it TinyML. It is the field of study where you want to know the embedded world at its best, because if you don't know the hardware device, you will not be able to run anything on these devices. They have limited computation capabilities, for sure, and also limited memory. And if you ask me why they have these constraints, well, because they are low-power. Definitely, if you want a low-power device, if you want it to run on a battery, let's say, for a month without charging it, well, you need to sacrifice something. And definitely you probably don't have the same hardware capabilities that you may find on a smartphone, and at the same time you don't have the same amount of memory that you may find on such a machine.

Justin Grammens  9:30  

So you saw these challenges, you started working on this, but at some point you decided, I'm going to create a book around this. I mean, what was your motivation for creating the TinyML Cookbook?

Gian Marco  9:41  

The motivation actually started from my background, because, as I said earlier, I graduated in hardware-software co-design for machine learning and computer vision. Back when I was at uni, machine learning wasn't really that easy. You had to be an expert; you had to be an expert in the algorithms involved. It was difficult. And the data acquisition was a completely different story. Furthermore, embedded programming wasn't that easy at the time either, because in some cases you had to use assembly simply to control, I don't know, an LED, and the code that you wrote couldn't be ported to other platforms. That was a very big limitation. But now it is a completely different story, because machine learning is actually more accessible, I would say; it's much easier than a few years back. And embedded programming is a lot easier as well: with a few lines of code you can turn an LED on and off, and you can control external peripherals. And the motivation of the book was this: I wanted to demonstrate how easy machine learning on microcontrollers now is to people with little familiarity with embedded programming. For this reason the book is mainly oriented to people with a machine learning background, like ML developers or ML engineers, but embedded engineers can benefit from the book as well, of course, though they need at least a basic understanding of machine learning. The other motivation was to help these people, because I wanted to add something to the TinyML community. The TinyML community has one brilliant book written by Pete and Dan, which is TinyML, actually, and I wanted to offer another book, complementary to that one, to cover the part more oriented to people with no embedded programming experience.

Justin Grammens  11:37  

Yeah, that's cool. I mean, I teach a class on the Internet of Things and machine learning at a local university here, in their graduate school, and it's for people that are going through a master's in software engineering program. So they're used to coding, but they're not really used to microcontrollers and hardware. And it's been kind of fun coming at it from that angle, saying, you know, hey, here's an Arduino, here's a breadboard, here's how you plug this stuff together, because they're kind of out of their comfort zone a little bit. And at the end of the course, they really enjoy it, right? They're like, wow, I didn't think that through a master's in software I'd actually get a chance to get my hands physical and get data from the physical world. And so you've kind of flipped that around; you've gone the other way, right? Well, I guess it's sort of similar: it's geared for machine learning developers that maybe know how to do the ML models, but they're not really deep into the applications inside microcontroller code, correct?

Gian Marco  12:25  

Yeah, and definitely now it is easier. In the book, for example, I use the Arduino boards and the Raspberry Pi Pico. In particular, the reader will not install any software to build the projects, because they will use the web browser; most of the tools will be browser-based, cloud-based, actually. So no setup: simply connect the microcontroller to the laptop, and that's it, magically it's going to work. So it's definitely different from what I experienced at uni, for sure.

Justin Grammens  12:58  

Yeah, for sure. We always put liner notes and links and stuff in the description of our podcast, so absolutely we'll have links off to your book. And, you know, I purchased your book, and I've been looking through some of the examples. It looks like some of them actually use this other company called Edge Impulse for some of their stuff. And what I found interesting about Edge Impulse, correct me if I'm wrong, but you can run a lot of stuff on your cell phone, right? It allows you to do a lot of training using essentially hardware you have in your pocket. And I'm assuming you were really trying to base a lot of these examples around everyday, real-world use cases. In fact, maybe you can share with us some of the things that happen in the book. I know you do stuff around sight and sound. Were there some specific examples that you enjoyed building and creating as you did this book?

Gian Marco  13:43  

Definitely. Edge Impulse is a good example of why TinyML became that easy, because I find it extremely straightforward to use the tool, not just for building the machine learning models, but also for doing the data acquisition from the device. Because, at the end, TinyML, yes, is machine learning, so algorithms, but also data acquisition: you need data to build your product. Edge Impulse offers an easy way to do it from the Arduino and the Raspberry Pi Pico. So that is the thing I really enjoyed. In terms of projects in the book, probably chapter five and chapter seven; I'm going to say in a bit what they talk about. So chapter five is about indoor scene recognition. The aim of the chapter is to build a machine learning model for the Arduino Nano that is capable of recognizing indoor environments. In that case, Edge Impulse is not used; only TensorFlow and TensorFlow Lite for Microcontrollers are used for this kind of application. But the reason I enjoyed writing that chapter is because that chapter is not just about ML. Most of the time we think, well, the big problem is building the model, while on microcontrollers this is only part of the problem, because there is also the part related to, okay, the data acquisition, and how do I feed the data in the most efficient way to the model. For example, in that case there is a camera. So the camera will have its own resolution and color format; the model will have a different resolution, a different color format, and a different data type; and the memory is limited. So there are in-the-middle operations, like, I don't know, resize, color conversion, cropping, and other things, that must be executed in the most efficient way. So I enjoyed showing how to do it while, let's say, keeping in mind the memory constraints.
Chapter seven is the other project I enjoyed, because that one is on a virtual platform. The goal was to implement CIFAR-10 on a device which is roughly ten years old, more or less, because the goal was to demonstrate that TinyML could actually run on devices with important memory constraints, like the one I use in the book. In particular, in that case CIFAR-10 is running on a Cortex-M3 with only 64 kilobytes of RAM. So, very TinyML. It was fun to try to squeeze the model to execute CIFAR-10 on that device.
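The "in-the-middle" camera-to-model operations Gian describes can be sketched in a few lines. This is a hedged illustration, not code from the TinyML Cookbook: the RGB565 camera format, the 160x120 frame, the 64x64 model input, and the int8 quantization parameters are all assumptions chosen to mirror the memory-constrained scenario he outlines.

```python
import numpy as np

def rgb565_to_rgb888(frame565):
    """Expand a 16-bit RGB565 camera frame into 8-bit-per-channel RGB."""
    r = (((frame565 >> 11) & 0x1F).astype(np.uint8)) << 3
    g = (((frame565 >> 5) & 0x3F).astype(np.uint8)) << 2
    b = ((frame565 & 0x1F).astype(np.uint8)) << 3
    return np.stack([r, g, b], axis=-1)

def center_crop(img):
    """Crop the largest centered square out of the camera frame."""
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    return img[top:top + side, left:left + side]

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize: cheap enough for a microcontroller."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def quantize_int8(img_u8, scale=1.0 / 255.0, zero_point=-128):
    """Map uint8 pixels to the int8 input range a quantized model expects."""
    real = img_u8.astype(np.float32) / 255.0      # normalize to [0, 1]
    q = np.round(real / scale) + zero_point       # affine quantization
    return np.clip(q, -128, 127).astype(np.int8)

# Example: a 160x120 RGB565 frame reduced to a 64x64 int8 model input.
frame = np.zeros((120, 160), dtype=np.uint16)     # stand-in camera frame
model_input = quantize_int8(
    resize_nearest(center_crop(rgb565_to_rgb888(frame)), 64, 64))
```

On a real microcontroller these steps would be fused and done in-place to avoid holding two copies of the image in RAM, which is exactly the memory concern raised above.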

Justin Grammens  16:31  

I see. Those are great, great examples. And yeah, it's all around the constraints of the devices: not only power, but, as you said, limited CPU, I guess, if you're running on a Cortex-M, because that's basically the lightest processor Arm makes, correct?

Gian Marco  16:47  

Yes, I mean, "constraints" is an important keyword here: constraints in terms of computational capabilities, constraints in terms of memory, and constraints in terms of power consumption. So this is TinyML: you need to know your device at its best. But at the same time, I'd say the device also needs to know machine learning, if we want to build hardware in the most efficient way for machine learning. And that is the future: hardware, of course, will look more closely at ML to develop solutions that are friendly, from a power-consumption point of view, to these kinds of workloads as well. But yes, that is also the work I do at Arm. I work on performance optimization, and every time I say "performance" you may think, well, performance is speed of computation. Well, that is part of the problem. Performance is a lot of things: it's also power consumption, it's also memory. We want to execute in the most efficient way in terms of latency, memory, and power consumption.

Justin Grammens  17:50  

The term TinyML is still kind of new, right? It's kind of in its infancy. I went to the TinyML Summit, and I think it's only maybe in its fourth or fifth year or so, right? It's still a new term that's evolving and morphing in the industry. Is that correct?

Gian Marco  18:03  

The TinyML word, yes, is pretty new, and the TinyML Foundation, actually, if people are not in the Meetup group, I recommend joining us, because it is a free Meetup group where they can learn more about the TinyML initiatives. Actually, the TinyML word is pretty new, but ML on these devices, well, already outside uni I heard about it. The thing is, nowadays it is easy to do for everyone, and that is the big difference from, let's say, five or six years ago. Because if we look around us, I'm pretty sure, I mean, on a smartphone we have activity recognition, or handwriting recognition as well. So there are already applications powered by TinyML, but before it was difficult to do, or you had to be an expert. Now we have the tools, the software, and it's much easier.

Justin Grammens  18:59  

Yeah, I guess the go-to example that I always use is just to consider wake words, you know, whether it be "Okay Google" or "Hey Siri" or whatever it is. Those were on cell phones, and there's essentially a special chip that listens for that word, and it needs to be very, very low power, right?

Gian Marco  19:15  

As you said, yes, that is an important use case, actually. When we talk about TinyML, we usually refer to this use case, keyword spotting. It's like a new kind of interrupt for the hardware, right? So it is something you want to run all the time, but that only listens for this special keyword, and, as you said, you need to run it on a processor that is efficient from a power-consumption perspective, because it runs continuously in the background. So that is an important use case. Although we are talking about a use case on a smartphone, TinyML, as you can see, exists also on smartphones, because it can be the part that triggers more power-hungry applications.
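The "new kind of interrupt" pattern described here can be sketched as an always-on loop. This is a minimal, hedged sketch: the energy-ratio "detector" is a made-up stand-in for the real quantized neural network a keyword spotter would run, and the 16 kHz sample rate, 512-sample frame, and threshold are assumptions.

```python
import numpy as np

FRAME_LEN = 512          # samples per analysis frame (assumption)
SAMPLE_RATE = 16000.0    # Hz (assumption)
THRESHOLD = 0.6          # detector score needed to wake the host

def detector_score(frame):
    """Stand-in for a tiny always-on model: fraction of spectral energy
    in a crude speech-ish band. A real keyword spotter would run a
    quantized neural network over audio features instead."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    band = spectrum[4:40]                       # ~125 Hz to ~1.25 kHz here
    return float(band.sum() / (spectrum.sum() + 1e-9))

def always_on_loop(audio_stream):
    """Consume fixed-size frames; yield a wake event when the score trips.
    On hardware this loop would live on a low-power core or DSP, and the
    wake event would hand off to the more power-hungry application."""
    for frame in audio_stream:
        if detector_score(frame) >= THRESHOLD:
            yield "wake"

# Example: silence stays quiet; a 440 Hz tone trips the detector.
t = np.arange(FRAME_LEN) / SAMPLE_RATE
stream = [np.zeros(FRAME_LEN), np.sin(2 * np.pi * 440.0 * t)]
events = list(always_on_loop(stream))           # ["wake"] for the tone only
```

The point of the structure, as in the conversation, is that the cheap detector runs continuously while everything downstream stays asleep until the "interrupt" fires.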

Justin Grammens  20:01  

Yeah. And I think there's another thing that I've kind of been watching and reading: the other advantage to TinyML is there's no connectivity needed to, maybe, a cloud service, for example. Inherently, maybe that would provide additional security, right? You're not actually having to ship data. Do you see some of those benefits as well, if something can be autonomous and just running on its own, figuring things out without actually having to be connected to the cloud or sending any data?

Gian Marco  20:26  

Yes. When I talk to my students, for instance, about TinyML, I usually start from the "why" of TinyML, and not from the "what" of TinyML. In terms of why TinyML: we want to bring intelligence to the objects around us, but with a focus on power consumption, as we mentioned earlier, and a focus also on data privacy and cost. On power consumption, we talk a lot about sustainability, so this is an important aspect to consider. Also data privacy, which means that we want to unlock smart applications without necessarily using an internet connection or sending data to the cloud. There are different reasons, but data privacy is one of these for sure. And I can give you an example. I mean, if we talk about a toy, a smart toy to learn vocabulary, to learn new words, well, actually I would probably prefer a toy that doesn't require an internet connection, just to avoid the question of where the data will be sent in the cloud. I'd probably prefer a toy that does not require any internet connection, and once you set it up, it works without it. The cost is the other thing: I mean, we want something that we can build with a small budget, and that is the reason we talk a lot about microcontrollers, for instance, because they are everywhere, inexpensive, and really popular. So when we talk about TinyML, we definitely talk, at the moment, particularly about microcontrollers. But it's not just microcontrollers; the microcontroller is just an important target platform for TinyML.

Justin Grammens  22:06  

Absolutely. I was just taking a look more at the TinyML group that you founded, or co-founded, I guess, in the UK, and you guys have events happening pretty much every week.

Gian Marco  22:18  

Well, we try to have Meetup events, let's say, every month or two. We try to engage with people every week, or we try at least, but yes, the bigger activities happen every month or every two months. We are also doing activities in person in Cambridge, which is something new, because when I founded the TinyML UK group, well, it was in the middle of the pandemic, so we couldn't actually do anything in person, and now it's much better. Also, there is a Raspberry Pi store here in Cambridge, so we could organize events in the store as well. It was pretty fun, and it is pretty good now.

Justin Grammens  23:00  

I'll be sure to put a link off to the group. But yeah, it looks like there's a lot of interesting talks going on, some online events. I actually attended, I think, one of them a couple of weeks ago. And I think I found out about this group from being at the TinyML Summit. I think one of the co-organizers, Olga, was a name that I recognized, and so I kind of followed her on Meetup, and I'm essentially one of your four hundred and one or two members, I guess, on this public group today. So that's great.

Gian Marco  23:25  

Well, definitely we need Olga. Without her we couldn't do anything, actually, without her support. So on our side we try to propose interesting presentations, not just theoretical presentations but also hands-on sessions, for example. But yes, without her it would all be really, really difficult, honestly.

Justin Grammens  23:44  

For sure. Well, where do you see the word TinyML going in the future? I mean, I'm a guy who kind of cut my teeth on the Internet of Things. I've been doing IoT for probably more than a decade now, although it wasn't called IoT back then, right? It wasn't really until maybe 2014 or so that the term IoT came into existence and everybody was using it everywhere. But I always sort of thought that it was kind of gonna fade away, because everything is going to be connected to the internet; there's not going to be anything special about it. How do you feel about the word TinyML? Do you think that's gonna eventually fade away, because we're just going to expect these things are going to be smart?

Gian Marco  24:17  

That's an interesting question. So I need to say that probably, in this field, I don't see a single killer application. Because when we talk about a machine learning problem, we think about it: what is the killer application? Well, for TinyML, probably, there isn't one, but there are different scenarios where we can definitely bring intelligence to these devices. And I like to quote, actually, Arlen Hollister from Raspberry Pi; I had a session with him three weeks ago, actually. He would say: well, I expect TinyML, in the shorter or longer term, to be the layer on top of the sensors to bring the smartness in the least intrusive way. And that is what I expect also in the future: I expect TinyML to be the default way to bring intelligence to the objects around us, and for engineers to develop smart applications that don't necessarily require an internet connection. That is also what I expect: to see the objects around us be smart, or, let's say, to make AI ubiquitous and to use it for good. And this is my hope: to make a definitely positive contribution to the sustainable development goals of the United Nations. That is what I would like to see.

Justin Grammens  25:36  

Awesome. Those are lofty goals, but I think it's totally, totally possible. And again, technology can be used for good or for bad in any sort of way, but I'm always a fan of using it for good. There's a question I do like to ask people, and maybe it's a little bit off script: how do you see these sort of smart objects maybe changing the way that we work? And maybe it's not even work; maybe it's just how we recreate or how we go about our lives. What are some examples or things that you could hope for? You mentioned the United Nations, and probably conserving energy, or whatever it would be under those goals. Do you have any other thoughts around how you see this changing in the next decade?

Gian Marco  26:12  

I hope that TinyML, for instance, can help to have more sustainable farming. For example, I had a chat with people in Africa, where the use cases are simple but pretty challenging from different points of view. Because, well, an internet connection is something that probably is not there and is not really easy to access, let's say, but at the same time you want to have smart farming to improve the efficiency of all your farming and avoid wasting resources at the same time. So I hope this technology actually can land there. Another challenge I heard about from them was the server, because we talked about this: when you have an IoT infrastructure, definitely you need a server. But a server could be your smartphone, actually; smartphones now are really, really powerful. So I really hope that we can unlock something like that in this environment, where TinyML can help to bring the smartness to these devices, improve efficiency, and maybe the smartphone can do the extra bit to actually make farming more autonomous and more efficient.

Justin Grammens  27:29  

Awesome. Yeah. And just think of the number of smartphones that are essentially thrown away each year, or that go into the recycle bin, that are actually still very, very capable devices, right? People get a new one, and they feel like they have to have the latest and greatest, when actually there's a lot of hardware still left in these devices that could run for many, many years.

Gian Marco  27:49  

Yeah, the other thing I probably like is that TinyML doesn't cost a fortune. So if I consider the microcontroller world, with a few dollars I can have a computer in my hand, which, well, is not that powerful, but with which you can learn a lot of things. So probably TinyML, with its microcontroller base, and these devices can also help people learn more about electronic engineering in regions where, for instance, expensive devices are not really easy to get. That is the other reason I like TinyML: because with microcontrollers we can learn, at the same time, very interesting stuff about electronics as well, with a small budget.

Justin Grammens  28:36  

Yeah, for sure. It's kind of accessible to everybody; you don't need a lot of money in order to actually make this happen. I feel that way with IoT in a lot of ways: the fact that the Raspberry Pi Zero came out, and it was like $5 or $10, makes it pretty accessible to just about any person in the world, potentially, to be able to just start playing with things, right? A lot of stuff just comes out of experimentation, just allowing people that are younger, just coming out of school, to get their hands on some of these things. That kind of leads to my next question, I guess: if somebody is coming out of school, and I do like to ask this of some of the people that are guests on here, how would you advise them? Looking back on your career, as someone's working through the classes that they take, and obviously we'll talk a little bit more about your book here at the end, how do people immerse themselves or get themselves into this space if they're interested in TinyML?

Gian Marco  29:24  

Okay. So, be curious, for sure. That is the key here: be curious, not just about machine learning but also about embedded programming. It is a field where there is an intersection between these worlds. So the question is: are you interested in developing TinyML applications? Then you should definitely be curious about machine learning, but also about embedded programming. And here my recommendation is that it is not just about how to make it work, but also about why and how it works, why it works in that way. Because if we always ask ourselves why, probably we can have a better understanding of how this whole complex world works. Definitely you can follow the recipes in the book to develop your TinyML application, but the "why" question should always be in our head, just to understand better and unlock other use cases that are probably not reported in the book.

Justin Grammens  30:26  

I love it. Yeah. And oftentimes, what I'll have students do, or even myself, is maybe I'll work through an exercise, but then I'll say, well, what happens if I tweak this? Right? So those exercises can be a starting point, but then the world is infinite. You can start, you know, pulling things in: what about if I pull data in from here? What about if I change this parameter in my model? You can just start tweaking things and then start to learn, ah, this is what happens when I do this. And then bring it back, as you said, to the why. We shouldn't just be building technology for the sake of building technology; it's, like, why are we doing what we're doing?

Gian Marco  31:00  

Actually, in a way the book is a question-based book, because I have questions in the book as well. And some of those questions have actually been asked to me directly by students at university. One of those questions was: well, why do we have the program memory and the data memory on-chip for microcontrollers? That was an interesting question, and the answer was: because we want to make a power-efficient device. That is the reason. Basically, everything which is outside the chip is going to cost us power. So if we put everything on-chip, it is definitely more power efficient. But of course, it is not infinite; the silicon that we have for building our chips is going to be small for sure.
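The power trade-off Gian describes can be made concrete with a rough back-of-the-envelope sketch. The per-access energy figures below are illustrative assumptions only (order-of-magnitude values, not measurements for any particular microcontroller or memory part):

```python
# Sketch of the energy argument for keeping memory on-chip.
# The per-access energies are ASSUMED order-of-magnitude figures,
# not data-sheet values for any real device.
ENERGY_PJ_PER_ACCESS = {
    "on_chip_sram": 10,      # assumed ~10 pJ per 32-bit access
    "off_chip_dram": 1000,   # assumed ~1000 pJ per 32-bit access
}

def memory_energy_pj(n_accesses: int, memory: str) -> int:
    """Estimated energy (picojoules) spent on memory accesses alone."""
    return n_accesses * ENERGY_PJ_PER_ACCESS[memory]

# A small model performing 100,000 weight/activation reads per inference:
on_chip = memory_energy_pj(100_000, "on_chip_sram")
off_chip = memory_energy_pj(100_000, "off_chip_dram")
print(f"on-chip:  {on_chip / 1e6:.1f} uJ per inference")   # 1.0 uJ
print(f"off-chip: {off_chip / 1e6:.1f} uJ per inference")  # 100.0 uJ
```

Even with these rough numbers, going off-chip costs roughly two orders of magnitude more energy per access, which is why fitting a model's weights and working buffers into on-chip memory matters so much on battery-powered devices, and why the limited silicon area (and hence limited on-chip memory) becomes the binding constraint.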

Justin Grammens  31:44  

Interesting. Yeah. There are trade-offs, right? Whenever you answer why, it's "because of this, but you're also likely sacrificing that"; you need to make a decision. When it comes to actually writing a book, you know, that is something that I shared with you, and that I've shared with my listeners too: it's kind of a bucket-list item for me. I really would like to have the opportunity to spend the time to write a book. And so while I have you on the show, because you're one of the few people I've talked with who is a published author, could you walk me, and us, through the process? You had a reason why you wanted to write this book, but there's a long path between having an idea and a reason to write a book and actually publishing it. So what were some of the highlights along the way? Maybe there were some lowlights. But, you know, what was the sort of path for you to get from A to B?

Gian Marco  32:30  

Quite a lot of lowlights.

Justin Grammens  32:32  

That's what I've heard! I told you.

Gian Marco  32:34  

There was the motivation, yes. But definitely, from the motivation to the end of the book, it was an endless journey; endless nights, I'd say. For me, it was my first book, and Packt, the publisher, offered me the opportunity to write it. What I learned, definitely, is that you can have the best story in your head, but you need to have a plan to deliver it, and you need to be precise with the schedule. That is one thing. Having a good outline at the beginning can help to shape the story, and that helped me. However, the outline doesn't solve everything, because I believe the most challenging months for me were the first three months, where I had to write the first three chapters, and I didn't yet know the tone to use or how I wanted to shape the recipes presented in the book. I don't know how many times I changed my ideas. But I can tell you that after three months of discussing with the publisher (and the publisher definitely helped me), the key here is that talking to other people can help. Because it's true that you are alone during the writing phase, but people can help you to spot something that you cannot see, because you just want to write the book, and other people, who probably have a more open mind in that moment, can help you say: well, if you change the tone here, or if you phrase this in a different way, you can explain a concept more clearly. Definitely there were other challenges, because I had to write the book during the night, and lost nights, as I said. But in the end it was an interesting journey that I really enjoyed, honestly. And if you ask me why I enjoyed it: because I learned quite a lot from the TinyML community, not just from what I had to learn myself, but also from other smart TinyML people around the world. So I need to say thanks to them.

Justin Grammens  34:52  

Yeah, it takes a community; it takes a village to sort of build your knowledge base. And it sounds like you're a lifelong learner. I guess, if I could sum it up, you really enjoy the whole process of learning, and through this book, you learned a lot.

Gian Marco  35:04  

So, yes, definitely, I learned a lot. I learned about new tools that I knew of, for instance, but didn't have the time to play with before the book; the book actually gave me the opportunity to experiment more with this stuff. For instance, Edge Impulse: I knew about the tool, but this was a chance to learn more about it, and also to share feedback with them. It was a very interesting learning journey.

Justin Grammens  35:30  

That's one of the things that I've found: you know, if I have to teach a class, or I have to do a presentation on something, it really forces you to kind of master the subject in a lot of ways. What was the timeline for you to write the entire book? Like, how many months do you think it took you?

Gian Marco  35:46  

Okay, so writing the book took me from the end of May, let's say the beginning of June, to the end of December; that was the timescale. And if I, let's say, managed to complete it almost on time (because actually I was already out of schedule), it was because other people helped me too. So it was a journey where other people helped me to go in a specific direction, to see the themes in a different way. And I think this helps a lot when you write a book; sharing your concerns with other people is quite important.

Justin Grammens  36:22  

Yeah. Who did the review of the book? You said you got feedback and input. Were these technology people? Were these people that have done TinyML? Or would you just give it to your grandparents, who don't know anything about technology? Who did you sort of pull in to do some of the reviews of the drafts?

Gian Marco  36:39  

Okay, so for the technical stuff, for sure, there was Alessandro Grande from Edge Impulse, who helped me a lot, giving me suggestions; he reviewed some of the chapters. But also my wife, not from a technical perspective, but because I wanted to write a book that was understandable. So I was looking for someone with no technical background, and she was perfect, because she knew a bit of machine learning, but she wasn't aware of TinyML. She actually gave me some input, saying, well, I'm not sure this is correct, or this could be explained more clearly. So those are the two people, for sure, that helped me.

Justin Grammens  37:23  

That's great. That is great: you get, yeah, sort of two diametrically opposed perspectives, so you're going to be able to pull the best from both worlds. So I think that's great. That's great, Gian; it's a great story. Before we sort of close out here, I guess, what are some of the easiest ways for people to get a hold of you? Do they just find you on LinkedIn or Twitter? What's the best way to reach you today?

Gian Marco  37:44  

LinkedIn is the best way; LinkedIn is definitely the best place. I also use Twitter, but I'd recommend you add me on LinkedIn, definitely.

Justin Grammens  37:52  

Okay, for sure. Yeah. Well, we'll put links to your LinkedIn in the show notes as well. Were there any other topics or things that you wanted to share during the conversation that I maybe overlooked?

Gian Marco  38:04  

Well, I think I would share with you some links that may be beneficial for the audience of this podcast, some resources as well, in case someone is interested in learning more about TinyML.

Justin Grammens  38:14  

Yeah, sounds good. And I do know, I mean, it is a nonprofit, right? It's sort of an open community, I guess, right?

Gian Marco  38:25  

Yes, the TinyML Foundation is a nonprofit organization with the aim of sharing knowledge about TinyML worldwide. In different locations worldwide there are TinyML meetup groups: there is one in the UK, one in the US, and there are meetup groups in Italy. And definitely, if there is no meetup group in your country, you can ask the TinyML Foundation to create one, and maybe you can be one of the founders in that country, and you can start promoting activities. So it's an interesting place to be right now, because you can learn not just how to do things, but also about cutting-edge technologies and other research areas that are expanding and evolving in TinyML.

Justin Grammens  39:12  

Yeah, perfect. Perfect. Sounds good. And in fact, yeah, I highly encourage people to take a look. I mean, the Applied AI community is really about applications of artificial intelligence, and TinyML hits that nail right on the head, because you're applying machine learning to a real-world use case, and it's actually something you can physically sort of feel and touch, I feel. So I love what the TinyML community is doing. I'll likely be attending a lot of these meetings and reaching out to people that are speaking at these events, and then I'll be having more and more TinyML thought leaders, and people that are doing interesting stuff in the TinyML community, both at our Applied AI meetups that happen the first Thursday of the month, but also on these podcasts that we publish every week. So I think it's awesome. Thank you, Gian, for the work that you do, both in creating this book, because I know it was probably a labor of love, a lot of blood, sweat, and tears, and a lot of time going into this, but in the end, you should be very proud of what you have created here. I purchased the book; like I said, I love it, I think it's great, and I plan to actually pull some of these examples into the class that I'm teaching. And I'll recommend everyone go out and pick up a copy. But yeah, I just want to say thank you for all you're doing for the community. I think it's great, and we have some really exciting times ahead for the TinyML community.

Gian Marco  40:21  

Thanks a lot, Justin. And I need to say the same thing about your community, because I was actually listening to some of the podcasts, and they were really, really interesting, so keep it up.

Justin Grammens  40:32  

Great. Thank you. We'll be in touch. We'll talk again soon. Thanks.

Gian Marco  40:35  

For sure. Thanks so much for inviting me.

AI Announcer  40:39  

You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at applied to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at applied if you are interested in participating in a future episode. Thank you for listening.