Conversations on Applied AI

Erik Beall - Computer Vision and Learned Expectations

May 17, 2022 Justin Grammens Season 2 Episode 12

The conversation this week is with Erik Beall. Erik is an imaging AI scientist and product developer. He is the CEO of Thermal Diagnostics, where he has developed Fever Inspect, a thermal imaging based product that helps keep us all safe by providing accurate, reliable, and foolproof fever detection. He's also a computer vision scientist at Digilabs, where he is using AI for various applications for autonomous ground robots with Rover Robotics and other Digilabs companies. Erik holds a BA in physics and chemistry from St. Olaf College, a degree in business administration and management from the University of St. Thomas, and a PhD in physics from the University of Minnesota.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future events for the Emerging Technologies North non-profit!

Resources and Topics Mentioned in this Episode

- Linux From Scratch
- RISC-V
- MXNet and PyTorch
- The Mythical Man-Month by Fred Brooks
- Zero to One by Peter Thiel
- Design Patterns by the Gang of Four

Enjoy!

Your host,
Justin Grammens

Erik Beall  0:00  

The expectations for AI have shifted wildly back and forth. With the AI winters, the expectation is that we're going to have robots that can do anything, that human-level intelligence is right around the corner, and then nothing changes, and then the expectation flips to we're not going to invest in it, it's never going to happen. And this is the closest we've come to generalizing the human ability for learned expectation. That's what we're doing here. And if you think about what toddlers have to do to get learned expectation, to be able to climb up on the sofa and walk and not fall, it takes about two years, and you're still pretty nervous about them doing it. That something is data, and it's highly specific to your environment. And the other thing is, they're building multiple models in their head, and we don't have the ability to build these multiple models. There are people working on spatial perception where they're trying to tie these things together, but it is very challenging.


AI Announcer  0:55  

Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.


Justin Grammens  1:26  

Welcome everyone to the Conversations on Applied AI podcast. Today we're talking with Erik Beall. Erik is an imaging AI scientist and product developer. He is the CEO of Thermal Diagnostics, where he has developed Fever Inspect, a thermal imaging based product that helps keep us all safe by providing accurate, reliable, and foolproof fever detection. He's also a computer vision scientist at Digilabs, where he is using AI for various applications for autonomous ground robots with Rover Robotics and other Digilabs companies. Erik holds a BA in physics and chemistry from St. Olaf College, a degree in business administration and management from the University of St. Thomas, and a PhD in physics from the University of Minnesota. Thank you for being on the program today, Erik.


Erik Beall  2:07  

Thank you, Justin, for having me. I really appreciate this, and I've enjoyed the Applied AI meetups that I've made it to over the last few years.


Justin Grammens  2:14  

Well, I'm glad you're willing to be on the program today and share all of the experience that you bring to the table here. You know, I highlighted a couple of things about where you're at today, but I know you've got a long history of working in this whole space of computer vision and AI, with your degree in physics and chemistry and everything. So maybe walk us through it a little bit; help us connect the dots in terms of how you got to where you are today.


Erik Beall  2:37  

How much time do you have? I had a really very fortunate series of introductions to technology over the years. When I was in graduate school, I worked on a particle physics neutrino oscillation experiment. We were firing neutrinos from Fermilab near Chicago to a mine in northern Minnesota. I ran the slow controls, the detector control system, because nobody else wanted to, and I was just a grad student; they just give you a bunch of code and tell you to swim. They had a base architecture idea, a sketch of all the components we were supposed to monitor: all 1,500 high-voltage controllers, the magnets, and everything. And I had just completed this thing called Linux From Scratch, where you build up your own Linux operating system, back in 2000. I highly recommend anybody who's in this field go through Linux From Scratch; you won't regret it. You'll learn so much about the deeper workings of how an operating system gets started up. And I had to build all these different subsystems in C, C++, and Python, even Visual Basic for a SCADA interface to the magnet controllers, piping all of this into a raw MySQL database. All of this was compiled from scratch on the various servers, because there was no guidance, and it was serving web detector control status so the people in the control room could see it, all run by cron jobs, a little pre Web 2.0. Now you can pull in services that will do all of this for you, but at the time we would put stuff in our CVS repository, and it was not the most maintainable code, I would have to say. Learning how to code well is a skill that I really wish I'd learned earlier on. But that's how I got my introduction. And then I went and did neuroimaging for 10 years at the Cleveland Clinic, where I had a great mentor, Mark Lowe, who taught me methods development. Methods development is not talked about much, but it's the area of science in a field where you're trying to figure out: the way we're doing this, how do we know what we know is true or not? How do you tell what you're seeing from a mirage? Because most of the time it's a mirage, and most of the time those mirages are self-generated. We start out putting together a study like, what do I want to believe? Well, that's probably what we're going to find in the end, by hook or by crook; your mind will just do that to you. So you have to be really careful with your methods. That was really very fortunate training; it forced me to think more carefully about what I was doing. But that transitioned into a taste of business development with head motion tracking, which was frustrating because I was trying to do it from within the clinic. That led me to look outside the clinic, and I started a Kickstarter while I was still at the Clinic. So my last two years there, I was working on developing a thermal camera system. Ultimately that funded just as we were moving back from Cleveland to Minnesota, and I spent about a year and a half with a small team, some in Cleveland and some here, working on developing a quad-core ARM processor board to run computer vision algorithms on the GPU of the system. And we ran out of funds, and we also lost two key components to reportedly million-dollar minimum order quantities. And I ended up doing consulting for Digilabs and various corporate development projects.
That's where I am today. I'm still most of the time working out of Digilabs, and they still support me, but I've also worked for Cognoa, an autism diagnosis company; Marcela, an engine manufacturer; and then Fluke's thermography division. And then my own thermal imaging company, making a fever detection device, which has all gone on the shelf.


Justin Grammens  6:04  

Interesting. Wow, sounds like you've gotten your hands on a lot of different stuff. It sounds like computer vision is sort of a big component of what you have done. Is that true? Is that your passion? I guess, is that the part of AI you really like to do?


Erik Beall  6:16  

Yes, I really am hooked on anything that's computer vision related. I like hardware, too, but I also like the software, the algorithms that run on it. So I have deep insight into both aspects, because you end up needing both; your image quality can affect the application. In many cases, that is; in some cases it doesn't matter, you can just use a webcam, depending on what you're doing. With the animal sizing project, the most recent project I've been working on, we're using a depth camera. The quality of the image is very important for the data, and understanding how to synchronize the color and depth images is essential. So the hardware experience was invaluable. But I would say my biggest interest is in computer vision and the more advanced algorithms, which is where we want to add neural networks.
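
[Editor's note: Synchronizing color and depth streams usually comes down to pairing frames by timestamp, since the two sensors rarely tick in lockstep. Below is a minimal sketch of nearest-timestamp pairing; the frame tuples and the 30 ms tolerance are illustrative assumptions, not taken from any particular camera SDK.]

```python
import bisect

def pair_frames(color_frames, depth_frames, max_gap_s=0.030):
    """Pair each color frame with the nearest-in-time depth frame.

    Each frame is a (timestamp_seconds, image) tuple, sorted by timestamp.
    Pairs further apart than max_gap_s are dropped rather than mismatched.
    """
    depth_ts = [t for t, _ in depth_frames]
    pairs = []
    for t_color, color_img in color_frames:
        i = bisect.bisect_left(depth_ts, t_color)
        # Candidates: the depth frames just before and just after t_color.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(depth_frames)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(depth_ts[k] - t_color))
        if abs(depth_ts[j] - t_color) <= max_gap_s:
            pairs.append((color_img, depth_frames[j][1]))
    return pairs
```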


Justin Grammens  7:00  

You mentioned animal sizing. Are you trying to take a 2D image and figure out the three-dimensionality of it, if that's even a word, three-dimensionality?


Erik Beall  7:09  

Yeah, well, for Rover Robotics, one of the things I worked on was a monocular depth system, where you've got your stream coming in and you're converting it to a depth map. In 2018, it was really hard to do monocular depth, and I personally don't trust monocular depth enough; I think it's pretty dangerous. In 2019 we were at CVPR, and there was a series of presentations on super-resolution, which is related to monocular depth. Seven of the eight papers presented said, as we looked deeper and deeper, we realized we were just hallucinating features that weren't there before. And it's the same thing with monocular depth: it learns the training data really well. So we had one that seemed to work well outdoors; brought into the office, complete garbage. Retrained indoors, okay, it works. Take it to a part of the office it hasn't seen before, complete garbage. And how much data are you going to keep putting into it to figure out how it works? It's unnerving to think about your neural network that way. This is just a saying that I use a lot: it works until it doesn't, so be ready for when it doesn't. And that isn't comforting for, I'm sure, a lot of your clients as well.
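
[Editor's note: For readers who want to try monocular depth themselves, a small pretrained model such as MiDaS can be pulled from torch.hub. This is a publicly documented example, not the specific system discussed here. A minimal sketch:]

```python
import cv2
import torch

# MiDaS small model and its matching preprocessing, downloaded on first use.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transforms.small_transform(img))   # relative inverse depth
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()

# 'depth' is only relative, and, as discussed above, it can be badly wrong on
# scenes unlike the training data, so treat the output with suspicion.
```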


Justin Grammens  8:22  

So it sounds like it's very narrow; I guess it's kind of like a narrow AI application, this one here, and even any type of depth perception realm. Is that fair? Like, we were talking to Cargill, this was years ago, but they had a basic mound of grain, and they wanted to drive by, point something at it, and say, tell me how much is in there, right? Like, what's the volume of this thing? And right now, somebody needs to actually go up and survey it, get all the way around it. And so, you know, the thinking was that with some of the new camera technology, even the iPhone has allowed you to capture some of this data. And there were third-party software solutions that we were looking at using, so we didn't need to reinvent the wheel. But it seemed like at the end of the day, it was all in the training; you can't walk up to something it's completely never seen before, point at it, and say, this is what it is. There's some stuff going on with room sizing, right? You can walk into a room, I think, and take a picture, and it tries to map it out. But correct me if I'm wrong, because I don't know the space, you know, well enough with regards to sort of 3D imaging. Is that even possible today without having a whole lot of data around the specific thing you're looking at?


Erik Beall  9:30  

I would say three years ago, and even two years ago at CVPR, there were people still claiming that that was possible and believing it; I would say more than half the people believed it. But today I think a lot of people know we've got to be really careful, because we've been burned, and we know that as soon as you get outside of the training area, there's the fancy term for it, distribution shift. Distribution shift happens all the time, and statistical analyses based on some validation data you've collected never work anywhere near as well as a few weeks of experience in the real-world application. When you're deploying your application in the real world, you see really how it fails and how often it fails. And even then, you know that there will be scenarios like, okay, now it's winter and everything looks different, or the weather is a little bit different, and now nothing works.
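
[Editor's note: A cheap way to notice distribution shift in deployment is to compare summary statistics of incoming inputs, or of a model's embeddings, against a reference sample saved from training. This sketch uses per-feature z-scores of the batch mean; the alarm threshold of 6 is an arbitrary illustration, not a tuned value.]

```python
import numpy as np

def drift_score(reference, batch):
    """Per-feature z-scores of the live batch mean against training stats.

    reference, batch: arrays of shape (n_samples, n_features), for example
    model embeddings. Large scores suggest the inputs have drifted.
    """
    mu = reference.mean(axis=0)
    sigma = reference.std(axis=0) + 1e-8
    se = sigma / np.sqrt(len(batch))     # standard error of the batch mean
    return np.abs(batch.mean(axis=0) - mu) / se

ref = np.random.randn(10000, 64)         # stand-in for training embeddings
live = np.random.randn(256, 64) + 0.5    # a shifted deployment batch
if drift_score(ref, live).max() > 6.0:   # arbitrary alarm threshold
    print("possible distribution shift - investigate or retrain")
```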


Justin Grammens  10:20  

Yeah. So it feels like in this era of AI right now, machine learning, whatever you want to call it, the data is just not generalized enough. You know, you're kind of solving these little problems sort of one by one.


Erik Beall  10:32  

Yeah, well, I think the expectations for AI have shifted wildly back and forth. With the AI winters, the expectation is generally that we're going to have robots that can do anything, that human-level intelligence is right around the corner, and then nothing changes, and then the expectation flips to we're not going to invest in it, it's never going to happen. And this is the closest we've come to generalizing the human ability for learned expectation. That's what we're doing here. And if you think about what toddlers have to do to get learned expectation, to be able to not fall, if you want your toddler to be able to walk along, climb up on the sofa, and not fall, it's going to take about two years, and you're still pretty nervous about them doing it. That something, sure, is data, and it's highly specific to your environment, and the other thing is they're building multiple models in their head. And we don't have the ability to build these multiple models. There are people working on spatial perception where they're trying to tie these things together, but it is very challenging. And ultimately, I have a kind of contrarian view of AI with neural networks, more specifically, that goes deep into the nature of science. David Hume, the Scottish philosopher, came up with the whole black swan problem and the idea that you can't prove causality, which was kind of a shock to other people, and people enjoyed reading about that for a little while. It's true: you can't really prove that A causes B. All you know is that B follows A consistently; it's learned expectation. Nassim Nicholas Taleb talked about this in his book The Black Swan: the turkey wakes up, every day the sun comes up, so the sun's going to come up tomorrow, and this is true until it isn't, on Thanksgiving. A perfect little sizing-down of the problem. You're going to have unexpected stuff sneak up on you, which we see every few years in macroeconomics. Our assumptions were true until they weren't.


Justin Grammens  12:30  

Sure, sure. Well, you probably just touched on it as well, but one of the things I do like to ask people is, how do they define artificial intelligence?


Erik Beall  12:38  

What's key to my understanding of it is the intersection between computational complexity, which is how many steps it takes to compute an algorithm, where you're getting into polynomial versus exponential complexity; computational equivalence, which most people call Turing completeness, or computational power, or the expressiveness of a system; and provability, can you prove that something's true or not with logic. This is all balanced by the fact that all of the AI stuff, and the way our brains work, these are all just approximations to the problem. So they're not in the domain of logic, where you're trying to brute-force exact solutions. But they're all running on computers that are executing exact binary operations, and those are guaranteed to be exact. This generalization ability is in the domain of the provable or not: can you prove that this generalizes or not? And ultimately, I wanted to raise something that is rarely taught in computer science or anywhere else in the sciences, and I didn't understand it until fairly recently, and that's really our expectations of what reason is, reason and logic. If you have a collection of rules, like when you're thinking through something in your head, you think you're using reason, and some approximations. And when computers are operating, they're always using reason, or logical operations. Math is like applied logic, or where logic can go. And logical mathematical proofs are the basis of compilers and all computation. You write code in a certain way, and there's a proof for how that can be converted into a different representation, and then another, like LLVM and compilers: when they're converting stuff back and forth, they're all based on simple proofs. Given this input, I can transform it into something that will run on the CPU, and it will always be the same; it'll perform the same algorithm. As for the goal of math, by the end of the late 1800s they were making great strides, and the idea was, we're going to link everything together so that everything is provable, and we're going to prove everything. And in 1931, that was completely blown apart by Kurt Gödel. This Austrian mathematician had a really cool proof, which I had to read other people's explanations of six times to get. He used prime numbers to represent operations, and one of these massive numbers was a theorem, and that theorem basically said, this theorem is unprovable, which conflicted with the goals of the setup of the system. Basically, he proved that within any consistent system, there will be things you can't prove: you can't prove they're true or prove they're false. And this blew the mathematicians' minds, and it gradually became clear that there were more and more things in this domain of the unprovable. And it's clear to me that neural networks are well within the domain of the unprovable, no matter how you think of them, and their approximations are even further into it. You're just never going to have certainty. It's like with really complex things, say SHA-256 secure hashes, like Bitcoin: SHA-2 with 256 bits is the basis of Bitcoin. Can you generate the same hash by tweaking something slightly in a message, or by generating a completely different message and then hashing it? It's theoretically possible, but it's exponentially complex.
You're going to run out of atoms and energy in the universe before you get a measurable distance into that problem if you're trying to brute-force it. It's just so far into the unprovable. So you've got all these practically unprovable things; they may as well be unprovable. Yeah, you can form a logical argument that if you just kept running the algorithm and threw 10 universes at it, you'd get there, but it's never going to happen. So that's the practically unprovable, and then there's the proven unprovable, and there's a lot of this stuff. And so we have to really balance how we build these solutions for our customers, with the mindset that you have to wrap something around your algorithm, right? Have you run into that in your experience? Like, do a lot of customers expect more out of these neural networks, or just any algorithmic solution? Do they expect more than is realistic?
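
[Editor's note: The avalanche behavior behind this point is easy to demonstrate with Python's standard library: flipping one character of a message changes the SHA-256 digest almost completely, which is why finding a colliding message is a brute-force search over a 2^256 space.]

```python
import hashlib

a = hashlib.sha256(b"the sun will come up tomorrow").hexdigest()
b = hashlib.sha256(b"the sun will come up tomorroW").hexdigest()
print(a)
print(b)

# One flipped letter changes roughly half of the 256 output bits.
bits = lambda h: bin(int(h, 16))[2:].zfill(256)
differing = sum(x != y for x, y in zip(bits(a), bits(b)))
print(f"{differing} of 256 bits differ")
```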


Justin Grammens  16:58  

Oh, yeah, yeah. With any new technology. You mentioned the AI winters and stuff; we definitely have seen that collectively as an industry, I feel like, over the past couple of decades, where, you know, hey, this is the greatest thing since sliced bread and it's going to solve all the world's problems. I saw a lot of that with the Internet of Things, as I've been in that industry for the past decade, and then reality sets in. And it turns out that, you know, you're right about what a two-year-old can do, not only standing up and walking and what they've learned there, but like, hey, here's a bunch of pictures, find the one that looks like a face. It can be a very abstract drawing, and a computer cannot see it, but a kid's like, there it is, right there, that's the face. And so it's those types of things that are just so simple for a human to do that computers still have a huge struggle doing. And people that aren't in this space, that maybe don't understand how AI and machine learning work, they're like, well, why don't you just write another rule in there for it? And it's like, no, it doesn't work that way. We're not programming systematically, with a bunch of if-then-else statements in here; the computer needs to actually be as smart as an intelligent brain is, and we need to train it through lots of mathematical iterations. And you touched on something, I think, when you and I were sort of reconnecting, around the idea that there's a lot of stuff that's not provable inside these neural nets, right? And I think that scares a lot of people. When I have these conversations with people in the space, it's like, we don't know why it gets the result; it does get the result, and it's correct, but we can't explain every hop that happens through the neurons of this neural network. And, you know, I think you mentioned something to me like, do we have to? Maybe we don't need to, because it can perform well enough today. So should we care about what's going on internally?


Erik Beall  18:48  

We definitely should care. I think there are definitely ways to improve our learned expectation of the neural networks that we have through our experience, and that's what enables us to do stuff with them, where we're building on the shoulders of other people who toiled on this for decades. The finding that you could use backpropagation with a small fraction of the gradients was huge, like a model that actually works, that actually allows it to converge. And then three years ago there was this fast.ai paper saying that for the first few iterations you should actually let the full gradient through, and that'll let you converge faster in your training. But I don't know, everyone uses different tricks and techniques. We've basically learned how to train neural networks kind of the way that you train a toddler how to eat out of a spoon, right? It is messy. And it's powerful, though, because the end result is something we all want. We want to get that toddler to be able to eat.
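
[Editor's note: The transcript does not pin down the exact papers here, but one reading of "backpropagation with a small fraction of the gradients, with the full gradient for the first few iterations" is top-k gradient sparsification after a short full-gradient warmup. A minimal PyTorch sketch of that interpretation, on toy data:]

```python
import torch

def sparsify_gradients(model, keep_fraction=0.01):
    """Zero all but the largest-magnitude entries of each gradient tensor."""
    for p in model.parameters():
        if p.grad is None:
            continue
        g = p.grad.view(-1)                      # view shares storage
        k = max(1, int(keep_fraction * g.numel()))
        threshold = g.abs().topk(k).values.min()
        g[g.abs() < threshold] = 0.0

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(200):
    x = torch.randn(32, 128)                     # stand-in for a real batch
    y = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    if step >= 10:                               # warmup: full gradients first
        sparsify_gradients(model)
    optimizer.step()
```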


Justin Grammens  19:47  

Sure, sure. But it's some of the creative aspects that I think are very difficult for neural networks to pick up, the ones that are part of the outliers, right? And even this idea of injecting random data so it's not too perfect, you know, into the weights maybe. Have you seen that be applied?


Erik Beall  20:05  

Yes, I've done some methods work there: on triplet loss, which is also very interesting, and on injecting random noise at different layers of the network. I never have gotten anywhere with injecting random noise. I've read a few papers on it, and I think it's potentially a very powerful way forward; hopefully somebody will hit upon a way to really harness it, and maybe they already have and we'll see it in a few weeks. The pruning work is really interesting. Take, say, VGGNet: when it was released, it had something like 420 different connections you could prune, and that was 2015. I think 2014 was the year that people realized you could remove some weights, retrain it, and get pretty close to the same accuracy on the validation dataset, and get down to about 10% of the size and still have the same accuracy. And this group at NVIDIA did what's called oracle pruning. Okay, if you're going to do pruning, it's going to take a long time; you have to pass all of ImageNet through it. Let's say you've got a beast of a computer: it's going to take you a month to run all of your pruning experiments, and you're really just going to try a few pruning operations. You're not going to try what happens if you prune every one of the 420 different connections, and that's just VGG; there are bigger ones, certainly, with more connections. But they had access to thousands of GPUs at NVIDIA, and they did oracle pruning, and they found that it's hard to beat: you could start anywhere and still get roughly the same benefit. You randomize the weights and you pass data through it; you can envision it as traversing a space. Each weight moves somewhere, and the error surface changes as you move through this n-dimensional space, which is impossible to really think about, but it's fun to say space, because you can approximate it linearly: I can move left to right and forward and backward; that's easier to reason about. You can start with any random weights and train it, and you'll have a completely different result. If you measure a distance metric, like the L2 distance of all of your weights, across neural networks one, two, three through 1,000, they'll be all over the place, but they'll all be about as good, which is stunning. They tend to have shared features, but there's no guarantee; sometimes one will seem to have the ability to detect certain features at certain neurons and other ones won't, and they'll have about the same accuracy. Thinking of it as a collection of hierarchical rules is, we think, a useful way to think about it when you're training these, but that's about the best I think we can do right now.


Justin Grammens  22:45  

Yeah, yeah, for sure. And I was just going to say, for people who maybe aren't into pruning, do you have a short definition of what that means?


Erik Beall  22:52  

Pruning means removing network connections so you don't have to compute them; you don't have to run the multiply-add on that particular neuron, which can speed things up, especially on CPUs. On GPUs, it's a lot harder to speed things up with pruning, because the work is already parallelized. So honestly, today, I don't think many people bother using pruning, just because you don't get much of a speedup on GPUs. But if you need to use a CPU, pruning is very valuable. Basically, you might have a layer that has 32 channels; you could do away with a whole channel, or with specific filter weights on that channel, and then just avoid those computations. In general, people were able to get rid of between 70 and 90% of the connections and still end up with a trained network. The trick is that you train it, and then you start removing connections and retraining. There are a bunch of different ways to do it, and there could be more discovered, but I think people have moved on. I don't think anybody's really focusing a lot of effort on it, besides the people for whom that's their method, their area. Mostly it's quantization and compression of neural networks now, to try and speed them up on GPUs or accelerator chips.
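
[Editor's note: PyTorch ships a pruning utility that implements the channel-level idea described here. A minimal sketch, zeroing half of a conv layer's output channels by L2 norm; in practice you would retrain after each pruning round, as noted above.]

```python
import torch
import torch.nn.utils.prune as prune

conv = torch.nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)

# Structured pruning: zero the whole output channels (dim=0) with the
# smallest L2 norm, i.e. "do away with a whole channel" rather than
# removing individual weights.
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)
prune.remove(conv, "weight")   # fold the mask in once pruning is finished

zeroed = (conv.weight.abs().sum(dim=(1, 2, 3)) == 0).sum().item()
print(f"{zeroed} of 32 output channels zeroed")

# The multiply-adds are only truly skipped if you rebuild the layer without
# the zeroed channels; dense GPU kernels run regardless, which is the
# "not much speedup on GPUs" point made above.
```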


Justin Grammens  24:05  

Yeah, and you're doing a lot of stuff sort of at the edge, right, when you have these robots?


Erik Beall  24:10  

Yes. So say you're using something like an NVIDIA Nano that's got a GPU on it, 128 cores. The big challenge with the NVIDIA products, and I don't know about the Xavier, I haven't had experience with that yet, is that in all of the backends, say in PyTorch, you'll call .to('cuda'), and what that does is call the CUDA library to explicitly copy your network or your data over to your graphics card, which is normally a separate card with its own memory. Well, not on the NVIDIA Nano or the TX1 and TX2: they have shared memory. But for some crazy reason, the NVIDIA engineers making these backend changes never wired in the primitive that allows you to just allocate in place and not actually do the copy. So you end up wasting half your memory on those particular platforms. I don't know if that's actually changed; I should look around more carefully. But I was doing some segmentation work on an NVIDIA Nano a month ago, and it was using an insane amount of memory, and I said, let's hold off on this until later. You can do some of these tasks on the CPU, if the CPU is fast enough and you spend enough time figuring out how to do it. Like the Raspberry Pi 4: I have an almost real-time object detector running on 80 by 80 images. It's admittedly smaller, but it's just a standard MobileNetV3; I didn't even modify the network at all. This was just a quick get-it-working. What I had to do to get it working was recompile OpenBLAS. I use MXNet, which nobody else uses anymore. MXNet was great; it was the first one that was designed properly, and then PyTorch basically copied it. Now nobody uses MXNet, but it's still faster than PyTorch.


Justin Grammens  25:52  

That's funny, I was actually just looking at that recently, because I had somebody on the program and they were talking the same way. They were saying, oh, look, MXNet is awesome. And I was like, whatever happened to that? It's still sitting out there as an Apache open source project, right?


Erik Beall  26:04  

Amazon did not support it very well, and internal Amazon engineers don't even use it, which is really unfortunate. I thought PyTorch 2.0 was going to redo the just-in-time compiler better, and they didn't, so it's still a little bit slower than MXNet. Anyway, by recompiling OpenBLAS with the right NEON flags, you can do up to eight multiply-adds at the same time on the CPU; you can accelerate it on a single core of your quad-core CPU. I was able to get this object detector to run in 180 milliseconds. So I task-locked one process to CPUs zero and one, a second to CPUs two and three, and then I would just send them messages. The camera is running at 25 frames per second, so I'm sending them an image every 50 milliseconds or so, and amortized, they get them just fast enough. I might drop a frame here and there, but it's practically real time.
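
[Editor's note: The task-locking setup described here can be reproduced with the Python standard library on Linux: pin each detector process to its own cores and feed it frames through a bounded queue, dropping frames when the workers fall behind. The detect() stub and the timings are placeholders matching the numbers mentioned, not the actual production code.]

```python
import os
import queue
import time
import multiprocessing as mp

def detect(frame):
    """Placeholder for a real detector that takes roughly 180 ms per image."""
    time.sleep(0.18)
    return f"result for frame {frame}"

def detector_worker(cores, frames):
    os.sched_setaffinity(0, cores)   # task-lock this process to given cores
    while True:
        print(detect(frames.get()))

if __name__ == "__main__":
    q = mp.Queue(maxsize=2)
    # Two workers, each locked to two cores of a quad-core CPU.
    for cores in ({0, 1}, {2, 3}):
        mp.Process(target=detector_worker, args=(cores, q), daemon=True).start()
    for frame in range(250):         # stand-in for a 25 fps capture loop
        try:
            q.put_nowait(frame)      # drop frames if the workers fall behind
        except queue.Full:
            pass
        time.sleep(0.04)             # 25 fps is one frame every 40 ms
```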


Justin Grammens  27:04  

That's great. You said this is running on a Pi 4?


Erik Beall  27:06  

On a Raspberry Pi 4, yes, which are now becoming available again.


Justin Grammens  27:11  

Supply chain issues, I guess, all that stuff.


Erik Beall  27:13  

Yeah. So Nanos are going to be hard to get for several more months, because they're on, I think, a 14-nanometer node. The 28-nanometer node is already oversupplied, which is what the Raspberry Pi 4 is on. So prepare for lots of $35 Raspberry Pi 4s again. For the Nano, it's going to be a while before they come out.


Justin Grammens  27:32  

Yeah, well, you got me thinking. So, you know, when you're running processes at the edge and you have the neural network already trained, you can easily get by with any old generic flavor of CPU in a lot of these cases. Is that fair to say?


Erik Beall  27:45  

Yes, if you invest a little bit of engineering time into the speed side to make sure it runs fast. Because on the Nano and the other NVIDIA platforms, the time it takes to transfer over the memory bandwidth eats up a lot of your gains. I was getting object detectors running in 50 milliseconds there; on the Raspberry Pi, I'm not that far off.


Justin Grammens  28:04  

Right. Right.


Erik Beall  28:05  

I do want to see more of that.


Justin Grammens  28:07  

Well, I also think about things that run on battery power. I mean, I would assume that you can run stuff in a much lower-power mode on a CPU versus on a GPU or anything that NVIDIA is putting out, because they're pretty much built for dedicated power in a lot of ways.


Erik Beall  28:21  

Do you follow RISC-V?


Justin Grammens  28:22  

No,


Erik Beall  28:23  

Okay, you're going to love this. So RISC-V is an open source CPU architecture design. It's going to be a few years before the peripheral support is there, like all of the controllers people want, and it's going to be a little while before it's fast enough and validated at smaller node sizes. But within 10 years, I'm convinced that Intel will be making half of their processors with RISC-V rather than their own architecture. They've already committed to making some processors with Arm, a direct competitor, and they've committed to making some with RISC-V, even though it's literally free. It's just going to get better and better. And they've hired Chris Lattner, the LLVM guy, who then created Swift and did TensorFlow for Swift. And his focus is making sure that they can do massive SIMD instructions oriented specifically for neural networks. The specs are one thing, but actually running a neural network, you can get very different throughput, because this complicated interplay of the memory bandwidth, your processor cache, your L1 and L2 cache lines, completely changes what you're actually going to get versus your expectations. So with him on board, I'm expecting good things, specifically for neural networks, out of RISC-V within the next five years.


Justin Grammens  29:40  

That's super cool. Yeah. So as a part of the program, we always have liner notes and stuff that we publish along with it, so I will make sure to put a link to RISC-V in the notes as people read through the transcription and want details of the podcast. That's a great source of information. So what tools are you using these days? It sounds like you're very much in the NVIDIA area. Do you use any stuff with TensorFlow, or is it all PyTorch? What are some libraries that you're leveraging?


Erik Beall  30:07  

PyTorch. I still use MXNet on my thermal cameras, but PyTorch for corporate development projects, just because I know you can't rely on MXNet, because it's being orphaned, unfortunately. I wish someone would take it up and champion it, or maybe it is going to be around longer; I really wish it would, because it is definitely faster, and that saves cost all around. So PyTorch, for the most part. I had to use TensorFlow for a while at the autism diagnostics company, but the selling point of PyTorch for me was that it was better than Caffe, and I was expecting TensorFlow 2.0 to exceed that, and it didn't. That's just my experience. For me, the selling point of TensorFlow still today is that it's better than Caffe. That's just my snarky take on it; it's fine, lots of people use it, and they've done great things with getting TF Lite, TensorFlow Lite, to run on the edge accelerators, so it's well supported, certainly. I tend toward PyTorch more, just because I like it; it's almost identical to MXNet's philosophy, the procedural programming you can do with it.


Justin Grammens  31:11  

Well, what's a day in the life of a person in your position, with all these different roles and companies?


Erik Beall  31:17  

Planning, managing expectations, keeping corporate partners and teams on the same page, aligned. So there's lots of administrative work; I'd say it's more of a leadership oversight role. And then when there's time, about half the day if I'm lucky is spent on the technical side. We try to limit ourselves to a very specific, small area of computer vision algorithms, or hardware, depending on what's needed at the moment: set out what we want it to look like, what we know and what we don't know, and just try to figure out that one small area, then document it so we can move on and not have to come back to it again. When I was younger, really up to about four years ago, I suffered from the disease of the physicist programmer: just keep hacking away at it until it works, right? It certainly got better over the years, making things more maintainable, but it wasn't until I had an epiphany four years ago that I realized I need to program better, and to program in ways where I don't have to come back and revisit mistakes. I mean modularity, because the maintenance of code just eats you up. Fred Brooks and The Mythical Man-Month, my God, that's still relevant today. They don't teach these things in school, because there just isn't enough time in the day and you haven't faced the pain yet. With entrepreneurship, and learning anything in life, if the feedback is unclear, delayed, or ignorable, we don't learn. So if you're in a startup where you're not going to get feedback for a while, but of course you have unlimited amounts of money, you might not learn, because you can just ignore consequences. I have learned things through startups, and there have been times I haven't learned things. Like Peter Thiel wrote in this book Zero to One, or one of his students wrote it based on lecture notes, he pointed out that most startup founders will tell you, we failed because of X, or X and Y, and really they failed because of a dozen things that could have gone better. In my mind, your job is to figure out as many of those as you can, so you don't make the same mistakes.


Justin Grammens  33:22  

Right, yeah. I mean, I think part of that is what it takes to be human, I guess: to try and learn from your mistakes. And, you know, the definition of insanity is doing the same thing over and over again and hoping you'll have a different outcome. So the more you can pick up and learn along the way, bringing it back to AI and machine learning, the better you'll be in the future. What's the size of the teams that you're working with, and sort of the composition? Are you kind of working on your own little projects, sort of a team of one, or what's your experience in working with a group?


Erik Beall  33:52  

It's varied. Like, say, at Cognoa there was a team of three focused directly on AI and autism diagnostics. At Digilabs, I was the sole AI person, but I ran an AI training there, my take on how to do neural networks, from NumPy up to PyTorch, working through how you can look at the gradients flowing through with NumPy. And let's see, Thermal Diagnostics: when the pandemic began, I was like, I see all these fever screening solutions; they seem like a great way to reduce our risk, but I see all these stupid, crazy ways of doing it. Like, they're forcing people to set up a blackbody, which is a known standard candle. It's glowing in the thermal domain, so you can't actually see it, but it's always coated with a black-looking surface, because it's engineered to be highly emissive, which skin also is. And then you have a tripod, but these are industrial-caliber problems: you knock that tripod over, you don't have a blackbody, or it's broken permanently. You can't drop a blackbody several inches without wondering if it might be broken. And so I thought, those setups are crazy; whereas if you integrate the blackbody into the device itself and correct for air temperature and correct for distance to target and use a face detector, you make a much simpler, easier, foolproof application that doesn't require the customer to set up basically a laboratory experiment in their entryway, right? Then I realized something unsavory, unfortunately. It turns out no one had figured out how to do it. I assumed everyone was correcting for air temperature; it turns out not a single product even bothers measuring the air temperature. You go outside in the wintertime, your skin temperature is going to decrease, right? You come in from the heat, it's going to go up. The temperature in an office is oscillating; in the best-maintained buildings it might be only two degrees, or it might be 10 degrees. That is enough to turn the best working system into something that has a false positive rate of 50%, or worse. And that's with just the four degrees allowed by the standard; some situations are worse. So with a 50% false positive rate, how come these things never seem to detect anything? They never, ever false positive. It's great, right? Everyone pushed the numbers toward normal so hard that they couldn't detect hypothermia or a severe fever. Crazy. There are some that just say, set up a lab in your entryway, and if there are any problems in the lab, it's on you, not us. So that culminated, because I was obsessed with it: I had to do a research study showing several comparisons with other studies, and I came up with a test method. I have a preprint that just went out on medRxiv, and it's under review at the Journal of Biomedical Optics. And it shows several clear problems with the way it's done today. Ultimately, the FDA needs to step in and say, okay, everybody stop, let's do this correctly. Because there are some companies that have been selling this stuff for 20 years and getting away with it.
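
[Editor's note: As a toy illustration of why measuring air temperature matters, here is a sketch of a linear ambient correction. The sensitivity coefficient is a made-up illustrative number, not Erik's method or a clinically validated value; a real device would calibrate it empirically.]

```python
def corrected_skin_estimate(skin_c, ambient_c,
                            reference_ambient_c=23.0, k=0.25):
    """Toy linear correction: measured skin temperature drifts with ambient.

    k is a hypothetical sensitivity in degrees of skin temperature per degree
    of ambient air. With k=0.25, a 4 C swing in room temperature moves the
    apparent skin temperature a full degree, easily the difference between
    "normal" and "fever" at a screening threshold.
    """
    return skin_c + k * (reference_ambient_c - ambient_c)

# Same person, same 34.0 C measured forehead, two different lobbies:
print(corrected_skin_estimate(34.0, ambient_c=19.0))  # cold lobby -> 35.0
print(corrected_skin_estimate(34.0, ambient_c=27.0))  # warm lobby -> 33.0
```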


Justin Grammens  37:05  

When the pandemic first hit, everyone was flooding the market with solutions. And maybe, kind of like what you're hinting at, there's no quality control on this. There's no standard; you can put out anything, in some ways, right? I don't know what would stop a company, like you said, if they're already in the space, from saying, hey, let's just do a product that does this. They don't have to prove anything, it seems.


Erik Beall  37:29  

There is a standard, but it's entirely equipment-based, and it doesn't even mention air temperature at all. It's like they didn't realize that was a problem. Just astounding. I'm sorry, I get quite passionate about it. But I'm putting that on the shelf, because the market is now gone for several years.


Justin Grammens  37:46  

Well, you know, that's what's interesting about some of this technology, some of these algorithms and techniques: it feels like they can kind of come and go, right? You never really know when it's going to be the hot time. I've been working on a startup for the past two years or so around presentation skills. So before the pandemic started, you know, we were using AI to take a look at people's eyes: are they looking at the camera as they're presenting? Facial expressions, right, are they happy, sad, so we can give visual cues back to the person. But then we're also doing a lot of stuff with audio. So, you know, you can present better by not saying so many ums and ahs, or by just slowing your speech down a little bit. And so we're kind of bringing that together into a virtual coach. But we started this before the pandemic, and we had this really difficult time: how are we going to get all this video and all this audio when someone's at the podium? Like, someone's got to sit there with their phone, and we were going down all these paths of building a mobile app to capture all of it. And then overnight, when the pandemic hit, everyone got on Zoom, and we were like, ta-da! I mean, it actually kind of made lemonade out of lemons, in a way, because now all of a sudden you and I are in these meetings, and everyone's in these virtual meetings, and my face is here, front and center, and the audio is really good; you can hear everything. And so, you know, it was just kind of one of these things where the market changed in our favor. Now, a lot of other people are doing the same thing, so it's not like we have the corner on the market. But it was amazing that, because of COVID, all of a sudden everyone went online, and this piece of technology, and the startup that we're working on, now seems very, very relevant, when before it was very difficult to get the data. That's one thing I think has sort of changed now: at least in our world, everyone's in front of the screen, but also everyone's at home, and Netflix is getting a lot more subscribers. This whole digital world feels like there's just a lot more data we're giving away today.


Erik Beall  39:32  

Your application now has data that you don't have to really struggle and fight for, like a lot of startups do.


Justin Grammens  39:39  

Sometimes, yeah, outside market forces can completely change it. So I totally get it; put your thing on the shelf, and we'll see where it goes in the future. You know, one of the things that I like to ask people is, if you were entering the field today, rewinding the clock back. You took an interesting path to get to where you are today, but if someone's just getting out of school and they want to get into this artificial intelligence, machine learning, CV sort of space, what do you advise that they do? And you mentioned a ton of books here; I'll be sure to put those in the notes, because they're all very good, The Mythical Man-Month and others.


Erik Beall  40:15  

I would tell you it depends on what areas they want to contribute in, and it's hard to tell, because your desires will change as you get exposed to different areas, so definitely get exposed to different areas. But if you want to do anything with orchestration of these things, or even orchestrating your own working environment, it really pays to know Linux better. Just like RISC-V is going to take over the chip industry, Linux has taken over the computing industry, or the POSIX-type operating systems, which all share that Unix-like environment, even Macs; Microsoft now runs over half their cloud on Linux. I would say Linux From Scratch is invaluable; it can teach you bash scripting. Python scripting has also become such a clear requirement for this field nowadays. Definitely learn some better programming practices, how to make code more modular, while realizing that's well in the domain of the unprovable: you can never prove that your code is perfectly modular; there's an infinite number of ways you could design it. But stopping and thinking about the design, really sketching out carefully what you want your inputs to be, programming on paper, is the most valuable programming tool you've got. For me, programming on paper prevents me from starting to code before I'm ready, really ready.


Justin Grammens  41:29  

I love that. Yeah, that made me think of back in the old days with index cards, or, you know, whenever people would run stuff on the mainframe. It would force you to actually think through your program, right? Because the next time you were going to get a chance to run it might be 12 hours later, or a whole day later. And so you really needed to; it wasn't just, you know, control-R and recompile the thing. You needed to make sure, and so people would really, really double-check and triple-check all their stuff before they ran it through.


Erik Beall  41:58  

Constraints are your greatest friend. Learn to love constraints; find them, and build more of them if you can.


Justin Grammens  42:07  

Well, sure, sure. Good programming practices, you know; I think of Object-Oriented Design by Booch, and others, you know, the really good books by the Gang of Four. These are things that I came to later in my career, because I was kind of a bit of a cowboy coder at the beginning. And then I started working with other developers, and I started seeing patterns emerge, and I'm like, what is this all about? And I started, you know, stepping back and realizing, oh my gosh, yeah, there are general programming principles that you can put in today that will help you create more flexible code in the future. So I think that's kind of what you're getting at.


Erik Beall  42:42  

Git flow, learn Git flow, if that makes sense for your group or your team. I did that for about a year and a half with one group. With another group, it was more that we split off into modular sections of the code. If the underlying architecture is designed modularly, then it can make a lot of sense to split off into different parts of it, but you still have to align on how everything's going to talk together.


Justin Grammens  43:09  

For sure, yeah. That's where I think software is a little bit more of an art than it is just a straight science, or just procedural.


Erik Beall  43:15  

One of the things I touched on earlier, talking about logic and provability and all that: I wish more people were just aware of the paper from 15 years ago about why most published research findings, most medical studies, are false. John Ioannidis published that, and it's true: most published work is non-predictive; they're publishing a mirage. That's not because the scientists are unethical or incompetent, for the most part. There are certainly some sociopaths among scientists, less than 10%; I mean, I worked at a hospital, and I worked with a fair number of really intense, scary people, definitely some psychopaths, who then stole work that we had done. But for the most part, it's that you're trying to get funding, you're trying to find a way forward on something. Let's say we're studying multiple sclerosis, and we're looking at lesion volumes in the brain and trying to understand: is there a way we can predict the course of the disease, whether it's going to convert to different types, for drug trials. Well, each time you look at the data, you have a chance of finding something that's a mirage. That means if you look at the data 10 times, like, well, let's reanalyze, or let's reprocess, let's clean the data up, it'll look better, and we'll look at it again; by applying a slightly different analysis, you've just doubled your chances of finding a mirage that isn't real but still fits. Hey, the correlation was significant, or the fit was significant, after we took out all the confounds. And that is still, to this day, the most common cause of this. There are people who are aware of this in the field, and it's like, well, I have to publish this because it's promising; I've taken as much care as I can to justify it and limit the number of times I reprocess the data. That's why most of them are false, and it's just going to keep on happening. And there's gold in there, but it's hard to find, and if it were easy, we'd already have it. It's unprovable, and it's going to remain so, which means we have work to do, and we will always have work to do.
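
[Editor's note: The "look at the data ten times" problem is easy to simulate. With pure-noise data and a 0.05 significance threshold, the chance of at least one spurious finding grows roughly as 1 - 0.95^k over k looks; the sketch below reproduces that with repeated correlation tests on random data.]

```python
import math
import numpy as np

rng = np.random.default_rng(0)
alpha, n_looks, n_trials, n_subjects = 0.05, 10, 2000, 30

hits = 0
for _ in range(n_trials):
    found = False
    for _ in range(n_looks):
        # Each "reanalysis" tests the correlation of two unrelated variables.
        x = rng.standard_normal(n_subjects)
        y = rng.standard_normal(n_subjects)
        r = np.corrcoef(x, y)[0, 1]
        # Fisher z-transform gives an approximate normal null for r.
        z = math.atanh(r) * math.sqrt(n_subjects - 3)
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        found = found or (p < alpha)
    hits += found

print(f"simulated chance of at least one mirage: {hits / n_trials:.2f}")
print(f"theory: 1 - (1 - alpha)**k = {1 - (1 - alpha) ** n_looks:.2f}")
```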


Justin Grammens  45:23  

That makes it good for scientists, I guess, people that are in this field, for sure.


Erik Beall  45:27  

Look for the silver lining: there's work for all of us. There will always be more to do. It'll keep you busy.


Justin Grammens  45:34  

All good. Erik, I appreciate your time today. Before we end here, how do people get a hold of you? I think you're on LinkedIn, right? Is that probably the best place?


Erik Beall  45:42  

Yeah, that's probably the best way to get a hold of me today. LinkedIn is fantastic nowadays for communicating in this domain, for business opportunities, or just networking; it's like a perfect, great networking site. And I don't use Facebook; it's getting worse and worse, unfortunately, for a lot of people.


Justin Grammens  45:59  

Yeah, yeah, for sure. Was there anything else that I maybe missed that you wanted to share?


Erik Beall  46:05  

Oh, I could talk for hours on various things, other devices and things that are in my memory, but I don't want to get sued. So we'll have a private conversation.


Justin Grammens  46:16  

Sure, sure. Understood. I'll put a link to your LinkedIn profile here in the liner notes. And again, I appreciate all of the time that you gave us today and your input; you've gotten your hands on a bunch of different stuff over a number of decades here, so I'm excited to see where you go in the future and how artificial intelligence and machine learning play into that. It sounds like it's going to be a big component. I wish you nothing but the best, and maybe we can have you back on the program next year, and we'll touch base.


Erik Beall  46:44  

Sounds great. Thank you so much for having me, Justin, and have a great day.


AI Announcer  46:50  

You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn if you are interested in participating in a future episode. Thank you for listening.