Conversations on Applied AI

Prasanna Balaprakash - DeepHyper | Scalable Automated Machine Learning for Scientific Applications

May 31, 2022 Justin Grammens Season 2 Episode 14

The conversation this week is with Prasanna Balaprakash. Prasanna is a group leader and computer scientist at the Mathematics and Computer Science Division and the Leadership Computing Facility at Argonne National Laboratory. His research interests span the areas of machine learning, optimization, and high-performance computing. He is the recipient of the US Department of Energy 2018 Early Career Award, and is the artificial intelligence thrust lead at RAPIDS, the Department of Energy Computer Science Institute that assists application teams in overcoming computer science, data, and AI challenges. He is the principal investigator on several Department of Energy-funded projects that focus on the development of scalable machine learning methods for scientific and engineering applications. Prior to Argonne, Prasanna worked as the chief technology officer at Mentis, a machine learning startup in Brussels, Belgium. He received his Ph.D. from IRIDIA, the AI lab at ULB, in Brussels, Belgium.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can put on future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode


Your host,
Justin Grammens

Prasanna Balaprakash  0:00  

That's the beauty of national lab, there is a huge interest and push towards adopting and using AI to accelerate scientific discovery. And in that context, the automated machine learning is something that is relevant for a wide range of applications.

AI Announcer  0:18  

Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at applied. Enjoy.

Justin Grammens  0:49  

Welcome everyone to the Conversations on Applied AI podcast. Today we're talking with Prasanna Balaprakash. Prasanna is a group leader and computer scientist at the Mathematics and Computer Science Division and the Leadership Computing Facility at Argonne National Laboratory. His research interests span the areas of machine learning, optimization, and high-performance computing. He is the recipient of the US Department of Energy 2018 Early Career Award, and is the artificial intelligence thrust lead at RAPIDS, the Department of Energy Computer Science Institute that assists application teams in overcoming computer science, data, and AI challenges. He is the principal investigator on several Department of Energy-funded projects that focus on the development of scalable machine learning methods for scientific and engineering applications. Prior to Argonne, Prasanna worked as the chief technology officer at Mentis, a machine learning startup in Brussels, Belgium. He received his PhD from IRIDIA, the AI lab at ULB, in Brussels, Belgium. Thanks, Prasanna, for being on the program today.

Prasanna Balaprakash  1:45  

Thanks, Justin. Thanks for having me.

Justin Grammens  1:47  

Excellent, cool. I told the listeners a little bit about your background, but maybe you can fill in some more of the details, like what was the trajectory of your career that sort of got you interested in this field, and then kind of led you to where you are today?

Prasanna Balaprakash  1:59  

I did my undergrad in India. And then I moved to Germany for my Masters, and then later to Belgium for my PhD. During my PhD, I was involved in the design and development of automated algorithms for designing other algorithms. And that's where, you know, my interest in design automation and automated algorithms started. A lot of the credit goes to my supervisors back in Belgium, who sort of instilled this type of research interest into my career trajectory.

Justin Grammens  2:30  

Yeah, sure. Sure. Well, algorithms designing other algorithms, that sounds pretty meta, I guess. How does that work? Do you have to train these algorithms based on something? I mean, are we actually, like, having code generate other code in some ways?

Prasanna Balaprakash  2:44  

That's one part of it, right? So, you know, there are several ways to think about this problem. As you said, it's a meta algorithm design problem, where you have to think about what the templates are, and what design choices should go into a particular algorithm. Often, algorithm designers, you know, as computer scientists or computer engineers, come up with some sort of algorithm based on their prior experience. And even inside the algorithm, they can think of different types of components. And often, they have to pick one, based on the particular problem that they're solving, or the sort of test cases they have, and fix that as a design choice, instead of exploring all possible choices, the reason being that it's very, very difficult to explore all possible choices. In a complex algorithm, each component can have different choices, and the choices can expose different parameters, and so on and so forth. So all of this explodes the number of design choices one needs to explore in coming up with the right algorithm for the right problem. And even if they come up with one algorithm for one setting, it is not generalizable to other settings, right? So that's where the whole problem of automated algorithm configuration comes in, where we say, okay, what is my setting? What are the problem instances that I'm going to see? And based on that, how can I come up with a set of design choices for a particular algorithm that will work well for that particular class of instances, and so on and so forth? So that's one way to look at this problem.
And then you can even take it one level up, where you can put in a reinforcement learning policy, where you learn a policy that decides which choices the algorithm takes, and what the decision variables or configuration values are that the particular algorithm needs to use, based on the particular problem instance that it is seeing at that moment. So there are many flavors. As I said, you know, it's a very complex problem, a very challenging problem. And trying to automate these algorithms in a much more rigorous and systematic way is an open research area. And there is a lot of interesting work happening in this domain, in particular, again, in the context of machine learning as well.
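The two levels Prasanna describes, an inner algorithm with design choices and an outer loop that tunes them, can be sketched in a few lines of Python. Everything here (the hill-climbing solver, its `restarts` and `step` parameters, the trial budget) is invented for illustration, not taken from any real tool:

```python
import random

def hill_climb(f, restarts, step, seed=0):
    """Inner algorithm: minimize f over [-5, 5] with random-restart hill climbing."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(restarts):
        x = rng.uniform(-5, 5)
        for _ in range(100):
            cand = max(-5.0, min(5.0, x + rng.uniform(-step, step)))
            if f(cand) < f(x):   # accept only improving moves
                x = cand
        best = min(best, f(x))
    return best

def configure(f, trials=30, seed=1):
    """Meta level: search over the inner algorithm's own design choices."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(trials):
        cfg = {"restarts": rng.randint(1, 10), "step": rng.uniform(0.01, 2.0)}
        val = hill_climb(f, **cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

cfg, val = configure(lambda x: (x - 1.3) ** 2)
print(cfg, round(val, 4))
```

The outer loop never looks inside the solver; it only observes performance, which is what makes the approach general.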

Justin Grammens  4:57  

And so that's kind of what you focused your PhD on? This was the concept around it?

Prasanna Balaprakash  5:02  

Yeah, that's one of the major topics of my PhD.

Justin Grammens  5:06  

Yeah. I mean, I didn't mention it during the introduction, but you know, you're instrumental to this DeepHyper project going on, correct? Correct. Could you tell us a little bit about that? I mean, this seems to sort of fall in line with what you did during your PhD.

Prasanna Balaprakash  5:19  

You know, I was looking at automating the design and development of heuristic algorithms. These are a class of AI algorithms for solving combinatorial optimization problems, which are some of the hardest problems to solve. The DeepHyper project is primarily focused on how we can design and develop machine learning models. So if you think about neural networks, one of the most promising machine learning approaches, they come with lots of hyperparameters, so to speak: what learning rate to use, what type of optimizer, the number of layers, the number of units per layer, and the topology itself, you know, what type of topology to design. Even if you think about a convolutional neural network, you could always ask, why is this filter five by five, or three by three, right? There is no clear-cut answer for that. And often this is determined by some human expert, based on some test cases they play with, right. And often, one could improve the performance of these methods to a great extent by changing the architectures, changing the optimizers, changing the learning rate, the regularization, and so on and so forth. So neural networks provide powerful function approximation for many problems. But at the same time, they come with a lot of complexity. And many times, practitioners need to take these deep neural networks and adapt them to a particular data set. And that's typically a trial-and-error process. It's, okay, you know, how can I change this and make it work, looking at the loss function and looking at the validation loss and seeing, okay, am I overfitting or underfitting, okay, I can go and tinker with this and make the network learn well, and so on and so forth, right?
If you think about it, all of this can be encapsulated into an automated design problem, where we can do this in a much more automated way, and also in a much more rigorous and systematic way.
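As a rough illustration of the automated tuning described above: the loop below randomly samples hyperparameter configurations, scores each one on held-out data, and keeps the best. To stay self-contained it tunes a ridge-regression stand-in (polynomial degree and penalty) rather than a real neural network; the problem, names, and ranges are all invented, but the pattern is the same idea applied to learning rates, layer counts, and so on:

```python
import random
import numpy as np

# Synthetic data: noisy samples of sin(3x).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.normal(size=200)
x_tr, y_tr, x_va, y_va = x[:150], y[:150], x[150:], y[150:]

def fit_eval(degree, alpha):
    """Train a ridge polynomial fit, return validation MSE."""
    A = np.vander(x_tr, degree + 1)
    w = np.linalg.solve(A.T @ A + alpha * np.eye(degree + 1), A.T @ y_tr)
    pred = np.vander(x_va, degree + 1) @ w
    return float(np.mean((pred - y_va) ** 2))

# Random search over the "hyperparameters": degree and regularization.
search = random.Random(1)
trials = [{"degree": search.randint(1, 10),
           "alpha": 10 ** search.uniform(-6, 1)} for _ in range(40)]
scores = [(fit_eval(**cfg), cfg) for cfg in trials]
best_mse, best_cfg = min(scores, key=lambda t: t[0])
print(best_cfg, round(best_mse, 4))
```

Tools like DeepHyper replace the naive random sampler with smarter, parallel search strategies, but the interface (a search space plus a scoring function) is the same shape.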

Justin Grammens  7:13  

Yeah, for sure. You were talking about tinkering with things. So yeah, I do a fair amount of stuff with Google Colab, which is basically sort of like Jupyter Notebooks, you know. And when you start working with TensorFlow and building these models, there are almost too many dials that you would want to be able to turn. It sounds like that's what you guys are trying to automate. And I was thinking, too, Google has this concept called AutoML, right? Is that what this project is sort of based around? Is it a competing project? I know yours is open source, too, and I'll be sharing links to all of that. There are a lot of things wrapped up in my mind right now as you're talking through it, but is that the general gist of what you guys are doing there?

Prasanna Balaprakash  7:50  

Yeah, at a high level, we are all trying to solve a similar set of problems. The way that DeepHyper is different from AutoML from Google is that, you know, we primarily focus on scientific applications. And also, we focus on solving problems at scale; we have access to some of the fastest supercomputers on the planet. How we are trying to solve this problem is to scale up AutoML in a way that we can reduce the time to develop these models. And our focus is also incorporating scientific domain knowledge into those models, and how we can automate that process. So if we're talking about developing a surrogate model for a scientific application, there is a lot of physics knowledge that comes with it. A promising approach to incorporating the existing physics knowledge into deep learning models is to add it as constraints, or soft constraints, in the loss function and so on. This is the design philosophy behind physics-informed neural networks. But once you start adding these constraints into those loss functions or into the architectures, your training loss surface becomes really, really difficult to navigate. So any optimizer that you throw at it will have problems, and there is a lot of interesting research happening in that space on how to address this problem, and so on and so forth. But we take this idea of, okay, how do we incorporate these physics constraints into loss functions? And if we do so, how does that affect the architecture, and how can we adapt the architecture? So think of ways in which we can put in residual connections to make the loss surface much smoother, and how we can adapt the learning rate to cope with that sort of rugged loss surface. And all these things: even for a non-scientific problem with a regular loss function, you have to tinker with so many parameters.
But now add to this complexity the physics knowledge, the physics constraints, and so on and so forth. That just blows up the complexity even further. And the class of problems that we deal with, it's not just images or text. There are a lot of problems that we have to solve with graph neural networks, point clouds, or meshes and manifolds, and so on. These types of input data sets, the diversity of the data across different scientific applications, and for each of them, think about the ways that we have to customize these neural network models or general machine learning models. That's a very, very challenging problem. And one way that we are trying to address this problem is to help the domain scientists turbocharge the design and development of neural networks. So how can we automate the development of these surrogate models, or these neural network models, for various types of data sets? That's the core focus, and that's where we are specializing and helping domain scientists. So in that way, it's different from AutoML from Google, where the focus is primarily on commercial applications and images and text.
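The "soft constraint" idea mentioned above, adding a physics residual to the data-fit loss, can be sketched like this. The ODE u' + u = 0 (solution u(x) = e^-x), the quadratic trial model, and the weight `lam` are all toy assumptions for illustration, not the form used by any specific project:

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 21)
obs = np.exp(-xs) + 0.01 * rng.normal(size=xs.size)  # noisy samples of u(x) = e^-x

def model(theta, x):
    """Toy trial solution: a quadratic in x."""
    a, b, c = theta
    return a + b * x + c * x ** 2

def d_model(theta, x):
    """Analytic derivative of the trial solution."""
    _, b, c = theta
    return b + 2 * c * x

def loss(theta, lam=1.0):
    """Data-fit term plus weighted physics residual for u' + u = 0."""
    data = np.mean((model(theta, xs) - obs) ** 2)
    physics = np.mean((d_model(theta, xs) + model(theta, xs)) ** 2)
    return data + lam * physics

theta = np.zeros(3)
for _ in range(4000):                      # finite-difference gradient descent
    grad = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = 1e-5
        grad[i] = (loss(theta + e) - loss(theta - e)) / 2e-5
    theta -= 0.05 * grad
print(np.round(theta, 3), round(float(loss(theta)), 4))
```

The `lam` weight is exactly the kind of extra dial the conversation is about: tightening the physics term reshapes the loss surface and interacts with the optimizer settings.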

Justin Grammens  10:47  

Yeah, no, I totally get that. So you guys are basically a Swiss knife, I guess, for people in the research domain space, right? Or a Swiss army knife, I guess, is the word that I would use, where they can basically use your tool to then focus on the specific issues that they're running into, that they're trying to build out in the field. And, you know, you're at Argonne National Laboratory, I guess, kind of under the Department of Energy. Is that where a lot of the work that your programs are working on happens? Batteries? Correct. Cool. And so can you give me some examples? I mean, when we originally started talking, you mentioned some ideas around accelerating weather simulations, and I love the idea of, you know, disruption detection in fusion energy reactors. Speak to me about any projects that you'd like to, with regards to maybe how this is being used by researchers.

Prasanna Balaprakash  11:36  

There are multiple projects that are currently benefiting from the use of automated machine learning methods, and I would like to point out a few of them. One that you already mentioned is the development of surrogate models for weather simulation. The ability to simulate weather phenomena is critical for many different applications. And the way that we are looking at this problem is, okay, there are two different approaches to surrogate modeling for weather simulations. One is the intrusive way, where we say, okay, here is a simulator, and here are the most computationally expensive parts of the simulator. And for your information, these simulations are very, very expensive; they take the entire supercomputer and run for several days or even months, at different resolutions and so on and so forth. So, the intrusive way of attacking this problem is looking at what the computationally expensive parts of the simulation are, and most of them are related to solving PDEs, or partial differential equations. And we say, okay, can we learn a function approximation for those computationally expensive modules? If we can do that, either offline or online, then we can reduce the computationally expensive part of the simulation by a large factor, and here we are talking about 100x speedup for those parts. And if we can do that, then there are multiple things we can do on top of it. The simulation time will be drastically reduced, and if we can reduce the simulation time for the weather, then we can run not five or ten ensembles, which is the current state of the art, so to speak; we can do thousands of ensembles with different initial conditions and at various resolutions. Those types of things allow us to ask very interesting what-if questions with respect to climate change, and so on and so forth.
And the non-intrusive one is trying to develop a surrogate for the entire simulator, or taking the data from the simulation and matching it with observations, and calibrating the simulator with respect to those observations, and using the sensitivity. For example, you know, you cannot take this complex code and compute derivatives of the variables from it; that's practically impossible. On the other hand, we can develop a surrogate for the entire simulator, or at least the major parts of the simulator, and then compute derivatives, compute gradients, and look at the sensitivity of those input parameters, which was otherwise impossible. So there is a physics model that is encapsulated in the simulation. Now we have a surrogate, a computationally cheap model, which we can use to compute derivatives and analyze the sensitivities of the climate variables, and so on and so forth. And that has a huge, huge impact. Not just that: you can then match that with the real observations and try to see whether your physics model, our human understanding of the physics, matches the observations, and where the mismatches are, and try to improve the understanding going forward.
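A minimal sketch of the non-intrusive surrogate idea just described: fit a cheap model to input/output samples of an expensive black-box simulator, then differentiate the surrogate for sensitivity analysis. The stand-in "simulator" and the polynomial surrogate below are invented for the example; real surrogates would be neural networks over many parameters:

```python
import numpy as np

def expensive_sim(p):
    """Stand-in for a costly black-box simulator (invented for this sketch)."""
    return np.sin(2 * p) + 0.5 * p ** 2

p_samples = np.linspace(-2.0, 2.0, 40)     # a modest design of experiments
outputs = expensive_sim(p_samples)

# Cheap surrogate: a least-squares polynomial fit to the samples.
surrogate = np.polynomial.Polynomial.fit(p_samples, outputs, deg=11)
d_surrogate = surrogate.deriv()            # derivatives now come for free

p0 = 0.7
true_grad = 2 * np.cos(2 * p0) + p0        # known here only because the toy is analytic
print(round(float(d_surrogate(p0)), 3), round(true_grad, 3))
```

The point of the sketch is the last two lines: once a surrogate exists, gradients and sensitivities are cheap to query, even when the original code exposes none.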
So those are all interesting questions that surrogate models for weather simulations can enable. The second project is fusion energy: developing surrogate models for the reactor, in particular for disruption detection in the fusion reactor. We are working with our collaborators at Princeton, and we are looking at ways to develop ML models that can look at the state of the plasma and try to see whether the plasma is going to be stable or not. This is very, very critical, because if you can detect precursors, using these precursors you can detect that the plasma is not going to be stable, and you can go and adjust the parameters of the reactor in such a way that the plasma becomes stable in future time steps. Because once the plasma becomes unstable, you have to switch off the reactor, and that has a huge overhead cost. In fact, some of the recent work from DeepMind is looking at a related problem, and they are using reinforcement learning to control the stability of the plasma. So, the problem is much, much more complex, you know; it has different flavors, and different reactors have different sets of complexities. It's one of the grand challenge problems. If we manage to crack this, we address a lot of the energy-related challenges that we face today. So, that's a very exciting project that we are involved in. And there, what we are trying to do is to develop these spatio-temporal deep learning models automatically. It's, again, a complex problem, a complex model, a complex data set. And we are trying to help the domain scientists automate the design and development of deep neural network models to detect the instabilities, or the disruptions, in the plasma. There, the challenges differ. It's not like we can build a trillion-parameter model to do that, because this will be deployed at the edge, near the reactor.
And here we are trying to optimize not only accuracy, but also inference time and model size. So it's a multi-objective automated machine learning problem, where the goal is to build a neural network model that is small, efficient, low latency, low inference time, and we need high accuracy. So it's not only about accuracy there; it's all the other metrics that make this problem a lot more challenging. So that's another project that we are involved in. We are also working with other teams where we are developing automated NLP techniques to analyze a large corpus of scientific articles to extract information related to climate change: how climate change is affecting or will affect US infrastructure, and how these things change over time, and so on and so forth. This is primarily trying to help the decision and infrastructure team here at Argonne to see, you know, how they can consume the hundreds and hundreds of articles that are coming out, synthesize that information, and try to find what they are looking for. Again, you know, we are trying to automate this process of developing a natural language pipeline.
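The multi-objective selection described above (accuracy versus model size and inference time) can be illustrated by keeping the Pareto-optimal set of candidate models; the candidates and their numbers below are made up for the sketch:

```python
# Each candidate model is scored on three objectives; we want high accuracy
# but low parameter count and low latency, so no single "best" exists.
candidates = [
    {"name": "tiny",    "accuracy": 0.89, "params_m": 0.5,  "latency_ms": 2.0},
    {"name": "small",   "accuracy": 0.92, "params_m": 2.0,  "latency_ms": 5.0},
    {"name": "medium",  "accuracy": 0.95, "params_m": 20.0, "latency_ms": 30.0},
    {"name": "bloated", "accuracy": 0.94, "params_m": 80.0, "latency_ms": 120.0},
]

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on at least one."""
    no_worse = (a["accuracy"] >= b["accuracy"] and a["params_m"] <= b["params_m"]
                and a["latency_ms"] <= b["latency_ms"])
    better = (a["accuracy"] > b["accuracy"] or a["params_m"] < b["params_m"]
              or a["latency_ms"] < b["latency_ms"])
    return no_worse and better

# Keep only models that nothing else dominates: the Pareto front.
pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o is not c)]
print([c["name"] for c in pareto])
```

Here "bloated" is dropped because "medium" beats it on every axis; the remaining three represent genuine trade-offs, which is what a multi-objective search hands back to the domain scientist.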

Justin Grammens  17:50  

Yeah, there's just a lot of written records, I'm guessing, from over the past 100 years, right? Not everything was stored on disk drives and stuff 100 years ago. So you're trying to parse and go over all these articles, I'm guessing, right? I mean, are you guys using a lot of OCR to take a look at even images, to at least put it into a digital format, and then you're trying to read through and understand it?

Prasanna Balaprakash  18:14  

So right now, our focus is primarily on scientific articles. But as you said, there are many written records. That's one of the earlier challenges that we had to face and try to overcome; we haven't found a really clear-cut solution for that, and we are still working on it. Because, you know, if there are records on the infrastructure, and if there was an incident in that infrastructure, those are all captured in written form, and the ability to extract them and put them in a meaningful way, or in a computable form, is an open challenge that we have to overcome for this type of analysis.

Justin Grammens  18:52  

Yeah, I was just thinking, as you were talking about all these projects, how do you take on so many at once? You know, you personally. I mean, are you hopping around between these projects? Or do you have a team that you run that kind of does its own thing, probably on different timelines? Just curious to know how you work with regards to so many different interesting, cool things going on.

Prasanna Balaprakash  19:13  

I have a very good team and fantastic collaborators. So, you know, it's not a one-man show. As you have alluded to, in particular at the DOE labs, it's highly interdisciplinary work, and we work with the domain scientists, we work with other stakeholders, we work with the universities. So it's a collaboration between a wide range of people, and that makes these projects very, very interesting. One day you work on the fusion reactor project, another day you will be working on a natural language processing pipeline, another day on compiler optimization using artificial intelligence on supercomputers, and another day on reinforcement learning for nuclear reactor control. That makes this job very, very interesting. And the full credit goes to my collaborators and my team. It's definitely not a one-man show.

Justin Grammens  20:06  

Sure, sure. But you do get a chance to sort of touch all those things when you're working at Argonne National Laboratory. It's not like staying in your lane, where you just focus on one thing for five years.

Prasanna Balaprakash  20:16  

Correct. That's the beauty of a national lab: there is a huge interest and push towards adopting and using AI to accelerate scientific discovery. And in that context, you know, automated machine learning is something that is relevant for a wide range of applications. Everyone needs to develop models. And that's where we come in, like, okay, how can we formalize this whole process? And once we formulate it, then we can think of solutions to address this problem. And that's where our supercomputers come in, because this problem requires scale. You know, this is not only about training a billion-parameter model on a large amount of data. Even with small data, using the automated machine learning methodologies that we are developing, we explore thousands and thousands of models in just two to three hours, right? In two to three hours, we can explore thousands of models for a particular data set, and come up with not one best model, but hundreds of really good models for that particular data set. I mean, of course, we do have large data for which we have to scale up, but even a small data set requires scale, because the number of possible models that you could explore in this space is really huge, right. And another thing to note here: the reason why we are interested in developing lots of models in a very short period of time is that, unlike other settings in industry, most of our problems are inverse problems, in the sense that we have the output, and we have to go and see what the input is. This is what we call an inverse problem. Say you have the diffraction pattern from X-rays. So you take a material, you send X-rays towards it, and you will get a diffraction pattern. What you observe is the diffraction pattern, not the material. So from the diffraction pattern, you need to go back and reconstruct what is inside the material, right? So that's an inverse problem.
And inverse problems are notoriously hard, which also means that you don't have one solution; you have multiple solutions. And that means developing one single model for this data set doesn't make a lot of sense. You need to have hundreds of models, and from those hundreds of models, you can do inference for those inverse problems, and then combine all those results to make sense out of it. And more importantly, uncertainty quantification is a must-have property for many of the problems that we are dealing with, and developing these types of models is quite important. So it's not about just building one model that can do well. It's about building hundreds of models that can do well on the data set, and then using those models and combining them to generate results.
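The hundreds-of-models idea can be sketched with a bootstrap ensemble: each member is trained on a resampled data set, the ensemble mean is the prediction, and the spread serves as an uncertainty estimate. Tiny linear fits stand in for neural networks here, and all the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 2.0 * x + 1.0 + 0.2 * rng.normal(size=100)   # synthetic "observations"

models = []
for _ in range(100):                    # "hundreds" of cheap models
    idx = rng.integers(0, 100, 100)     # bootstrap resample of the data
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    models.append((slope, intercept))

x_new = 0.5
preds = np.array([s * x_new + b for s, b in models])
mean, std = preds.mean(), preds.std()   # prediction plus an uncertainty estimate
print(round(float(mean), 2), round(float(std), 3))
```

For an ill-posed inverse problem, a large `std` at some input is itself the finding: it flags where the data admit multiple consistent answers.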

Justin Grammens  22:54  

Yeah, for sure. I mean, a couple of thoughts that I had right now. One was, are you doing a lot of essentially unsupervised learning, where you're not tagging a bunch of stuff, you're just sort of running it through and trying to decipher what the right weights and measures would be for the neurons in a deep learning model? Or are you guys dealing with tagged data? What's most of the data that you're dealing with?

Prasanna Balaprakash  23:16  

We have a wide range of applications and a wide range of modalities, as you pointed out. For some problems, we don't have labeled data, and we have to do a lot of unsupervised learning, in particular trying to, you know, learn latent representations and make inferences out of those latent representations, or use that and combine it with the physics knowledge. One thing that I also want to point out is that, again, the science setting is good in that way, because in many, many applications, we know a lot about the underlying domain, right? So you have physical models, and you can use those physical models, or you have a simulator, and that simulator can be run at different fidelities. Even if you take the climate simulation, running it at a very high resolution is very, very expensive, so they cannot afford more than, let's say, ten simulations. On the other hand, if you want to run it at really low fidelity, at lower resolutions, then you can run many more. And that presents interesting challenges as well. How can you use a lot of low-resolution simulations, learn from them, or learn the representations and dynamics and physics from them, put physics knowledge into those types of models using loss functions or the architecture, and then take that to the final solution, where you calibrate your low-resolution model to the high-resolution model, right? So those are the things. So yeah, labeling is a big problem in many settings. In many sciences, labels are expensive, hard to come by, or you have to do physical experiments, and that's not only time-consuming, but it also costs. There are other domains where you have lots of labels, and there the data, you know, the data volume could be different.
It could be an issue. And the point is, in some settings, the data collection happens at a speed where you just cannot afford to move that much data from the experimental facility to the computing facility, like the cloud, do the computation in the cloud, and send the results back. You have to do that at the edge, trying to see, you know, whether the results are good or bad, and how to do that. One way is to train the model offline and deploy it for inference; that sort of modality works for certain settings. But in several other settings, we have to develop continual learning methods, where the data comes in and you have to continually update your model on the fly, to make it more efficient for that particular experiment, for a particular modality and material, and so on. So I'm saying that there is a wide range of problems and a wide range of challenges that we have to overcome.
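A toy sketch of the continual-learning setting just mentioned: instead of retraining offline, the model takes a small gradient step on each incoming sample as the stream arrives. The linear model, learning rate, and data stream are all invented for illustration:

```python
import random

rng = random.Random(0)
w = 0.0                   # current model: y ≈ w * x, updated on the fly
lr = 0.05
for step in range(500):   # streaming samples arrive one at a time
    x = rng.uniform(-1, 1)
    y = 3.0 * x + 0.1 * rng.gauss(0, 1)   # invented stream; true slope is 3
    grad = 2 * (w * x - y) * x            # gradient of the squared error
    w -= lr * grad                        # one cheap update per sample
print(round(w, 2))
```

No sample is stored; the model simply tracks the stream, which is the property that matters when data arrives faster than it can be shipped to a data center.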

Justin Grammens  25:47  

Yeah, for sure. Now, you guys run your own data centers, I'm guessing. I mean, all your data is probably very sensitive information. So you kind of have to build this infrastructure completely in-house with your team, is that correct?

Prasanna Balaprakash  26:00  

No, we rely on a number of infrastructures, data movement infrastructures and compute infrastructures. There is research that deals with sensitive data, like health and so on, and for that we have, you know, secure enclaves. But most of the research that we do at Argonne is sort of open in nature. That doesn't mean that we can just move the data outside, but it's open research; the research will be disseminated to the public once the data is verified, and so on and so forth. But when it comes to learning, when it comes to developing models and doing machine learning on this data, there are different types of constraints associated with the data. One is just the volume: you have to host the data, you have to move the data to the compute center. At some of the experimental facilities we have, some of the processing can be done there. But there's also a data movement infrastructure called Globus, which is a flagship product of Argonne. It's used widely in the research community to move large amounts of data from one place to another without doing, let's say, SCP over your terminal and hoping that, you know, it doesn't fail, right? So those types of research infrastructures exist. And also, we have a standard infrastructure set up for data movement from one lab to another lab, and across universities. It's a very solid infrastructure; a lot of research went into it before the AI boom. So now we are leveraging those types of infrastructures, both the compute and the data movement infrastructures, for AI, which is, you know, helping us to do these kinds of data movements across geographically distributed sites.

Justin Grammens  27:39  

Yeah, sure. Yeah, I guess I was curious if you guys run your stuff on the large public clouds, like Amazon, or Google, or Microsoft Azure, any of that type of stuff.

Prasanna Balaprakash  27:48  

So since we have our own computing infrastructure, yeah, the majority of the research that we do is, you know, targeted at those types of computing infrastructures. But there are some research groups that are also trying the public clouds, exploring how to use public clouds for various purposes. And particularly, this is coming from the stakeholders. For example, some of the projects that we have are on traffic forecasting, and the traffic monitoring centers want to use the models, and for them, using the newest supercomputing infrastructure would be too much. On the other hand, if we say, okay, we develop the model and host it somewhere in the cloud, that will be easier for them: they can just write an API call and pull the prediction, or they can even host the model inside the traffic monitoring centers if they have a smaller infrastructure, right? So there are multiple benefits to both of these things. When it comes to supercomputers versus clouds, those are designed for different purposes, although the gap is becoming smaller and smaller. The DOE machines are sort of custom designed for science applications. We want to run simulations, because simulations have a major role in various scientific domains. And those types of simulations need a really tight, high-performing interconnect, where you have data moving from one node to the other node in a much faster way, right? And that is critical for HPC, or high-performance computing, simulations. Machine learning also requires those kinds of things for certain applications, in particular if you are trying to do distributed data-parallel training or model-parallel training. For large models or large amounts of data, we need that sort of, you know, good interconnect that connects these nodes and can move data in a much faster way.
And the software stack and the hardware are sort of architected in such a way that, you know — it's a long process where the vendors and the DOE community worked together for several years to design the software stack and the hardware, so the hardware is sort of customized for the way the applications are run. Cloud is primarily for public consumption, let's say, you know, ML models, large language models, or vision models. But for us, you know, we look at graph neural networks, point clouds, or manifolds, and if we need customization, how can we do that? And in some of our applications, we need the learning to happen very close to the simulation, while the data is being generated from the simulation. Those kinds of things we can do very well, because we designed the system taking into account all these applications within the DOE complex. So that's why doing certain things on these large DOE supercomputers is beneficial.
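Prasanna's point about distributed data-parallel training — each node computing a gradient on its own shard of data, with the interconnect carrying the gradient averaging — can be illustrated with a toy, pure-Python sketch. This is only an illustration of the idea, not Argonne's actual stack: the "workers" are simulated in-process, and the one-parameter linear model is made up for the example.

```python
# Toy illustration of data-parallel training: each "worker" computes a
# gradient on its shard of the data, then the gradients are averaged --
# the all-reduce step that makes fast interconnects matter at scale.

def gradient(w, shard):
    # Gradient of mean squared error for a 1-D linear model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.02):
    # Each worker computes a local gradient on its own shard...
    local_grads = [gradient(w, shard) for shard in shards]
    # ...then an all-reduce averages them (over the interconnect in practice).
    avg_grad = sum(local_grads) / len(local_grads)
    return w - lr * avg_grad

# Data for the true model y = 3x, split across 4 simulated workers.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # → 3.0
```

In a real HPC setting, the averaging line is replaced by a collective operation (e.g. MPI or NCCL all-reduce) across nodes, which is exactly where interconnect bandwidth and latency become the bottleneck.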

Justin Grammens  30:40  

That's a great distinction with regards to, like, you know, where a supercomputer lives and what it does best, versus if somebody were to go out to a public cloud and start up a Linux virtual server, for example, and run some stuff there. That's completely apples to oranges. And it reminded me, actually: the University of Minnesota was one of the first places, I think, to actually have a supercomputer, and I took a tour of their lab. This is probably three or four years ago, but it was really cool to walk through there and realize that, yeah, there were all these simulations going on, and it looked like one computer, but really, under the covers, it was a number of nodes working in conjunction with each other to essentially crunch the data, right? That's really what it is at the end of the day. It's like a Ferrari, you know, it's like a sports car just for crunching data; you wouldn't want to use it to serve up a website, right? So it's a completely different application for it.

Prasanna Balaprakash  31:32  

Spot on. Again, it's primarily the applications that drive these designs. The cloud is sort of designed for a larger user base, whereas the supercomputers are designed for a class of applications, taking those scientific applications' specific constraints into account.

Justin Grammens  31:51  

Cool. What's changed in the past, I would say, you know, five to ten years or so, with regards to making things move a lot faster? I'm sure we wanted to take a look at models regarding, you know, nuclear reactors, and build those models 30, 40, 50 years ago, right? But what's changed in technology to now allow us to be able to do what you're talking about today?

Prasanna Balaprakash  32:13  

Our ability to collect data — let's say, if we're talking about these energy reactors — has grown; you know, we can collect a lot more data from those reactors. At the same time, our ability to simulate larger and larger reactors is also increasing, thanks to the computing power. So simulation, data collection capability, and basically the computing power that allows us to do both: all these factors allow us to now look at these types of advanced methods to do control, disruption prediction, and so on, so forth. And also, a lot of credit should be given to the AI/ML community for pushing the boundaries and pushing the envelope. We are just standing on the shoulders of giants. We owe a lot to the applied math community — Argonne has a fantastic applied mathematics community, which we've benefited a lot from — and also within the DOE complex and the wider applied math community, right? So it's advancements in mathematical optimization, advancements in computing, advancements in AI and machine learning. All these giants helped us to push these fields forward, and we owe a lot to them.

Justin Grammens  33:24  

Yeah, I'm wondering if there was a switch that was flipped, that all of a sudden we had everything we needed, but it feels like it was more or less everyone sort of marching in line. You know, obviously GPUs, I think, probably changed the game a little bit, and like you said, maybe there are some new algorithms that were developed, I guess, with regards to deep learning. But yeah, I always like to ask people: what are the advancements that have led to where we got today in our field? It sounds like a little bit of everything. Where do you see this thing going in the future? I guess, where can we push the envelope further, do you think, in the next five to ten years?

Prasanna Balaprakash  33:55  

The future looks very, very exciting, in particular the ability to accelerate scientific discovery using advanced AI/ML methods. So that's a frontier. And it's sort of like how computing transformed our lives, right? That's the sort of potential that AI/ML can have in scientific applications. So it's not like you can specify, okay, this one application or that one application; it's the entire spectrum of applications, and that's why the research community is quite excited about bringing these types of methodologies into their pipelines. And, you know, previously it was theory, experiments, and observations; now it's theory, experiments, observations, and the fourth pillar could be this AI/ML, right? So that's the sort of importance that it is getting. But as we move forward with AI/ML, it's not a fancy thing to apply to your model; it will become more of a foundational building block within science and can sort of accelerate the way that we do science, because scientific discovery is very time consuming, very expensive, and takes a lot of manual, trial-and-error effort. So that's an area where AI/ML can really help scientists to advance the field in a way which is otherwise not possible.

Justin Grammens  35:23  

Yeah, for sure. Well, it sounds like some really cool things are happening at Argonne National Laboratory. Are you guys hiring? I always like to ask people, whatever organization they're working for: are you looking for top talent? And if you are, you know, where should people go?

Prasanna Balaprakash  35:37  

Yes, we do. We are looking for people in the field of AI/ML across a wide spectrum, starting from postdocs all the way up to the scientist level. So there's a wide range of open positions. We also engage with universities to bring in undergrads and master's-level students, and we have a fantastic internship program for graduate-level students. And particularly, we are looking for opportunities to engage with underrepresented communities, bring them into the lab, and expose them to the scientific advancements, so that, you know, we have some sort of constant pipeline of talent coming in. Diversity is becoming very crucial for our advancement as well. So there's a wide range of opportunities. There is a careers page at Argonne; people can go and check that out. And there are also internship programs with their own specific web pages — the Mathematics and Computer Science Division has a separate page for those types of internship opportunities. And I highly recommend to the listeners: if you're interested, please check out those web pages, or reach out to me. I'm happy to make connections.

Justin Grammens  36:51  

Awesome. That's great. And yeah, well, we'll definitely put links to your careers page here in the liner notes for the podcast when we publish it. I also did want to make a quick plug here, too, because you will be speaking at our Applied AI meetup on June 2nd — Democratizing Deep Learning with DeepHyper — so we'll get a chance to dig more into the DeepHyper project. You guys are publicly funded, or I guess, funded through the government. Is that correct?

Prasanna Balaprakash  37:13  

Correct. We are mostly funded by the Department of Energy. But there are also projects that are funded by the Department of Homeland Security, NASA, and other agencies, including NSF, and so on, so forth.

Justin Grammens  37:28  

I'm just curious: are there requirements for you to make your source code open because of the way that you're funded at all? I love that DeepHyper is actually open source, right? You've got a great page there with everything; people can download it, inspect everything, run a Google Colab, do whatever else they want with it. But I mean, are most of your projects open source, or not? I guess I just don't know the regulations, and I'm just kind of curious.

Prasanna Balaprakash  37:51  

Often, it depends on the type of funding. But Argonne is primarily an open research lab, and most of the research that we do is open, in the sense that we disseminate it through publications, software, technical reports, and so on, so forth, after approval from Argonne. So the open source culture is something very, very strong within Argonne, in particular in the computing divisions. And you know, whenever we do something, we put it in a notebook file associated with the paper, because reproducibility is very, very crucial, especially in the area of AI/ML. And so that helps other researchers to take the code, reproduce the results, and use it in their own setting. And also, the computing divisions have several flagship open source projects, and that sort of inspires us to develop that kind of software. For example, PETSc is a linear algebra library coming from our division, and Nek5000 is another such software package. So these are all software packages that existed before, and there are new ones coming up. So, you know, there's this chain effect: people inspire us to do more open source software. And we think that open source software is great, not only just for the visibility, but also for scientific progress. And that's how, you know, we've benefited a lot from industry, like Google and Facebook with PyTorch and TensorFlow; without those software stacks, we wouldn't be here, right? Everyone would be writing their own C and C++ code to do backpropagation, and that's not scalable. So, you know, we again owe a lot to industry, and we learn the best practices from them and benefit from their software stacks, and we also do this for others so that they can benefit in a similar way.

Justin Grammens  39:41  

That's cool. I wasn't sure if you guys commercialize any of the stuff that you do, or are you more straight research and publishing your findings?

Prasanna Balaprakash  39:48  

It depends on the type of research and the type of project and funding. There are separate channels to take open source projects, or the projects that are done at Argonne, and bring a commercialization route to them. So it depends. And there are also licensing opportunities for companies to get the IP from Argonne and use that in their products, and things like that. So there is a wide range of commercialization aspects. And in particular, DOE is interested in helping small- and medium-scale businesses to benefit from the research that we do, and there are separate channels within Argonne and within DOE to help with those kinds of things. But right now, you know, in the computing and applied math divisions, we primarily focus on open source research, so that the community will benefit. And particularly, we are interested in sort of pushing the boundaries of scientific discovery using AI/ML. That's where we are.

Justin Grammens  40:45  

That's awesome. No, that's so cool. And I will say, just for the listeners: I love that I just sent a message in to Argonne National Laboratory and actually got a response pretty quickly from you guys with regards to wanting to share what you're doing. And you know, I don't have a lot of experience, I guess, talking with anybody in national research organizations or whatever. But I really appreciate your time being on the program here, Prasanna, and sharing everything that you do. My undergrad is in applied math, and so I really love to see the applications — I kind of light up at the idea of applied AI, as well as just the applications of all this technology. And I know you guys spend a lot of time not only thinking but also computing, right? I mean, you have an entire laboratory spending a lot of energy, I think, trying to make the best solutions that you can, which will then, of course, benefit, like you said, the scientific community broadly. So I just love your mission; I think it's really, really cool. And thank you so much for being on the program today. Is there anything else that you wanted to maybe share that I might have missed, or did we cover most everything you wanted to talk about?

Prasanna Balaprakash  41:48  

I think we covered most of the things. Again, there are a couple of things, like energy-efficient computing and alternate forms of AI, like neuromorphic computing. You know, we are looking into ways to develop energy-efficient AI/ML methods that can work at the edge, which are more biologically inspired, as opposed to backpropagation-based or gradient-based methods, and related things. But probably, you know, we can save it for another time. I think we covered a lot.

Justin Grammens  42:18  

Have you heard the term tiny ML? I guess I'm not sure if we had touched on that. Yeah, that's a huge, huge interest of mine. And I'm actually going to be teaching a class here at the University of St. Thomas that's really around that. I've taught an IoT class for many years, but I want to bring in the machine learning. And when you started talking about running at the edge, my mind went sort of right to tiny ML: running these, you know, very low-power devices.

Prasanna Balaprakash  42:41  

Oh, exactly. Yeah, that's a very big piece of the whole puzzle that we are looking at. So it's great that you are teaching a course on that. You know, I think that's the very promising next frontier in AI/ML: not just for inference, but how can we do tiny ML at the edge with learning, and do more personalized things, right? A particular instrument, or ML for a particular person, right? It just opens up so many new possibilities and things that we can do.

Justin Grammens  43:11  

Yeah, for sure. Well, maybe I'll have you back on, I guess, here in a couple of months or so, and we could talk about tiny ML all day. But yeah, that is a very cool part of where I think a lot of this technology is going: to run it more and more personalized, so you really have that digital assistant with you. And that can be very, very powerful for people. So, well, thanks again, Prasanna. I appreciate the time today. This has been great, and I look forward to having you at our next meetup on June 2nd, where we'll talk more about DeepHyper. But yeah, thanks again, and let's keep in touch for sure.

Prasanna Balaprakash  43:43  

Thanks for having me, Justin. Thanks for the opportunity.

AI Announcer  43:47  

You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at applied to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at applied if you are interested in participating in a future episode. Thank you for listening.