Conversations on Applied AI
Welcome to the Conversations on Applied AI Podcast where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of Artificial Intelligence and Deep Learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
Rajeev Bukralia - Applying AI to the Right Problem at the Right Time
The conversation this week is with Rajeev Bukralia. Rajeev holds the position of a tenured associate professor and the graduate coordinator in the Department of Computer Information Science at Minnesota State University, Mankato. He specializes in AI, data science, and IT strategies and serves as the founding director of the MS in Data Science program. He is also currently focused on investigating responsible AI, including its ethical implications and XAI, otherwise known as explainable AI. He is the founder and advisor of DREAM, a prominent student organization focused on AI and data science, which has over 300 members. His commitment to student success and his professional contributions have been recognized through various fellowships and awards, including the Minnesota State Outstanding Educator Award.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future Emerging Technologies North non-profit events!
Resources and Topics Mentioned in this Episode
- DREAM
- Explainable Artificial Intelligence
- ACID framework
- BASE framework
- Hadoop Ecosystem
- Spark
- The Coming Wave by Mustafa Suleyman
- Midwest Undergraduate Data Analytics Competition
Enjoy!
Your host,
Justin Grammens
[00:00:00] Rajeev Bukralia: On the other hand, if you think about any technology, whether it is as simple as a hammer, a hammer is a technology. It is going to have two sides. It is effective when you apply it to solve the correct problem. So one of the challenges we see in the AI space is when AI is applied to solve any problem, even though inherently it may not be a good solution.
The second thing is, like any tool, it is going to have some flip side. A hammer can help you build a table, but a hammer can also hurt someone's head if not used properly. The same is true with AI, like any other technology. So where we have the challenges in the AI space, in terms of its ethical side, sociological side, legal side, those things are still being explored.
And that to me is really intriguing.
[00:01:08] AI Voice: Welcome to the Conversations on Applied AI Podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today.
We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
[00:01:38] Justin Grammens: Welcome everyone to the Conversations on Applied AI podcast. Today we're talking with Rajeev Bukralia, someone who, like myself, gives back through teaching. Rajeev holds the position of a tenured associate professor and the graduate coordinator in the Department of Computer Information Science at Minnesota State University, Mankato.
He specializes in AI, data science, and IT strategies and serves as the founding director of the MS in Data Science program. He's also currently focused on investigating responsible AI, including its ethical implications and XAI, otherwise known as explainable AI, which I'm very excited for us to dive into and discuss during our conversation today.
He is the founder and advisor of DREAM, a prominent student organization focused on AI and data science, which has over 300 members. His commitment to student success and his professional contributions have been recognized through various fellowships and awards, including the Minnesota State Outstanding Educator Award.
And finally, his life motto is a quote from Ralph Waldo Emerson, who said, "Do not go where the path may lead. Go instead where there is no path and leave a trail." I love that. That's an awesome quote. Thank you, Rajeev, for being on the program today.
[00:02:41] Rajeev Bukralia: Thank you, Justin, for inviting me. And I look forward to our conversation.
[00:02:45] Justin Grammens: Oh yeah, it's going to be great. I talked a little bit about your academic career in your introduction, but you've also obviously done a lot in business, including serving as a CIO. Usually when I start talking with people, I ask them to paint the trajectory of their career.
You know, you've been doing this for a long time. So how did you get, in the large arc, from point A to point B, from coming out of school all the way to where you are today?
[00:03:10] Rajeev Bukralia: So, Justin, I can take this question in many different directions, but I will start with high school. I wanted to pursue medicine, and I studied biology and chemistry in my bachelor's degree.
But I was always intrigued by how our brain works, and I always asked questions like what would happen if a computer could one day do things that a human brain is uniquely suited to do? Things like, can a computer think? Or can a computer have self awareness? So those were the intriguing questions for me at that time.
I was a programmer who learned computer programming on his own. When the opportunity came, I pursued a master's and doctorate in information systems. And in my doctorate, when I was doing research, I came back to my passion from high school, asking those questions about computers' ability to do cognitive work like the human brain does.
I pursued my doctorate focused on machine learning. At that time, machine learning did not have the same level of excitement as it does today. Many people thought it was a dead field because, you know, we over-promised and under-delivered in artificial intelligence over decades, right? It's not a new field.
There was a lot of excitement about artificial intelligence in the 1940s and 50s and so on. Then we went through AI winters and there was not much going on. Then suddenly we saw the field changing dramatically, and everybody got excited about AI and machine learning. And then there was the emergence of a new term called data science.
I still remember seeing the Harvard Business Review cover article called "Data Scientist: The Sexiest Job of the 21st Century." That got me really thinking that data is extremely powerful. In my view, there are two strategic assets that any organization can have.
One is the quality of the people they hire, whether they hire talented people who are loyal to the cause of the company. And the second is data. If the company has the right type of data in the right amounts, and they know how to best leverage it to foster innovation and create business value, then those organizations are going to lead the industry.
Eventually, through many different jobs, like computer programmer and things of that nature, I got to the point of becoming a CIO. And there I pursued the passion from my doctorate: leveraging data to create value through AI and machine learning. That gave me more background on the problems we face in that respect.
And I have always loved working with students, which goes back to my high school days. I was a substitute teacher in my own high school, teaching 10th grade students when I was a 12th grade student.
[00:06:37] Justin Grammens: Ah, nice.
[00:06:37] Rajeev Bukralia: And that passion for teaching and helping students succeed led me to the faculty job I have at Minnesota State Mankato.
There I launched the MS in Data Science program, which is one of the most successful programs not only at my university but in the system. I have also started many data science and AI related initiatives that we can talk about. So that's where I started, in high school, and that's where I am now.
[00:07:09] Justin Grammens: That's great. Yeah, I can sense this common thread of giving back, learning things along the way, explaining them in succinct ways, and starting organizations, even within organizations, right? It sounds like you've started data science programs at the various universities you've been at.
You know, I teach at the University of St. Thomas as an adjunct. I do night classes there for master's students going through the software engineering program, and I teach a class on Internet of Things and machine learning and AI as well. So it's really fun to work with students who are passionate about wanting to learn and very excited about it.
Tell me a little bit about Dream.
[00:07:49] Rajeev Bukralia: So Dream is a student organization, an award-winning student organization. The story of Dream is that when I arrived here at MSU Mankato about seven-plus years ago, I was thinking out loud in one of my classes that we did not have a course in data science, because it was a sort of new field back then.
We did not have a program. So how could students learn data science when there were plenty of industry jobs? I was thinking out loud that it would be great to have a student organization focused on data science and artificial intelligence that could provide students learning opportunities outside the classroom.
And two students reached out to me after the class and said, you know, we really liked the idea of creating a student organization focused on data science and AI. How could we start? So we created a student organization. We had only a few students in the beginning, but there was a lot of excitement about it.
There have been many articles written about Dream; if you do some searches, you will find them. We started to meet in the evenings, and I started to teach those students. Basically, we were doing hands-on projects, and I was mentoring research projects related to data science and AI.
Our intention was to help students learn the basics of data science so they could be competitive for IT or computer science related jobs after they graduate. Slowly but surely, that organization picked up a lot of momentum. People saw a lot of value in it, and within a couple of years, we reached over 300 students.
The organization has received several awards, and several stories have been written about it. Our goal is to provide students learning opportunities in the AI and data science space. So we bring in industry partners: Microsoft has done trainings for students, and other organizations like SAS have done a lot of data analytics training.
So we have industry partners who do these trainings for students. We have industry speakers who come and talk about data science, AI, or IT related challenges, so students gain some understanding of the complexity of industry work. We also organize hackathons, and we do a lot of research projects.
So students get some mentorship, and students learn from each other. Graduate students especially, when they are working on research projects, go out and teach other students how to solve a problem. For example, I recently have a graduate advisee who is working with me on the problem of identifying black ice using deep learning. It's a big problem; a lot of people get injured, right? And there are some unique reflective properties of ice that make image recognition quite complex. This student has been working on solving that problem, and he has written a couple of papers about it. So I asked him, I said, you know, teach the students how to do image recognition through Dream.
So now students are teaching each other. That's one example, right? We also create opportunities so students can network with industry professionals; mentorship from industry professionals is important. So these are the sorts of things Dream does. Initially, the organization was primarily male students, because our field of computer science is heavily dominated by males.
So it became an objective for us to bring in people from diverse backgrounds. In the last three or four years, the Dream executive board has been led by women. The president of Dream in the last few years has been a woman, and we have about 45 percent women, which is astonishing because we see less representation of women in computer science and engineering related fields. But the thing is, data science is eclectic and multidisciplinary.
As much as it can contribute to solving problems in other domains, whether it is sociology or linguistics or business management, data science benefits from other fields as well. So like if we talk about ethics of artificial intelligence, that's not a problem of just computer science or just math. It is going to be a highly transdisciplinary question.
So it's a bidirectional thing. DREAM has benefited from engaging with students outside of computer science, IT, and math backgrounds. We have partnered with students from disciplines like biology, who are asking questions about AI, but a bit differently.
[00:13:05] Justin Grammens: Yeah, this is great stuff.
I'm just trying to process everything you brought in, because first of all, everyone needs to be data literate, right? I think that's the point you were making there at the end: regardless of your background, data is going to be important. We'll definitely put links out to Dream in the liner notes for this episode, but I'm curious, you said 300 students. Is that just at MSU Mankato, or is this a program available to people outside of your college?
[00:13:32] Rajeev Bukralia: This has been limited to MSU Mankato. So this is a student club at MSU Mankato, but the beauty is that students come from a variety of different backgrounds. Now, we have collaborated with other clubs at different universities, but this is basically an MSU Mankato student club. The interesting thing is the size.
Consider that we are the second largest public university in the state of Minnesota.
[00:14:00] Justin Grammens: Oh, really? Okay.
[00:14:01] Rajeev Bukralia: You know, after the University of Minnesota, Twin Cities, we are the second largest public university. But even in that context, having 300-plus students focused on data science and AI, I think, is intriguing.
[00:14:15] Justin Grammens: That is. That's a huge number, which is a testament to the fact that if you're in the program at MSU, you want to be involved in this. And the thing that I think is really cool is you talked about different perspectives. When I teach my class, I always have at least a couple of guest lecturers come in from industry.
So the students don't just hear my perspective. I make it a point to bring in at least two people who can talk about AI, machine learning, and Internet of Things in their specific contexts, because students really need to see perspectives other than just a professor's, I believe.
[00:14:51] Rajeev Bukralia: Yes, you are right about that.
[00:14:53] Justin Grammens: And the other thing I think is very interesting is that once you start having to teach people something, you really have to know your stuff. So with these students teaching other students, yes, the students being taught benefit from it. But I would say just as important, if not more so, are the students doing the teaching; they have to understand the material inside and out to be able to turn around and teach it to their fellow students.
[00:15:15] Rajeev Bukralia: No doubt. I always say, Justin, that if you want to understand something better, try to do it. And if you want to understand even better, then try to teach it to others.
[00:15:31] Justin Grammens: Yeah. And I just saw something from Richard Feynman, right? One of the great physicists. He expanded it even further; he said, try to teach it to a six-year-old.
His concept was, can you make it simple enough that you can teach it to somebody who is very, very young, which is an even bigger challenge.
[00:15:47] Rajeev Bukralia: no doubt, no doubt.
[00:15:48] Justin Grammens: Very good. Well, you've been doing a lot of work, as I mentioned at the beginning, around responsible AI and its ethical implications.
People have heard of this term XAI. I'd be curious for you to break that down for our listeners: what does it mean, and why is it such an important part of the field of artificial intelligence?
[00:16:06] Rajeev Bukralia: You see, there are some major concerns about AI and machine learning, deep learning.
Many of those concerns are very valid. So go back and see how we got here compared to where we were in, say, the early 2000s: what has changed in the field? We have picked up momentum, and because of that momentum, we are seeing a lot of positive developments. But on the other hand, we need to also focus on the flip side.
The positive developments have happened because of certain factors. One is that we have more data than ever before, right? The datafication of everything: sensor technology has gotten better and sensors have become very cheap. Now we are carrying smartphones which have lots of sensors, you know, a Wi-Fi adapter, Bluetooth adapter, gyroscope, accelerometer, et cetera.
So we are creating a lot of data, and our capability to store data has also improved simultaneously. In the past, you know, we were focused on enterprise data, which is still very important. Enterprise data was primarily structured data, and we had relational databases that are best suited to handle that type of data.
But now we have a lot more unstructured data rather than structured enterprise data, and that's the data coming from social media, from mobile phones, et cetera. So we have figured out better frameworks for storing and harnessing that data. Instead of the ACID framework, it follows the BASE framework. They're different.
It's not that one is better than the other; you have different use cases for different technologies. So now we have NoSQL databases. We have figured out distributed architectures for data storage and for processing the data, like the Hadoop ecosystem, and now we have Spark. So the technology has improved alongside the datafication of everything.
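To make the ACID-versus-BASE point concrete for readers, here is a minimal sketch of the kind of distributed processing Rajeev describes, assuming a local PySpark installation; the input file and field names are hypothetical placeholders.

```python
# A minimal Spark sketch, assuming PySpark is installed (pip install pyspark).
# The input path and fields are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

# Start a local Spark session; on a real cluster, master() would point
# at the cluster manager instead of local threads.
spark = (SparkSession.builder
         .appName("unstructured-demo")
         .master("local[*]")
         .getOrCreate())

# Semi-structured social media posts stored as JSON lines, the kind of
# data a strictly relational, ACID-oriented store was not designed for.
posts = spark.read.json("social_media_posts.jsonl")

# Aggregate in parallel across partitions: post counts per user.
counts = posts.groupBy("user_id").agg(F.count("*").alias("num_posts"))
counts.orderBy(F.desc("num_posts")).show(10)

spark.stop()
```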
Then think about our ability to do deep learning through better linear algebra calculations with the help of GPUs. CPUs are not inherently as good at solving linear algebra problems as GPUs are. GPUs have been around, but they were basically meant for gaming. In the last 15 years or so, we have figured out how to use GPUs for linear algebra calculations, which is a lot of what machine learning is about. That was a very big gain, right?
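As an aside for readers, Rajeev's GPU point can be illustrated with a small timing sketch, assuming PyTorch and a CUDA-capable GPU are available:

```python
# A small sketch, assuming PyTorch (pip install torch) and a CUDA GPU,
# timing the same large matrix multiplication on CPU and GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
c_cpu = a @ b                          # matrix multiply on the CPU
cpu_s = time.perf_counter() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()  # move the operands to the GPU
    torch.cuda.synchronize()           # wait for the transfers to finish
    t0 = time.perf_counter()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()           # GPU kernels run asynchronously
    gpu_s = time.perf_counter() - t0
    print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s")
else:
    print(f"CPU: {cpu_s:.3f}s (no CUDA device found)")
```

On typical hardware the GPU finishes the multiplication many times faster, which is the gain Rajeev refers to.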
Then if you think about the investment in technology in the past, it was mostly about government funding. Organizations like the U.S. Department of Energy or the National Science Foundation provided funding to do basic research. But in the last couple of decades, what has happened is that industry has taken the lead role in research.
Industry is investing a lot more money in research. Organizations like Google, Facebook, and OpenAI are all spending tons of money on R&D, and they're outpacing public investment. Combine all of those things together, and that has created a big momentum for AI. But if you think about it, there are pieces to AI.
AI is a very broad field, and not everything in AI is garnering attention. There are ideas about using symbolic systems with, say, deep learning or neural networks, but much of the action in AI is primarily in certain areas of deep learning and natural language processing, or NLP.
People are not thinking much about evolutionary computing, for example, or expert systems; those are also part of artificial intelligence, right, as are symbolic systems. What has happened is that the deep learning side of things has grown tremendously, and now, with large language models, we are seeing amazing capabilities in NLP.
So that's happening. And along those lines, we have seen amazing cases where AI is solving problems related to cancer diagnosis, or finding effective drugs, or customer churn, you name it. People are applying AI to solve those problems, and in many cases it has delivered some good results.
On the other hand, if you think about any technology, whether it is as simple as a hammer, a hammer is a technology. It is going to have two sides. It is effective when you apply it to solve the correct problem. So one of the challenges we see in the AI space is when AI is applied to solve any problem, even though inherently it may not be a good solution.
That's one. The second thing is, like any tool, it is going to have some flip side. A hammer can help you build a table, but a hammer can also hurt someone's head if not used properly. The same is true with AI, like any other technology. So where we have the challenges in the AI space, in terms of its ethical side, sociological side, legal side, those things are still being explored.
And that to me is really intriguing. Where XAI comes into play is transparency: how do we know whether a model is transparent? Say I go to a doctor, and the doctor tells me, okay, you have this condition. I have an AI application, I have entered all your data into that application, and this is the diagnosis I'm getting, which I agree with.
And this is the possible course of treatment. I don't know how the AI is suggesting that treatment, but I know from previous experience that the AI has been generally correct. In that scenario, let's say as a patient I take the medication, I get some serious side effects, and I sue my doctor. The court is going to ask, how exactly did the AI recommend that treatment?
Can you explain that? So explainability is directly tied to transparency, and it has much more of an impact in certain disciplines than in others. In healthcare, for example, you need greater transparency in AI solutions because people's lives are at stake. If the case goes to court, the court asks, explain how the AI model made this decision or recommendation. If we used deep learning to create that model and we cannot explain it, it may be effective, but it cannot be explained. So when it backfires, it creates lots of implications: sociological implications, legal implications, ethical implications. So I became more interested in looking at different aspects of the ethics of AI, which some people put under the umbrella term of responsible AI.
XAI is a part of responsible AI, where our goal is to think about the interpretability of a model, and about improving transparency, so we understand how a model actually made certain choices.
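For readers who want something hands-on, one widely used XAI technique (not necessarily the one used in Rajeev's own research) is SHAP, which attributes a model's prediction to its input features. A minimal sketch, assuming scikit-learn and the shap package, on synthetic data:

```python
# A minimal explainability sketch using SHAP feature attributions,
# assuming scikit-learn and shap (pip install shap). The data is
# synthetic; a real case might use patient records instead.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # four made-up input features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# For each prediction, TreeExplainer reports how much each feature
# pushed the output up or down relative to the average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)                     # one attribution per feature, per row
```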
[00:24:37] Justin Grammens: Yeah. I think one of the things that comes to mind as you're talking is the whole blunder that Google had with Gemini, right?
This happened about a week ago, where it was generating images of Black people in completely wrong historical contexts; it placed them in organizations that were, historically, racist against Black people. Those were the images Gemini was creating, and the CEO has had to go out and essentially apologize up and down that they messed up, which obviously they did.
And this is a tough thing. These models are massive, right? You're going to get some things wrong along the way; I don't expect things to be perfect. But I think a lot of people in the general public just didn't realize that these image generators could, would, and probably will produce offensive output unless guardrails are put in. And even as much as you try to put the guardrails in, things can happen that are very offensive to others. So as you're looking at this, tell me a little bit more: are you looking at how organizations need to put these guardrails in?
I guess I would agree, and I think a lot of people would agree, that we need to make sure XAI is a component of this. I'm curious what angle you're taking with your research.
[00:25:50] Rajeev Bukralia: Yeah, so one aspect is explainability, right? So basically, XAI is focused on explainability of AI solutions.
So if I cannot explain something, that poses certain risks, even though the final solution may be effective. And that risk can hurt trust.
[00:26:12] Justin Grammens: Mm hmm. Sure.
[00:26:14] Rajeev Bukralia: And if people raise an alarm because they were disadvantaged in some form or fashion by an AI solution, and we cannot explain how it actually happened.
That, to me, is a serious ethical, legal, and sociological issue. So whether AI moves forward, and at what pace, is going to be determined partly by public trust. Up to this point in time, AI has grown so much because of all the factors I talked about earlier: the datafication of everything, the ability to do linear algebra with GPUs, the industry investment, and so on.
Moving forward, I think public trust is going to play a critical role. There could be a lot of backlash against AI if AI solutions put certain people or certain communities at a disadvantage and the organizations that produce those solutions are not able to explain how exactly the model made those decisions.
So, for example, say people of a particular race are disadvantaged by a bank when they apply for a loan, because there is some bias and there are some inaccuracies in the data. If you think about it, where does AI get its bias? Partly it gets its bias because we have been biased as humans, right? We still hold some responsibility: for what type of data we feed in, how much bias is in that data, and how we manage that bias. Being very thoughtful about that is part of XAI, because you have to explain your solution.
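As a concrete illustration of the loan example (and not anything drawn from Rajeev's own work), here is a minimal bias check one could run over a model's decisions, assuming pandas; the group labels and data are hypothetical:

```python
# A minimal disparate-impact check on loan decisions, assuming pandas.
# Column names and data are hypothetical placeholders.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Approval rate per group; a large gap is a red flag worth explaining.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# A common heuristic (the "80 percent rule"): the lowest group rate
# should be at least 0.8 times the highest, or the model's decisions
# deserve closer scrutiny.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f}")
```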
So, to me, moving forward, Justin, public trust is going to play a critical role. And public trust will decide how regulation is shaped:
whether there is any regulation, to what extent, and how all the different sides of it are balanced. To me, XAI and responsible AI play a critical role in whether AI will live up to its promise, or whether we will make serious blunders and basically won't be able to use the technology to its potential in the future, because we may over-regulate it, for example.
This is going to be a hard question, something I know a lot of people are wrestling with. We don't quite know what the right level of regulation is, but as more and more people get worried because they were disadvantaged by AI solutions, more and more lawmakers will be pushed to think about regulating AI.
When we regulate AI, there will be some positive things about it, because now we are setting the rules of the game. We will possibly have some discourse about mitigating risks. We will know what is acceptable and not acceptable in civil society when AI solutions are deployed, and what sort of punitive measures will be taken against companies that deploy AI solutions that go rogue or put people of a certain kind at a disadvantage.
So that would be the positive side. The negative side is, will it hurt our ability to innovate?
[00:30:04] Justin Grammens: Yep. The speed of innovation.
[00:30:06] Rajeev Bukralia: The speed of innovation could be compromised, and AI does have the potential to solve very difficult, complex problems in different sectors. That, to me, is the major challenge in terms of balance, where XAI and responsible AI are front and center.
[00:30:25] Justin Grammens: Sure, sure. That was well put. And you know, a lot of things come down to trust, right? These technologies are very new. I was just talking with somebody on the last podcast about this, and it was actually quoted in a book I'm reading right now, The Coming Wave by Mustafa Suleyman, where he says, look, we're back in the Newtonian days, or the early days of Copernicus.
Back then, we were just trying to figure out how the universe worked, right? We were trying to find some basic laws. And we're so early in AI that in a lot of ways we can't explain a lot of this stuff right now, like you said. And we absolutely have to, of course. But it made me think, my gosh, yes, people are tinkering and experimenting; we're still in the lab with some of these things.
And you know, even the people who built OpenAI and ChatGPT, with their billions and billions of parameters, I think they were surprised by what came out at the end of the day, right? Even they were like, wow, this thing is working way better than I thought. So what does this mean? Have we unlocked something even more powerful than we had known?
So trust across the board is obviously going to be very important to all of this. And I feel like with data science, if you go back 15 years or so, there are formulas and basic linear regression algorithms that you can point at and say, look, this is why we got the answer that we did.
I mean, it's all foundationally built on math. Right now, with neural nets, we don't really know what's happening inside to produce the output; there's no equivalent of a confusion matrix or an R value or any of that type of thing for the inner workings. People are building such tools now, trying to figure out whether one model is better than another. But until we can give somebody a logical explanation of why a system is doing what it's doing, it's going to continue to be a sort of black magic, and that's not good.
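To make Justin's contrast tangible: a fitted linear model exposes its reasoning directly through its coefficients, something a deep network does not offer. A minimal scikit-learn sketch with synthetic data:

```python
# A minimal sketch of an inherently interpretable model: a linear
# regression whose coefficients state exactly how each input moves
# the prediction. The data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 5.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

# Each coefficient is a direct, auditable explanation: roughly "one
# unit more of feature 0 raises the prediction by about 5". A neural
# network has no equally direct readout.
print("coefficients:", model.coef_)
print("intercept:", model.intercept_)
```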
It's not good for our general society to just say, oh, you know, it's this AI agent. Imagine if these systems are in the position of a judge, right? Imagine if they're deciding whether to send people to jail based on certain rules and laws. It is super important for us as a society to understand how those decisions are being made.
It affects everybody, whether you're an average citizen, a doctor, or someone driving down the street. Think about the first woman killed by an autonomous vehicle, right? It was in Arizona, they were doing testing, and the scene was misinterpreted: the system thought she was just a signpost. Very, very important. So yeah, these are very interesting topics, and until they get resolved, I think it creates an interesting game. The other thing I would like to get your opinion on, as I'm thinking about it right now, is that this is an election year too.
So there are things like deepfakes, images that can be generated by anybody. That's another concern I have: how easy it is, and how much harm you can do, with a couple of keystrokes and social media, to tilt a lot of people's perceptions, right?
[00:33:23] Rajeev Bukralia: Mm hmm. And I think that's where, Justin, we need to really come up with some rules of the road.
You have so many startups, and these startups are trying to gain momentum in the marketplace. So there is an incentive to go rogue, so to speak: just produce something that gets people excited, without having thought through all of its implications. There is not even time to think about implications.
Because everybody is in cutthroat competition. That's where I see the role of regulation. For example, will some law require a watermark that cannot be manipulated on any image created by generative AI tools? Or how about the idea of using blockchain? The purpose of blockchain was to track the origin of something in a way that is immutable, that you cannot modify.
How about using blockchain with deepfakes, to see where a deepfake image actually originated? So there are some things we need to do technically, but also through regulation and education. There are going to be people who have an incentive to use AI tools to manipulate elections and to spread misinformation.
We need to find a way to limit that type of use through regulation, through technological tools that can do that, and by educating the general public at the same time. That, to me, is the possible solution, although it is easier said than done, because we have not been able to agree on how to move forward with AI regulation.
Because again, you have different parties who have different types of vested interests. And you have to manage all those interests.
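As a toy illustration of the provenance idea Rajeev floats (a sketch, not a real distributed ledger), each record of a generated image can commit to the record before it, so later tampering breaks the chain:

```python
# A toy sketch of blockchain-style provenance for generated images:
# each record hashes the image bytes plus the previous record's hash,
# so any tampering is detectable. Not a real distributed ledger.
import hashlib
import json
import time

def make_record(image_bytes: bytes, creator: str, prev_hash: str) -> dict:
    entry = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "creator": creator,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash the record itself so it cannot be edited after the fact.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

def verify(chain: list) -> bool:
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev_hash"] != prev:
            return False
        prev = entry["hash"]
    return True

chain = [make_record(b"generated-image-1", "gen-tool-x", "genesis")]
chain.append(make_record(b"generated-image-2", "gen-tool-x", chain[-1]["hash"]))
print(verify(chain))  # True; altering any earlier field makes this False
```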
[00:35:38] Justin Grammens: Yeah, yeah. So put on your CIO hat now. Right now you're in academia, but you've been a CIO. What are some things that CIOs are thinking about?
Because they're coming at it from a different angle, right? Like, what does AI mean to them? Maybe you could talk a little bit about that, as if you're sitting in that seat.
[00:35:55] Rajeev Bukralia: Yeah, so I think CIOs, with AI as with any technology, have to look at the value proposition. They have to look at the cost-benefit.
They have to ask, does AI make things simpler for their users? Does it make things more cost-effective for their users? Does it add any new features that their users want? Those are the basic questions. So if they are building AI solutions to predict the customer journey in the organization, that makes sense if that's their business goal.
AI should be aligned to the business goals, not the other way around. Many times I see organizations that basically jump on the AI bandwagon and say, well, everybody is doing AI, so we should invest our money and find AI solutions. Well, first we need to really think about what business problems we have, and also about how AI aligns with our strategic goals.
Every organization has some strategic goals. They have a strategic plan; they have a mission. If AI solutions do not contribute to their strategic goals, that's a waste of money and effort, right? So there's a very basic question for any CIO to ask: am I using this technology to enhance the business value of my organization?
Am I using this technology to create some strategic or competitive advantage? If the answer is yes, then investigate it. So the first question is, is this worth doing, right? And you look very pragmatically at whether it's worth doing, not just at whether everybody says it is the right thing to do.
Don't jump on the bandwagon. If it's worth doing, then the next question the CIO should ask is, are we equipped to do it? As an analogy, let's say AI, some fancy LLM, is like a Lamborghini, right? Well, it would be great to have that Lamborghini. A lot of people would be excited that we have that Lamborghini, that new fancy LLM. But we need to think about what fuel that Lamborghini takes.
A Lamborghini is not going to run on kerosene. Most organizations are not very good at data governance. They don't have the right type of data that could be used for decision making, and if they do have that kind of data, it is not in a readily usable form. People don't know who the data steward is, who maintains the quality.
So if you have a Lamborghini running on low-octane fuel, it's not going to work very well. You invested money in your Lamborghini, but your fuel is not right.
[00:38:42] Justin Grammens: Right. Yep.
[00:38:43] Rajeev Bukralia: Well, data is the fuel for artificial intelligence. Having high-quality fuel means you have an excellent data governance process: you know what data is confidential and what is not, you maintain data quality all along, and data is stored in the right type of databases so it can be harnessed easily.
And you have the right type of data suited to solve a specific business problem. Then you can buy the Lamborghini, because you have the right type of fuel, and you are more likely to succeed in your efforts. Many organizations are investing a lot of money in AI solutions when they don't fully understand their business problems.
And number two, even if they do have a good understanding of the business problem, they don't have the right data infrastructure. They are going to put poor-quality data into fancy AI solutions; it's like putting kerosene in a Lamborghini and then saying it doesn't run very well. So that's a problem I see.
So from the CIO standpoint, in addition to asking those basic business questions about the value proposition, strategic goals, and how AI aligns with those goals, the next thing they need to worry about is not how to purchase a fancy LLM tool. It should be, are we creating the best data infrastructure that can leverage fancy AI applications?
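To ground the fuel-quality metaphor, here is a minimal sketch of the unglamorous checks a data governance process might automate before any data feeds an AI solution, assuming pandas; the file, columns, and thresholds are hypothetical:

```python
# A minimal data-quality audit sketch, assuming pandas. The input file,
# columns, and thresholds are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("customers.csv")      # hypothetical data extract

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    # percentage of missing values per column
    "missing_pct": (df.isna().mean() * 100).round(1).to_dict(),
}
print(report)

# A simple governance gate: refuse to feed the model low-octane data.
if report["duplicate_rows"] > 0 or max(report["missing_pct"].values()) > 20:
    raise ValueError("Data fails governance thresholds; fix before modeling.")
```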
[00:40:18] Justin Grammens: Yeah, yeah.
[00:40:20] Rajeev Bukralia: And that is not as exciting.
[00:40:25] Justin Grammens: Yeah, data's always boring.
[00:40:27] Rajeev Bukralia: That's boring. Data governance is boring, and data governance is also a very political process in organizations, because IT does not have access to all the data. Many times the data resides in shadow systems. And a lot of organizations are still paper-based organizations, right?
Even if they have created some forms, they are PDF forms, and you're not able to scrape them. So the data is not in the usable form that is needed for AI solutions, and getting it there takes a lot of time, commitment, and money. But that would be the right approach, because if you have high-quality data as an organization, then you can use AI to your advantage.
And you can minimize your risks. Most AI solutions in organizations fail for a few primary reasons: problems with data, disconnect from the business proposition,
[00:41:30] Justin Grammens: Right, right.
[00:41:31] Rajeev Bukralia: and people's reluctance to use AI solutions, because in some cases people will inherently be thinking: if a company deploys an AI solution to do something more efficiently, will that have an effect on jobs?
Will I lose my job tomorrow because the company is deploying an AI solution? That question relates to change management, and CIOs are uniquely suited to foster change management through technology. If there is huge organizational pushback against utilizing AI solutions, the process is going to become political, and it will create headaches for CIOs.
So change management and thinking about the people angle is also important.
[00:42:25] Justin Grammens: Yeah, and I think that's true of not just AI but any new technology you bring into an organization. I worked with a lot of companies trying to add automation using Internet of Things technologies, right? Making their equipment on a factory floor, for example, more intelligent.
But the question was, again, more on the personal side. A lot of the workers we were expecting to get the data from didn't actually want these machines to do what they were supposed to do. They had this thought that the machines were going to take over their jobs, when in effect the goal was to make them more efficient.
But if you don't solve that personal side and get people on board to see the benefits, you struggle. A lot of these organizations, and probably in your case too, have been around for 40 or 50 years. They've done what they've done using manual labor, using humans to do a lot of the work.
And now within the course of the past 16 months or so, we've started to see the power of large language models and what ChatGPT can do, and they're not ready to change yet, right? So what I talk to companies about is to start with something very small that is easily digestible for everybody across the organization.
But yeah, you need everyone to be on board. It's not just the C-suite; it's all the way down to the people who are going to be interfacing with the system, to have any chance of success.
[00:43:38] Rajeev Bukralia: Everyone whose job may be impacted by AI technology in any shape or form. And I agree with you. I think a pragmatic approach could be to start with low-hanging fruit.
See what AI can do to create substantial business value at minimal risk and minimal cost. So we should be a little more thoughtful and, at the same time, improve our competency: make sure we have competent people on both sides, the technology side as well as people looking at the business implications of the technology. We need to bring those parties to the table, take a small project, and learn from it.
We will make mistakes, and that's not a big problem as long as we learn from those mistakes rather quickly. So those would be my thoughts on what CIOs could do about AI. Although, one thing I would say is that a lot of what we have said about AI is applicable to any technology, and certainly CIOs manage and lead technological efforts.
One thing is different about AI, though. You see, this is a technology that can improve by itself. There has not been any other technology up to this point in time that can improve by itself. So that raises some interesting questions for CIOs.
[00:45:07] Justin Grammens: Yes, yes. And you can even ask it, how can I best use you? Right? That's what I think is so fascinating about a lot of these things. I'm working on a presentation right now that walks somebody through a product development roadmap, from ideation all the way to implementation. So we generate a requirements document, but what I've had fun with is loading it into the large language model and asking, what have we missed?
This is a medical device, so give me some things that we missed, because we're all human and we created this thing. What are some areas where there might be holes? And it's those types of questions that I don't think people are really considering yet.
I certainly wasn't when I first started thinking about this. A lot of people approach these models as more of a Google search, right? Give me some information and I'll get something back, which in a lot of ways doesn't work very well if the data isn't up to date, there are biases in there, and it's making things up with hallucinations and all that type of stuff.
But what's more fun to me is actually training it on your data and then saying, give me some more insight here; help me do my job better. To me, that is a completely different paradigm shift compared with any prior technology. We've never had a hammer that tells you how you should use the hammer, right?
Or one that you could interface with and ask to improve over time; it's always been a human that needed to do that. Now this new technology is going to be able to do some of those things, which is what I think you're hinting at here.
[00:46:25] Rajeev Bukralia: You're absolutely right, because this is a technology that is so creative and constantly changing, improving, learning, and evolving.
No technology has been like that, and it can do it without much human intervention; granted, we are teaching the technology to get better by prompting it right. I think another opportunity for CIOs is to really improve the knowledge bases in their organizations through custom GPTs. Every organization has some valuable data.
Think about help desks, for example: a technical help desk has a large knowledge base that people go through. You can train a custom GPT on your private data, and that could be effective; the more contextual information it has, the better the answers. So I think custom GPTs are definitely something that will see a lot more adoption, like more precise chatbots.
In the past, chatbots were just, frankly, stupid, right? They couldn't answer basic questions. Now, with custom GPTs, they are becoming a lot more personalized, a lot more specific. So I think we're going to see huge developments there that will offload some of the load in certain departments, especially customer service, tech support, that type of thing.
And that's a much easier proposition, because the base GPTs are trained by organizations like OpenAI, so you don't have to invest the billions of dollars those organizations have invested. You take the learnings from them and then further train the model with your own data. So that's a less risky proposition that delivers greater value.
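As a sketch of the custom-GPT idea Rajeev describes, retrieval over private documents combined with a hosted model, assuming the openai Python client and scikit-learn; the documents, model name, and question are all hypothetical placeholders:

```python
# A minimal retrieval-augmented sketch of a "custom GPT" over private
# help-desk notes, assuming the openai client (pip install openai, with
# OPENAI_API_KEY set) and scikit-learn. Documents, model name, and
# question are hypothetical placeholders.
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "To reset a campus password, visit selfservice.example.edu.",
    "VPN errors after an update are usually fixed by reinstalling the client.",
    "For printer queue issues, clear the spooler and re-add the printer.",
]
question = "How do I reset my password?"

# Retrieve the most relevant private document with plain TF-IDF.
vec = TfidfVectorizer().fit(docs + [question])
sims = cosine_similarity(vec.transform([question]), vec.transform(docs))[0]
context = docs[int(sims.argmax())]

# Ask the hosted model, grounded in the retrieved private context.
client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system", "content": f"Answer using this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(reply.choices[0].message.content)
```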
[00:48:10] Justin Grammens: Yes, this is good. Well, boy, we could probably sit here and talk for another hour; this has been going on for quite some time and we've just scratched the surface on a lot of these things. So, Rajeev, how do people get ahold of you? What's the best way to reach you?
[00:48:23] Rajeev Bukralia: I think people can just Google me and then they will get to the university page and get my information from there. Before I leave, I do want to talk about an exciting event that is coming up.
[00:48:35] Justin Grammens: Please do.
[00:48:36] Rajeev Bukralia: So, April 6th and 7th, we will be organizing the Midwest Undergraduate Data Analytics Competition, also known as MUDAC, at Minnesota State University in Mankato.
And this is a 24-hour, nonstop data challenge. It has been around for 12 years, and in 2019 we brought the competition to Mankato. We did it in 2019 and 2020, and then COVID happened. Now, in 2024, we are back on track. We take problems from big organizations, and we never disclose the problems to students in advance.
So this is a very rigorous competition where students of merit come to compete, and we make it as tough as possible because we want the best of the best to come out. In 2019, our competition was focused on Minnesota water quality, and the data was sponsored by the Minnesota Pollution Control Agency.
Then in 2020, we focused on a problem related to improving the judicial system, sponsored by Thomson Reuters. This year, it is going to focus on agriculture, and the problem is sponsored by the Agricultural Utilization Research Institute, AURI. It is going to be something valuable for all Midwest states, or agriculture-focused states.
So this is a unique opportunity where students gather from different universities. We will have lots of big research universities and small private colleges. They will all come here Saturday, April 6th, and we will reveal the problem to them. Each student team will get a classroom, and for 24 hours they will work nonstop, although they will bring their sleeping bags and can certainly sleep as needed.
The next day, we have a judging process with 75 to 100 industry judges, and each team will go through many different layers in the judging process before the final round happens. So this is an exciting event, and it is something I would love to see more industry people involved in, especially from Minnesota and surrounding states.
It's a great opportunity to meet talented students, the best of the best, under one roof. So anybody who is listening to this podcast, I encourage you to learn more about MUDAC and maybe be part of the event in the future.
[00:51:14] Justin Grammens: Excellent, excellent. And this is something you guys just decided to start doing.
I love the idea of these hackathons. I've run a couple in my career, more focused on web technologies, and we also did an Internet of Things hack day where people came in and tried to build an IoT device in 12 hours. There is something very special about having a deadline, right?
A lot of people work on things here and there, but if you have something you need to get done in a fixed amount of time, the team really gets focused and pulls things together. And I love that you're doing something socially beneficial; every event here is related to something that will benefit society at a larger scale.
So even if people don't attend this one, they can search for MUDAC, find it on the internet, and attend future ones as well. This is fabulous. All right, well, Rajeev, I appreciate your time today and your insight. That was a really great way to break everything down for our listeners in very clear and concise terms.
I can see why you're an educator, and why people enjoy taking your classes: you make a lot of sense, and you make these complex subjects a lot easier to digest and a lot simpler for us. So thank you again for your time, and I look forward to keeping in touch.
[00:52:22] Rajeev Bukralia: Thank you, Justin, for giving me this opportunity.
[00:52:26] AI Voice: You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at AppliedAI.MN to keep up to date on our events and connect with our amazing community.
Please don't hesitate to reach out to Justin at AppliedAI.MN if you are interested in participating in a future episode. Thank you for listening.