Conversations on Applied AI
Welcome to the Conversations on Applied AI Podcast where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of Artificial Intelligence and Deep Learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
Ren Bin Lee Dixon - Exploring Global AI Policies
Today we're talking with Ren Bin Lee Dixon. Ren Bin comes to the podcast with deep experience in public and artificial intelligence policy. She's experienced in the research and analysis of issues at the intersection of technology, socioeconomics, and politics, with an international and diverse professional background in academia and the public and private sectors.
She's currently a research fellow at the Center for AI and Digital Policy, where she drafts AI policy statements and comments to local and global regulatory bodies with a focus on human rights, democratic values, and the rule of law. A graduate of the Humphrey School of Public Affairs, her master's thesis, entitled "AI Governance: A Comparative Analysis of China, the European Union, and the United States," received an award for excellence in global policy in 2022. She has also previously been a marketing and PR specialist with more than 15 years of professional experience in the private sector, including starting an online retail business. Thank you, Ren Bin, for being on the podcast today.
If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup, and help support us so we can continue to put on future Emerging Technologies North non-profit events!
Resources and Topics Mentioned in this Episode
The Challenge of Regulating AI in the US – Lawmakers are struggling to keep up with AI’s rapid evolution, resulting in gaps in oversight and accountability. https://www.congress.gov/crs-product/R48555
How Algorithms Incentivise Engagement – Social-media ranking algorithms optimise for clicks and shares, often amplifying polarising or sensational content. https://pmc.ncbi.nlm.nih.gov/articles/PMC11894805/
AI as a Foundational Technology – Artificial intelligence is moving beyond niche applications to become a core infrastructure across industries and business models. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-next-innovation-revolution-powered-by-ai
The Information Integrity Challenge – The rapid spread of misinformation and disinformation is undermining trust in digital ecosystems and societal decision-making. https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/03/facts-not-fakes-tackling-disinformation-strengthening-information-integrity_ff96d19f/d909ff7a-en.pdf
Can AI Help Regulate AI? – Researchers and policymakers are exploring whether AI-based tools can monitor, audit, and govern other AI systems to ensure fairness and safety. https://academic.oup.com/policyandsociety/article/44/1/85/7684910
Ren Bin Lee Dixon: [00:00:00] Specifically for AI policy, it is a very wide-ranging field because it affects different parts. It affects governance. It involves compliance. It's also human rights. It is advocacy work. It is research work. So you can really come in from many different angles. And I'll also add, because in my conversations with people who are interested, one of the main concerns is technical understanding of the technology, and I always assure them that the most important thing you need to understand is how the technology is affecting people, affecting us ourselves. So you can really come in from many different backgrounds.
AI Announcer: Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning.
In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems [00:01:00] today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at AppliedAI.MN. Enjoy!
Justin Grammens: Welcome everyone to the Conversations on Applied AI podcast.
Today we're talking with Ren Bin Lee Dixon. Ren Bin comes to the podcast with deep experience in public and artificial intelligence policy. She's experienced in the research and analysis of issues at the intersection of technology, socioeconomics, and politics, with an international and diverse professional background in academia and the public and private sectors.
She's currently a research fellow at the Center for AI and Digital Policy, where she drafts AI policy statements and comments to local and global regulatory bodies with a focus on human rights, democratic values, and the rule of law. A graduate of the Humphrey School of Public Affairs, her master's thesis, entitled "AI Governance: A Comparative Analysis of China, the European Union, and the United States," received an award for excellence in global policy in 2022.[00:02:00]
She has also previously been a marketing and PR specialist with more than 15 years of professional experience in the private sector, including starting an online retail business. Thank you, Ren Bin, for being on the podcast today.
Ren Bin Lee Dixon: Thank you for having me, Justin.
Justin Grammens: Alright, well great. You know, I talked a little bit about sort of where you are today, being a research fellow.
Curious to know how you got into that line of work and sort of what the trajectory of your career was to get you to this point.
Ren Bin Lee Dixon: Yeah, absolutely. So I recognize that I have a rather unconventional path towards my career today. So I would go all the way back to where I'm from. I was born and raised in Malaysia.
Back then, I had earned my degree in communications, but since then I have worked in Copenhagen, Denmark. I've worked in Shanghai, China, and I'm currently based here in Minnesota in the US. Now, before diving into this career in AI policy, as you mentioned, I built a career in marketing communications, working in both the tech solutions and fashion industries.
Very [00:03:00] unconventional. But I like to think that this varied path has provided me a perspective to look at issues and problems from many, many different angles. Now, I've always had an interest in emerging technologies, and I always kept myself up to date with, you know, the new technologies that are coming out.
But there were a couple of recent events that kind of prompted me to reflect on my career and to change my path. The first one was a series of Senate hearings in 2018 on Facebook that examined the company's data protection and privacy practices. Those hearings highlighted the significant impact that social media platforms had on society, and it really shed a light on how regulatory efforts are trying to keep up with the rapid development of these technologies.
And it was really profound because, when you think about it, social media has become [00:04:00] such an integrated part of our lives, and it's influencing how we shape our ideas and how we connect with people. And then the second event was on a more personal level, where I read a few books by the writer and professor Yuval Noah Harari, where he wrote about, you know, a brief history of humankind, but also looked forward to where humanity is headed.
And then he had a very short, concise book called 21 Lessons for the 21st Century. So this series of books really highlighted to me that AI was going to be a very profound and foundational element in a lot of emerging technologies that are going to be used across different sectors and different aspects of our lives.
And I realized, with this technology coming and seeing how social media has affected our lives, we needed to be ahead of this development. And so that prompted me. I realized I had a lot of questions I needed to ask; I needed to figure out what the [00:05:00] questions were. So that helped me make the decision to go back to school to pursue a master's in public policy, and I decided to concentrate on AI governance.
So that's kind of what led me to where I am today.
Justin Grammens: Wow, that's fabulous. You know, a lot of times guests come on and we only get a chance to talk for like literally a couple minutes before this, so I don't really know a whole lot about your background, but as you're talking, boy, you're hitting all of the notes for me.
In fact, as you were speaking, you probably saw me doing some thumbs up and stuff, because Yuval Noah Harari, I just absolutely love him. I have mentioned him on this podcast, and I'm assuming you've read the one he came out with this past fall, Nexus?
Ren Bin Lee Dixon: Yes, I have read that one.
Justin Grammens: And that one completely blew my mind, because it really talks about these social networks where you can get in your own cocoon, I think is the word that he starts using for it.
You're basically entranced by all of this information around you, and it's very, very difficult to break out. I think I can see the overlap with regards to 2018, kind of what you were talking about first. Was that related to [00:06:00] the election, uh, a lot of the stuff that was coming out on Facebook, that sort of sparked that?
Ren Bin Lee Dixon: I think yes. That's partly what triggered the Senate hearings and also started to shine a light on how Facebook was managing user data. So that was a big case. And I know that there was a separate investigation itself into Cambridge Analytica, which was the company that kind of, you know, started the whole thing back in 2016.
Justin Grammens: This technology is getting better and better, and what I remember from the book is we think that if we put more information in people's hands, they're going to get smarter and we're going to actually reach more, I guess, common ground. Everyone's gonna be able to take a look at the data and understand it, and the world will be a better place the more data we have.
And it turns out that that's not the case that we're running into right now.
Ren Bin Lee Dixon: No, sadly, no. And you kind of brought up another point that I have been looking at a lot at the moment, 'cause I'm actually delivering some presentations around AI and national security. So actually, one of the [00:07:00] top concerns at the moment globally is information integrity.
What that encompasses is misinformation and disinformation, and because it's global, it also encompasses hate speech. So it's very interesting that we're looking at information, how there's an assumption that with more information, people would be more well informed, well equipped to make informed decisions, you know, creating this common experience.
But unfortunately, as we've seen the past few years, that is not necessarily the case. In fact, we're seeing more very targeted and very siloed messaging on these platforms. And a big threat is that AI is making this easier, not just in creating the messaging, but also in the very personalized targeting of this messaging.
And I think of that 2016 case, and it's not just the election, it was also Brexit that happened, [00:08:00] how Cambridge Analytica was able to use this information that they harvested from Facebook users and created very, very highly personalized, targeted messages for them that looked like, you know, news articles, looked like average posts from average users.
So this kind of created an information ecosystem that was not the ideal state, and it was filled with this type of disinformation, right? So information integrity is actually one of the top concerns globally for leaders.
Justin Grammens: Okay. We'll definitely dive in more with regards to how we can fix this problem, because that's what you're trying to do.
And I think a lot of us are just out there using this stuff and we're not thinking about it at the level that you are, or even have the influence of talking to the right policymakers and lawmakers. Before we get there, though, the other thing that I was thinking about was, do you think it also has something to do with, I guess, how systems are incentivized?
And I was thinking back to Facebook. Like, the whole point of Facebook is to make money, and the whole point of [00:09:00] Facebook is to essentially make you continue to watch and view things, and there's an algorithm there that keeps you watching. And I remember, I don't know if it was in one of these books that we're talking about or whatever, but the whole point is basically they will start throwing in misinformation and disinformation, because it's the hot thing that keeps people engaged. And so I'm assuming that's a component of this thing that you're talking about as well.
Ren Bin Lee Dixon: It is partly related, and I wouldn't say that Facebook is doing this, but it's how their algorithms are designed. And it's not just Facebook, it's all these digital platforms, how these algorithms are designed. It is, yes, like you said, financially driven. The incentive is to keep people there; it's the attention, the time spent; it's ads-driven. They may have different business models for exactly how they're trying to, you know, keep people on a platform.
Essentially, yes, that's the common goal. You know how we talk about viral [00:10:00] posts, and then, as an emerging trend, people are talking about rage bait, because that's the easiest model for keeping people responding and engaging, compared to a neutral post or a happy post. So that's also an interesting aspect to that, and it's kind of, you know, reflective of how human nature is as well.
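To make that incentive concrete, here is a minimal, hypothetical Python sketch of an engagement-optimized ranking score. Every feature name, weight, and number is invented for illustration; no platform's actual ranking model is public or implied here.

```python
# Hypothetical sketch of an engagement-optimized feed ranker.
# All features and weights are illustrative, not any platform's real model.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_clicks: float    # estimated probability the user clicks
    predicted_comments: float  # estimated probability the user comments
    predicted_shares: float    # estimated probability the user shares
    outrage_score: float       # 0..1, how provocative the content is

def engagement_score(post: Post) -> float:
    # Each term is weighted by how strongly it keeps people on the platform.
    # Nothing here rewards accuracy or well-being, which is the policy concern:
    # provocative ("rage bait") content outranks equally clickable neutral posts.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_comments  # arguments mean time spent
            + 1.5 * post.predicted_shares
            + 0.8 * post.outrage_score)      # provocation is directly rewarded

feed = [Post(0.30, 0.05, 0.02, 0.9),  # rage bait
        Post(0.30, 0.05, 0.02, 0.1)]  # neutral post, same predicted clicks
feed.sort(key=engagement_score, reverse=True)  # the rage bait ranks first
```

With identical click predictions, the two posts differ only in provocation, and the provocative one wins, which is exactly the dynamic described above.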
Justin Grammens: Yeah. Yeah. It's funny, I was listening to a podcast this weekend by the guy that runs Y Combinator, and he was talking about, in some ways, hacking the YouTube algorithm. So they put out a bunch of content, but they found that when they actually put out the title of their content on YouTube, like the thumbnails, if they show people's faces and some sort of active movement, like interesting stuff, then they get more views out of it.
Or, you know, YouTube pushes it harder. They obviously wanna use more controversial wording, right, with regards to the titles. And it's funny, every social network seems to have its own little way of doing this, and everyone's trying to game the system. But then it just comes back to, like, you know, we're sort of driven by [00:11:00] all of this morass of data that's just sort of showing up in our face, and we don't have much control over that.
Ren Bin Lee Dixon: Control is a good word. That is something that we, at least in the policy realm, are trying to push for: more control.
Justin Grammens: So how does this work? I'm fascinated in terms of, like, what is a day in the life for you?
Ren Bin Lee Dixon: So, full disclaimer, I am working remotely most of the time. A lot of policy work is happening in California and in DC, for obvious reasons, so a lot of my work time is spent working remotely. Now, it really depends on the type of projects or tasks that I'm dealing with. There is a lot of drafting policy recommendations for governments and multilateral organizations, so that would involve reading and going through a lot of emerging regulatory frameworks or guidance, and analyzing these frameworks in order to develop concrete policy recommendations for these types of groups.
And then previously, in my work with the Center for Security and [00:12:00] Emerging Technology, where I was examining AI harms, risks, and issues, that involved a lot of research into existing frameworks for documenting harms and incidents in general from other industries, to propose and advise on a new incident reporting framework.
So I wanna say my specific position involves reading a ton, and it's not necessarily books. I would love to have more time to read books, but it's a lot of research papers, journal articles, and emerging frameworks. Some people might consider it dry, but it can be exciting if you see what they're actually developing and what is emerging.
Justin Grammens: Gotcha. That's more on the government side, is that right? So, like, I had somebody on the podcast maybe a year or so ago from NIST, and she was talking about the NIST AI risk management framework and stuff like that. Are you taking a look at a lot of that same stuff?
Ren Bin Lee Dixon: Yes. So I definitely read a lot through that.
The NIST AI RMF, I call it, the [00:13:00] AI Risk Management Framework, is still actually one of the gold standards that we currently have in the US. It is by far the most comprehensive framework that we have in the US to look at AI risks. So that type of document, for example, is a good indication. Yeah.
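For orientation, the NIST AI RMF 1.0 organizes its guidance into four core functions: Govern, Map, Measure, and Manage. The sketch below condenses them into a simple checklist; the one-line summaries are paraphrases rather than NIST's official text, and the example system name is hypothetical.

```python
# The four core functions of the NIST AI RMF 1.0; the one-line summaries
# are paraphrased for brevity, not NIST's official wording.
NIST_AI_RMF = {
    "Govern":  "Build a risk-management culture: policies, roles, accountability.",
    "Map":     "Establish context and identify the risks of each AI use case.",
    "Measure": "Assess, benchmark, and track the identified risks over time.",
    "Manage":  "Prioritize the measured risks and act to treat them.",
}

def rmf_checklist(system_name: str) -> None:
    """Print a starting checklist for a voluntary, RMF-style self-review."""
    print(f"RMF review for: {system_name}")
    for function, summary in NIST_AI_RMF.items():
        print(f"  [ ] {function}: {summary}")

rmf_checklist("hypothetical loan-approval model")
```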
Justin Grammens: Yeah. And so do you then work with industry, like Google and Microsoft and all these companies, to sort of tell them how they should be following these policies?
Ren Bin Lee Dixon: A lot of the time, my work is focused on statements and recommendations targeted towards governments and multilateral organizations like the United Nations and the OECD, so not directly to companies. It's more looking at governments' use of AI and how governments are going to regulate AI, which will, in turn, you know, involve companies' use.
Yes.
Justin Grammens: Yeah, yeah. Interesting. So you did a whole thesis on this with regards to the United States and Europe and China, right?
Ren Bin Lee Dixon: Yes. That was, uh, back in [00:14:00] 2020, when things were still, you know, starting to happen in the AI governance field. Yeah, lots have changed since. I always say the AI timeline is very much like dog years.
Like, one year is almost like seven years of effort.
Justin Grammens: Absolutely. No, it's insane. It's insane how fast it goes, for sure. Mm-hmm. So do you interface with these other countries or are you focused mainly on the United States?
Ren Bin Lee Dixon: It is very much global in scope. So I am currently working with the Center for AI and Digital Policy.
And they have a presence globally. I think the last report had contributions from over a hundred different contributors across the world. So it is very much global, and we have groups dedicated specifically to United Nations statements. There's also a group for Europe as well, the European Commission.
So it is very much global. I wouldn't say it's just narrowly focused on the US per se.
Justin Grammens: I mean, are you seeing any one country doing it better than others?
Ren Bin Lee Dixon: I wouldn't exactly use the word better. It's different. So, for example, the EU [00:15:00] last year released the world's first comprehensive AI regulation, known as the EU AI Act.
So they are looking at AI based on a risk approach. They are regulating AI from high-risk AI to low risk to minimal risk, and their regulatory approach is proportionate to those levels. And then they also have one category of prohibited practices, for example. And then China has actually been releasing a lot of rules for AI, for things like generative AI and algorithms.
And they're starting on deepfakes as well. I think they're starting to look more specifically at applications and how AI is being used, so they're also pretty fast in coming up with rules. And then, compared to the US, the US is adopting a more voluntary, self-regulatory approach. There isn't really a federal or comprehensive rule yet. [00:16:00] That's why I mentioned the NIST AI Risk Management Framework: it is a voluntary framework, it's not obligatory. So that's the most comprehensive, but that is where we stand at the moment. There are a lot of states coming up with their own legislation, however, but what we're seeing with that type of effort, while it's commendable, is that it is going to create a very patchwork, fragmented type of regulatory regime.
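The risk-based structure Ren Bin describes can be sketched as a tiny classifier. The four EU AI Act tiers below are real (prohibited practices, high risk, limited risk with transparency obligations, minimal risk), but the keyword rules are illustrative stand-ins, not the Act's legal definitions.

```python
# Simplified sketch of the EU AI Act's risk-based approach. The tiers are
# real; the classify() keyword rules are hypothetical illustrations only.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright (e.g., social scoring by public authorities)"
    HIGH = "allowed with conformity assessment, documentation, human oversight"
    LIMITED = "allowed with transparency obligations (e.g., label chatbots)"
    MINIMAL = "no new obligations under the Act"

def classify(use_case: str) -> RiskTier:
    # Hypothetical keyword matching; real classification follows the Act's annexes.
    if "social scoring" in use_case:
        return RiskTier.PROHIBITED
    if any(k in use_case for k in ("hiring", "credit", "medical", "policing")):
        return RiskTier.HIGH
    if "chatbot" in use_case or "deepfake" in use_case:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("customer-service chatbot").name)  # LIMITED
print(classify("credit scoring model").name)      # HIGH
```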
Justin Grammens: Yeah, yeah, for sure. I mean, I don't see how you can, you know, regulate AI within the borders of Minnesota. This technology is just spreading everywhere. It reminds me of the internet, right? I mean, I graduated college in 1996, actually, and got a job working on the internet and building stuff.
And, you know, at the time I was just in my twenties and just having a lot of fun with technology and sort of seeing how the internet was completely changing things. And I feel like, again, this is just my memory, it was kind of like the wild, wild west. Like, people could just stand up a server and could just start putting stuff out there, and everyone was just sort of trying to figure out how they could build their [00:17:00] own e-commerce systems and store credit cards their own way. I mean, it was completely, let's just do this stuff. It feels like AI is kind of going through that sort of phase right now, at least in the United States.
Ren Bin Lee Dixon: I wouldn't exactly use that word.
Justin Grammens: Okay.
Ren Bin Lee Dixon: To describe it. It's not as wild as that, but yes, it is definitely moving fast, and especially now with the new administration, there's a lot of focus on innovating and advancing AI. But I think a lot of people would also emphasize that there are still existing rules in place that can protect citizens and consumers from the harms of AI.
Now, the challenge that we see is that, because of the nature of AI and its capability, it is so widespread and wide-ranging that it's hard to have one specific regulatory agency that is able to cover all of it. Oftentimes we see the FTC, the Federal Trade Commission, doing most of the investigation [00:18:00] into these big tech companies, but even their reach is limited to what is within their scope.
Justin Grammens: I mean, there's a lot of cost cutting going on in government right now. Is that a concern, that we just won't have the power, I guess, you know, the manpower and woman power, to be able to regulate all of this work?
Ren Bin Lee Dixon: The short answer is probably yes.
But the thing is, even before these cost-cutting initiatives, there wasn't even a dedicated agency overlooking AI specifically. So, you know, by further removing personnel from these regulatory agencies, we might potentially see less effort or urgency in investigating any potential harms.
We'll have to see. I mean, the FTC is pretty strong, I would imagine.
Justin Grammens: Well, as I was thinking, my brain always goes to, well, could we use AI for this? I mean, I remember Claude and Anthropic, like, their whole thing was constitutional AI. They were trying to build [00:19:00] these constitutional systems to follow certain rules and regulations. Is there any way for us to use AI to manage AI? It sounds crazy.
Ren Bin Lee Dixon: I don't know. That's a good question. Yeah, I mean, that's tough. It's like asking, can AI do almost anything? Not yet. Not yet.
Justin Grammens: Yeah. Okay. Well, you know, a couple things. One is, I think it's fabulous that you saw something in the world that needed to be fixed and you completely focused your career around that.
You made some sort of a shift, you know, a 90-degree shift or whatever it was, to pivot into something that, it sounds like, you're very, very passionate about, obviously, from this conversation. I mean, I think it's great. I think it's fabulous. How would you mentor someone else who's thinking about getting into AI and moving into this space?
Ren Bin Lee Dixon: Well, specifically for AI policy, it is a very wide-ranging field because it affects a lot of different parts. It affects governance, it involves compliance. It's also human rights. [00:20:00] It is advocacy work. It is research work. So you can really come in from many different angles. And I'll also add that, in my conversations with people who are interested, one of the main concerns is technical understanding of the technology, and I always assure them that the most important thing you need to understand is how the technology is affecting people, affecting us ourselves, right? So you can really come in from many different backgrounds. I would say start with following, you know, the emerging news around AI policy, the types of frameworks that are emerging, what kind of gaps we are seeing, and whether or not they're able to address the issues and harms that are emerging.
But also, I think, in general, policy classes are very helpful for understanding. And because AI governance is such an emerging and fast-growing field, there are actually a lot of policy clinics [00:21:00] out there. So, for example, the Center for AI and Digital Policy itself has a policy clinic twice a year, especially designed for people coming in, mid-career professionals or new graduates as well, who are interested and want to learn more about this field. They can attend these types of policy clinics just to get an introduction to what that kind of career could potentially look like. Or it doesn't even have to be a career; it could be someone already in their profession who wants to bring more governance around AI into their company.
So there are a lot of these types of clinics and fellowships out there that people can register for and participate in.
Justin Grammens: That's great. At the end of this podcast, it'd be great for me just to get some links from you, right? I always put links in the liner notes, and I can search for stuff, but if you have specific ones, like, hey, here's a link to this upcoming conference or whatever, I'll be sure to put those in the show notes.
Mm-hmm. I think that would be great for people to be able to follow up on. And then, speaking of which, do you have much of an online presence? Would you like people to find you on LinkedIn? If they have questions, what's the [00:22:00] best way to engage with you?
Ren Bin Lee Dixon: Yeah, LinkedIn would be the best place to connect with me.
I know some people have X accounts. I have very intentionally not wanted to have an X presence.
Justin Grammens: Mm-hmm.
Ren Bin Lee Dixon: So LinkedIn would be the best way to reach me and connect with me.
Justin Grammens: That's great. That's great. I feel like we just sort of scratched the surface. We didn't go super, super deep on a specific thing. And again, I don't know what you can share and can't share, how much you're able to within your current role, but is there anything else that maybe we didn't really touch on that you wanted to make sure you got across to people who would be listening to this podcast?
Ren Bin Lee Dixon: Wow, that is a lot to think of on the spot. Yeah. I was thinking about how there are a lot of challenges in the field of AI policy and governance. I mean, it's one thing developing the policy recommendations. We have several things that we always say are the must-haves when looking at developing a policy framework. One is independent oversight. You must have that. [00:23:00] Not just, you know, a company saying that we do tests and we do our own assessments, because if it's internal, it's hard to really know how they arrived at the results, whether or not these were truly valid assessments. So it is very important to have independent oversight, as well as transparency.
This word can mean a lot of things for many different people, but essentially, when we talk about transparency, it starts from basic things: when I'm interacting with AI, it's important for people to know they're interacting with AI, and if deepfakes were involved, to have some kind of acknowledgement that it is a deepfake so people are aware of it, but also reporting and disclosure of what kind of training data has been used in specific models.
And this would also be incredibly important because we always say transparency is the first step to accountability, which brings me to the next topic. Accountability is also a huge [00:24:00] challenge: assessing how we approach accountability when it comes to AI and when harms occur or something goes wrong, right?
So there are many different topics, and not just in terms of issues. Also among the general challenges in developing AI policy is that we're seeing more and more effort from big tech's lobbying in the development of AI policy. So a lot of the time we're concerned about how much of that is impacting and influencing the outcome, whether or not, you know, regulations are being reduced because of these types of influences.
And then perhaps another topic I think is interesting, especially with the current administration, is this continued kind of balance between innovation versus regulation. Right? There's always a lot of discussion about whether or not to regulate and whether or not that's going to hinder innovation and advancement.
And I always think [00:25:00] that, you know, there are a lot of good case practices out there in certain industries that are very highly regulated, such as medtech, right? It is a very highly regulated industry, and yet, you know, we still see medtech thriving, and they're still innovating. They're still doing well. But because it is a high-risk sector, there needs to be regulation in place to address the potential harm. And I often point out that with adequate safeguards in place, you know, these types of high-risk, high-impact applications can be developed safely and then deployed safely as well. And I think the outcome is that it will actually increase public trust, which is very important for adoption, because if the public trust is not there, it's going to reduce, you know, the rate of adoption, with people not able to trust the systems that are being used.
Justin Grammens: For sure. Wow. So independent oversight, and we talked a little bit about transparency and accountability. Those sound like the big terms that you [00:26:00] need when you're drafting a policy: you make sure that you check all those boxes.
And a lot of that stuff then flows into trust. I was thinking about the transparency part. I mean, speech-to-text and text-to-speech have just gotten so good these days that I truly believe that the next time you pick up the phone and call somebody, and I'm talking more on the business side, like I call my bank, for example, right? Like, you know, at some point it's not gonna be a human anymore; it's gonna sound like it's a human, you know? Are you saying, around that transparency, that the bank should basically say, you're interacting with an AI now? Like, it should actually let me know about that, do you think?
Ren Bin Lee Dixon: Well, I could put it back to you in a way.
What do you think? Would you like to know if you're interacting with an AI?
Justin Grammens: Good, good question. As an end user, I probably don't care, but I think I might be of a different generation, maybe, with a different comfortability, I guess, with what's happening around it. I know other people who would absolutely not like that.
They would feel like they're being duped in some ways. But again, I guess that's what the policy statement is for. Then it comes back down to, I guess, trust. But, you know, for something like that, how [00:27:00] do you put together a policy? Or are these things where the policies can be a little bit lax?
Ren Bin Lee Dixon: It really depends on the application and use cases, I would say. A lot of the time, when we look at policy and a lot of these regulations, we wanna address the high-impact applications, essentially. Right? So for lower or minimal risks, such as, you know, chatbots, what we look at, and this is borrowing from the EU AI Act, are transparency obligations, where they actually let people know.
So there aren't a lot of restrictions, but just in general, there are transparency obligations. We're more concerned about high-impact applications. What I mean by that are uses of AI in safety situations, so they involve safety, or they have a certain impact on rights, whether that's fundamental rights, civil liberties, or human rights, right?
So these are what we consider high-impact use cases, and that's where we [00:28:00] are focusing the majority of our policy effort.
Justin Grammens: Gotcha. Yeah, I mean, I've heard stories about, I forgot what book I was reading, but yeah, there was this AI, this model that was trained on criminal data.
Right. And they were using this AI to decide if people should serve time or not, and it was very, very biased. At the end of the day, it's those types of situations you're talking about, right?
Ren Bin Lee Dixon: Right. Yeah. That particular case, I think there are quite a few of them, but there were some major ones that were highlighted in the news a few years ago.
And the problem is that these data are inherently biased already and are then being used to train these systems. Now, you would question, why were these not checked before they were implemented? And that's what we talk about with independent oversight: before these kinds of high-impact applications are deployed, there should be some kind of independent oversight assessing whether or not the level of accuracy is going to be there, and how much of an impact it's going to have.
So these types of mechanisms need to be in place [00:29:00] before they're deployed.
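One concrete shape such pre-deployment oversight can take is an independent fairness audit. Below is a minimal sketch of a single check, the demographic parity gap; the data, threshold, and group labels are hypothetical, and a real audit would combine many metrics with domain and legal review.

```python
# Minimal sketch of one pre-deployment fairness check an independent auditor
# might run: the demographic parity gap. Data and threshold are hypothetical.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in favorable-outcome rates across groups (0 = parity)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += int(pred)
    rates = [favorable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = favorable decision) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)  # 0.60 vs 0.20 -> gap of 0.40
THRESHOLD = 0.2  # illustrative audit threshold, not a legal standard
print(f"parity gap = {gap:.2f}")
if gap > THRESHOLD:
    print("FAIL: do not deploy without remediation and human review")
```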
Justin Grammens: Yeah, for sure. And that's where we still need humans in the loop, I guess, in a lot of these cases. You can't just let the AI go on its own.
Ren Bin Lee Dixon: Yes, I think human in the loop, for now, for these kinds of high-impact cases, is the way to go. It should not be removed yet.
Justin Grammens: And the question is, if ever? I don't know. I guess we'll see. We'll see.
Ren Bin Lee Dixon: Right now, I don't think so.
Justin Grammens: I would agree, for sure, on that. Well, Ren Bin, thank you so much for being on the podcast today. I had a lot of fun and look forward to keeping in touch, and maybe having you at a future Applied AI event. I think this was a fascinating conversation, and as I say, we always have these, um, meetups and conferences and stuff, so next time you're in the Twin Cities, I'd love to have you drop by.
Ren Bin Lee Dixon: Sounds good. Thank you for having me, Justin.
AI Announcer: You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at AppliedAI.MN to keep up to date on our events and connect with our [00:30:00] amazing community.
Please don't hesitate to reach out to Justin at Applied AI if you are interested in participating in a future episode. Thank you for listening.