Conversations on Applied AI

Caroline Sinders - The Intersection of Design, Art and Artificial Intelligence

May 24, 2022 Justin Grammens Season 2 Episode 13

The conversation this week is with Caroline Sinders. Caroline is a machine learning design researcher and artist. For the past few years, she has been examining the intersections of technology's impact on society, interface design, artificial intelligence, abuse, and politics in digital conversational spaces. Sinders is the founder of Convocation Design + Research, an agency focused on the intersection of machine learning, user research, designing for the public good, and solving difficult communication problems. As a designer and researcher, she has worked with Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others. Caroline holds a Bachelor of Fine Arts and a Master of Professional Studies from New York University.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can make future Emerging Technologies North non-profit events possible!

Resources and Topics Mentioned in this Episode

Enjoy!

Your host,
Justin Grammens

Caroline Sinders  0:00  
I'm interested in legibility and impact. And I think a way to sort of get my point across more quickly is to sort of leverage all of these design and art skills that I have to show people what I'm talking about. So some people are like, tell us, don't show us; I'm like, no, I'm just going to show you.

AI Announcer  0:17  
Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry and connect with us to learn more about our organization at appliedai.mn. Enjoy.

Justin Grammens  0:48  
Welcome everyone to the Conversations on Applied AI podcast. Today we're talking with Caroline Sinders. Caroline is a machine learning design researcher and artist. For the past few years, she has been examining the intersections of technology's impact on society, interface design, artificial intelligence, abuse, and politics in digital conversational spaces. Sinders is the founder of Convocation Design and Research, an agency focused on the intersection of machine learning, user research, designing for the public good, and solving difficult communication problems. As a designer and researcher, she has worked with Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others. Caroline holds a Bachelor of Fine Arts and a Master of Professional Studies from New York University. Thanks, Caroline, for being on the program today.

Caroline Sinders  1:34  
Yeah, thank you so much for having me. 

Justin Grammens  1:36  
Awesome. Well, I took a look at your website, obviously, and saw some of the awesome projects you've been working on. But I'd be curious, I mean, normally the first question I ask people is sort of: how did you get into this space? And maybe what's some of the background that led you to where you are today?

Caroline Sinders  1:51  
Yeah, sure. My background is kind of funny. To me, it looks like a much straighter line, but I guess to other people it's quite wobbly, because my undergrad degree was in photography and imaging, and I focused a lot on photojournalism and portraiture. But I was really interested in the future of imaging and how technology was going to affect photographers and what it meant to take a photograph. At the time I was an undergrad, this was the early aughts, so I was thinking about how cell phone cameras were revolutionizing photography. But I was interested in, like, well, what are more future thoughts? Like, how will technology affect what an image means? Is a photograph a photograph if it's never printed, if the literal meaning of the word photograph is thinking about sort of light onto paper, for example? So how do we think about authorship in a space of, like, digital imaging? Those are a lot of the things I was really interested in. You know, after graduating and working for a few years, I went and got a master's at the Interactive Telecommunications Program at NYU. And what I was also interested in there was thinking about how do you build tools for communities, and I was still really interested in photographers. What really got me interested in machine learning and artificial intelligence is I got a job at IBM Watson working on natural language processing APIs as a design researcher. And as a design researcher, I sat in a space between the design team and the research team. So while I was on the product design team, my goal often was to interface with research scientists or engineers, or, like, different heads of companies if we bought a company, for example, and then had to sort of integrate that software into our company. One of the things I had liked a lot about my master's program, and that I realized I actually really liked at IBM, is that I really like taking technicalities and translating them into different kinds of understandable entry points. So that could be streamlined documentation, or how do you build a demo to explain what something is doing. Sometimes I describe myself as not a deeply technical person, but I'm trying to not say that, one because it's imposter syndrome and two, it's not a good way to describe yourself if you're, like, a woman in technology. But this is more to say that, like, I really like programming in terms of understanding how and why people program and how they make the things they make. I'm less interested myself in writing code, but I'm very interested in technical processes. And so because I find those things interesting, I really loved having this role where I got to learn about all different kinds of software and machine learning capabilities that we had inside of Watson, and then work with the design team to make these public-facing entry points for different kinds of communities. So how do we think about documentation on Bluemix, for example, was a thing I had to really think about. We really had to build demos, which I enjoyed doing, I think all of us did, to really help sort of sell what this thing was. A demo is a very normal thing, if you think about it, to see on a website that a company has about, like, a software library, for example; you want to build a demo to kind of show people what it does really well.
Part of what I had to figure out is, well, what does it do well? What doesn't it do well? And then how do we also make sure, when we're bringing over new pieces of software, or if we're launching new kinds of software from our own research lab, that it fit into the sort of product ecosystem we had, like Bluemix; the documentation has to sound the same. So I just really liked that. To me, those are very interesting puzzles to kind of try to solve and unpack, but they also really fit into these things I found interesting earlier, which is, well, what is the future of any kind of creative space? So thinking beyond photography, like, what's the future of language? That's what I found extraordinarily interesting about working in natural language processing.

Justin Grammens  5:40  
Wow, a lot to unpack there. Well, lots of different thoughts. And I thought it was interesting how you kind of went from photos to sentence structure. I mean, it feels like, right, you said your background was in photography. And I was thinking about Photoshop. I mean, I was tinkering with Photoshop back in the 90s and thinking about all the cool things that you could do with it, not even AI-related, but just like, wow, the power you have now that you can control these images. But, you know, what do you think then made you move away from the visual, more into sentence structure and words?

Caroline Sinders  6:10  
I guess the very easy answer is that's where I was placed when I got hired at IBM; those were some of the main things we were working on. But I've always been interested in conversation, and sort of the really messy parts of humanity, which ends up being any kind of social interaction. And online, the majority of our social interactions are the written word. It's text, it's maybe less video. So if we're thinking of any kind of conversational space on the web, we're engaging more with text than we are with audio, video, or images. We may be, like, adding GIFs, but the main sort of conversation we have is, one could argue, primarily text-based. And so for me, one of the things I was interested in looking at was communities and conversation, and that becomes any kind of text-based interface. And that was what I got interested in at IBM, or not at IBM, but at ITP; ITP is the acronym for my master's program. I was also really interested in political uses of social media, and how people were sort of using and misusing tools in, like, good, bad, neutral, delightful, weird ways. One of the classes I took at ITP was with Clay Shirky, on sort of how people were using social media in these new ways. So I was just really interested in how people sort of augment tools around them, and what is a tool, really, and how does that sort of facilitate or add friction to any kind of human connection, and then, within that, how different aspects of human communication really become data. And so I was super interested in conversation. Actually, I should go back and say that one of the reasons why I was probably put onto text, I'm now remembering a conversation I had with the head of design, was I was really interested in human conversation. That was one of the things I was really interested in: I'm interested in how people talk to each other. I'm interested in, like, where machine learning is going to fit into that sort of engagement. But I also think at the time, Watson hadn't really done a lot of computer vision, and I think that was also something that, like, even at school, we weren't really playing around with very much. But at the time, and still, I am very deeply interested in how people talk to each other, and how technology becomes sort of a medium or conduit for that kind of communication. And then one of the things I got really interested in at Watson, that I'm still interested in, is: why do we design conversational tools that mimic human conversation? And what can we think of more as, conversely, what I call botness, so, like, the way that bots converse? Why do we not create new interfaces that can do things that mimic the strengths of machine learning, like data processing? So why are we trying to continually fit interactions into this very humanistic mold, which is the one-to-one of, like, a two-person conversation?

Justin Grammens  9:08  
Sure. Yeah. I mean, as you've been talking through your career and things that you have done, I do think it's really cool that, you know, a lot of people think of this space, and you've touched on it a little bit, the AI, machine learning thing, like everyone has to have a PhD, right? Or everyone has to be a data scientist, or they need to be very, very technical. But I think you've been able to really thrive and, I would say, you know, do some really awesome projects, do some beautiful things here, looking at it more from a humanistic standpoint, right, from the outside. And when you talked about actually creating demos, that reminded me of a job that I had back in the late 90s, which was with a company that was doing mapping technology, and this was before the big GIS, you know, boom happened. But you know, we had a piece of software, and imagine, you know, Google Maps before Google Maps; I mean, we were really doing some really cool stuff with this. And I was tasked with essentially putting together developer samples. So, you know, as an engineer, what are some things that I could do with it? Because as an engineer, you see the building blocks, but you don't see what you can actually make, you know, at the end of the day. And so I think it's really, really cool that when you were working at IBM, for example, you were able to sort of say, hey, here's what's possible. Sounded like that was probably a pretty, pretty fun space to live in.

Caroline Sinders  10:24  
Yeah, I really enjoyed it. I mean, it's something where I think one of the, perhaps the, through lines of my work is that I just want to know a lot. And so for me, part of the appeal of technology is I want to know as much about it as possible. And then, I think, probably, for me, I don't know if this is a strength, but I don't necessarily have to be good at programming to know a lot about it. So, like, I'm very interested in learning about the communities of people that engage with technology: how and why they use what they use, why do they select the tools that they select, what do they then make out of those tools, and why, and would they have used a different tool at a different time. I should say that, like, I do know how to program, and I do know how to do some of this, like, computer-science-based work. But for me, it's less interesting to learn a new programming language; it's more interesting to understand what the programming language does, and what's the community like behind it, and who are they, and how do they interact. And I think that interest, which is, you know, one could argue a much more ethnographic or, like, anthropological interest, has served me really well as a user researcher and design researcher, because it's given me a skill set to, I think, be a stronger, like, product thinker, because I am deeply interested in why people use things. And then, conversely, I find technology interesting, so I want to know as much about it as possible. So I also then am, like, a more technical design researcher, since there's, like, an understanding or a background I try to build up in myself when I work in a new domain. I really want to know everything about that area. So a lot of what I would do at IBM, even in my free time, is I would make all the research scientists talk to me and get coffees with me and really walk through things: I really wanted to know, like, why does this one thing perform better in this way? And, like, was that the intention? And how did you make it, and why? That wasn't just to help serve the product; I am just intrinsically interested in those things.

Justin Grammens  12:34  
Absolutely. And we'll put links to your website and stuff like that in the liner notes for this podcast, because you've got a ton of projects out there, and I definitely want you to speak to some of your favorite ones. But you mentioned the way bots converse. I mean, are you getting a lot into Alexa and OK Google and Siri, those types of things? Are those spaces that you're doing a lot of research and discovery in today? Or what's sort of on your mind today?

Caroline Sinders  13:01  
Today it's like a bunch of different things. But I'm coming back to the bot stuff a lot more, especially in my artwork, and I'm hoping to do more work on that. So if anyone is interested, I would love to, like, collaborate. A lot of the work I do spans a few different areas, again all through the lens of looking at technology in society. And then a particular lens I take is, like, consumer safety, so I'm thinking a lot about consumer products. So right now I'm a part-time researcher at the ICO here in the United Kingdom. That's the Information Commissioner's Office, and that's like the UK version of the FTC. And what I'm looking at there is: are there dark patterns in artificial intelligence products? And so the role that the ICO takes is they look a lot at personal data, and personal data laws that exist here in the United Kingdom, and then they're looking deeply at technology. So I sit on the tech policy team, the tech strategy team. And so a lot of the things that we're producing are, like, guides for companies on how do you think about technology privacy in terms of AI, or trying to break down what auditing means and what are different ways people can do that. So if you're not working in AI or machine learning, at least trying to explain what these things mean, or trying to give a primer on, like, what is the role of responsible AI? Like, what does that mean? So what I'm looking at is this area that has existed in design for many years called dark patterns. Dark patterns are design patterns that unintentionally or intentionally manipulate users into making decisions they wouldn't normally make. So if you've ever subscribed to an email listserv, and then tried to unsubscribe, and, like, two weeks later you're still subscribed, you probably encountered a dark pattern. The reason I'm interested in dark patterns related to artificial intelligence comes from a point raised by the FTC. Back in April 2021, the FTC was holding a day-long convening with a lot of different experts about the role of dark patterns, and one of the points they raised that they were interested in is: are there dark patterns in artificial intelligence? And what I'm seeking to do over the next two years of my research is trying to establish: are there dark patterns related to artificial intelligence products, and what would they be? And I'm coming in with a little bit of skepticism, which I think is always healthy, because I'm not sure if there are dark patterns in AI in one sense; in another, I do think that there are dark patterns. But one of the reasons I have this skepticism is, you know, I'm also interested in where does, like, a dark pattern stop and just really bad or, pardon my language, like, really shitty design start. You know, you've sometimes been in a product, maybe, and you're like, this product isn't really made very well. And it wasn't made not-well on purpose; it just is not made very well, right? Like, maybe the buttons don't quite make sense, or, you know, it was clearly made on a very small shoestring budget. One of the examples I can think of is, like, really early Zoom in the middle of the pandemic was not made for that many people. In the pandemic, there were a lot of bad design choices, right, in Zoom. Those choices have now been updated to be better choices. But that wasn't necessarily a dark pattern, though there could have been some in there, right?
So I'm interested in thinking of what are the edges of dark patterns. And the reason I'm also interested in whether there are dark patterns or not in AI is that artificial intelligence is often embedded inside of consumer products, and, you know, we think of dark patterns as, like, a UI choice. I am curious if some of the, quote unquote, dark patterns we'd see arising are actually downstream harms of an algorithmic harm, or, like, algorithmic bias. So, meaning, it's not necessarily the way we traditionally think of a dark pattern; it's actually the manifestation of a harm. And so I feel like I'm being a little nebulous, but a better way to think about this is: Princeton did a big study on a bunch of e-commerce websites, and they were looking for dark patterns, and they found a bunch, over 1,000 websites. So why are there dark patterns there? Well, the upstream harm is capitalism. There are dark patterns on e-commerce sites where they want you to buy stuff, right? And so that's in effect there, you know. But if we think of, well, what does artificial intelligence do inside of a consumer product? Maybe it's different kinds of, like, search engines, right? So with the results you're seeing, if there's technically a dark pattern in there, is it the way we think of a dark pattern, or is it actually the harm, right? Is it the, like, opacity we have related to why we're seeing these results, or a model that's been, like, mistrained? And so what I'm striving to do is trying to say, like, should we call those harms dark patterns when we see them arising in something, let's say, like, algorithmic pricing inside of Amazon? You know, I think this happened in the US, where, like, hand sanitizer during the pandemic got really expensive. Should we call that a dark pattern? Or should it actually be named something else, because it's directly correlated to algorithmic pricing and algorithmic search, right? So this seems like a very niche thing to be focusing on, but because it's such an emerging field, I think it's important we name these things correctly as the research is emerging.
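(A toy illustration of the distinction Caroline is drawing here. In the sketch below, which is hypothetical and not any real retailer's logic, there is no deceptive UI at all, yet a harm such as a pandemic-era price spike on hand sanitizer emerges from the pricing algorithm itself, which is why it may be better named an algorithmic harm than a traditional dark pattern. The function name and numbers are invented for illustration.)

```python
# Hypothetical sketch of demand-responsive pricing. No deceptive UI exists
# anywhere in this logic, yet a harm (a price spike on essentials during a
# demand surge) emerges from the algorithm itself. Names and numbers are
# invented; this is not any real retailer's code.

def algorithmic_price(base_price: float, demand_ratio: float) -> float:
    """Scale price with demand relative to a historical baseline (1.0 = normal)."""
    surge_multiplier = max(1.0, demand_ratio ** 0.5)  # dampened surge factor
    return round(base_price * surge_multiplier, 2)

print(algorithmic_price(3.99, demand_ratio=1.0))   # 3.99  -> normal times
print(algorithmic_price(3.99, demand_ratio=10.0))  # 12.62 -> pandemic-style run
```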

Justin Grammens  18:20  
How does this overlap with, you know, everyone's talking about AI and ethics these days, and specifically around bias, right? So, you know, these models have been trained, and oftentimes are trained on old data that has already got biases in it. So now the results you're getting are even more biased, and, you know, it could be related to race, it could be related to sex, or religion, whatever it is. Does the thing you're talking about with regards to dark patterns overlap in some of those cases?

Caroline Sinders  18:48  
That's actually kind of what I'm trying to figure out, because I would argue that they do. And in doing this research, I want to see actually how closely correlated they are, because my hunch is that they're actually the same thing: like, a dark pattern in artificial intelligence, not in any other product, but a dark pattern inside of products that have AI. And then, if we see a dark pattern that comes out of something related to machine learning or algorithms or AI, is that a harm? And maybe we shouldn't call it a dark pattern if that leads to confusion. So my hunch is that with this kind of, again, like, algorithmic pricing, or certain algorithmic search engines in, like, e-commerce platforms, aka Amazon, that with the surfacing of that material, maybe we can't call it a dark pattern, or if we do, we should be very specific about it. But it is directly related to this research on harm and bias. And so I think what I'm interested in is: how do we add more specificity to it? Because I think we're at these different inflection points where, you know, so many things are harms, and it's important that we name harm for what it is. But I also think it's important to then understand: what kind of harm is it? Or where directly does it fall in terms of, like, policy, when we're naming that harm, if we're thinking of prevention and solutions and mitigation? So I think I'm interested in how to start to add more specificity to this landscape, one, to help people better understand what these things are, and then, in turn, that helps us course-correct, either by writing better regulation and better policy or by improving the technology. I think it's probably a mixture of both of those. But I think it's important now that we start to add, again, particular names to the kinds of harm that we're seeing. So if it is algorithmic harm, which it is, right, then do we have subsets of names now that we sort of use? That's one of the things I'm really interested in trying to decipher. Because in earlier research that I did in 2018, I was saying that this kind of algorithmic pricing, or at least the surfacing of products inside of e-commerce platforms, we could think of as a dark pattern, one, because I don't know why we're seeing that, and so as a consumer we can't really compare it to, like, a variety of other products. Like, if we think of Amazon as a store, which it is, right: a traditional Target can price things at different points, they can place them in different parts of the physical store. We do that all the time; it's a form of persuasive design, and that's totally fine and legal, right? And it should be; like, you know, why not play around and see what people will buy. What's different with Amazon is that it's like being at a Target with, like, 11,000 rows, and all the rows are kind of moving around you as you grab a product. So there's no way for you to necessarily have a totally clear understanding of choice in that space. And so, in an e-commerce setting, like, shouldn't we still call that a dark pattern? That's something I'm now wondering, a few years later: or should we call it a subset of a certain kind of algorithmic, like, manipulation in an e-commerce setting, right, just talking about money and that aspect? Because can you realistically make an informed decision as a consumer in that particular product if there's no really clear way for you to actually, like, very evenly compare all the different products, right? You could hit refresh and be given a different thing, or you could change your address slightly, maybe by a state, and see something totally different, even if all those products are still technically available to you. These are, like, the rabbit holes I end up going down: like, well, what is it?

Justin Grammens  22:45  
Is it as simple as clickbait? I mean, that was the thing that I'm thinking of. Like, when I go to read an article, then at the bottom there's just, you know, some startling picture with some, you know, freaky tagline that just tries to get you to click it, right? Again, I continue to just walk away from those things, but, you know, people get sucked into that. It's really all about trying to get more clicks, in some ways, right?

Caroline Sinders  23:08  
I think even so, like, clickbait also then is a different term, right? Because we're talking about that in the context of articles. And so maybe that's something different; maybe there's a subset of algorithmic harms with specific names, like clickbait. And maybe it's too much, like, you know, it is turtles all the way down. But I do think it's important to try to, like, name some of these things, also just in terms of, you know, so we're all on the same page, regardless of technical background, so we know what it is we're talking about.

Justin Grammens  23:36  
Sure, a common vocabulary. Absolutely, so that people know what they mean. You know, you mentioned harm, and I was looking at a project, I'm not sure if this was the most recent project, but there was this remote work report that you did about COVID-19 sort of exacerbating harm that I thought was really neat. And again, like I say, I'll put a link to some of these projects. But, you know, I don't know if you want to talk about that one in particular, if that one's more recent, but I would be curious to know: what are some projects you've worked on, and maybe what was your favorite, and maybe why?

Caroline Sinders  24:06  
Yeah, well, it's hard to say a favorite. I feel like it's almost like choosing a favorite child, and I don't have children, but my mom has told me that she can't pick a favorite between me and my sister, so I'll believe that it's impossible. I mean, I have quite a few. Like, I really enjoyed working on that report you're mentioning. I worked on two projects, I should say three projects, specifically about COVID-19. So one was early in the pandemic, with funding from Omidyar Network, to look at how creative communities were responding to the pandemic, and then what the findings were that community organizers, designers, and engineers would need in terms of, like, designing products for communities. Then I worked on the remote workplace harassment report, which I was really excited to work on with a variety of people, with McKensie Mack and Ellen Pao and Yang Hong. And, you know, that was really asking this question of: we've seen online harassment, obviously, in all different forms of digital spaces; we know there's workplace harassment; what do remote workplaces look like, and what is happening right now with harassment in those now-digitized workspaces? And I'm really, really, really proud of the work we did, because I think we set a pretty good baseline in terms of looking at the pandemic at that time, and also sort of understanding all the different kinds of harms and frictions people were facing. Like, one of the things we were surprised, but not shocked, to hear, but that really validated things we'd been hearing, was that, like, across the board, regardless of seniority level, everyone was feeling burned out. And I feel like that's one thing to just sort of say, but to have the data on that, because we had 3,000 survey respondents, was really powerful. Another project I worked on, with funding from the Sloan Foundation, was looking at academic communities and remote convenings, and what they need, and sort of what's the future around that. So again, leveraging a lot of best practices; a lot of people are really interested in hybrid, and there isn't a lot of good research out there on how to do hybrid. And so that was an interesting finding to have. I also really enjoyed a project that I did before the pandemic, with funding from Sloan and the Ford Foundation, which is a part of their critical digital infrastructure research. I was looking at JavaScript communities, and a specific subset of JavaScript communities, the JSConfs and the JS meetups, and how there seemed to be a lot of diversity and equity and inclusion in those spaces, and why that happened. And, you know, again, what are the best practices they have, if we think of community health as a necessary part of open source communities and open source infrastructure. And so that was also one I was really proud of. And I think that, again, harkens back to a lot of my original interests, especially joining IBM: I find technical communities really interesting, and, you know, I have a skill set inside of a technical space that maybe other people don't, which, again, is this form of user research and design research. So I just find communities really interesting, particularly technical ones, and how they engage with each other and how they talk to each other.
And then, how do we think of these communities also in very much blended spaces, in the sense that they're existing online, obviously, in digitized communities, be it through, you know, contributing to different code on GitHub, or on Reddit, StackOverflow, Slack, or Twitter, and then, in some cases, at least with the JSConfs, physically meeting up and being in that space together. And, you know, sort of acknowledging that for a lot of technical communities, the offline meetings are incredibly important, and that's one of the forums that furthers community. But these spaces can still be really rife with toxicity. So how do we think of community health and community norms as a form of, like, infrastructure we need to invest in, in the way that we have to invest in technology? And so this is the kind of work I really like to do.

Justin Grammens  27:58  
That's cool. Well, you're talking to a big community person, if anybody follows me. I mean, I really am passionate about building technology communities. I kind of started back in the early days of mobile and formed a community called Mobile Twin Cities, and the whole idea was to bring together people who were building mobile apps with people that needed to have apps built, right? So rather than being, like, I'm an iPhone developer, it's like, hey, I build apps, and who can I help, right? And I moved into Internet of Things and built a whole community around IoT, and now I'm really more focused on machine learning and AI. And it's been interesting with the pandemic; I'm just sort of lamenting here, right? I mean, we were always meeting in person; it was always very Twin Cities, Minneapolis, St. Paul, area focused, and so that was the community that was engaged with us. And the pandemic forced us to lose some of that personality, I think, in some ways, with regards to: we're all from Minnesota, you know, we're all in the community here. And it's been a little bit of a double-edged sword. I mean, number one is I probably wouldn't have met you, being over in the UK right now, and us doing this. I mean, these podcasts have allowed me to expand beyond just the Twin Cities area, but there's been a little bit of a cost to that, right? So our meetups are really not, I feel like, as personal as they used to be. We used to always get together and have pizza and beer and kind of all be together in the same room, with that human aspect to it, right? I don't know, I'm just sort of lamenting, but it's been interesting what I've seen, and I guess I'd be curious to see if that kind of touches on what you're seeing as well.

Caroline Sinders  29:27  
Yeah, I mean, it's hard to say, because I was looking at different communities that are now, like, not in the city I live in. But for me, like, BrooklynJS was a core part of my life, and I was introduced to it by my coworkers at Watson. We would go once a month, Thursday after work, to BrooklynJS; Wednesdays, I think, was ManhattanJS, and then Tuesdays was QueensJS, and they were all staggered throughout the month. So, like, the first week of the month it's ManhattanJS, and then the next week is Queens, and then the third week is BrooklynJS. And that was a community that meant so much to me. And I was living in Europe around the time of the pandemic, and I was just sort of seeing what was going on. And obviously all the spaces shut down, and the bar that BrooklynJS was held in was sold and emptied out. And, you know, it does seem like for a lot of communities, these physical spaces were really important; some of them have sort of switched online. The conferences seem to be coming back; like, BrooklynJS, I know, is back in person this year, which is great. But, you know, it is like a real blow, because knowing that you can physically go somewhere and interact with someone, and it's people in your physical community, is a different experience, perhaps, than being online. And I say, like, different, not better, because online is as real as offline. But in the sense of, like, you know, it is nice to know who's around you in your city. I grew up in New Orleans, which is, you know, a smaller, mid-sized city with, like, a somewhat burgeoning tech scene now, and I go back there pretty frequently. And, you know, I always want to meet more tech people. It's nice to sort of know people; it's nice to have camaraderie, to be able to talk about things. Like, when I first moved here to London, I got introduced to other women technologists who went to BrooklynJS, and that was really great, because we had this shared experience. And also, it's just sometimes nice to have friends that you can be like, oh, and then this thing happened with this product we're building, and they're like, oh my gosh, yes, tell me about it, versus trying to explain that to some other people, and they're like, what are you talking about? How did it break? What do you mean? I guess this is all to say, like, TL;DR, I agree with you.

Justin Grammens  31:39  
Sure. Well, you mentioned conferences coming back. Yeah, I'm kind of excited. In two weeks, I'll be flying to San Francisco to go to a machine learning conference, specifically on TinyML, so kind of doing machine learning at the edge. And yeah, it'll be the first conference I've gone to in many, many years now, and I'm super excited for it. There are obviously ones going on in the Twin Cities that are local; I'm actually speaking at one in May. But, you know, to actually physically get on a plane and fly somewhere for a, quote unquote, conference, it's going to be fun; it's gonna be fun to see people. Especially because, you know, I've been doing them online, I will say, but it's so easy to get distracted. And again, I'm kind of going down a little bit of a tangent here; it's not really related to AI and ML per se. But, you know, when you fly somewhere, and you go somewhere and get a hotel, and you get all that stuff, and you're there, it's just so much more engaging, because there's, like, nothing else to do. Whereas if it's like, I'm just gonna pop into the Zoom call, it's so easy to just get distracted and not actually learn anything.

Caroline Sinders  32:33  
Totally. I mean, I went to my first conference post-pandemic in November. It was a data and art conference, and I actually ran into one of my old professors, which was really nice. But I was just struck by, like, what it felt like to meet strangers again, or, like, run into people I'd seen on Twitter and be like, oh my gosh; like, the feeling of physical events. But I also hope that some hybridity sticks, because there are some events where it matters. Before the pandemic, I was traveling all the time, because, you know, the technology research I do is in the human rights space, but I also then work with companies, guiding them on all different kinds of consulting related to, like, trust and safety, ethical tech, etc. So I traveled a lot for work, and then I would travel a lot for conferences, and I don't know if I can, like, do that amount of traveling anymore. You know, like, I've fallen out of it so much. Now, on one hand, I am glad that, like, we're sort of entering a time as a society where we don't have to be in person, right? And so it is nice that some things are in person. But also, like, there were so many convenings I was at that were these private convenings, and people were like, they have to be in person: for two days, we're going to fly everyone into this crazy location, we're gonna be jet-lagged, and we're gonna do it. And it's like, we don't have to; maybe that really could have been a Zoom call. And I hope that some of those things, at least, stay.

Justin Grammens  33:58  
I agree. It's definitely more accepted, you know, now these days, for sure, because we've been forced to do it, and it's worked, right? So it's like, you know, we all had to do it and found out we were still very productive. You mentioned some of your projects; I was curious about the Feminist Data Set. I mean, that seems like one that you've been doing for the past five years or so, and you're still working on it up to the present time. Could you touch on that a little bit?

Caroline Sinders  34:21  
Sure. So Feminist Data Set is me, I guess, taking almost like a handmade approach to technology, because I'm investigating every step of the machine learning pipeline, from start to finish, using intersectional feminism as an investigatory framework. So what that means is I'm deeply looking at every step and asking, of every core component: is it feminist? Is it not? And how would it need to be remade? So data collection, for example: that was translated into a series of workshops, where I sort of explain what machine learning is, what data is, and then participants go out and look for intersectional feminist data. And it's all in text form. So it doesn't have to be an essay on intersectional feminism, but it has to be something written with intersectionality imbued within it, meaning that, like, the actual textual structure has forms of intersectionality in it. And to explain to people that those are two different things, you need a workshop. And so one of the examples is, you know, if someone submitted an article on wage inequality, and the article said, like, men and women are paid different amounts, that is not intersectional feminist, and it cannot be in the dataset, even though, like, wage inequality is a feminist issue. But an article that mentions that people are paid different amounts, so, like, Black women, Latina women, trans people, white women, you know, Indigenous women are all paid different amounts: that is an intersectional feminist article. So what I'm interested in is intersectionality within text, and what does that textual structure look like. And I'm interested in other people also submitting to this, because I don't necessarily want this to be just a me project. But also, I feel like, again, if I'm thinking of what data collection itself is, what intersectional data collection is, I think it needs to be done slowly and thoughtfully, with the input of others, with the input of the community. And I see that process, you know, standing directly against the ways in which we think of data collection or data generation now, which is often this very, very fast thing that at times is maybe not very intentional. So I'm interested, again, in this sort of slowness. So that's the first step. And the second step has been thinking about data training and data labeling, and the labor behind cleaning a data set; that was for a fellowship with the Mozilla Foundation. And the third step is now thinking of generating the model, and algorithmic audits, and those are things that I'm currently looking for funding to do. And the reason the project ends up taking so long is that it's really hard to apply for arts funding, because this is an art project, when it's for something like this kind of technology research. And one of the reasons I state it's an art project is that I'm really interested in failure. Like, the model that I'll be generating is going to be extraordinarily misshapen, because our text sources are coming from so many different places: it's articles, it's poems, it's, like, song lyrics, it's transcripts of conversations. And so the model itself actually won't be very useful to anyone but me. But also, if we're thinking of this from a more traditional HCI or computer science angle, one of the things we'd be deeply concerned about, right, is the structure and performance of this model.
And for me, that's almost one of the afterthoughts; what matters much more is the entire process of building, and actually asking, well, how do you make feminist technology, and really asking that at every step of this creation process. And so I have to say, like, this project is a lot about failure and friction and imperfection, right? And also, it's like a handcrafted project, right? Like, it's like making a drawing; imagine making all of this, you know, from scratch.

Justin Grammens  38:11  
Sure. Well, I was looking at it on the website; there's an open source toolkit that you can download, right, with essays from the Feminist Data Set. So you're really doing this in public, you know; it's very, very open.

Caroline Sinders  38:23  
Yeah, I mean, the whole project is. So there's a related project to it called TRK; we made a web browser tool for anyone to do their own data set labeling. And it was me, also, as a design researcher, rethinking microservice projects and microservice tasks. And so there's a wage calculator in there to help people think about what this labor is. So this was the project for the Mozilla Foundation. I initially really wanted to host payments on this website, and I realized if I did that, I'm very quickly turning into a startup, and I don't have the money to process people's payments via Stripe. What I could do is show the idea of this hidden labor, and how we think about labor. So one of the big things I was thinking a lot about is: how do you break down the cost and idea of labor, and how do you relate that to these really small microservice tasks that we see on things like Mechanical Turk or CrowdFlower, which are, you know, part of the backbone of machine learning, where datasets can be cleaned and labeled, and models refined and trained. And the thing I realized, from doing some user interviews with different research labs, and even with microservice workers themselves (I became a Turker for an entire month), and just, you know, talking to people who worked in startups, is that we're often looking at these tasks very alone. So, like, we're saying, oh, the task at this price is a fair one. And then I was like, wait, but you have to think about it in terms of someone's entire day and how long it takes. Because how do we know if this is a fair price if we don't know how much money someone is making, and over how long? So that's where the wage calculator came from; that was my intervention with the payments, actually saying people should work an eight-hour day, and they shouldn't have to work more. How do we break this down and think about this more? The calculator only really works right now for Washington State. The reason behind that is I did a lot of research into where the majority of Mechanical Turkers come from: India and the United States, according to some research that I found in a peer-reviewed paper. I then decided to go with the US because there was a higher cost of living, and it turns out that the state with the highest minimum wage is Washington State, which, as well, is really political, because that's where Amazon is headquartered. So then I looked at the difference between the minimum wage and the cost of living in Seattle, and the difference was, like, about $9 or $8; so, like, the minimum wage is 11, and the cost of living needed is, like, maybe it's less, maybe it's like 16. So then I crafted an entire day. I used some best practices from when I worked remote. I wanted to give people five-minute breaks at the top of every hour for a bio break that's paid for, because when you're a microservice worker or a gig economy worker, you're not paid for your breaks. You should have a lunch break, so they get a forty-five-minute lunch break. So an eight-hour day suddenly becomes a six-and-a-half-hour day. And then I did a series of experiments on timing, of how long it would take to do these different tasks.
And it seemed like people had this idea that microservice tasks are a few seconds long. Even doing some basic image labeling is not a few seconds; it can be, like, 20 seconds to a minute, depending upon what you need. So that's all in the calculator. And effectively, what I came away with is, you know, there are these little sliders; I hope people look at it. So if you put something below 20 seconds, it tells you that's kind of an impossible task, because it should take longer; nothing really takes a few seconds. And then if you put it below 11 cents, that's not a living wage; those are the minimums, and with those two minimums, that's where you start to get towards this living wage. And then you add the amount of things you'd want someone to label, right, or to sort through, and that gives you a breakdown of how many hours or days it would take someone, and what their daily payment is. And the goal we're aiming for is this living wage payment; I think the day rate comes out to, like, it was like $120 a day. So the goal is really to show people, through this calculator, through this visualization, what it actually means. And then what people can also do is just use our tool to label a data set. It's open source, so you can use our code, and you can run your own instance of it in any way you want; a few research labs have. If you're the creator of the project, we ask you different questions, and then you have to describe the project, so the person that's then going to be labeling knows what it is they're actually looking at. And that was also me thinking a lot about the lack of consent in a lot of these microservice jobs people do, like not really quite knowing exactly what it is you're looking at; there's kind of a description, and then you could refuse to do it. But if I were running my own startup on this, there's a lot of things I would change.
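(A rough sketch of the day-rate arithmetic described above: an eight-hour day minus hourly bio breaks and a lunch break, a 20-second floor on task time, and an 11-cent floor on per-task pay. The constants and function names here are illustrative assumptions, not the actual TRK source code, but the numbers land near the roughly $120-a-day figure mentioned.)

```python
# Illustrative sketch of the wage-calculator logic, assuming the figures
# from the conversation. Not the tool's actual source code.

HOURS_PER_DAY = 8
PAID_BREAK_MIN_PER_HOUR = 5      # five-minute paid bio break each hour
LUNCH_BREAK_MIN = 45             # paid lunch break

MIN_SECONDS_PER_TASK = 20        # anything faster is flagged as implausible
MIN_PAY_PER_TASK = 0.11          # dollars; the floor the sliders enforce

def working_minutes_per_day() -> float:
    """An 8-hour day minus hourly breaks and lunch (~6.5 hours of labeling)."""
    return HOURS_PER_DAY * 60 - HOURS_PER_DAY * PAID_BREAK_MIN_PER_HOUR - LUNCH_BREAK_MIN

def estimate(num_items: int, seconds_per_task: float, pay_per_task: float):
    """Estimate days of work and daily pay for a labeling job."""
    if seconds_per_task < MIN_SECONDS_PER_TASK:
        raise ValueError("Implausibly fast task: almost nothing takes a few seconds.")
    if pay_per_task < MIN_PAY_PER_TASK:
        raise ValueError("Below the per-task floor: not a living wage.")
    tasks_per_day = working_minutes_per_day() * 60 / seconds_per_task
    days = num_items / tasks_per_day
    daily_pay = tasks_per_day * pay_per_task
    return days, daily_pay

days, daily_pay = estimate(num_items=10_000, seconds_per_task=20, pay_per_task=0.11)
print(f"{days:.1f} days at ~${daily_pay:.0f}/day")  # ~8.4 days at ~$130/day
```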

Justin Grammens  43:21  
Sure, but yeah, you're always constantly finding funding. I will say that, you know, one of the things that I think is really awesome about your projects is there's that visual aspect to it, right? I just think the work that you do really pops; it just looks really, really nice. And that's needed, I think: when you try and describe data, it just can't be a bunch of numbers, you know, on the screen. So I just think your projects are amazing, Caroline; they look really, really cool. And I'm really glad that you have that artistic talent to sort of bring it out.

Caroline Sinders  43:50  
Yeah, I mean, I'm a big fan of humanizing data. I'm a big fan of Giorgia Lupi, and that's one of her theories, data humanism. And I'm a big fan of Data Feminism by Lauren Klein and Catherine D'Ignazio. But I think the thing I'm most interested in is legibility, and I think that does come down to design. Anyway, I'm interested in legibility and impact, and I think a way to sort of get my point across more quickly is to leverage all of these design and art skills that I have to show people what I'm talking about. So some people are like, tell us, don't show us; I'm like, no, I'm just gonna show you. And sometimes that's the best way to do it: just to, like, show you exactly what it is, or build a little experiment you can play with. And that's at the core of a lot of my work.

Justin Grammens  44:36  
Sure. Well, one of the questions I like to ask people is: what's sort of a day in the life of a person in your role these days?

Caroline Sinders  44:43  
Oh, I think it depends on, like, what day, because I run my own little lab; it's a consultancy and research lab and a design lab. So again, if anyone's interested in, like, ethical tech, or you like the way my brain works, please reach out to us; we're always looking for people to work with. I'm also a part-time lecturer at the London College of Communication, in the data visualization master's program, where I teach critical research. I also do a lot of fellowships, and I still do art. So it really just depends on the day of the week. Like, I just wrapped up a fellowship with my friend Anna Ridler; we decided to collaborate on, like, a one-time project for Ars Electronica's AI Lab. Anna is a really fantastic AI artist; she makes a lot of really beautiful GANs and makes her own datasets. And so we collaborated in making a sort of handmade dataset about cypress trees in Louisiana. I photographed over, like, 3,000 cypress trees, and we made this GAN video responding to Hurricane Ida data; it's going to go up on my website soon. So, you know, that was a really sort of beautiful experiment that was us really reflecting on the environmental impact of technology, and also just, you know, the eco-grief and climate grief of right now, and thinking, again, of, you know, the sort of amazing tools we can use with technology, that they're new paintbrushes in a way, and how we can also think of a data set as this very visual thing, and in a sense a very visceral thing. That's one of the things I really like to do: I want to show the heft of data, in a way that makes it very real. And yes, that was something really exciting to work on with her.

Justin Grammens  46:21  
Awesome, very cool. Well, great. Sounds like, yeah, you've got your fingers in a lot of different things; you probably go deep on certain things for a period of time, and then come back up for air as these projects maybe slow down or wrap up. It's a very, very exciting and fun space to be in. Just as we get kind of close to the end here: are there any sort of books or conferences, or, I mean, maybe we touched on some of those, but if somebody was just getting started in this space these days, if you were to rewind back the clock so many years, what's some advice you would give to somebody as they're starting to work in this space?

Caroline Sinders  46:53  
Sure. I mean, one of the conferences that was really influential for me is a conference actually in Minneapolis called Eyeo, so E-Y-E-O. You know, Jer Thorp was my professor in grad school; he was the guy I mentioned earlier that I ran into at this conference, my first physical conference back, and he really changed the way I thought about data and the way I thought about technology. And what I like about Eyeo is there is a lot of stuff on machine learning there, which is great; like, they've had Gene Kogan speak quite a few times, and Hannah Davis, and those are two favorites of mine. I think, for me, I always want to remind people that we're dealing with humans. So any data set, even if it's extraordinarily mechanical, even if it's the performance of two computers that are just sending packets back and forth to each other, that's still a form of human data, because someone at some point made a decision that that data set needed to be created, right, even if it's in a purely mechanical way. And so one of the things I want us to think about is that we should almost rethink the worth of data; like, it's treated as so disposable. And so how do we shift the ways in which we think about data creation and data maintenance? How can we think of ourselves as data stewards, especially in the space of machine learning and artificial intelligence? One thing I like to remind people, in particular with social media, is, like, you know, every data point we see about harm online is a real person's traumatic experience. So again, how do we shift the ways in which we think of datasets as these much more embodied things, as these things that are actually priceless, right? They're not worthless, they're not disposable. How do we shift that understanding and sort of remember that we're dealing with people, and we're dealing with the byproducts of people? And so for me, I think of that as being, like, a steward. And I'm wondering if we need a kind of almost, like, oath as technology workers that work so much with data, because we need so much of it in machine learning, right, to make anything do anything. But how can we really remember that importance? And so for me, that's one piece of advice I would give people: really thinking about this. And then in terms of, like, practical skills, Eyeo, again, was super eye-opening to me. All their conference talks are published online, like, a few months after the conference; you can watch them. But for me, it really helped shift and change the way I thought of technology, and it's very creative. And I think even for someone who thinks that they're not creative: you are, or you can be, a creative person. If you're interested in thinking of the future possibilities of technology, this is a really great space to look at.

Justin Grammens  49:29  
Yeah, for sure. I missed Eyeo last year; it sold out nearly, you know, right away. I did get a ticket this year, though, so I'm going, and it's going to be at the Walker Art Center this year. So I'll be going in June. It'll be awesome. So yeah, I'm really glad you brought that up, and I will make sure to put a link to it if there are even tickets still available now; they just went on sale a couple of weeks ago, so hopefully they're still there. You know, I'm curious, are you looking at NFTs, you know, at all?

Caroline Sinders  49:56  
A little bit. There's a research project I've been wanting to do about AI that also sort of relates to NFTs. And some of it is, I've been thinking a lot about the environmental impact of all the things we do with technology, and my own environmental impact. I would say, like, I'm trying to be environmentally conscious, but I'm also, like, not the best. Like, what's the word, where you collect all your scraps? Oh, I don't compost, because I live in a building where you can't. But I am really interested in some of these deeper conversations around: what are the longer-term effects of our own footprints? And, you know, I'm wondering, like, me monitoring how much I binge-watch Netflix is not necessarily going to change the environmental impact of the world, because, you know, it's related to so many bigger systems; much bigger companies probably need to monitor more than I do. But this is all to say, I've thought a lot about NFTs, and I have a lot of, like, sort of complex thoughts about them: on one hand, because of their impact, but then, on the other hand, like, they've enabled a lot of people to make a lot of money, and enabled a lot of artists who normally don't make a lot of money. You know, that's an interesting thing, for sure.

Justin Grammens  51:11  
Yeah, I was just thinking about the monetization of data that we create ourselves, right; that's kind of where my head was headed. And, you know, some people think it's the greatest thing since sliced bread and you should definitely spend a bunch of time on it, and other people are naysayers. So I think the jury's really out with regards to where it's going to fit, but it has done some good with regards to putting a price on artistic output, I guess, artistic energy.

Caroline Sinders  51:35  
I'm kind of in the middle. Like, I'm glad that artists can make money. I don't think that NFTs are necessarily, like, revolutionary; I think they're more evolutionary, because we've had digital art for so many decades, for so many years. You know, I don't think they're really disrupting the art market, because people have always bought art, and art markets make a lot of money; this is just, you know, like, a digitized market. I think some of the issues I have with NFTs is that there are so many markets that have, it seems like, almost no real trust and safety teams, or no real ways to sort of ensure quality. So, like, people's artwork is getting stolen and placed onto NFTs, and because it's blockchain, it's really hard to then remove that or take that down. And that's where I think quality control really needs to be introduced into a lot of these platforms in a much more thoughtful way. And it seems like a lot of these new platforms maybe aren't really doing that, which is problematic, because I wouldn't want someone to take my work without my consent, and put it on one of these marketplaces, and then make money off my work. Like, I would not really like that; that would not be nice.

Justin Grammens  52:46  
Right? Yeah, no. And you've kind of brought it full circle to what we were talking about with regards to harm, and, you know, the whole sort of social side of technology, for sure. Well, this is an interesting space that you're working in. Is there anything else you wanted to share that maybe I didn't talk about?

Caroline Sinders  53:01  
No, this was great. I really enjoyed our conversation. Thank you so much for having me.

Justin Grammens  53:05  
Yeah, for sure. And I guess, how should people find you? Is it just go to carolinesinders.com? Is that the best way?

Caroline Sinders  53:11  
Yeah, I just go by my name on the internet, so I don't have any clever handle. It's just Caroline Sinders. So if you're looking for me, that's where I'm at, across most platforms.

Justin Grammens  53:21  
Excellent. Okay, well, cool. As I said, I'll be sure to put links to your website and all the projects that you've been working on in the liner notes for this podcast. And again, I appreciate your time. Very, very insightful, Caroline, and thank you again for being on the show.

Caroline Sinders  53:33  
Thank you so much for having me.

AI Announcer  53:36  
You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn if you are interested in participating in a future episode. Thank you for listening.