Season 4 Episode 5 Transcript

The Twice Over Podcast

Season 4 Episode 5: Consent Regimes

Podcast Intro

Hello! My name is Steve D’Agustino, and my co-host Anne Fernald and I welcome you to the Twice Over Podcast. Because to teach is to learn twice over. In this episode, Consent Regimes, Anne and I are joined by Florence Chee, Associate Professor in the School of Communication and Director of the Center for Digital Ethics and Policy at Loyola University Chicago. She is also Founding Director of the Social & Interactive Media Lab Chicago (SIMLab).

Anne Fernald: [00:00:00] Welcome back to the Twice Over Podcast. I’m really happy that Steve and I are joined today by Florence Chee from Loyola University Chicago. She’s an associate professor in the School of Communication there.

And she’s also the director of their Center for Digital Ethics and the founding director of SIMLab, which is the Social and Interactive Media Lab. Professor Chee is the author of Digital Game Culture in Korea, as well as many other articles and book chapters. And she’s a specialist in artificial intelligence, games, and social media.

Florence, thanks so much for being a guest on the Twice Over Podcast, and welcome.

Florence Chee: Thank you. It’s a pleasure to be chatting with you, too.

Anne Fernald: So can you tell us a little bit about how you got interested in looking at games and social media? Let’s start there before we move into AI.

Florence Chee: Yeah. So as someone who has researched social interactions in various forms of communication media, I look at the meaning making practices. I’m a trained ethnographer, and I look at pretty much the social world, the cultural world, of how people engage with various technologies.

And coming from places where I’ve worked with technologists, engineers, and computer scientists, it often fell to me to look at the social side of things. And that’s how I broadly got into looking at what makes game communities compelling. And that had me going to various places in the world and talking to gamers themselves about what they get out of participating in these communities, to lend a more human story to what might otherwise be dismissed as problematic usage of new [00:02:00] media. Typically, games have been villainized in the media; they’ve been political talking points. Often when there’s been a school shooter, as is often in the U.S. news especially, the first thing that they tend to bring up is that this person was a gamer. And so that got me into looking at some explanations from the social side of things, to delve deeper into the life worlds and what people really find meaningful about engaging with technology.

Anne Fernald: There are different kinds of games, [00:03:00] right?

There are tabletop games that you play with other people who are in the room. There are video games that you play with a friend in the same room. There are video games that you play, like Minecraft or something, where you’re online with other people who are not in the room with you. And then there are the kind of idle games, like Solitaire or Candy Crush, that you might not even play with anyone at all, right?

That’s a rough taxonomy, but can you talk about how the social aspect of gaming changes depending on the context?

Florence Chee: Absolutely. There’s definitely a truth to what Marshall McLuhan, a founding father of communication studies, noted: the medium is the message. And so we tailor our behaviors, our interactions, accordingly, of course.

But this is to say that a lot of digital games end up having analog counterparts. They are part of our entire life world. At least, this is what I’ve argued in my own work: that virtual worlds are extensions of our meatspace world, our real, tangible, embodied worlds. And so you can see this in some of the works that I’ve referred to.

Studies of what housewives get out of reading romance novels, for example, an activity done in a solitary way. Radway’s Reading the Romance is a key text.

Anne Fernald: I love that book. That book is [00:04:00] amazing.

Florence Chee: So she talks about how this activity is an act of resistance rather than escapism. And so these are some analogs with the game world, or with various communities, whether that’s participating in fan cultures, or games and gender.

And so I’m trying to show that the digital world is an extension, as a whole, of our everyday life.

Anne Fernald: Okay, you’re just blowing my mind. I want to slow you down for a second because my mind is completely blown. You’re showing me something that I think is really brilliant. I’m super excited by what you’re saying because you’re talking about the ways in which things that are private, that feel completely solitary, are part of community and can be part of community building, [00:05:00] right?

Whether it’s reading a romance novel or watching a TV show or playing an idle game on our phone, that it’s an act of resistance, that it’s an act of protest, that it’s an act of something, of participation, right? So can you keep going with that? Because I’m so excited.

Florence Chee: This is the beauty of audio.

Actually, it’s a drawback, because I wish the listeners could see this physical interaction of minds being blown, and smiles. That’s to say that, yeah, when we’re online, we’re really talking about our embodied selves. We do not stop being our embodied selves.

And this is where the discussion of technology tends to separate our social and cultural lives. And part of what I argue, by adding a qualitative standpoint methodology to this discussion, is that we cannot [00:06:00] just think in terms of binaries: real life, virtual life, or fake life, second life. It’s all one.

And this tracks with how our insights, our data-driven insights, tend to be signal and relatively binary, right? So silences get left off the table. I want to talk about how the technologies are being deployed in our everyday lives, in our work, in our play, with our awareness or without our awareness and consent. And this has been happening in the games world.

My form of socializing or escapism, even escapism, or just participation in communities gets tracked and surveilled. And this is how I really got into looking at ethics and games. [00:07:00] And now, as a consequence of looking at data through games, a lot of that discussion has rolled into AI, of course.

Steven: This is just so fascinating. There’s so much there. So, I play games, and I never thought of them as an act of resistance, because I play alone. And as you’re speaking, I’m realizing I am doing this because I am wasting time. Like, purposely, I’m producing nothing.

The other thing you said that was so fascinating is this idea of embodiment, because when I first started working in online learning, and in Second Life in particular, everybody was like a mermaid or a dragon.

The idea was that there would be this space where we could be disembodied. I would come to you as a mind or a personality and leave the burdens of embodiment behind. But you’re [00:08:00] so right in that we’re embodied, right? We can’t do that, as much as we might want to.

Florence Chee: Right. And there’s been perennial research that has tried to understand this separation between online and offline identity.

Whether or not that’s the case is not exactly what these researchers take up; it’s how online identity manifests in the form of an avatar. What kinds of representations do you choose? Do you choose something that looks exactly like you, when given the chance? And so that speaks to representation. Or do you choose something completely off the wall, right?

That doesn’t look like you at all. And how does this reflect what chances for alternative lifestyles or roles or leadership opportunities you might have? So in my research, when I looked at [00:09:00] MMORPGs, which are massively multiplayer online role-playing games, I talked to people who worked dead-end jobs, who ended up partaking in these communities.

And who said that this participation in community allowed them to wean themselves off antidepressants, for example. And it was so emancipatory, this narrative, that to dismiss these games as something like an addictive substance misses the question: what was the scratch that was being itched here?

And what blew my mind when I was doing this research was the interpersonal nature, the social aspect. And so these games are actually lifelines to the social, rather than anti-social, as gamers tend to get typecast. [00:10:00]

Steven: That’s so fascinating. I follow this streamer on TikTok.

She’s a young woman, and her TikTok is really dedicated to all of the harassment she gets as a female gamer playing these kinds of first-person, team-based shooter games. And so I think there’s some tension there between finding community and the values around which some of those communities might be built.

Florence Chee: 100%. What I say about AI is very much what I have said about games for a couple of decades now, which is that these media technologies that emerge are extensions of everyday life. And the same challenges we face in everyday life are also [00:11:00] brought to bear in these spaces. Sexism, a tale as old as time, is still a challenge when we’re playing games and forming community, ironically, right?

Racism, census data, how we deal with entrenched racism, systemic bias: all these things that preceded AI are now even more exacerbated, because the data that we’re using to train models now carry those historical manifestations, right?

So this is why AI is important. Or should be regarded as important to everything from history to philosophy to math to computer science and beyond. Because it is interdisciplinary in [00:12:00] nature. We’re relying on data that we’ve collected from 16th century peasants in France to inform women’s health.

What we can determine about that in decision making in the present, which is, from my standpoint, nuts. But this is what we’re doing in the sciences. So what do we make of that?

Anne Fernald: That’s one of my big worries about AI, right? We can look, as individual human researchers, at an article from 1960 and discount the entrenched sexism in the article and extract what may be valuable to us.

But when we input that whole text into a large language model, it has the same value as a 2024 text and a 16th [00:13:00] century text. Is there a way to train the algorithm to see that? Is anyone working on that? Is that something we should be talking about with our students? I don’t think that anyone’s thought this through very carefully.

Florence Chee: I… given what I’m seeing, there are pockets, of course. I’m not alone in these critiques of AI, to be sure. But we definitely need more widespread awareness. I advocate for education in technology and ethics from K to 12 and beyond, and that’s certainly something we don’t have: the critical AI literacies, and a conversation that keeps going, right?

So a lot of our practices, even if we are educating publics, educating various segments, various stakeholders, it’s not sustained. And this is an issue with technological literacy as well: it’s not like [00:14:00] one laptop per child and you’re done. You have to introduce the whole system and sustain the system.

And what that entails is a lot harder. It’s not a one-time investment. And this is a challenge we are currently looking at.

Anne Fernald: So when you think about ethics and AI right now, is there an intervention, or two or three, that you’d point to? Because it feels to me so overwhelming. What would be the biggest bang-for-the-buck thing?

Is it legislation? Is it regulation? Is it education? Where do you think we should be putting our time and energy in terms of ensuring that AI is helpful to us and not replicating harms?

Florence Chee: The opportunity here is that we all have something to bring, in terms of contribution to ameliorating this overwhelming [00:15:00] state that we’re in. As an educator at a Jesuit institution especially, my go-to, of course, is education.

And that’s something I can do, and it’s incumbent upon me to do: to educate anyone who comes. And so this is a unique privilege of mine, really, to go to where people are. Next week I’m speaking to an international delegation to the U.S. through the State Department, telling them about some challenges in the ethical tech space.

The tendency has been to delegate decision making to these machines, and I would definitely caution against that. And never mind the fact that we have costs. Cost is a real thing. And I’ve tended to advocate for full-cost economics [00:16:00] perspectives, where you don’t just look at how much something costs internally to a firm.

For example, you’re looking at the social and environmental costs. In terms of what is actually sustainable, are we looking at the environmental costs of what it takes? How much water does it take?

Steven: Yeah, how many glasses of water is this question to ChatGPT going to use really?

Florence Chee: Do we really need that?

But we need to ask ourselves even more, as AI is a really nebulous concept at the end of the day. It’s regarded as magic, but it’s actually fancy statistics at the end of the day, right? It’s looking at how data can be mined, how data can predict, how we can ask data to predict certain things.

And now we’re using those same capabilities, to a larger extent, to generate, [00:17:00] given a set of data that we also regard as imperfect. What’s scary to me is where these machines are getting their training, right? And the power that we give, very easily, to what machines assert about us as humans.

An example I like to use is aliens, because that’s relatable. If we’re looking at humans versus aliens, if aliens were to come and try to learn about us, these machines would be able to talk about the corpus of human endeavor. They’ve learned from Wikipedia.

They’ve learned from Reddit, right? So if we’re teaching aliens about what humanity is, what we’ve stood for throughout time, and [00:18:00] we have trained machines on a diet of Wikipedia, which in itself has issues regarding not just expertise, but who is an expert, who speaks English, right? We’re already limiting who is learned about, right?

Who is regarded as an expert, or as an enemy? And any historian will tell you this, right? History is written by the winners. So we have that kind of signal to contend with.

Steven: Is it possible to make something ethical post hoc, when its appearance in the world was unethical?

Florence Chee: This is a real challenge, especially given the current state of affairs in technology and technological adoption worldwide. You just need to see how people regard and use cookies. Cookies were brought about as a way [00:19:00] to manage data. In Europe especially, we’re now encountering all these different consent regimes that ask for our permission.

They demand literacy: what is this data being collected about me? I don’t know. Terms of service and end-user license agreements that we wholesale agree to just in the course of the everyday. We are entering into legal agreements, technically, all the time. But in terms of what is ethical, what we should do, we absolutely can and should intervene at any point and at every point, all the time, right?

So this is like consent, right? If you talk about consent, if you’re on a date with someone, consent can be given and [00:20:00] also, ideally, revoked at any point, right? And if you believe that, which I do, that is a standpoint I apply also to technology: discussions of consent and use, just in order to be an ethical agent.

Where this comes into play in big tech, however, is a challenge that is way beyond any of us as individuals, right? If you’re looking at Silicon Valley, at California law, at Illinois law, already you can see how data is collected differently, regarded differently, handled differently.

And so you do rely upon ethics, policy, and law being in conversation with one another, constantly revised. This is why I find it really interesting now working with various facets of the UN and the Freedom Online Coalition. They all have things to say, recommendations, guidelines on the ethics of AI, calls to action.

These are all non-legal attempts to manage our behavior. This is why I’m also giving talks at management schools. This is part and parcel of management of technology, engineering policy analysis, technology policy management agendas, and how we educate folks in these areas.

This is not going away. We have to be in conversation constantly, and whether or not a principle or a guideline makes it into a situation or jurisdiction where it actually can be [00:22:00] enforced with teeth is also a challenge that we face. In the U.S. in particular, if we’re looking federally, we’re dealing with constitutional law, right?

And that organizes our government; that organizes how we see certain rights and freedoms. And if you are looking at the U.N. Charter of Rights, then you have privacy regarded as a human right. And we know how privacy is one of the first things to be compromised when we talk about surveillance data.

And the technology is in our midst right now. So we act first and then apologize later. But, like democracy, these things don’t go away. It’s a consistent challenge and something we fight for every day. So that’s our charge as educators as well.

Anne Fernald: You’re talking to people who are in school to be [00:23:00] managers or to become better managers. You’re talking to students, and you’re talking to technologists, and you’re talking to policymakers. Have you noticed differences among those groups in terms of what their misconceptions about AI are? Can you give me a little more texture: what do you want students to know that they don’t seem to be understanding, versus what are the Silicon Valley folks missing?

Florence Chee: So on the student front, I really feel for the students, always, because they are at the forefront of any kinds of policies or global economic challenges that they have to be responsive to in their program of study. I teach an upper-level digital media ethics course where we talk about ethics and AI.

We talk about cyberbullying and things that they might not have [00:24:00] thought about, especially metadata: when they take pictures with their iPhone, why is the file so big? And they are very sobered at the conclusion of those lessons. And so just creating awareness in that population is important.

It’s what we ask of 17-year-olds when they enter university: everything you learned, you might have a second thought about. It’s a process of unlearning as well. So when we’re teaching, we learn twice over. But when we’re learning, we might also unlearn.

In that process, right? So I have students saying to me things like, what’s the point? It’s all happening to us anyway, right? We have to deal with it. And they don’t sound all that different from some of the older folks [00:25:00] who are in positions of relatively greater power. At the same time, these technologies are foisted upon us, and we are in positions where we haven’t necessarily been trained or onboarded properly, or where our consent is not there.

Anne Fernald: You’re describing a real passivity that I recognize in myself.

And I think it’s so antithetical to the best practices in education and to Jesuit education, which is always about this reflection and practice and implementation, and then discussion, and then more reflection. And isn’t it striking that we have this [00:26:00] incredibly powerful new phenomenon in our world, in our digital world, of artificial intelligence, and so many of us are so overwhelmed by the strength of it that we just feel like, huh, I guess there’s nothing I can do, when in fact, supposedly, we’re reflective people who can practice and be critical and think.

So I really appreciate you trying to help students regain some agency, even in the face of this big change. When you go to talk to technologists, do you find that passivity as well? Or do you find something different?

Florence Chee: Oh, gee. Where I encounter a sort of disempowerment is with the students and some of the more powerful actors in the higher education context. In the [00:27:00] tech context, it’s more of a “how do we get even more powerful?” It’s “we can, so let’s keep doing it, and let’s see what the market says,” right?

Often the market is also vaguely invoked as just a carte blanche for folks to pursue their projects unfettered by any sort of regulatory oversight. And so this is the discourse as well. The prevailing U.S. strategy or approach to regulation is that companies should self-regulate, and interference, whether it’s to gesture vaguely at ethics, or to restrict, or to ask for [00:28:00] consent like cookies, is regarded as an inconvenience.

Steven: And so you mentioned earlier the social, political, and environmental consequences of that kind of perpetual, I guess for lack of a better word, innovation. Does that make sense to you?

Florence Chee: Yeah. Kate Crawford, whose book is called The Atlas of AI, has been very prominent in regarding these practices as extractive.

To look at how we extract raw resources from the environment, for example. There is no end to that, but if we’re looking at sustainable practices, then we might want to think twice about how we approach e-waste, or look at water usage for AI models. What are we actually doing, and what are we doing it for?

And can we [00:29:00] keep going? What about innovation? And so my response to that was, we can innovate in many ways. We can take leadership in many ways. And it doesn’t all end with us continuing to proceed at an unsustainable pace.

Taking leadership also means saying no at times, right? It’s a harder kind of leadership to engage in, often: we actually don’t need to do this right now, but here’s what we should do, right? That’s the win, right? So how do we innovate and be ethical?

Anne Fernald: That’s great. Before I ask my last question, I have one more question for you.

So, two questions. I’m really curious to know: we’ve talked a lot about what you’re speaking about, but I’m wondering if you have a research project that you’re working on now. What questions are driving your research agenda right at the moment?

Florence Chee: So this is bringing it back to ethics and games and ultimately trying to get back to that place of play that got me into gaming in the first place.

But dealing with the current world, it’s also about risk mitigation and safety, which is also where I find myself speaking and advising on matters of AI and ethics. My main project is looking at what I call pasteurized play: trying to do for AI and games and play what pasteurization did for food.

Because a lot of our play and games practice has been tainted with toxicity, with predatory and extractive practices. Can [00:31:00] we find ways to pasteurize these processes: how we handle data, how we extract data, how we feed signal back, right? All these different ways that everyday folks are engaging.

And we’re looking in the home environment, where minors have access to all these surveillance technologies, and they’re having their eyeballs scanned, their voice data collected. What do we have to account for that? And this is absolutely a safety and risk mitigation question, which a number of other folks are working on right now.

So it comes full circle to gauging what the state-of-the-art practices are in making AI usage safer, however [00:32:00] that is defined. And that’s part of my work right now: to define the places and spaces where that happens, and to advocate, fundamentally, for helping to make our play spaces a little safer, more consensual, more ethical.

Anne Fernald: Our last question is about a teacher that inspired you or that mattered to you. So I imagine there are many, but can you tell us about someone who’s been a teacher to you formally or informally in your past that you want to give a shout out to?

Florence Chee: I would like to give a shout out to my pre-calculus teacher in university. I started as a computer science major, and we had to take a lot of math, starting with pre-calculus, and coding classes. Måns Landvig Hansen taught me that you don’t have to [00:33:00] really check if students are cheating if you write the exam in a certain way. What he did was say: okay, you’re not allowed calculators in class, but you can just write the exam with what you know, and I will, from my end, write the exam in a way that you don’t need a calculator to demonstrate that you understand the concept and have a mastery of it.

And that really stuck with me, the work that he did as a teacher. I take a little bit of that and apply it, or try to apply it, in my everyday teaching practice as well.

Anne Fernald: Wow, that’s great. Florence, this has been just terrific. The hour flew by. I’m blown away by what you’re doing and so grateful [00:34:00] to get the chance to talk to you a little bit.

It was just wonderful. So thanks so much for being our guest.

Florence Chee: Thank you for having me.

Steven: Thank you. That was really something. Thank you so much.

Host and Guest Information

Dr. Anne Fernald is the former Special Advisor to the Provost for Faculty Development and co-chair of the University Council on Diversity, Equity, and Inclusion at Fordham University. She collaborates closely with the Chief Diversity Officer and his team, and in spring 2019 she spearheaded the Provost’s Office Initiative on Inclusive Pedagogy and Student Engagement, which continues in 2019-2020. At Fordham since 2004, she facilitated the Arts and Sciences Faculty Pedagogy Seminar from 2016 to 2019. A scholar of modernism with a special focus on Virginia Woolf, she is the editor of the Cambridge University Press Mrs. Dalloway (2014) and one of the editors of The Norton Reader, a widely used anthology of essays. Born and raised in Seattle, Washington, she lives with her family in New Jersey. She occasionally updates her blog, Fernham, and can be found on Twitter @fernham.

Dr. Steve D’Agustino is the Senior Director for Online Programs at Fordham University, where he supports the development of distance learning programs. A lifelong educator, he began his career as a public high school teacher and has served as a school district administrator, a clinical supervisor of student teachers, and the founding director of a community learning center in the Bronx that provided access to learning technologies for at-risk high school students and their families. He shares his research and thoughts about effective practices in online teaching through a number of publications and his blog, Learning at a Distance.

Dr. Florence Chee is an Associate Professor in the School of Communication and Program Director of the Center for Digital Ethics and Policy (CDEP) at Loyola University Chicago. She is also Founding Director of the Social & Interactive Media Lab Chicago (SIMLab), devoted to the in-depth study of social phenomena at the intersection of society and technology.

Her research examines the social, cultural, and ethical dimensions of emergent digital lifestyles with a particular focus on the examination of artificial intelligence, games, social media, mobile platforms, and translating insights about their lived contexts across industrial, governmental, and academic sectors.

She serves as an External Consultee to the Freedom Online Coalition’s (FOC) Taskforce on Artificial Intelligence and Human Rights (T-FAIR) and is a Key Constituent of the United Nations 3C Roundtable on Artificial Intelligence.

She has designed and taught graduate/undergraduate courses in Digital Media including Game Studies, where students engage with debates surrounding diversity, intersectionality and media production through social justice frameworks.

Follow her on Twitter @cheeflo

Mentioned in this episode:

Marshall McLuhan

Radway, J. A. (1984). Reading the romance: Women, patriarchy, and popular literature. University of North Carolina Press.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
