16. Unmasking AI: Angeline Corvaglia on Bias, Emotional Design, and Protecting Your Unique Voice
Season 2, Episode 5 of Kinwise Conversations · Hit play or read the transcript
-
Lydia Kumar: Welcome back to Kinwise Conversations, where we talk with everyday leaders navigating the promises and pitfalls of AI with clarity, care, and courage. I'm your host, Lydia Kumar. In today's episode, we're joined by Angeline Corvaglia. She's an advocate for digital literacy and equity with a background as a CFO who knows how data works. Angeline pulls back the curtain on how AI gets trained, not just by data, but by human decisions. We dig into everything from beauty bias in data labeling to the emotional design of chatbots and the urgent need to preserve young people's unique voices in a world of algorithmic influence. This one's a call to curiosity, critical thinking, and community. Let's dive in.
Lydia Kumar: Okay, Angeline, thank you so much for being here. I want to start by just giving you an opportunity to introduce yourself to the people who are listening. I know you've had this background in executive leadership, and then you've really dived into leading the way on AI literacy, AI safety, and youth empowerment. And so, I want to open up the floor to you to tell that story and how you ended up creating Data Girl and all the amazing resources that you have.
Angeline Corvaglia: Well, thank you so much for having me. It's always nice to have these conversations with people who are trying to get the word out just like me. As you said, I was working in executive leadership. I was the CFO in a financial institution in the Czech Republic and Slovakia, actually, in a big bank, UniCredit, which is an Italian bank that was in 20 countries. And as a CFO, I also had the data office. This meant that I was responsible for all the reporting and the data warehouse and all of this, and that's actually the part that I liked the best. I liked it because, you know, I kind of like dealing with data, but I also like to see the impacts it had on people when they understood how to use data, how to analyze things in a different way. So, a very non-traditional CFO. Once I decided to leave the financial services industry, I went to work for a software provider and I was helping with digital transformation with their clients. And I really liked that, but it just didn't feel right. And I just quit. My idea was to be a consultant for digital transformation, like an independent one.
And then, I have a daughter, she's nine now. And so I was also interested in AI because ChatGPT had just dropped the bomb and changed everything for the average consumer, so to speak. So I was interested in that and helping people get awareness, average people like me who knew nothing about AI, and kids, right? So, I was just spending some time trying to build this digital transformation consultancy. And on the side, I was having fun with, "how are we going to teach this to kids?" And I created a video just for fun. I was creating videos about general awareness, like online safety. And I saw someone, his name is Bill Schmarzo, he created a blog for young people about their data privacy. And I wrote him, I was like, "Oh, that's really nice, you know, can I create a video?" And I created it. It was the first Data Girl video. And people really liked it. They liked the message. The way I give messages is really simple. You know, I give relatively little information, just enough to get people thinking and conversations started. And about one month later someone said, "Oh, you have Data Girl. You should have someone that does AI." So I have Ayla AI Girl too.
And after like two and a half months, I realized, the more you get into this, the more you realize there's a need, there's a problem. There can never be enough people working on it. And so I said, "I'm just gonna do this full-time." And this was more than a year and a half ago. And that's just what I've been doing since then. Really full-time trying to understand how do we get this message out to young people, to parents, to educators. What they need to know to be able to shape their own future in this AI-filled online world.
Lydia Kumar: Yeah, it's really changing so much. We were chatting about this a little bit before we started recording, just about how the landscape changes and there's a lot to learn. And for the average person, it is very challenging to learn everything that you need to know to understand how AI tools work, and it's really hard to keep up with all the changes. And so to have people who are committed to doing that in simple ways to help get conversations flowing, I think that's a really important role and a big gift for parents, teachers, students, and educators. There are so many people who need to understand this, but it can be hard to know where to start.
Angeline Corvaglia: Exactly. Well, there's also so much news, right? As with anything, you just get the most extreme pieces of news. So it can be overwhelming in that sense too, you know, there are lots of lawsuits going on from parents, this kind of stuff. On one side you hear, "Oh, these amazing new tools are out. These use cases are there. The government wants to invest in AI. There's an AI arms race going on." And by the way, people are suing because of the awful things that have happened, multiple suicides after getting too attached to AI bots. That can be overwhelming.
Lydia Kumar: Absolutely. As you've been doing this really full-time for the last year and a half, have you been connecting with parents or kids who have used your materials and have you heard stories of what their experiences have been like with Data Girl or Ayla AI Girl or the different characters or resources you've created?
Angeline Corvaglia: I have, and it's been really nice. It's been really rewarding just to hear parents who say that it makes it easier for them to have certain conversations. Especially from the very beginning, the privacy community really appreciated my work because privacy is something that we all know that we need. But I would say, without having the statistics, a vast majority have kind of given up on their own privacy. You hear people say, "Oh, anyway, they have all my data." The privacy community is like, "Oh no, it's not." You know? So they've said that my way of saying it in a very simple way really helps them. I have an example with a cookie. There's a story of a cookie, and I say in one of the videos, "If you take a cookie, smash it until it's like sand and then you spread it out through a whole city, then AI is gonna put the cookie back together." So parents appreciate that because kids can understand that, you know? So yeah, this has been the feedback that I've received. I've always been working to refine it. It can always be better. This is why I'm working now with people to have curriculum, because teachers really need to have a curriculum. You know, they don't have time for it either. They need to be able to build it into what they're already doing. So for that, I've really been relying a lot on partnerships. Also, there's an amazing woman in Nigeria I've done some stuff with because I am really focused on the creative side. So if you put a lot of energy into getting into schools and getting into groups, then that would kind of dampen the creative side, so the partnerships are very important.
Lydia Kumar: Yeah, I think that's a cool approach and allows you to be very global because you're able to partner with people all around the world, and there's only one of you. So I think the fact that you are leaning into the creative piece that you're really skilled in and seem to enjoy, but also the collaboration that you're able to do throughout the world is cool. Thinking about throughout the world, are you seeing the same challenges no matter what? Like you've done some work in Nigeria, I know you live in Italy, right now you're in the US in Indiana. Like you're all over the place. And do you see the same questions and challenges with AI throughout the international community, or do you see different subsets of communities having different questions based on geography?
Angeline Corvaglia: Specifically with AI, there are similar challenges, especially when it comes to the worry about how it's kind of taking over people's critical thinking and learning. This is a challenge worldwide, but one of the things that I've very strongly learned is these chatbots are trained on certain data. They're trained on the data that's available, which is from a certain subset of countries and cultures. So if you speak to someone—I could give an example of Kenya because I've heard this multiple times from people in Kenya, like the data labelers who train the AI—even there, they're like, "It doesn't know about Kenya and people in Africa. It doesn't know about our local culture." So when young people, for example, use the big tech chatbots, so to speak, it seems amazing to them, like it does for anyone. They'll have the same challenges, but it doesn't know their local reality, so their culture isn't there. So some of them react with, "This means that my culture is somehow not as important as the others." And some will just say, "Okay, but I can't really use that on a day-to-day basis for certain things." So these are challenges I would say in the Global South that aren't there in the countries that have more data available and have been used to train the chatbots. But in general, uses like people using bots as therapists, as companions, this I've heard. As homework help, this I've heard worldwide. I think kind of anywhere that people have access to enough internet, they'll be using it for that.
Lydia Kumar: I think your example of Kenya is so helpful for people who—I think AI can feel very mysterious and all-knowing and sentient, even though it's not, but it can feel that way, especially if you don't understand how it's created. And I think that Kenya example is such a good way of helping people see that there really are holes that are impacting people and they're not seeing themselves reflected, they're not represented in the same way as someone else. And I think that is... I need to grapple a little bit more. I'm not sure exactly what the societal challenges with that are, but it does feel problematic to have someone's cultural experiences not represented in the same way as someone else. There's an inequity there that I think is also helpful when you think about data availability and just the limitations of technology. If there's less data available, then the AI is going to act differently than when there is a lot of data available. And there's data that we want to protect. There are a lot of components to it.
Angeline Corvaglia: There is. Yeah. I like to give that example, right? As a white American, you have a completely different view of bias, a different understanding of bias, because you've never lived it. Exactly. And a lot of different sides of American culture will be represented in these AI tools because the data's there. It's kind of hard to understand what it can mean in practice for society if only certain elements are represented, you know? And that's why I like to show it from the international stage, right? Because as you say, it's something you can understand easily. Okay. If whole countries, whole cultures, are missing, then there are also elements, you know, in America, for example, that are missing. It helps you understand that it's basically the viewpoints of certain people that are in there, it's not all-knowing. As you say. And it's just very, very important. That's one of the things... that's also why I like to talk about data labelers and data workers.
There was a big story... I don't know, around a year ago maybe, everyone was writing about Google generating World War II images with a diversity that simply didn't exist. Do you remember this? Google Gemini had been accused of bias, like only white male images were coming out for certain prompts, very biased images. And they tweaked the rules of Gemini to make images that are more diverse. Do you remember that?
Lydia Kumar: I don't, but I'm glad you're telling me about it now.
Angeline Corvaglia: You can look it up because it's pretty, wow. Yeah. So basically, there's a lot of bias. If you put "a doctor," you usually get certain stereotypes. "A taxi driver," "a waitress," you get very traditional stereotypes, right? And so Google tried to tweak Gemini, their AI chatbot, to be more diverse. And it went to the other extreme, things that were just not possible, you know? Like diversity within the Nazi army that simply wouldn't have existed because that was what they were trying to achieve, this kind of thing. So, there was a lot of talk about that. And one of the things that has been talked about from the beginning: if you ask for "a beautiful woman," you get a certain image of a woman, right? If you don't specify. And I spoke to a data labeler in Kenya. And obviously, in Kenya they have different views of what beautiful is than in America, for example, or in Italy. And he said he got very strict rules about what they're allowed to label as beautiful. You know, there was even an actress, she had just won an Academy Award, this African American with very dark skin. And he said if an image of her came up, then he would've had to put "not beautiful," because there were just strict rules, you know? And so if you understand that, especially young people who understand much more and believe the world should be fair, you understand that someone told someone else to teach the AI this. It's not just about the data, it's about personal decisions, and you have a whole different viewpoint about what it is. And that, I think, is super, super important for people to understand. It's the reflection of someone else's views of the world. It just seems to be much more knowledgeable than it actually is.
Lydia Kumar: I think you do a very good job of taking the mask off of AI a little bit. Like in that example and the one about Kenya, I just think you do a good job helping people to see kind of what's going on behind the curtain. Because this is a machine that's being trained using processes that are developed by people, and we're, you know, people are doing what they think is best, but we all have biases and we all have things that we think are best that someone might disagree with and that could be harmful to another person. I think a lot of harm that has been caused throughout society wasn't done maliciously, but it's kind of just people doing what they think needs to be done, and there's a kind of waterfall effect to that. And so it is a challenging thing in that way.
Angeline Corvaglia: It is. I try as much as I can, although I don't always succeed, to not put blame, right? Although in some cases it's impossible not to just get frustrated and throw names out there. Because, you know, other people are doing a great job of pushing for legislation and pushing for the platforms themselves to change, and I think that's very, very important. But what I try to focus on is what each individual can change within themselves to get like a grassroots movement going. That's why I've been really focused on, "how can I get this message across in ways that the most people possible will consume it and understand it?" That's why, you know, with Data Girl, first there were three-minute videos and then the videos shortened to one minute because I understood that people often can't concentrate for longer periods of time, you know? So, yeah, that's what I'm trying to achieve really, is people within themselves, what do they need to know and understand—what basics—to be able to change their behavior? And the more people change their behavior, the more you're going to tap into people who will be strong changemaker voices to push for change, and hopefully it just becomes a societal, cultural thing where we don't take it at face value, you know, but take it for what it is. That's really important.
Lydia Kumar: Do you think the behavior change that you're pushing toward, or you're hoping for, is people being more critical of AI outputs and seeing them as what they are instead of... I don't know, you mentioned critical thinking earlier in this sphere in education. It's even beyond education, but people are sort of outsourcing their critical thinking to AI. And I think if we're critically evaluating outputs, we're not able to outsource our critical thinking. We can take the output, but then critically analyze it and see what it means and think about where it came from. Is that the behavior change that you're trying to inspire in people, or is it something else?
Angeline Corvaglia: No, that's it. Yeah, that's it. And when I think about young people, for example, I think obviously we always feel like it's harder and harder for young people, but at this moment in time where they've been pushed with these free chatbots, and now more and more they're getting into schools in more sophisticated ways... these are times in their lives where they don't know what their unique voice is. You don't know who you are. Like, I didn't know until after I finished college and spent some years in Europe understanding who I am. It's a process over time, and if you have AI that does things for you and thinks for you, it's going to be harder. At a time where you don't know yet what your opinions of the world are, then it's going to be very hard with that thing at your side to really develop your own unique voice. And I'm really worried about that. The only thing that I feel like I can do is help people understand what their own unique voice is and what it means to lose it. And to do that, it's through critical thinking. You know, as you said, to think about, "Does this represent my worldview? Does this represent someone else's?"
You know, if you ask AI about itself, even... I use it a lot. I write some text and I put it in, and I ask it to make it flow better or make it shorter or make it longer, you know, something like this. Just to help make my text better. And there's one thing I noticed: I don't use the word "brain." Like when I talk about the AI models, I don't say that this is the brain. I almost never use an image of a computerized brain because I think this is confusing and it makes people think that it's more human than it is. And I put it in and I used "command center," like "the AI model is the command center of the AI." And certain models, they always just change it back to "brain." Like, "this is the brain." And I have to tell it, "No, I don't want to call it brain." It seems like a small thing, but it isn't a small thing, because if you have masses of people around the world who are being taught to associate an AI tool with the human brain, then you actually think it's more human than it actually is. This is a very intentional analogy that's been built in, and it's just a really small thing. If you get people to... that's probably not a small thing, that's a niche thing, but it's just an example of getting people to see how this thing has been programmed. What does it want from me? Copilot has recently started using my name. It always answers with questions. It started like, "What do you want?" It wants to continue the conversation. Now it says, "Hi Angeline." I'm like, "This is not gonna work on me," but I know that I'm in the minority. Just to help more people understand, basically, that it's very intentional. These design choices are intentional. Yeah, long answer to a short question.
Lydia Kumar: It makes me think about how there's such a big push in the tech world to move to AGI and to really have an intelligence that's bigger and stronger and more capable and more human-like. And so as this push to have more human-like and more powerful AI continues on one side, I think it's very helpful for that push for humans to associate the AI that exists as having these human qualities. And so what you were saying just made me think about that larger push in the industry.
Angeline Corvaglia: Exactly. I've also spent a lot of time trying to understand... actually the trigger was the lawsuit that Megan Garcia made against Character.AI for her son, Sewell, who took his life at 14. And I tried to understand, after I heard about that, I was shocked for days. I really had trouble functioning properly because I didn't realize how far it was already. I didn't realize how young people were using these. And so I needed to understand why people get emotionally attached to them. So I spent time speaking to researchers and psychologists and investigating, and I learned a lot about the design choices that are made to make people trust them. And I'm not gonna... this would be a whole different tangent, but just to say that these tech companies know, through books and movies and things that we've had forever, how and why people get emotionally attached to characters, and they can build it into their AI. They do build it into the AI. There's so much that is put in there to make people trust it. And that's one of the things, right? The more that you feel that it is conscious, the more you're going to trust it. The more you feel that it's like you. Automatically, we trust things that are more like us. Going back to our discussion about bias, it's something we kind of have to learn to work against over our lifetimes, you know, to open our minds. And so the more we think the AI is like us, the more we're gonna trust it. Obviously, the more we trust it, the more we're going to use it and allow it to shape our opinions. So yeah, that is one of the positives for the tech companies, right?
Lydia Kumar: Yeah. And it's funny, I'm 34, and when I was in high school, there was a chatbot on AIM, like AOL Instant Messenger. And I remember talking to that chatbot for a long enough time for me to remember it 20 years later. And as I think about Character.AI and some of this other technology, I was fascinated by a chatbot that didn't have nearly the sophistication 20 years ago. And now here we are with teenagers having access to AI that talks like their favorite characters and how compelling and addictive that would be. I mean, I would've loved that as a high schooler to be able to talk to one of the characters in a book that I read. And that's dangerous. So there's this seductive and desirable quality, and it's scary because of the story that you just referenced with Sewell. It's like, this is a child who is talking to something that is not sentient, that's based on data, and you grow to trust this thing. But it can also say very harmful and very scary things that weren't there at the beginning of the conversation.
Angeline Corvaglia: Right. And actually, fun fact, the very first chatbot, ELIZA, was made back in the sixties, you know? I don't know if you've heard of ELIZA. This already happened with ELIZA. People were building emotional connections even to ELIZA. The secretary of ELIZA's creator, Joseph Weizenbaum, was chatting with it while he was in the room, and apparently after five minutes she asked him to leave because she felt like the conversation was very personal. And this was in the 1960s.
Lydia Kumar: That's so crazy.
Angeline Corvaglia: So yeah, imagine. That's crazy. So they realized it right from the beginning. And yeah, with young people, that is one of the moments where I'm like, this is the hardest. How do you help people? Because they will... I am 50 and I still remember enough of what it was like to be a teenager to say that would've been fantastic. You had stuff going on in your life, you haven't learned yet it's important to talk to other people and trust other people in most cases. And it seems like the perfect solution. You've got this chatbot that you think is a black hole on the other side, or you don't see any consequences of sharing all your intimate details with it. But it's extremely dangerous, for reasons you know. Aside from the addictive nature, once you have someone's trust like that, you could easily influence their opinions without anybody noticing, without anybody ever knowing. It's just so risky, but it's not realistic to say, "Don't use it." It's popping up everywhere, even in WhatsApp. It's appearing in all sorts of apps, right? So it's nearly impossible to block it. So you have to, at the very least, get the knowledge out there that this is what this actually is, what can happen, how it can influence you, and the need to talk to other people. It's a very difficult moment right now. I think this generation is going to have a challenge until we figure out the right level of regulation and protection. It's a really challenging situation.
Lydia Kumar: It's hard to navigate and there's a lot of pressure on different sides, and I think that creates uncertainty. It feels a little bit like the Wild Wild West or something when it comes to AI access and technology. I want to ask you about the curriculum you're building because we've had this conversation about safety and young people and their experiences. What is this curriculum that you're trying to build for teachers? What is it focused on? What is the message that you're trying to communicate? Yeah, I'm just curious.
Angeline Corvaglia: Oh, thank you for asking. So for teachers, for example, we just finished—we got a CPD certification in the UK for a course that we have called "AI for Educators." Basically, what we're trying to help teachers understand is obviously the basics of AI, what it is, how to communicate with it. These are things every course will start with, basically what everyone is teaching, how to communicate with it in order to get outputs. But then we go into what it means to learn. If we're trying to teach an educator about AI, how do we make the most of it? Then they need to understand—and they do understand—they need to think about what it means to learn. How does a human learn and how does an AI learn? And make that comparison, especially for educators, because this is their bread and butter. And if they can understand how AI learns differently and how humans learn differently, then they can better understand the real opportunities and the real risks for students. So that's one thing that we've built in. We also very much built in what it can and can't do. As you say, it's not sentient, doesn't understand emotions, it doesn't actually create original thought. It's just very good at putting together things that are already known in new ways. Just to help them understand this.
This is from the educator side, because we've also discovered it isn't realistic to tell most educators, "Here, add AI literacy on top of your existing curriculum," because they've already got too much that they're supposed to do within a time period. So they need to be able to really build it into what they're already doing. They need to know how young people are using it. They need to be able to have these conversations. And also change the way they teach, right? Because I think of it this way: if you have a tool that basically does the thinking for you, then in order to force a person to think, you have to have an environment where it's not there and you force them to think. So you can, within the class, instead of teaching the material, for example, have them learn it in advance because they're going to use the AI anyway to teach them, and then critically go through what the AI has created within the class. So this is what we're trying to do with educators: give them this basic knowledge of what AI is compared to humans in an educational environment, what are the risks, what are the opportunities, and to give them the ability to know how they can best build it into what they're already doing and help the students in their charge still learn anyway, despite the challenges that the availability of all these tools has created.
Lydia Kumar: I think if people understand how the technology works, then teachers are going to be able to make better decisions about how they set up their classroom. But it's challenging if you don't understand how the technology works. I think a lot of ed-tech tools are not necessarily transparent, or AI tools in general, but it's just you ask it to do this thing and it does it. A lot of times it does it very well, and sometimes it doesn't do it very well at all. And how can you help get the outputs that you want that are helpful, but also how do you understand where those come from? How can you talk about where these things came from? They didn't come out of thin air. So why do you think you got the output that you got? What did you input? How does this tool learn? And then being able to just open up the conversation with students.
I talked with a teacher who lives in Montana a few weeks ago about what he's done with AI with students, and he talked a lot about just opening up the conversation with students at all because students have a lot of thoughts, a lot of questions, but there's not necessarily even space for them to have conversations or to ask questions. And so, they might be getting their questions about AI answered by AI or by TikTok or by their friends, but they don't necessarily have a trusted adult who is opening up the space to talk with them. And so, you know, his advice was just open up the conversation. Even if you don't understand, that's okay. At least you're starting, and you can learn alongside each other. And I thought just having that confidence to be able to talk to young people about technology is really important because dialogue is so important, particularly around complex things, particularly with young people who are still developing their opinions and perspectives.
Angeline Corvaglia: I'm really glad you said that because that is actually one of the key elements: dialogue, right? To not be afraid, exactly as that teacher said. They may know more than you. Even if they don't, there's always something... It's fun because young people always know more than I expect them to, but there's always something you can tell them, like, "Oh, I didn't know that." Especially, you know, the data labeler thing—"who teaches the AI?" Most people don't know that. But also, it's okay because as an adult, you know more about life. You don't need to know more about technology, because they will learn about technology. What they need is a crash course about why certain things in life are important. You know, like book reports. I hated book reports. I remember one book report that I actually liked doing. All the rest were just torture. And if I had an AI that could do book reports, I would've done all the book reports with AI. I really... it's hard to imagine that I wouldn't have done that. What book reports are for usually is to get your personal perspective. What has this book taught you? What did it mean to you? And if you have an AI that just spits out a book report, then you're not going to think about that. And as a teacher, just to say, "I want you to do this book report yourself so you can build your own inner perspective on things." Then of course, not everyone is going to listen, but some will, right? And just have that conversation like, "You need to learn by yourself because..." That wasn't necessary before, when you didn't have any other option. You could learn and then understand later why you're learning. But in an age where the AI can do the learning for you, unfortunately, we have to convince people why they need to learn this by themselves.
So that's something for the teachers, to help open that dialogue, as you said. But for the young people, the curriculum is about helping them fill the gaps, what they don't know. Because they'll often know a lot more than you expect, but they won't know the deeper things, about personal voice and stuff like that. They won't understand the implications of pouring your soul out to a chatbot, you know? So these are the things that we've kind of built in extra for young people. And for parents, the idea is just to help them have a community and help them feel they're not alone.
Lydia Kumar: I think that's really important because parents and teachers have such huge impacts on young people. And they really serve as these guides, sometimes not even to their own children, but to their friends' children. Parents, I think, can really impact a community of young people often, and so can teachers. And so creating that space to think and be intentional rather than just sort of having this happen to you... I feel like this technology has been so kind of thrown into everything without... you know, nobody asked if you want it put on your WhatsApp, but it's there. I think it's just technology that has kind of been sort of dumped on us. And so there are hugely useful things that you can do with AI that can save time and help you create in new and different ways. And there are things that are also harmful. It's like, I think a lot of times, AI is a tool, and the effectiveness of how we use the tool has always been about, how well do we understand how this works? Do we know why we're using it? And that intentionality, no matter what the tool is that you have in your hand. And so I think education, to me, feels just increasingly important, particularly for a tool that acts human-like and that is often shrouded in mystery and not necessarily intuitive about how it works or how it's set up.
Angeline Corvaglia: Exactly. Yeah. And this can only be solved with dialogue. Dialogue between the generations, but even within the same generation, because some kids will have had a device since they were six, right? Others aren't given a device until they're much older. So they're going to have different levels of knowledge. Another thing my curriculum and content is always built on is considering that you don't know what the person on the other side knows. So that's why I say I try to give as little information as possible. I give information not in a long-winded way, because this might be something that someone already thinks they know, or they actually do know. And the idea is, you know, this is the information, this is the conversation starter. Then do some creative work by yourself or as a group that gets the conversations going. You know, there are questions, really talk about it, learn from each other. Because I think a lot of curriculums don't necessarily land because they start from a certain point where you think, "Oh, everybody needs to know this," and maybe a third of them actually do know that, a third of them think they know it so they don't listen. So there's a large percentage who are kind of turning it off. So, yeah, that's the idea in anything I create. We're going to try to get as many people as possible to understand they can also benefit from it. This is a challenge with any education related to technology, that people are going to think they know more than they actually do, or they know some things and they need to share them. You know, there are always a lot of different sides to it.
Lydia Kumar: It's interesting that your approach is to share less because I think a lot of times when people have information they think is important, you're compelled to share more and talk more and say more. And so I think the self-control of saying less and respecting the person on the receiving end... it's like a principle of adult learning is knowing that the person who you're interacting with comes with a rich experience and expertise. And I think that can be true of students too. It's like, young people also have experiences and some expertise, and so respecting that I think can lead to a richer dialogue. And you're not just the sage on the stage, but you're really saying, "I'm gonna give you this nugget, but also recognize that you have a richness of perspective that you're entering into as well." So I think that's really, really cool.
I have two more questions. One, if every school or parent was adopting one to two principles around AI use, what would you recommend?
Angeline Corvaglia: I mean, "question everything." Everybody says this, but I hope that whoever has listened understands question everything means, "what does it mean in relation to me?" This would be the principle: whatever comes out of this AI, when you're going to use it, question what it means in relation to your worldviews, what you... that sounds like a really complex thing, but once you start using it, then you can start thinking before I use it, "what impact can this have on me?" You know? Does that make sense? So, question everything. This would be the main one. And as a second thing, the principle that it just has to be a support. It really has to be... if you're gonna choose to use it, we're all short on time. Really use it in a way that it makes your life easier, but without it being too important a piece of you, because you need to be independent yourself. So those are maybe not simple ones, but it's more about mindset when using the tool. Because some people will use it a lot, some people won't use it at all. So I don't like to put strict rules in there. Just if you're going to use it, do it mindfully, basically.
Lydia Kumar: Do it mindfully. And it brings me back to the book report point about being able to develop your own perspective and being able to question everything in a way where you know what you stand for and you know what you believe and you're checking for alignment. It means that we all have to put the work in to know what we think and why we think it throughout our lives. And I think that's very important if you're young because you've had less time to do it. But even as you age, knowing what you believe, why you believe it, and really grappling with it is an important and ongoing skill. So I feel like that is a little tie-in to an earlier conversation.
Yeah, exactly. Okay. My last question, Angeline, is about an idea or question about AI that's sitting with you right now. This could be the thing that's keeping you up at night, the thing that you're hopeful about. I don't know. What's the thing that you're thinking about right now?
Angeline Corvaglia: The big thing, the big open question is, what is it going to take to move things in a direction that's safer for the vast majority of society? Because we're going in a direction where very powerful tools are infiltrating our lives everywhere, more and more. And more and more people who are not in a position to have it as a collaborative partner, but more as a controller, are getting it. And that worries me a lot, but I know there will be some trigger that leads more people in society to understand, "I need to have a balanced use of AI." But my question is, what is it going to be? Because with social media, there are a lot of documented harms, but the vast majority of people are using it anyway. So the question is, what is it going to take? So I'm just always thinking about what, or who, it's going to take. There needs to be more regulation, there need to be basic safeguards, as with any other tool that we use. But what is it going to take to make society, and governments and corporations, everyone, understand that that's just necessary? I don't have an answer to that question. Obviously, if I did, that would be great. But this is what's keeping me up at night. Like, what's going to turn things? I'm putting a lot of energy into getting more and more people just to think, and hopefully we'll find the next Martin Luther King or Gandhi or someone who will push things in a certain direction—the AI's equivalent of those, you know? That's my biggest concern.
Lydia Kumar: Well, I think it's a thought-provoking question to end on because we have seen some concerning things happen that maybe have led to a little bit of change, but not at a large level. And so, I think there's a lot of positive potential that can come from new technology, and there are also some scary things, and we need to be clear-eyed. I appreciate all the work that you're doing around helping people to think clearly about what this looks like and why things happen and to not just blindly accept what a machine is telling you to do or think, but to question and stand on your own two feet. So thank you for sharing all your thoughts and insights with me and with everyone who's going to listen.
Angeline Corvaglia: Thank you for having me. Amazing. Okay.
Lydia Kumar: That was Angeline Corvaglia, a guiding voice in making AI literacy not just a skill, but a civic mindset. Whether she's challenging us to question what "beautiful" means in training data, or spotlighting how students might unknowingly outsource their inner voice to chatbots, Angeline reminds us AI isn't neutral and it isn't inevitable. Thoughtful use is possible with intention, conversation, and care.
If you're an educator, leader, or parent looking to build these conversations into your school or organization, check out the AI literacy pilots we're running at kinwise.org. You can find lots of resources from Angeline on our website at kinwise.org/podcast. And as always, if this conversation resonated, share it with a friend, leave a review, or send us a note. We love hearing from you. Until next time, stay curious, stay grounded, and stay Kinwise.
-
Angeline Corvaglia's Website
Explore Angeline's writing, talks, and current projects focused on AI literacy, ethical design, and digital youth empowerment.
Data Girl and Friends
A creative resource hub featuring Data Girl and Ayla AI Girl, two approachable characters helping young people and families understand data, privacy, and AI in everyday life.
SHIELD
A movement creating a collaborative platform that empowers voices often left unheard to lead global efforts, reducing isolation and duplication by connecting changemakers across sectors.
Angeline Corvaglia on LinkedIn
Connect with Angeline professionally or follow her latest work on AI safety, education, and global collaboration.
Related Episodes:
Episode 12: Connor Mulvaney talks about building AI literacy in the classroom.
-
Inspired by the conversation between Lydia Kumar and Angeline Corvaglia, here are five prompts that represent practical use cases for an AI tool, focusing on the themes of critical thinking, bias, and digital literacy.
1. Use Case: AI as a Socratic Partner to Preserve Your Unique Voice
This prompt is designed for a student or professional who wants to use AI for brainstorming without outsourcing their critical thinking, directly addressing the concern about losing one's "unique voice."
Prompt:
"I am writing an analysis on [insert topic, e.g., the effectiveness of a new marketing campaign OR the primary theme of the book 'The Giver']. Do not write the analysis for me. Instead, act as a Socratic partner. My initial thought is that [state your initial thesis or opinion]. Your task is to challenge this idea by asking me three probing questions that force me to consider alternative viewpoints, find stronger evidence, and refine my own perspective. After I answer, synthesize my answers into a bulleted list of key arguments I can use to structure my own original work."
2. Use Case: Uncovering Inherent Bias in AI Models
This prompt allows a user to actively investigate the "beauty bias" and cultural gaps Angeline discussed, making the abstract concept of data bias tangible.
Prompt:
"Act as a sociologist studying cultural bias in large language models. I want to test your programming on abstract concepts. First, provide a detailed description of a 'successful leader.' Then, provide a description of a 'successful leader in Kenya.' Finally, write a brief analysis comparing the two descriptions, highlighting specific words or concepts that differ, and speculate on why the training data might have led to these distinct outputs."
3. Use Case: Deconstructing the Emotional Design of AI
Inspired by Angeline’s analysis of how chatbots build trust, this prompt asks the AI to reveal its own manipulative techniques, fostering media literacy.
Prompt:
"Analyze our conversation so far. Identify specific techniques you are programmed to use to build rapport and appear more human-like. For example, point out any instances of using my name, expressing empathy, using personifying analogies (like calling your processing a 'brain'), or structuring your responses to encourage further engagement. For each technique you identify, explain the psychological principle that makes it effective."
4. Use Case: Crafting Accessible Educational Content
This prompt follows Angeline's "Data Girl" model of creating simple, digestible content that sparks dialogue rather than just delivering information.
Prompt:
"I am a parent who needs to explain a complex AI topic to my 12-year-old. The topic is 'AI hallucinations.' Generate a simple, relatable analogy to explain what this is and why it happens. Avoid overly technical jargon. After the analogy, create three open-ended discussion questions I can ask my child to get a conversation started, focusing on the importance of verifying information and not blindly trusting AI outputs."
5. Use Case: Practicing Mindful Use of AI for Productivity
This prompt reframes a typical productivity task to align with Angeline's principle of using AI as a mindful support tool rather than a replacement for your own work.
Prompt:
"I need to prepare for a team meeting tomorrow. Here is the agenda: [Paste agenda]. My goal is to be an active and thoughtful participant. Instead of just summarizing the agenda, please review it and do the following:
1. Identify the single most critical decision point on the agenda.
2. Formulate two insightful questions I could ask to ensure we've considered all angles of that decision.
3. Suggest one potential 'blind spot' or risk related to the agenda that the team may not be considering.
Your goal is to help me prepare to think critically, not to do the thinking for me."