12. Trailblazing AI Literacy: Connor Mulvaney’s Rural Classroom Revolution
Season 2, Episode 2 of Kinwise Conversations · Hit play or read the transcript
-
Lydia Kumar: Welcome back to Kinwise Conversations, where we explore the real-world crossroads of humanity and technology. Today we're heading to the shores of Flathead Lake in Polson, Montana, to meet Connor Mulvaney: science teacher, computer science catalyst, and freshly minted leader of aiEDU's expanded Trailblazers fellowship for rural-serving educators.
Connor has spent the past two years piloting AI literacy lessons with his own students and coaching colleagues who never thought they'd utter the words "large language models."
This fall, he'll steer brand-new Trailblazer cohorts to give teachers the tools, training, and $875 stipend they need to make AI relevant, inclusive, and, yes, a little bit fun. If you've ever wondered how to turn AI fear into classroom curiosity, or how a fly-fishing fanatic ended up teaching deepfakes by showing students giant, totally fake trout, stick around.
Connor's story is equal parts practical strategy and big-picture hope. Let's dive in.
Lydia Kumar: Connor, thank you so much for being on the show and sharing about your experiences and your story. Before we dive into specifics about AI and what you've been doing in that realm, I want to know about who you are and what listeners need to know about you to be oriented to the conversation we're about to have today.
Connor Mulvaney: Yeah. So my name's Connor Mulvaney. I live in rural Montana. I live in Polson. For the past five years, I've been working at the high school there, doing some substitute teaching, teaching earth science, teaching computer science—kind of a lot of different content areas all over the place.
For the last school year, I took on an AI leadership role within the district, where I spearheaded professional development and professional learning all around AI. I gave a couple of different formal PD workshops, held some informal luncheons, and talked a lot with teachers about AI.
So I've done a lot of learning for myself around AI. Over the past couple of years, I've been having a lot of conversations with kids and with teachers as well. Then, this fall I'll be transitioning into a new role with aiEDU, where I'll be organizing professional learning cohorts for teachers in Montana as well as across the US, helping teachers build their own understanding of AI and equipping them with some tools to talk to students in their classes about AI. I'm sure I'll share a little bit more about some of those opportunities coming up, but yeah, that's a little bit about me, where I'm at, and some of the projects that I'm working on.
Lydia Kumar: That's really cool. I know you've done some computer science education and then now your focus has shifted to AI. How did that happen? How did you become a person who was talking with teachers and students about this technology?
Connor Mulvaney: My first full-time role at Polson High School was partly teaching computer science. I didn't have a computer science background, and the principal said, "Hey, there's an opening here. You have to teach computer science." Like many teachers, I was told, "Oh, here's this class you can teach," and then got sent to a training to learn how to teach it. I was learning alongside the kids, computer science and computer coding.
Throughout that process, I started exploring ChatGPT just because it was out, it was new, it was something that I had heard about. I think from there, as the AI models continued to progress, I did a lot of going back and forth with ChatGPT to help me refine my code. As I was new to coding, I was experimenting on my own. Then I started to think about how these types of AI models, these large language models, might impact the way that we look at work, the way that kids might look at some of the work that they do in schools.
Once I started playing around with these tools a little bit, I started to think, "Well, these are really powerful AI models, but they also have some limitations." And that's an interesting point: the same tool that's really good at, say, writing code can be really bad at simple things like word counting. I feel like those are interesting examples to look at both strengths and limitations. I think part of this education, where we look at AI, talk to kids about it, and build this understanding of what AI is and what its strengths and limitations are, helps inform when we may choose to use those tools.
Lydia Kumar: I love that example where it's really powerful at writing code and really limited at word count. It's such a paradigm shift for folks. Most people are using computers, and word count has been around forever and feels so simple, and yet it's something that the generative AI tools have struggled with. I think those examples are probably really helpful when you're trying to explain to teachers and students how this technology works.
When did you realize that this was a bigger thing than just you playing around with AI tools and having conversations with students or your coworkers? This is now your job, when did you realize it was moving in that direction?
Connor Mulvaney: I would say it was probably a little over a year ago. A colleague of mine, a friend of mine down in Missoula, Jason Hahn, was running some informal Zoom meetings where he would lead discussions about AI, just for Montana teachers. Those helped me start to think deeply about what role AI should, or shouldn't, play in education.
Having conversations with other educators and other teachers about this, as well as just looking at an AI tool... it could be writing, it could be writing code, it could be anything. AI is good at doing all of those things. I guess I'll put an asterisk on "good" there, because one thing it is good at is giving an answer, and that answer isn't always right. A lot of students, maybe they don't want to do the assignment, so they go and have AI help them or do a good chunk of that assignment. And it's almost always gonna give an answer rather than saying, "We should start to think about these things first."
That type of AI use is definitely concerning. I know there are a lot of teachers out there concerned about AI use and cheating. I'm concerned about that quite a bit as well, because if we don't have a specific structure for when students can use AI and when they can't, then some kids are just gonna use it all the time and others aren't. There's gonna be some inconsistency there.
I think a lot of people are lumping AI into this computer science thing, but it does affect a lot of different content areas, especially the humanities, even more so than computer science, maybe.
Lydia Kumar: I've thought about that as well. I've been looking for educators who are leading the way in AI, and it's a lot of former computer science teachers, probably because you all are the ones folks look to and ask, "What do we do with this technology?" But when you think about what large language models are actually good at, it is a lot of the writing. It's very good at creating outputs that are very similar, maybe identical, to the outputs we would ask students for to assess learning. That impacts everyone.
How do you navigate this fear of cheating? I've talked to educators who are like, "I don't want to go anywhere near it. I don't want to touch it. I am concerned students aren't going to learn." At the same time, students are going to do what students do, particularly if they don't have guidance. How have you been helping teachers navigate this?
Connor Mulvaney: That is a big question. It's like the elephant in the room. I would say one of my favorite things to do with both educators and students is to just have a couple of guiding, informal questions about AI. Some of my favorites are: What's AI good at? What's AI not good at? And what is one thing you want to learn about AI?
Three super easy questions to start a conversation. What I have found is that when I have those up on my board at the beginning of class as a bell ringer, it starts a conversation that may get at student misconceptions. Some students are going to start to be candid about their AI use. It's not about catching this kid because he or she is using AI to write their English papers. I'm not so concerned about that. I'm more concerned about understanding why and how a student might be using those tools. That would be one way: just have some form of informal conversation like that, because I think it breaks the ice.
Another way I've found a lot of success in talking to students about AI is using some aiEDU resources. They put out AI literacy curriculum for free for anyone to download at aiedu.org. One of my favorites is the AI Snapshots, which are more structured bell-ringers with a news item and a reflection question. We could be talking about something like synthetic voice; there are a lot of companies out there that can make fake voices from a short audio recording. A reflection question following that could be: what are the risks and benefits of this technology? Where could it be used well? Where could it be used, maybe, not so well?
That has been a really great way to talk about the ethics of these tools, as well as their limitations or appropriate use cases. For me, those two have been great first steps to not only break the ice with students about AI but also start to build their knowledge and understanding. I'll also put a disclaimer in here that I'm pretty hesitant to give students chatbot access, to say, "Go hop on AI to do this thing or that thing." I think there are a number of reasons that chatbots in the classroom are challenging. There's the data privacy side, there's the bias and ethics of these systems. I'm not quite sure that jumping onto a chatbot is a really great structured assignment. It could be good for exploration at times, but I would rather work all together on something.
Lydia Kumar: I agree with that so much. Being intentional about why you're choosing to use a tool really matters for student learning. That informal conversation is so important because if we're not talking about it, then it is siloed. People sometimes feel ashamed or like they're doing something wrong when they're using AI tools, and maybe they are, but if there isn't space to have a conversation, then you can't surface that.
Vera Cubero, who has done a lot of work in North Carolina with AI, compared the way schools responded to social media to some of the ways we respond to AI, in that we're just like, "Oh, if we ignore it, it'll be fine." The reality is, ignoring social media didn't benefit students and it didn't benefit educators. Students need explicit guidance and they need places to talk and think about this life-changing technology that's in their hands.
Connor Mulvaney: A lot of teachers are not sure that they can have a conversation about AI because they don't know a lot about AI themselves. I've spent a lot of time learning about AI as much as I can, and there are still so many unanswered questions. I don't know that there's ever a spot where you're like, "Oh, I have enough knowledge now to lead some conversations." I think we can all open up the conversation to students and facilitate an informal dialogue about what the use of these tools may look like in their school lives, or just in education in general. Just even asking kids, "Hey, what do you know about AI? Do you think we should teach students how to use AI in school?" Those are good conversation starters.
Lydia Kumar: Do you have some particularly interesting questions or comments that you've received from students? They could be funny or just thought-provoking.
Connor Mulvaney: I was using one of the AI EDU snapshots, and the news item was about new AI models that can do college-level calculus. If those get developed, what are the risks and benefits? Should those be used? That snapshot resulted in a lot of conversations about, "Well, then people wouldn't have to do their math homework 'cause AI can do all the math homework." Or, "Maybe it could discover new things, which would be really fascinating."
Then one student who's fairly vocal said, "How is this new? ChatGPT already does this." And I'm like, "Interesting, let's lean into that a little bit more." And the student is like, "Yeah, ChatGPT is great. It does all my math homework if I want it to." I'm like, "Okay, but we're talking about calculus. Are you in calculus?" The student wasn't.
So I was like, "Okay, well let's find a calculus problem." I pulled one up on the board, and the student then went and asked the AI to solve it and goes, "Look, it got the answer." I was like, "Oh, cool. How do you know the answer's right?" He's like, "'Cause it shows me the step-by-step." I'm like, "Okay, cool. You are following some instructions of how we might prove something. But step-by-step is pointless if I can't read it. If someone gives me the blueprints to build a house, that doesn't mean I can build a house. There are a lot of skills I need to follow those."
Then we went full circle to, "Well, maybe we could ask the calculus teacher to come into class and verify." I'm like, "Great! We have resources, people around us in this building who have a lot of skills, and we can really leverage those rather than the tools in our pockets to solve challenging problems like calculus problems."
That one really stands out to me because my AI usage in coding extends only as far as my coding knowledge. If it's writing code that I don't understand, then once there's an error, I can't fix it. The same thing goes for math. Okay, great, it gave me an answer, but I can't verify whether or not that answer's correct.
Lydia Kumar: You have to switch from thinking about the output to the process. It's really cool to hear an example of you having that revelatory moment with your students, where you were able to help them understand where this leads.
I know you're working with a lot of educators and leading PD. How do you navigate this fear of cheating while also helping teachers see the potential of these tools?
Connor Mulvaney: I think for many teachers, a good first step is to establish some clear boundaries for AI use in the classroom. A lot of people have referred to this as a "stoplight" system: red light means you can't use AI at all, yellow light means you can use it a little bit, and green light means you can use AI freely.
In my upper-level computer science class, having a candid conversation with students about what we think is appropriate was really effective. We agreed, if we're learning a new skill, we can't use AI to do that. But maybe for a previously learned skill, AI could help us set up our work to demonstrate that new learning.
At the beginning of the school year, I let that class just explore and build whatever they wanted with a custom chatbot I had created. They saw for themselves that you can't really build everything you want all the time, or you run into errors and you can't fix them because you don't understand the code. Because students got to explore at the beginning, later in the year when we started to have those restrictions, I think they understood why. They started to say, "Oh, I think we should use AI for this, but not for that," and create their own rules.
Lydia Kumar: That's such a good practice—co-creating norms together. You're building your classroom culture around what AI looks like. I think there's so much that experienced teachers already do that can be applied to this generative AI world. It's a different context, but they're not starting from scratch.
Connor Mulvaney: And in that spirit of co-creating, I think co-exploring together is key. I found myself being like, "Oh, wow, that is interesting. I didn't think about using AI in that way." As a teacher, I also had to go in with a more exploratory mindset with the students.
Lydia Kumar: My last question is the one I always like to ask people: what is the question, idea, or thought about AI that you can't stop thinking about?
Connor Mulvaney: AI chatbot companions are something that keeps me up at night. I think we talked about the social media analogy earlier. There are some really concerning uses of AI chatbot companions on platforms like Replika or Character.ai. I know that a lot of high school and middle school students are exploring and downloading these apps. Some students are spending a significant amount of time chatting with an AI companion.
When I think about the great benefit of a high school setting, it's that you get to be around all these different people, get to know them, and have those face-to-face interactions. It's concerning to me that a percentage of students are spending a lot of time chatting with an AI chatbot.
Lydia Kumar: It's a good thing to be aware of and to think about. Being aware of what's concerning allows us to enter into conversations with students and each other. Hopefully, those conversations are one way to curb some of the negative effects.
Connor Mulvaney: Exactly.
Lydia Kumar: Well, thanks so much, Connor. Is there anything else on your mind that you want to share?
Connor Mulvaney: Yeah, I'll make a little comment about the AI Trailblazer Fellowship cohorts for rural teachers that I'm putting together through aiEDU. I've been part of these cohorts for the past two years, and this fall I'll be organizing and facilitating some of them. If you're listening to this close to when it comes out, you should be able to apply to join my cohort. It's a 10-week cohort that meets every other week. The big component is helping teachers build their own understanding of AI and then equipping them with some tools to talk to their students about it. Teachers who complete the cohort get an $875 stipend and also get connected with the community of aiEDU Trailblazers across the U.S. I'm excited to dive into that and work with a lot of teachers across the country. If you're interested, go ahead and apply or shoot me an email.
Lydia Kumar: That's a wrap on our conversation with Connor Mulvaney, teacher, AI coach, and trailblazer-in-chief.
Three quick takeaways: First, start with questions, not answers. Ask students what AI is good at, what it's not, and what they want to learn. Second, co-create classroom guardrails. A shared AI traffic light rubric—red, yellow, green—builds trust faster than a detector ever could. Finally, community beats solo tinkering. Connor's Trailblazer Fellowship pairs every lesson with a national peer network and expert coaching.
Applications close Friday, August 15th. Hit the links in today's show notes to apply for a Trailblazer cohort, connect with Connor on LinkedIn for PD workshops, or download aiEDU's free AI Snapshots bell-ringers.
And if your organization is ready for a roadmap beyond the classroom, Kinwise runs everything from a 30-day teacher AI pilot to a one-day AI leadership lab that helps district teams draft board-ready guidelines. Details and bookings at kinwise.org.
Finally, if you found value in this podcast, the best way to support the show is to subscribe, leave a quick review, or share this episode with a friend. It makes a huge difference. Until next time, stay curious, stay grounded, and stay kinwise.
-
Loved Connor’s wisdom? Keep the conversation flowing.
Explore his favorite resources, apply for the Trailblazers Fellowship, and dive into Kinwise programs that turn AI talk into classroom action:
Connor's LinkedIn profile – Connect with Connor for PD workshops.
Apply to the Trailblazers Fellowship – Stipend, virtual sessions, eight new cohorts for Fall ’25.
aiEDU “AI Snapshots” bell-ringers – 5-minute news prompts + reflection questions for any subject.
Co-Intelligence (Ethan Mollick) – Book + Substack for practical, human-centered AI thinking.
Podcasts Connor follows – AI for Humans, Hard Fork (NYT), and aiEDU Studios for K-12 deep dives.
Kinwise Teacher AI Pilot (30 days) – Recapture ≈ 6 weeks of teacher time per year and use AI with purpose.
Kinwise AI Leadership Lab (1 day + 30 days support) – Draft AI guidelines and a rollout plan.
-
1. “Three Questions” Bell-Ringer Builder
Prompt:
You are an instructional coach. Draft a five-minute bell-ringer activity that asks students: (1) What is AI good at? (2) What is AI not good at? (3) What’s one thing you want to learn about AI?
• Provide exact wording for each question, then suggest two follow-up discussion moves for the teacher.
• Tailor it for [grade level / subject] and keep it student-friendly.
2. Classroom “Traffic-Light” Policy Generator
Prompt:
Act as a secondary teacher co-creating AI norms with students. Write a one-page “traffic-light” policy (Red = no AI, Yellow = limited AI, Green = open AI) that explains:
• When each color applies (with concrete assignment examples).
• Why these boundaries protect learning integrity.
• A short reflection question students answer before using any AI tool.
• Adapt the tone for [course name] and invite students to suggest edits.
3. AI Snapshot Discussion Starter
Prompt:
You are designing a 10-minute mini-lesson that uses the AI EDU “Snapshot” format.
• Select a recent, student-relevant AI news story about [topic—e.g., synthetic voice, deepfakes, calculus-solving models].
• Summarize the headline in ≤ 60 words.
• Pose one ethics question and one “risks vs. benefits” question.
• Include a quick-write prompt plus 2 share-out strategies.
4. Code-Review Coach with Limited AI Assistance
Prompt:
Serve as a code-review partner for high-school Computer Science II.
• The student will paste previously written Python code that throws an error.
• Give scaffolding questions that lead the student to locate and debug the error without pasting a full solution.
• Offer an optional hint the student can reveal.
• End with one reflection question about how AI helped—and where human logic was still required.
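To make this one concrete, here is a hypothetical sketch of the kind of error-throwing Python a student might paste into this prompt, with the coach's scaffolding questions shown as comments. The function name, data, and questions are invented for illustration; they are not from the episode.

```python
# Hypothetical student submission for the code-review coach prompt above.
# The function looks reasonable but crashes, because grades read from a
# file arrive as strings rather than numbers.

def average_grade(grades):
    total = 0
    for grade in grades:
        total = total + grade  # TypeError here: can't add int and str
    return total / len(grades)

grades = ["88", "92", "75"]  # values read from a file are strings
print(average_grade(grades))

# Scaffolding questions the coach might ask, instead of pasting a fix:
# 1. What type is each item in `grades`? How could you check with type()?
# 2. What happens in Python when you add an int and a str?
# 3. Which built-in function converts a string like "88" into a number?
```

5. Family Newsletter Explainer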
Prompt:
Write a 250-word newsletter blurb to families explaining how our class will use AI tools this term.
• Explain the traffic-light policy in plain language.
• Address common concerns about cheating and data privacy.
• Highlight one positive example (e.g., AI-generated reading questions) and its learning benefit.
• Close with an invitation for parents to share questions or thoughts.