13. Beyond the Bot: John Sharon on Protecting Human Connection in Schools

Season 2, Episode 3 of Kinwise Conversations · Hit play or read the transcript

Values-Aligned AI Adoption

How do mission-driven school leaders navigate the immense corporate and governmental push for AI adoption without sacrificing institutional values? With over 30 years of experience in education, John Sharon offers a masterclass in leading through technological disruption. Drawing from his work at a Quaker school, he provides a powerful, values-based framework for vetting AI tools and policies. This conversation moves beyond the typical tech hype cycle to address the core strategic challenges facing superintendents and executives today: How do we balance innovation with human interdependence? How do we build nimble policies for a technology that evolves daily? And most importantly, how do we ensure AI serves as a tool for deeper learning, not a replacement for it? John's perspective is an essential guide for any leader tasked with designing a thoughtful, values-aligned path forward.

Key Takeaways for Superintendents, K-12 Leaders, and Mission-Driven Executives

  • Lead with a Values-Based Framework: Before adopting any AI tool, evaluate it against your institution's core mission. Use principles like truth, simplicity, and respect to question whether the technology truly serves your community's goals or simply introduces complexity.

  • Adopt a "Brake and Accelerator" Policy: AI is not an all-or-nothing proposition. Effective leadership requires keeping a foot on both the accelerator to encourage innovation and the brake to ensure cautious, ethical implementation. Your policies must be nimble and designed to evolve.

  • Question the Monetization Motive: Be skeptical of the corporate push for AI in schools. Ask the critical question: If no one were making money from this technology, would the urgency for its adoption be the same? This lens helps separate genuine pedagogical value from market-driven hype.

  • Prioritize Human Interdependence: The greatest risk of AI is the erosion of human connection. Design policies that prevent AI from replacing the symbiotic, interactive relationships between teachers, students, and colleagues, which are the true foundation of learning.

  • Focus Curriculum on Critical Discernment: The goal is not just to teach students how to use AI, but to teach them how to be critical, discerning, and ethical users. Curriculum should focus on the metacognitive skills needed to question AI-generated content and understand its impact.

A Veteran's Perspective: Is AI Another Passing Fad?

[John Sharon]: It's just a fascinating job to have, to be able to see what other schools are doing, not just in terms of AI, but in terms of all kinds of different trends and innovations. It's been really interesting to see which innovations have come through over the years that were just revolving-door passing fads, and which ones came in and actually stuck. I'm not trying to sound like an old geezer, but I've been doing this a long time, and it's been incredibly helpful to be able to see some of the things that have happened in education through the lens of a 30-plus-year career.

[Lydia Kumar]: Yeah, absolutely. And I'm curious, because you talked about computers coming in, how that felt, and maybe being unsure if it was what was best for education. Now we have AI coming, and there's a huge push by a lot of different organizations, corporations, and the government to have AI education in schools. So I'm curious, how does that land for you? What are you thinking right now? Does it feel like it's what's best, or is there a way for it to be what's best?

[John Sharon]: Great question. What I feel is really happening now, although I think I felt it more strongly maybe a year ago, but that's maybe because I've gotten used to it. What I noticed a year ago was that it actually reminded me of the Microsoft push for laptops in schools, because it does feel like a monetization of a technology that is good for the people who own the technology, and might be good for the users, but not necessarily so. I don't think we've had enough time pass yet to see how good it is for the users. My question is always: if nobody was making money off of AI, would this be a thing? Would this be such a push in education? I think the answer is no. I can't fully prove that, obviously, but it is something that I'm careful to pay attention to.

[Lydia Kumar]: Yeah, it's interesting to hear that perspective. I've talked to a lot of different people, and through these conversations and some of my own learning and research, I see potential for AI tools in education to push student thinking, but there's also a lot of risk of replacing student thinking. There's a lot of uncertainty. We're introducing a new technology, and that leads to a lot of complexity, so I think it's wise to be cautious, because there's a lot that's new here that still needs to be explored and thought through.

[John Sharon]: Right, and I don't mean to sound like a Luddite at all. I'm not afraid of it, and I'm not thinking it's evil. But I do think caution is really important, because I just don't think we know yet the impact it's going to have on learning, and we don't yet know what the future is and how it's going to change. As we were saying before we started recording, the only thing constant is fluidity. It's going to change, it's going to continue to evolve, and so the question is: how are we going to be able to respond?

A Values-Based Framework for AI Policy

[Lydia Kumar]: Mm-hmm. I want to pull into this: Carolina Friends School is a Quaker school, and Quakers have values that I think would be helpful for listeners to understand. So, John, do you want to talk about Quaker values, the perspective or the lens that Carolina Friends School holds? And then maybe we could go from there: here are the values, here's the lens, and then think about looking at new technology through that lens.

[John Sharon]: Yeah, sure. Happy to do that. I'm not going to be able to completely summarize Quaker values in a minute, but I can say that there are a few tenets of Quaker values, which Quakers call testimonies, that are relevant for this conversation.

Truth, Simplicity, and Respect in the Age of AI

[John Sharon]: Quakers, first of all, believe that truth is continually revealed. What does that mean? That's one of the questions we're asking: what does it mean that truth is continually revealed when you're dealing with a technology like AI that spits out hallucinations and things that are not true, and that sometimes is deliberately intended to deceive when it's not being used well? That's a question we're wrestling with.

[John Sharon]: Another tenet of Quakerism is simplicity. How do we reconcile simplicity with concepts that can be super complex in the AI world? One could argue that AI is resonant with simplicity, because it simplifies and synthesizes things. But to what extent? The question we're asking is: to what extent are things being oversimplified? One of the things we say around here in our philosophy statement is that simplicity is actually incredibly complex, and there are nuances to simplicity that a technology like AI isn't always able to grasp.

[John Sharon]: Another tenet of Quaker philosophy is that there is that of God in everyone, and therefore everyone is deserving of respect. With a technology like AI, the question in my mind, and one we're wrestling with, is that AI feels very impersonal. It feels very removed from interacting with human beings, even though it's ultimately based on human production, I guess. How do we reconcile this notion that there is that of God in everyone when you're dealing with information that's coming from things that are not directly human-produced? And that begets the question of people who want to use AI in nefarious ways. What do we do with that? How do we reconcile that?

The Hidden Cost: Environmental Stewardship and AI

[John Sharon]: The other thing that's really important, and that doesn't get a lot of airplay in conversations around AI, is that Quaker testimony believes in environmental stewardship. AI uses a ridiculous amount of water to function. There was an article in the New York Times just a couple of days ago about Meta opening an AI facility in an area of Texas. The people who live right behind this facility suddenly have no water. They're on a well, they're not able to have any water anymore, and they can't figure out how to deal with it. So the environmental stewardship aspect of AI is not getting a lot of attention, and I think Quaker schools in particular are in a unique position to start raising some serious questions about the ethical implications of that, and of AI's long-term use, on the environment. I just haven't seen a lot of other people talking about that yet.

From Policy to Practice: The Challenge of Human Connection

[Lydia Kumar]: Yeah, that's really fascinating, and a different perspective: being able to step back and say, here are values that are woven through what we do and what we believe, and trying to make choices around what technology looks like in a school. I know you're part of a consortium of Quaker schools that are thinking about AI, right?

[John Sharon]: Yes. It's just in its genesis stage right now. My observations are a couple of things. One, even in Quaker schools, there's a range of embracing of AI. To use the driving metaphor for a minute, there's an equal amount of pressure on the accelerator and the brake. The car's moving, but it's not moving super fast, and that feels good from my perspective about how Quakers should be coming to AI in education. We shouldn't put the brakes on fully, but we shouldn't fully use the accelerator quite yet, either.

[John Sharon]: Recently, just in the last couple of weeks, a school has joined the conversation that is piloting the use of AI in onboarding new faculty. They've created a tool, and it's all in its testing stage right now. The idea is that if I'm a new faculty member at that school, I could go into this AI tool that's been created just for this school, and if I have a question about what it means to create a lesson plan based on Quaker values at this particular school, I hit the send button and get a response that is totally in line with the values and mission of that particular school, for that particular school culture. I thought that was really interesting. But I also worry: if we have new faculty just clicking a button to get a question answered, what happens to human relationships at that school? Where does human interactiveness, interdependence, come into the equation?

[Lydia Kumar]: Right. It might take away from your reaching out to whoever's in HR to ask a question, because you can ask this technology instead, and then you never meet the person in HR, because you don't have a reason to.

[John Sharon]: Right, right. A friend of mine is an educational consultant, and he's developed an AI tool that replicates his advice. You don't have to talk to him; you can just talk to the tool. I thought that was just fascinating. I'm certain I understand why he's doing it, because there's only one of him, and he probably has more demand than time. I just think it's really interesting, and I'm interested to see the impact that's going to have on human interaction between him and his clients.

[Lydia Kumar]: I have seen a tool that does that, and thought about it as well: this idea of being able to scale yourself. And the AI version of me would probably always be nice, you know?

[John Sharon]: Never lose our temper, never get frustrated, always have the right answer, right?

Designing Curriculum for the AI Era: Beyond Information to Discernment

[John Sharon]: I do think that what's getting lost in the conversation about AI and education is the role that human interdependence plays in teaching and learning. The reality is that teaching is interactive. Teaching is interdependent, between adult and child. There is really no distinction between teacher and student; we are all in the process of learning together. Sometimes when I interview a candidate, I'll ask: what have you learned recently from a child? I think that's such a great question to ask a teacher. When AI puts a greater emphasis on just information, not on learning, you lose the process, the idea that learning is symbiotic between the teacher and the learner.

[Lydia Kumar]: How are you seeing your staff or your students using AI right now?

[John Sharon]: Well, we've taken an intentionally ground-up approach to AI here. We've given teachers leeway to use AI as they see fit. Teachers are using it as thought partners, as brainstormers for lesson plans, and, this is where I'm most excited about it, for teaching students how to be critical users of AI, how to become careful discerners of AI content. The teachers are teaching the metacognitive skills of how to approach AI in all of its complexities, without fully embracing AI and saying, go use AI for this assignment and it'll be fine.

[John Sharon]: But my vision, ultimately, is that we become a school that really intentionally teaches critical use of AI to our students, so they learn to use it, but use it thoughtfully, constructively, with discernment, and cautiously. In the end, when they graduate from here and go on to college, they're going to see people using AI in all kinds of ways, and not always for the good. And so we want to teach our students how to use AI for the good.

Leadership Lessons: Creating Nimble AI Policy for the Future

[Lydia Kumar]: You've been in schools and led through different technological changes before. As you take the knowledge you've accumulated throughout your life and your time leading schools, are there nuggets of wisdom from the past that you're holding onto?

[John Sharon]: Yeah, that's a great question. The reason I really encourage schools to develop AI policies that are nimble, and when I say nimble, I mean not carved in stone, is that with other technology, what we found is that you needed to be able to pivot. So I think nimbleness matters: being thoughtful in creating policies, and, if you're going to create policies, being willing to change them from year to year, because the technology is going to continue to change.

[John Sharon]: The other thing I've been thinking about lately is that AI is a little bit like the discovery of a new animal in a jungle, among villagers who have never seen it before. They don't know what kind of animal it is. They don't know whether it's friendly or whether it's going to eat them. They just have to be really careful about getting to know this new animal, and letting the animal get to know them. I think it's very similar with AI, because we are still in that phase of carefully measuring the impact of AI. Is this ultimately going to be for our good, or will it be for our harm? That's a question we continue to grapple with, and we should continue to grapple with it for the foreseeable future.

[Lydia Kumar]: What's an idea or a question about AI that you are just continually grappling with?

[John Sharon]: I think what I'm grappling with, and I hope I continue to grapple with, is always looking at AI as a tool for learning, not something that replaces learning and becomes the learning itself. That's what I always want to be doing as an educator. I want educators to be curious about it, to learn about it, to be willing to ask hard questions about AI, even as it continues to evolve. I also want us to be paying really careful attention to both the environmental impact of AI and the privacy impact of AI. So I think we have to keep our foot on the brake and the accelerator at the same time. We can continue to move forward, but I think we have to see it as a tool for learning and not the learning itself.

Prompts Inspired by John

Policy with Purpose


"Help me design an AI policy for [your school / nonprofit / organization] that reflects the values of [insert key values, e.g., simplicity, integrity, environmental stewardship]. Present it as a two-page document with sections for guiding principles, acceptable uses, and ongoing review."

Student Voices on AI


"Create a set of 10 discussion questions for [student age group, e.g., high school / middle school] to critically evaluate AI’s role in learning. Include prompts about [insert focus areas, e.g., privacy, empathy, environmental impact, human relationships], and format them in a printable handout."

Mission-Driven Onboarding


"Draft onboarding materials for new staff at [your school / organization] that explain our AI tools in a warm, mission-aligned tone. Write it as a one-page welcome letter plus a short FAQ that reinforces our values of [insert values]."

AI’s Environmental Ledger


"Compare the environmental footprint of AI with that of other common technologies used in [your context, e.g., education / nonprofit operations]. Present the findings in a simple table and add three actionable steps to reduce AI’s impact."

A Year of Thoughtful AI Use


"Design a year-long curriculum module for [your target audience, e.g., 9th–12th grade students / adult learners] that teaches them to use AI as a tool for learning, not a replacement for learning. Outline monthly topics, key activities, and a capstone project, with an emphasis on [insert priorities, e.g., critical thinking, ethical use, community values]."

Connect & Resources

About the Guest

With a career in education spanning over three decades, John Sharon is a seasoned leader with deep expertise in curriculum development, progressive education, and navigating institutional change. As an administrator at Carolina Friends School, he champions a thoughtful, values-driven approach to technology integration. His perspective is grounded in decades of experience observing educational trends, allowing him to cut through the hype and focus on what truly fosters deep, meaningful, and human-centered learning for students.

  • Intro

    Welcome to Kinwise Conversations, where we explore what it means to integrate AI into our work and lives with care, clarity, and creativity. I'm your host, Lydia Kumar.

    Today, we're taking a step back from the rapid pace of AI to ask a more fundamental question: how can timeless values guide our relationship with modern technology? I'm honored to be joined by John Sharon of the Carolina Friends School, a veteran educator with over three decades of experience. John brings a deeply reflective, value-based perspective to the conversation. We'll explore how Quaker testimonies like simplicity, integrity, and environmental stewardship create a powerful lens for evaluating AI's role in education, the ethical questions his community is grappling with, and why he believes in keeping one foot on the brake and one on the accelerator.

    Transcript

    Lydia Kumar: Thank you so much for being on the podcast, John. I'm really excited to hear your perspective on AI in terms of who you are as a person and the work you do. I know you work at the Carolina Friends School, which is a Quaker school, and so I want to start by giving you space to talk about your journey in education—personally or professionally—anything you think is important for listeners to understand about you for our conversation.

    John Sharon: Thanks, Lydia. It's great to talk to you. I'm going to date myself, but this is my 34th year in education. When I started as a fourth-grade teacher at an all-boys school in Washington, DC, I was still using a mimeograph machine to copy tests and quizzes. I start with that because I've been doing this a long time and have seen a lot of evolution in education. Early in my career, I got into school administration, probably too early, but that gave me a window into bigger trends in education and seeing which innovations came in and actually stuck. I distinctly remember when laptops were being aggressively pushed into schools by Microsoft. It seemed like a monetization venture for Microsoft rather than something that was good for education. Part of my job as assistant head for teaching and learning at Carolina Friends School is to keep an eye on educational trends. I oversee the curriculum and learning for pre-K through 12th grade, and it's a fascinating job. It's been incredibly helpful to see what innovations have happened over the years through the lens of a 30-plus year career.

    Lydia Kumar: That's fascinating. I'm curious because you talked about computers coming in and how that felt, and now we have AI with a huge push from corporations, the government, and other organizations for AI in education. How does that land for you? Does it feel like it's what's best for education, or is there a way for it to be what's best?

    John Sharon: It's a great question. What I feel is happening actually reminds me of the Microsoft push for laptops in schools. It feels like a monetization of a technology that might be good for the users but isn't necessarily so. We haven't had enough time to see how good it is for the users. My question is always, if nobody was making money off of AI, would this be such a push in education? I think the answer is no, but it's something I'm careful to pay attention to. I'm not a Luddite and I'm not afraid of it, but I do think caution is really important because I don't think we know the full impact it's going to have on learning. The only thing constant is fluidity, so it's going to change, and the question is how we're going to be able to respond.

    Lydia Kumar: Carolina Friends School is a Quaker school, and Quaker values might be helpful for listeners to understand. Can you talk about the perspective or lens that the Carolina Friends School holds, and then maybe we can think about looking at this new technology through that lens?

    John Sharon: Happy to. There are a few Quaker testimonies that are relevant to this conversation. Quakers believe that truth is continually revealed. We're wrestling with what that means when you're dealing with a technology that spits out hallucinations and sometimes is deliberately intended to deceive. Another tenet of Quakerism is simplicity, and we're asking to what extent AI is oversimplifying things, because simplicity is actually incredibly complex. There's another testimony that says there's that of God in everyone, so everyone is deserving of respect. AI feels very impersonal, so we're grappling with how to reconcile this notion with technology that's not human-produced directly. Another thing that doesn't get a lot of airplay is the Quaker belief in environmental stewardship. AI uses a ridiculous amount of water to function, and I haven't seen a lot of other people talking about the ethical implications of its long-term impact on the environment.

    Lydia Kumar: That's a fascinating and different perspective. It's interesting to step back and make choices about technology through the lens of values that are woven through your community. I know you're part of a consortium of Quaker schools that are thinking about AI. Are these the conversations you're having? What are your conversations with other Quaker schools like right now?

    John Sharon: The group is just in its genesis stage, and my observation is that there's a range of comfort with AI that's more reflective of school culture than anything else. There's an equal amount of pressure on the accelerator and the brake. The car's moving, but it's not moving super fast, which feels good for my perspective on how Quakers should be approaching AI. We shouldn't put the brakes on fully, but we also shouldn't fully use the accelerator quite yet. A school recently joined the conversation that is piloting the use of AI to onboard new faculty. In the instructions they used to develop the tool, they were very intentional about making sure the answers were fully aligned with the school's mission and Quaker values, with directions like, "make sure your answers are warm and friendly and soft". The idea is that a new faculty member could go into this tool and ask a question, and the response would be totally in line with the school's mission and culture. It could be a really helpful tool, but I also worry what happens to human relationships when new faculty can just click a button to get a question answered. Where does human interdependence come into the equation? I'm not saying it will go away, but it has the potential to change the dynamic, and I'm interested to see how that goes.

    Lydia Kumar: It's interesting because it feels like you have this alien intelligence that you can go to for answers rather than having slower conversations with a person. The intelligence is trying to mirror the values of the school, but there's a potential for blurriness or over-reliance that could take away from the weak ties you create in a school. You might not ever meet the person in HR because you don't have a reason to.

    John Sharon: Right, a friend of mine who is an educational consultant and former head of school recently developed an AI tool that replicates his advice. You don't have to talk to him; you can just talk to the tool. I thought that was fascinating, and I'm interested to see how that impacts human interaction with him and his clients. The AI version of me would probably always be nice and have the right answer. This idea of scaling my God-given identity by creating a digital clone of myself seems strange.

    Lydia Kumar: It's a big change. I was talking to someone who works at one of the two-hour learning schools, a private school model that does two hours of AI instruction in the morning. The rest of the day, students pursue clubs, entrepreneurship, critical thinking, and public speaking. The interesting thing about that conversation was the idea that more AI doesn't mean fewer adults. They want to have more adults and more time for relationships. This is one of the most AI-forward learning institutions in the country, but there's a huge focus on the human side and relationships.

    John Sharon: I'm encouraged to hear that because what's getting lost in the conversation is the role that human interdependence plays in teaching and learning. The reality is that teaching is interactive and interdependent, between adult and child. Here at my school, we intentionally flatten hierarchies, and there's no distinction between teacher and student. We call them staff, and students call teachers by their first name because we are all in the process of learning together. I think that learning is symbiotic between the teacher and the learner, and with AI tools, you can take that symbiosis out of it. You can learn information from AI, but beyond information, are you learning about yourself, about learning, and about the process of learning from a machine?

    Lydia Kumar: How are you seeing your staff or students using AI right now? Is there a lot of that going on in the school? What does it look like?

    John Sharon: We've taken an intentionally ground-up approach to AI. We've given teachers leeway to use AI as they see fit, and sometimes that's been great and sometimes it's been not so great. A few years ago, we had a situation where a teacher was having AI write his end-of-year comments for him. That was problematic because it was putting student information into ChatGPT, which raised privacy concerns. Our end-of-year comments are very extensive because we don't have grades, and parents and guardians believe they're being written by human beings who know their children, which was not the case. We now allow teachers who struggle with writing to do a little AI writing as long as they personalize it afterward. Otherwise, teachers are using AI as thought partners and for brainstorming lesson plans. This is where I'm most excited about it—teaching students how to be critical users of AI. My vision is that we become a school that intentionally teaches critical use of AI, so students learn to use it thoughtfully, constructively, and cautiously. They're going to see people using AI in all kinds of ways, and we want to teach them to use it for the good.

    Lydia Kumar: I think that's so important. I've seen some emerging research about the impacts of AI tools on learning. In one study, students perceived that they had learned more than they actually had when working with a chatbot because they saw writing down the answers as actual learning. It's important to teach people to think critically because interacting with the technology can change how we view ourselves and our learning.

    John Sharon: Right, and that's a great story. It reminds me of research from a "Learning in the Brain" conference I went to 20 years ago about students who got perfect SAT scores. The students who did well on those tests were able to put themselves into the shoes of the person writing the test, and there was this empathy factor that other students didn't have. This was an incredibly human interaction they were exhibiting. We all need empathy for whoever is selecting the training data our AI is trained on.

    Lydia Kumar: You've led schools through different technological changes—dot-com, cell phones, social media, and now AI. What are some nuggets of wisdom you have from the past that you're holding onto, or mistakes you want to avoid with this new innovation?

    John Sharon: The reason I encourage schools to develop nimble AI policies is because with other technology, we found that you needed to be willing to change them from year to year. I've also been thinking about AI as being a little bit like the discovery of a new animal in a jungle. The villagers don't know if it's friendly, if it's going to eat them, or whether it should be tamed. They have to be very careful about getting to know this new animal and letting the animal get to know them. I think we're still in that phase with AI, carefully measuring its impact. We continue to grapple with whether this is going to be for our good or for our harm, and we should continue to grapple with it for the foreseeable future. Social media showed us that more than one thing can be true—there were a lot of pros and a lot of harm.

    Lydia Kumar: What's an idea or a question about AI that you are just continually grappling with right now?

    John Sharon: I think what I'm grappling with is always looking at AI as a tool for learning and not as a replacement for learning itself. I want educators to be curious about it and to be willing to ask hard questions, especially as it continues to evolve and increase in usage. I also want us to be paying careful attention to both the environmental impact of AI and the privacy impact of AI. For example, when teachers have AI grade papers, that data is not secured or private, and student content is suddenly out there. So, I think we have to continue to move forward, but we have to keep our foot on the brake and see AI as a tool for learning and not the learning itself.

    Outro

    I enjoyed that thoughtful and grounding conversation with John Sharon. Thank you for sharing your wisdom with us, John. I was particularly struck by his call to view AI not as the learning itself, but as a tool for learning. His use of the Quaker value that truth is continually revealed frames the critical, ongoing process of discerning AI's role in our schools and our lives.

    To dive deeper into today's topics with John, I've put everything for you in one place. Just head over to the resource page for this episode at kinwise.org/podcast. There, you'll find the full transcript, more about the Carolina Friends School, and other resources inspired by our conversation.

    For the leaders and teams listening, if these insights have you thinking about how to build a real AI strategy for your own work, I invite you to learn more about the Kinwise Pilot Program. We partner with organizations to create practical, human-centered professional development and policies that empower your team to use these new tools with confidence and care. You can learn more at kinwise.org/pilot.

    Finally, if you found value in this conversation, the best way to support the show is to subscribe, leave a quick review, or share this episode with a friend. It makes a huge difference.

    Until next time, stay curious, stay grounded, and stay kinwise.
