The Skeptic and The Optimist: Navigating AI in Higher Education
Episode 27 of Kinwise Conversations · Hit play or read the transcript
Episode Summary: The Critical Shift in Academic Integrity and Workforce Readiness
In an era when a doctoral dissertation or a district improvement plan could theoretically be generated in minutes, educational leaders face a profound dilemma: How do we prioritize the process of learning when the product can be automated?
In this episode, Lydia Kumar sits down with Dr. Nicole Schilling and Dr. Jason Margolis from St. Bonaventure University to dissect the evolving role of Artificial Intelligence in high-stakes education. Approaching the topic through a "critical friends" model, they offer contrasting yet complementary perspectives. Dr. Margolis, a self-described skeptic, warns of the cognitive atrophy that occurs when we outsource critical thinking to algorithms. Conversely, Dr. Schilling shares powerful examples of how AI can serve as a partner for simulation and rigorous feedback rather than a replacement for human intellect. Together, they explore the nuances of bias, the limitations of rigid policy, and the urgent need to redefine ethical guidelines for the next generation of leaders.
Key Takeaways for Educational Leaders
Process Over Product: Efficiency is often the enemy of deep learning. Leaders must design assessments and workflows that value the journey of critical thinking (the struggle) over the final output, which AI can easily mimic.
Transparency as the New Standard: Rather than banning tools, effective governance requires radical transparency. Students and staff should document how they used AI (e.g., keeping logs of prompts and revisions) to maintain integrity.
AI as a Simulation Partner: Beyond text generation, AI offers immense value in role-playing complex scenarios, such as simulating a dissertation defense or a difficult stakeholder conversation, to sharpen human communication skills.
The Policy vs. Guidelines Dilemma: Because technology evolves faster than bureaucracy, rigid policies are often obsolete the moment they're written. Flexible "guidelines" that focus on ethics and intent are more effective than punitive rules.
Bias and Power Dynamics: We must remain vigilant about who builds these models. AI is not neutral; it reflects the biases of its creators, requiring users to constantly question and "push back" against the outputs they receive.
The Skeptic and the Optimist: Origins of Perspective
Lydia Kumar: I want to give you both a chance to introduce yourselves and share a little bit about where you lean when it comes to artificial intelligence. Jason, I’m going to start with you.
Jason Margolis: Sure. I’m Jason Margolis. I’m a former New York City public high school English teacher. I went the academic route, and I’ve become very interested in teacher leadership, school reform, and doing things better for teachers so that teachers can do better for students. On AI, it’s a little complicated. In short, I’m definitely an AI skeptic, while trying to maintain a healthy dose of skepticism toward my own skepticism. But I definitely have some concerns.
Lydia Kumar: Nicole, what about you?
Nicole Schilling: Hi, my name is Nicole Schilling. I am a professor of educational leadership at St. Bonaventure University. I am a former Columbus City Schools teacher. My research interests are in educational leadership, specifically superintendents and school business officials. I come at this more from a leadership lens, but at heart, I’m a teacher.
I believe that there is definitely a place for AI. Our students are using it, and I believe that we can provide a safe space for them to learn how to use it as well as model that for them. I am a supporter of it in a transparent, efficient way. I’m not skeptical of it, but focused on making sure that we use it ethically.
Jason Margolis: I think it’s important not only to talk about what we believe, but to really think about how we came to those beliefs. In the early 80s, I became deeply involved in an underground bulletin board system community. It really subsumed my life for a while, and I became very unhealthy. When I pulled away from it, I became very anti-technology, because I had come to see technology as breeding sedentariness and passivity and keeping you from other important life goals.
At the same time, I do believe there are some solid examples and data showing that we’re in a very worrisome spot right now. If we outsource critical thinking to machines, especially machines that themselves get smarter the more they practice critical thinking, then we become less thoughtful and the machines become more powerful. These tools are very powerful: you can do in 8 minutes what could take 80 days. But there are real, serious consequences for human development.
Defining Ethical Use in High-Stakes Academics
Lydia Kumar: What do you think counts as ethical use of AI in graduate-level work? And who gets to decide how far is too far?
Nicole Schilling: I think that’s still being worked out. I had a student say, "Hey, I’d like to use AI in my dissertation. I’d like to use it as part of my analysis framework, and to use it as a partner, not something doing the analysis for me." As a committee, we decided to provide a safe space for him to do that.
We erred on the side of caution. We asked him to narrate and journal how he used it, and to include the prompts and some of the screenshots throughout his dissertation in the spirit of replicability. It doesn’t replace you or anything you did in your dissertation; it’s just another lens. He actually talks about how much he had to push back on the AI during that process, and how much he learned.
Jason Margolis: That is a very high-level, nuanced, careful use of AI. My experience is that what we have is a lot of people under a lot of stress trying to get degrees. There are a lot of tools out there to help them get work done very quickly. In some cases, it could be accidental improper use, and in some cases, it’s intentional because it just needs to get done. In the long run, that will be detrimental to the education system.
My concern is not only that they’re evading the system; I’m really concerned about what that’s doing to people’s brains. There’s been a lot of recent research on diminished brain activity when you outsource your thinking to ChatGPT.
The Cognitive Cost of Efficiency
Jason Margolis: Some high school students were asked to do math problems with and without AI. The AI users did really well in the practice sessions but forgot almost everything they’d learned; they bombed a closed-book test on the material because they hadn’t done the processing in their own brains.
There is a famous study where they hooked people up to EEG machines. Those who didn’t use the tools remembered far more of what they had written twenty minutes later than those who did. It may help us be efficient, but learning is a process. If we lose the process, we may lose everything.
Lydia Kumar: Efficiency is something I think we may overvalue in our culture. What strategies or ideas have you used to help students make choices that favor process and clarity of thinking over the replacement of thinking?
Jason Margolis: One tool is to move away from everything being text-based. In our program, we’re having students do more individual video reflections and video-based assessments because you really have to stand and deliver more. We are doing more checkpoints along the way to encourage them to draw from their own experiences and to develop their own academic voice.
Nicole Schilling: Video provides a more authentic way of having discussions. It mimics being in the field, talking to a fellow teacher or an administrator about how to solve a problem of practice.
Practical Application: Simulation and Feedback
Nicole Schilling: We attended a presentation where students and faculty worked together to train a model to act as a dissertation committee, so the students could practice what a defense looks like. They told it who their dissertation committee members were, and even described the members’ personality quirks so the AI could mimic them. The students then practiced having academic conversations about their dissertation studies.
The results were phenomenal. The students could speak to their work in these defenses incredibly well. Can it help strengthen their research questions? Can it strengthen their methods? Absolutely. Again, it’s just another perspective or lens, like having a critical friends conversation.
Bias, Power, and the "Gold Rush"
Nicole Schilling: We also talk about hallucination and bias. For example, you ask AI for an image of a doctor serving underprivileged children, you see what it produces, and you see how much you have to push back to get more than one type of image. We have to have power conversations: Who is not represented in this conversation? These models are trained on human output, so our own biases come through in AI.
Jason Margolis: I think exploring bias and power is really important. The thesis of a presentation we saw was essentially that AI has many racist undertones because the people who build the algorithms are largely white, upper-middle-class individuals who have their own view of the world. We need to remember that even though the technology is powerful, there are still humans pulling the strings.
This ties into a critical question: Whose interests are served? I know that within our industry of education, the same companies that develop the AI detection software, like Turnitin, are developing the AI detection evasion software for students. There’s a lot of money to be made. This is our modern-day gold rush.
Can AI Solve "Problems of Practice"?
Lydia Kumar: If a student uses AI to solve the problem of practice they’ve identified, is that a valid skill?
Nicole Schilling: It can’t replace the human. It cannot replace what that human knows about that context, about those people, about the stakeholders. When you’re a principal of a building, or you are running a non-profit, there are things that you know about that space. You can prompt AI all day, but it cannot replace the human that you are in that context.
We’re already starting to see people who come out of these programs who relied on it too heavily. They go into practice, and guess what? They can’t solve the hard problems. Their people know that they’re using AI for everything, and they’re not moving up.
Jason Margolis: The type of thinking you do to figure out what the literature says is the same thinking you need to handle complex information at your worksite. If you haven’t done the kind of thinking those assessments are supposed to measure, it will catch up with you.
Navigating Policy in a Fast-Moving Landscape
Lydia Kumar: When you think about setting policy or guidelines for your students about AI, how do you approach that?
Nicole Schilling: I think we have to be very careful around policy. I would actually like to see the American Educational Research Association provide some guidelines—not necessarily policy, but guidelines on how we approach AI.
I’m not sure that we’re at a policy place yet. I’m not sure the technology is even there for us to be 100% certain that a student used AI, and you need that certainty before enacting consequences. Guidelines, by contrast, can point people toward ethical and transparent use.
Jason Margolis: I think there’s a real reluctance to create policy at this point. One reason is that by the time you create a policy, the technology has advanced enough to make it irrelevant. The other thing is, people are scared to be on the wrong side of this. There’s a fear that if you come down too hard, or if you’re too skeptical of AI in your policy, you’re going to lose students or customers. Ethical advancement versus financial advancement is probably one of the main tensions of human history, and here we are again.
Connect and Resources
Connect with our Guests
Dr. Jason Margolis: St. Bonaventure Profile | ResearchGate
Dr. Nicole Schilling: St. Bonaventure Profile | LinkedIn Profile
Resources Mentioned & Related Concepts
CPED (Carnegie Project on the Education Doctorate): The consortium mentioned by Dr. Schilling that is redesigning the Ed.D. to focus on solving complex problems of practice rather than just theoretical research.
Critical Friends: A professional learning model developed by the National School Reform Faculty where educators commit to questioning and critiquing each other’s work in a supportive, improvement-focused manner.
Problems of Practice: A core concept in modern educational leadership programs where research is applied to solve specific, contextual challenges in a school or district.
About the Guests
Dr. Jason Margolis is a Distinguished Professor and the Program Director for Educational Leadership at St. Bonaventure University. A former New York City high school English teacher, his research focuses on teacher leadership, school change, and complexity theory. Jason describes his teaching philosophy as the "ancient art of paying attention," and he brings a critical, human-centered lens to the adoption of new technologies. He is the author of numerous articles on teacher development.
Dr. Nicole Schilling is a Professor of Educational Leadership at St. Bonaventure University. With a background as a teacher in Columbus City Schools and an educational consultant for the Ohio Department of Education, Nicole specializes in the superintendency, school business officials, and online teaching and learning. She is a Quality Matters Master Reviewer and a dedicated "social constructivist" who believes in using technology to help students solve authentic problems of practice. Her recent work includes research on superintendent resilience and editing the upcoming book, Problems of Practice: Case Studies of the Superintendency.

