Redesigning the Syllabus for Deeper Learning: AI, Empathy, and Assessment

Episode 26 of Kinwise Conversations · Hit play or read the transcript

The Shift in Educational AI Integration

As Generative AI reshapes the landscape of academia, institutional leaders face a critical choice: attempt to police the technology or fundamentally redesign the learning experience. In this episode, we sit down with Dr. Dana Riger, Clinical Associate Professor and the University of North Carolina at Chapel Hill’s first Faculty Fellow for Generative AI.

Dr. Riger shares her evolution from a "fear-driven" assessment overhaul to a sophisticated leadership framework that prioritizes student autonomy and pedagogical rationale. She explores the strategic distinction between AI-avoidant assessments, which lean into experiential and multimedia creativity, and AI-integrated designs that use technology to scaffold complex professional skills. This conversation moves beyond the "shiny new tool" narrative to address the existential questions facing mission-driven leaders: How do we maintain academic integrity while fostering the intrinsic motivation to learn? Dr. Riger argues that the future of education lies in distilling the uniquely human elements of teaching (empathy, presence, and critical dialogue) that no machine can replicate.

Key Takeaways for Mission-Driven Leaders

  • Move Beyond Policing: Shift the institutional focus from "catching" AI use to redesigning assessments that make outsourcing learning nearly impossible or undesirable.

  • The "Middle Ground" Framework: Avoid one-size-fits-all policies. Strategic implementation requires a nuanced approach that allows for both AI-avoidance and intentional integration based on specific learning goals.

  • Process Over Product: To ensure academic integrity, leaders should encourage faculty to grade the "learning journey" (drafts, process logs, and revisions) rather than just the final output.

  • Intrinsic Motivation is the Best Deterrent: When students understand the "why" behind a skill and how it connects to their future workforce success, the desire to bypass learning with AI diminishes.

  • The Human Value Proposition: AI serves as a mirror that forces educators to refocus on the irreplaceable human-to-human interactions—like empathy and emotional regulation—that define high-quality education.

Redesigning the Syllabus: AI, Empathy, and the Future of Learning

Lydia Kumar: Welcome to Kinwise Conversations in AI. Today, we're joined by Dr. Dana Riger, Clinical Associate Professor of Human Development and Family Science at the UNC School of Education and UNC's first Faculty Fellow for Generative AI at the Center for Faculty Excellence. Drawing on her deep research into relationship formation, online dating, and teletherapy, Dr. Riger shares her journey from early AI adopter to campus leader, guiding faculty on how to thoughtfully and ethically navigate the integration of AI into course design and assessment.

The Paradigm Shift: From Fear to Adoption

Dana Riger: My research has focused on the impact of technology on relationship formation. I actually wrote my dissertation on online dating, looking at how technology shapes the way that couples share the story of how they met. Over the last couple years, since I've been at UNC, my interest in technology has naturally shifted towards education, specifically how technology shapes classroom engagement and teacher-student relationships.

In the fall of 2022, I became a pretty early adopter of AI. I saw a TikTok about ChatGPT and kind of panicked because I thought my students had been using it. I spent the entire winter break of 2022 redesigning most of my assessments to be more AI-avoidant. Then, when I got into the classroom in 2023, I realized I was actually pretty ahead of the curve. I reached out to our leadership and found there wasn't much guidance. Everybody was trying to figure it out, so I took it upon myself to do my own research, test out tools, and talk to students about their AI use.

Lydia Kumar: You were so aware of what was going on. I remember it was Thanksgiving 2022, and my mind was blown, but then I went back to work and nobody was talking about it.

Dana Riger: It was actually the second moment in my life where I had what felt like a paradigm shift. I'm an elder millennial, so I remember getting the internet for the first time in middle school. To me, that moment of first uploading an assignment into ChatGPT to see how it could respond—my mind was just blown. I was excited and very fearful.

Curriculum Design: Balancing AI-Avoidance and Integration

Lydia Kumar: How has your approach to redesigning your classes transformed over the past few years?

Dana Riger: Early on, it was very fear-driven. I focused on avoiding AI. For example, if I gave an experiential assessment, like an interview, I added elements where students had to document the experience to ensure they weren't using AI to "hallucinate" an interview. I also used an attestation form to learn how they were using tools.

Over the last two years, I’ve become more intentional. I’ve shifted in two directions:

  1. AI-Avoidant: Creating assessments that are experiential, multimedia, and creative in nature.

  2. AI-Integrated: Giving written assessments that intentionally integrate AI in specific ways to build skills.

Lydia Kumar: How do you explain your thinking when they're allowed to use AI versus when you're discouraging it?

Dana Riger: In my classroom, students can use AI any way they want. I don’t want to be the "AI police." There aren’t reliable plagiarism detectors, and policing can amplify disciplinary biases. Instead, I design assessments so it’s nearly impossible to outsource the learning. Day two of all my classes is an "Assessment Day." We talk about bias, data privacy, ethics, and what responsible use looks like. I also respect student autonomy; if a student has an ethical objection to using AI, I work with them to develop an alternative.

Institutional Leadership: Avoiding One-Size-Fits-All Policies

Lydia Kumar: As you work with other faculty, what resistance or breakthroughs are you seeing?

Dana Riger: It ranges from enthusiasm to skepticism. Many fear the erosion of student learning or worry about environmental and privacy risks. I see two types of breakthroughs:

  1. The Refreshed Assessment: When a faculty member redesigns a course to be AI-avoidant, they realize it actually makes the course more engaging and meaningful. AI forces us to "up our game" and revisit learning outcomes that may have been stale for years.

  2. The Enhanced Process: When they discover AI can genuinely enhance learning by providing faster feedback or scaffolding complex tasks.

Lydia Kumar: What are the common missteps leaders make when navigating this change?

Dana Riger: The biggest misstep is the impulse to streamline policies into a "one-size-fits-all" approach. Every faculty member has different capacities and course contexts. We must resist black-and-white thinking, either viewing AI as the "downfall of education" or as a revolutionary tool. We need to maintain a middle space that is nuanced and responsive to specific pedagogical needs.

Workforce Readiness: Developing Student Agency and Human Skills

Lydia Kumar: What is important for students to understand as they move into a professional setting where AI is at their fingertips?

Dana Riger: They need to be able to:

  • Assess Context: Recognize when AI aligns with the purpose of the task.

  • Identify Limitations: Understand bias in training data and erroneous content.

  • Justify Choices: Articulate why they chose to use (or not use) AI in a specific situation.

I had a student recently tell me she was taking a voluntary "ChatGPT detox." She said she felt like she was "glitching out"—over-relying on AI to articulate her thoughts even in social texts. She noticed it was eroding her ability to confidently speak for herself. That self-awareness is a vital professional skill.

Lydia Kumar: What is the "Why" for educators in this new era?

Dana Riger: AI challenges us to uncover why it is important for a student to know something. If they can just "Google" or "AI" it, what is the value of them internalizing it? It forces us to connect the material to their personal and professional lives in a way that builds intrinsic motivation.

Future Outlook: Distilling the Human Element

Lydia Kumar: What is the hope or concern that is staying with you lately?

Dana Riger: I’m focusing on what gives a human educator unique value. If my role no longer includes just "imparting knowledge," what is my value? I believe AI offers us an opportunity to refocus on empathy and presence—qualities grounded in human connection. I’m currently doing a scoping review on the role of AI in psychotherapy education. Can AI help a student develop empathy? Or is that the "variable" that remains uniquely human? That is where I’m looking for the research to lead us.

Connect and Resources

  • LinkedIn: Dana Riger's LinkedIn Profile

  • UNC Profile: Dr. Dana Riger at the UNC School of Education

  • UNC Center for Faculty Excellence (CFE): The UNC center where Dr. Riger serves as the Faculty Fellow, dedicated to supporting faculty teaching and development.

  • AI-Avoidant Assessments: Strategies for designing assignments that are inherently difficult or impossible for current generative AI tools to complete authentically, often by making them experiential, localized, or multimedia-based.

  • AI-Integrated Assessments: Strategies for intentionally incorporating AI tools into the learning process to build specific skills, such as using AI for outlining, brainstorming, or practicing difficult conversations.

  • VoiceThread: A collaborative multimedia tool used by Dr. Riger for asynchronous online classes, allowing students to respond to lectures and peers via video, promoting spontaneous engagement and community.

  • Generative AI in Higher Education: The broader topic of how large language models (like ChatGPT) are forcing universities to re-evaluate academic integrity, learning outcomes, and assessment design.

About the Guest

Dr. Dana Riger is a Clinical Associate Professor of Human Development and Family Science at the University of North Carolina at Chapel Hill's School of Education. She currently serves as UNC's first Faculty Fellow for Generative AI at the Center for Faculty Excellence (CFE).

Drawing on her doctoral research in Marriage and Family Therapy, which included studies on online dating and teletherapy, Dr. Riger holds expertise in how technology shapes human interaction, relationships, and learning. In her role as Faculty Fellow, she has developed and led numerous discipline-specific workshops across campus, guiding faculty to make informed, ethical, and practical decisions about integrating AI into their course design and assessments. Her work champions a mission to empower faculty and ensure students graduate with the critical reasoning skills to use AI responsibly and intentionally. Dr. Riger holds a Ph.D. in Human Development (with a concentration in Marriage and Family Therapy) from Virginia Tech.

Next

25. Danelle Brostrom on Leading AI: Privacy, Humanity, and Progress in Schools