20. How to Teach Intentionally with AI featuring Brian Jefferson

Season 2, Episode 9 of Kinwise Conversations · Hit play or read the transcript

Episode Summary: The Strategic Shift in Professional Education

How do we prepare students for a workforce where their first job will be to audit work they've never performed themselves? In this episode, Brian Jefferson, a Lecturer at the University of Maryland's Robert H. Smith School of Business, tackles this critical challenge head-on. Drawing on his 20-year career at PricewaterhouseCoopers, where he became a partner, Brian provides a strategic playbook for educators and institutional leaders. He outlines how to move beyond teaching rote procedural skills, many of which are being automated by AI, and instead cultivate the essential human judgment required for the future. This conversation is a masterclass in shifting curriculum from simple task completion to sophisticated "cognitive enhancement," ensuring the next generation is equipped not just to use AI, but to lead with it.

Key Takeaways for K-12 and University Leaders

  • From Doer to Auditor: The primary goal of education must shift from training students to perform entry-level tasks to training them to critically evaluate, question, and refine AI-generated work. The "auditor mindset" is the most valuable skill for the future.

  • Co-Create to Build Ownership: Instead of imposing AI tools, leaders should facilitate a process where students and faculty co-create custom AI assistants for their classrooms. This collaborative approach dissolves fear, builds buy-in, and fosters a deeper understanding of the technology.

  • Play as Strategic Policy: "Awe and whimsy" are powerful institutional tools. Encouraging low-stakes, creative experimentation with AI (like designing posters with DALL-E) is the most effective way to reduce anxiety and build a culture of confident exploration.

  • Define Competency First: Before investing in any technology, leaders must answer a fundamental question: "What does an AI-proficient student look like at our institution?" A clear vision of the end goal should drive all policy, purchasing, and professional development.

  • AI as a Practice Partner: The most powerful application of AI in an educational setting is as a private, infinitely patient simulator for practicing high-stakes human skills, such as public speaking, difficult conversations, and critical debate.

The New Mandate: From AI Anxiety to Cognitive Enhancement

Lydia Kumar: Hi Brian. I'm really excited to hear about how you've been navigating this evolving landscape in education. We know that AI is changing things, and I'm curious about what your journey has been like and what has drawn you to think about how AI shows up in your work as an educator at the University of Maryland.

Brian Jefferson: Hi Lydia. First, thank you for having me on today. I'm really excited to be talking about this topic. I'm very passionate about it for the future of our students and for the future of business as well. As you said, I'm now an educator at the University of Maryland. A bit of background on me for your listeners: I was formerly a student at the University of Maryland, and I was lucky enough early on in that journey to have two internships with the firm that ultimately became my career. After leaving the University of Maryland, I worked for over 20 years for a large accounting firm called PricewaterhouseCoopers, where I eventually became a partner and an owner.

Three years ago, I decided that I wanted a new challenge, and I left PricewaterhouseCoopers. I was so lucky in that my alma mater, the University of Maryland, had an opening in my area of technical expertise, which is multinational taxation. But I also felt very fortunate to be able to jump right in and teach ethics-related classes, which provided some of the first sparks for me about how important AI was going to be as I started to bring it into the classroom.

Lydia Kumar: It's interesting because three years ago, the conversation about AI was just in its infancy. You mentioned ethics. Was that the reason why you started paying attention to AI—because you were teaching an ethics class—or was there another reason?

Brian Jefferson: That's a great question. My students would probably tell you that I'm a pretty voracious reader. So this would have been January, two and a half years ago. Those of us that were not so in the know about what was going on in the AI space started to see more and more advancements, particularly coming out of OpenAI and some of the things that they were doing. In my classes, I started to use it just as a discussion point around whether we are reading the right things and gathering information correctly. I started raising questions like, "Have you read the latest breakthroughs over at OpenAI?"

Not hearing enough back on that, I started to try to make it more practical. One of the first discussion points that I brought into my ethics class was asking them what they thought of a product called Lensa. I don't know if you'll remember this, but Lensa basically would let you upload a number of your photographs, and it would stylize them. It would give you back a hundred images of, say, Lydia on the surface of the moon. It was putting you in really interesting scenarios. Some of that seems very basic now, but two and a half years ago, it was really mind-blowing what you could do in such a short period of time. That sparked a really great conversation with my students around artistic integrity and property rights. That just started the discussion around AI. By the end of that first semester, my students were at least having discussions in ethics and doing elevator pitches on where AI might go. But it does, as you said, seem like forever ago because we've experienced so much change in business, in the educational space, and in our portfolios based on the advancement of AI.

The Workforce Shift: Training Auditors, Not Just Doers

Lydia Kumar: It's interesting because AI has changed so much. There's so much space to talk about it. As I've chatted with different people, I have not talked about accounting or taxes as a place where AI is impacting a field. I'm very curious about how the AI conversation played out in your other classes that are not ethics-related. What does a class on taxes or accounting look like when you're considering artificial intelligence?

Brian Jefferson: Maybe let me start with why it's interesting to our students before we even get to the ethical piece. Many of my students will go on to careers where they're going to be auditors at big accounting firms. Many will go on to be tax nerds like me, but that's probably only 20% of the future CPAs that come through my classroom. But all of them either have the concern or should be concerned about what it means for their career.

You know, there are a lot of potential existential threats to what they do. A few weeks ago, there was a proposal that we no longer need quarterly earnings reports from companies; it would be fine if those just happened every six months. Imagine how many fewer public accountants we would potentially need if, all of a sudden, you're only reporting half as much. My students have learned that so much of the work that I did starting out in 2001 is now not only done in pieces by AI—especially the repetitive and time-consuming work—but this has been going on for 20 years. We started to figure out that we don't necessarily need students from top business schools to do the most basic, entry-level accounting, so we started to offshore that to cheaper places. The next stage of that is, what can you offshore and what can you automate?

What that means for our students is there's less overall work to be done. So how do I become someone so attuned to the technology that I'm able to jump into a career that has changed so much? I think for my students, it's understanding how to use the aspects of AI that can enhance their ability to do work quickly and effectively, but also how they embrace it to develop new skills so that they're actually using it as a cognitive enhancement rather than just a work enhancement tool.

One of the first questions I ask my students in all my classes is to talk about their current AI usage. In that first class, I have heard more and more over the past year my students say some version of, "I'm trying to use AI less." As someone who is very concerned about education and its future with AI, I'm thankful that they're thinking about it so proactively. But I'm also trying to prepare them as best I can for the future workforce. When I hear that, I really want to use the semester to get them more excited about the aspects of AI that can make them better professionals and better people.

Curriculum Design: From AI Replacement to AI Partnership

Lydia Kumar: I think this cognitive enhancement piece is so fascinating because if you use AI as a cognitive enhancement, you can get a lot farther than if you say, "I'm scared of this technology, and I don't want anything to do with it." It sounds like your students fall somewhere on that spectrum. I'm wondering, how do you teach in a way that leads to cognitive enhancement? Do you have advice for educators or leaders in developing an environment where students are excited about that ability, and there's less fear around cheating? How do you build a culture that allows for that?

Brian Jefferson: I think it's a couple of things. Number one, we don't have enough examples out there of people truly using it in ways that are pushing our minds forward. I still think we are a bit in the stage where people are using ChatGPT as a Google 2.0, rather than thinking about how to creatively use it more as a part of themselves—to act more like a cyborg rather than having it be "me here, machine here."

Part of it is just teaching them that, because this "trying to do it less" comes from a very thoughtful place. They are noticing in themselves that they are working less hard and being less creative, and the science backs them up: research suggests that using AI in certain ways, on certain tasks, leads to measurable cognitive decline in some students. So it's really saying there are uses where you'll want to use AI as an expedient, but there's also this much greater way to use it, and I want to show you how.

Co-Creating a Custom GPT to Build Ownership

One of the things we do early on is I get up on the whiteboard and I ask my students, "We are going to be using a custom GPT in class this semester. It is yours. You are going to develop it. We are going to own it together. So, I want you to tell me, what are the types of tasks we want this to be able to do? But also, what do we want it to sound like? How do we want it to work with us?" If they need more prodding, I'll ask, "What have you experienced with really great teaching assistants or professors, or in a really great study group? How did that group interact?"

Lydia, they're rarely saying something like, "They give us the right answer as fast as possible." It's usually that a really good TA or study partner is great at creating an iterative process for learning. They're not just giving you everything at once; they're giving it to you in pieces, they're pulling you along, they're making the learner do the work. It's been really interesting to hear how all of them articulate that. Almost always, it comes back to that. You can see the faces of the students that are trying to use it less, and you can see there's something there that makes them excited, like, "Okay, I might get back into it to use it that way."

Then we talk a lot about tone. What do they want the tone to be like? Who do they learn from best? How do we want to train this on the public domain so that they're getting the best, most accurate answers? And what kind of interactions can we program this to default to that will feel organic and fun and will help them learn? By creating that custom GPT that way, number one, it's theirs. They own it. They help develop it. They ask for the things that are in it. That just creates a different level of responsibility because we've talked about it as a group. It helps level up those people that may not have even started yet to see, "Okay, first of all, this is a professor that wants us to do this and is, in fact, asking us to do it. So I have permission."

Institutional Policy: A Vision for AI Competency

Lydia Kumar: That's very cool. I'm so curious about how to replicate what you have done in other classrooms. Do you have thoughts on what leadership in a university or a K-12 setting could do?

Brian Jefferson: I do. I think it depends on the atmosphere and where you're coming from. My experience at the University of Maryland is that our leadership in the business school is very aware of AI and very focused on the way it's changing the experience of our students now and in the future. So we are devoting a ton of resources to that. There are no ideas that are too big.

But the first question I would ask if I was sitting down with an administrator is, "What does AI competency look like for your students? What's the end goal?" Then we can decide what's the right level of exposure and play. The other thing that tends to work for me, and I think about it with administrators working with much more junior populations, is I give a lot of examples about how I use a custom GPT for things that are completely fun and unacademic. I tell them, and it's true, this is exactly what we did as a family to develop a vacation custom GPT. The goal is that I can just type in our needs—we've got a vegetarian who needs vegetables, one person who must have great coffee, one person who needs a hike every day—and if I just tell it the individuals and the timeframe, it can design an entire trip for our family based on our specific interests. Starting to give people these ideas on ways they would use it outside of academia is really fun. And if I was in a school setting, that's the type of play I would have kids start doing.

Using "Play" as an Antidote to Fear

To make it clear to my students, before even the custom GPT, one of the exercises I had my tax class do, in the spirit of play, was I said, "I want you to use any generative AI you're comfortable with, DALL-E, whatever. I want you to come up with a creative image that shows a particular tax subject in a convincing and interesting way. Develop your own poster." As I was ideating this, we had a nine- and a ten-year-old sitting with me doing their own homework, and they said, "Wait, tell me more about what you want to do." So I handed them my computer, and they both developed their own tax posters. I could show my students, "This is how easy it can be before you even start to iterate." Hopefully, that's something they take away and can apply to a project at their school.

Lydia Kumar: Right now, there's a lot of fear around generative AI in our country. A large swath of the population feels afraid of the future, and people do not believe that this technology is going to make our lives better. I think fun and play can be an antidote to fear.

Brian Jefferson: Yeah, awe and whimsy are phenomenal educational tools. The more that we can use technology like AI to inspire those, I think it's phenomenal.

The Future of Professional Development: AI as a Practice Partner

Lydia Kumar: Amazing. And I have a little side question for you, because I know you have this background as a tax expert and now you're an educator. Reflecting back, how did you make that transition?

Brian Jefferson: I would first give a lot of credit to our groups at Maryland that help make us better teachers. Beyond that, coming from a professional services environment at PricewaterhouseCoopers is really about serving others. It is listening and responding with empathy and taking into account other people's KPIs. This era of technology has probably made this an easier jump for me because it allows me to focus my students on all those critical aspects of client service that go beyond just the technical answer. It's, "How do I build a relationship? How do I have a point of view? How do I convince others of that point of view?"

This makes me think about cognitive enhancement. One of the areas I think AI serves us best and makes us better humans is teaching us rhetorical technique, teaching us to be better conversationalists and debaters. Here's another way I like to use it to make me a better professor: I have an hour-and-a-half commute. On that commute, I'll have already prepared my entire lecture for my tax class. I will get in the car with my advanced AI voice assistant, Maple, and I will say, "Maple, I'm talking today about standard deductions and itemized deductions. You know a lot about my students—19 to 22 years old, highly educated, have been through an accounting curriculum. How do I come up with some examples on this topic that are very relevant to them?" She might give me some examples. I'll say, "That sounds great. I'm going to try to do that piece of the lecture while incorporating that new example, and I want you to be as critical as possible." I could also ask her to interrupt me as I'm going if I anticipate that might happen in class. It's almost like a way to incorporate cognitive behavioral therapy into what you're doing if you're trying to do something hard. AI can be really good at training you for that stuff. I'll spend a good hour of my ride doing some of that back and forth.

Practicing for High-Stakes Conversations

Lydia Kumar: I think one kind of amazing use case with AI is to do that practice before you're in front of real people so that you're better for the people that you're with. It doesn't have to be perfect, but it allows you to kind of have your first at bat in this safe environment.

Brian Jefferson: That's very well said. One of the questions I always ask is, "Is there an important conversation with someone that you've been avoiding?" Can we think about how to use AI to prepare you to move that conversation forward, whether it's with a romantic partner, a kid, or a coworker? How do we develop that persona, your Socrates, whatever that personality is, so that you can have a realistic conversation and start to see around the corner? And like you said, Lydia, it's a kindness to the other person, because you are putting in the work so that you show up more present and more prepared.

The Unanswered Question: Reviewing Work You've Never Done

Lydia Kumar: You have shared so many useful ways of using AI, but have you had any missteps along the way?

Brian Jefferson: Well, I think it depends on your frame of reference. I had a student last week who was testing some of their homework problems on our class GPT, and they told me, "It gave me the wrong answer." That is completely within my range of expectations as a professor, but my student viewed it as a failure. And I said, "Listen, one of the reasons we're doing this is because many of your next jobs are called 'auditor,' which means your first task is reviewing someone else's work. How do you get that experience if you're not looking at someone else's work critically? This is a really great chance for you to find the errors. How are we going to find the errors in AI-generated work if you're not used to looking for them?"

Lydia Kumar: I have one final question. What is the thing that you keep thinking about?

Brian Jefferson: As a recovering CPA, I have to start with the skeptical and the negative. I am very concerned about the future of work, particularly for new students coming out of university. I don't have a great answer for how you prepare for a career where you used to start by doing the thing—preparing the financial statements, preparing the tax return—in a future where I'm told that work will largely be done by artificial intelligence or offshored, and you're going to be reviewing work you've never actually done yourself. That is a question I have not heard great answers to from my former firm or any of its competitors yet. I think it's really worth figuring out how we create great professionals who will be excellent at client service when they haven't come up the same way that I did.

Now, I am filled with a ton of hope. If we can take the time we're gaining from offloading rote tasks to AI and shift that time to working on all these great client service and human elements, that excites me. I do know that this maxim is true: people that really understand and are open to working in a new way with artificial intelligence will be better fitted for the jobs of the future. So I just want to give my students the best chance for success because the story is still in progress.

Lydia Kumar: Absolutely. We're in a moment of change, and I really appreciate your leadership in thinking about how you help our young people navigate that.

Brian Jefferson: I want them to increasingly, over the next couple of years, view that challenge as an opportunity and have the skills to be able to make that mental transition themselves.

Connect and Resources

  • LinkedIn Profile: Brian Jefferson on LinkedIn. Follow Brian for his frequent video tutorials on using AI for professional and personal productivity.

  • Faculty Page: University of Maryland, Robert H. Smith School of Business

  • AI at Smith: Explore how the University of Maryland's Robert H. Smith School of Business is integrating AI into its curriculum and research.

  • How to Create a Custom GPT (Zapier Guide): A step-by-step beginner's guide on how to create your own custom version of ChatGPT, similar to the process Brian uses with his students.

  • Ethan Mollick's Work: The conversation touched on Wharton professor Ethan Mollick's ideas about using AI as a sparring partner and creating different AI "personalities" to improve creative and critical work. You can explore his work at his popular Substack, One Useful Thing.

Prompts Inspired by Brian

1. The Classroom Co-Pilot Builder

Brian’s most detailed example was co-creating a Custom GPT with his students to act as their teaching assistant. This prompt initiates that process by turning ChatGPT into a facilitator to help an educator design the "Instructions" for their own classroom GPT.

Use Case: Designing a Custom AI Tool for a Specific Group

The Prompt:

Act as an expert in pedagogy and AI integration. I am a [Your Subject, e.g., "High School History"] teacher, and I want to build a Custom GPT to act as a 24/7 teaching assistant for my students. Your goal is to help me write the "Instructions" for this GPT. To do this, ask me a series of questions inspired by Brian Jefferson's method. These questions should help me define: 1. The GPT's primary role and purpose (e.g., study partner, Socratic questioner, project brainstormer). 2. Its personality and tone (e.g., encouraging, witty, formal, like a helpful peer). 3. The specific tasks it should be able to perform. 4. The constraints on its behavior (e.g., "Do not give direct answers, instead guide the student with questions."). Start by asking me the first question.
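
Brian builds this assistant in ChatGPT's no-code Custom GPT builder, so no programming is required. For readers who want to prototype the same idea outside that interface, here is a minimal sketch of how co-created classroom instructions could drive an assistant through the OpenAI Python SDK. This is an illustration only, not Brian's setup: the model name, instruction text, and sample question are placeholders.

```python
# Minimal sketch: co-created classroom "Instructions" used as a system prompt.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Hypothetical instructions a class might draft with the facilitation prompt above.
CLASSROOM_INSTRUCTIONS = """
You are a teaching assistant for a high school history class.
Be encouraging and conversational, like a helpful peer.
Do not give direct answers first; guide the student with questions,
reveal material in small pieces, and ask the student to attempt each step.
"""

def ask_assistant(student_question: str) -> str:
    """Send a student question to the classroom assistant and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": CLASSROOM_INSTRUCTIONS},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_assistant("Why did the Articles of Confederation fail?"))
```

Putting the iterative, question-first behavior in the instructions rather than in each student's prompt mirrors what Brian's students ask for: the Socratic default is built in, not something every learner has to remember to request.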

2. The Creative Concept Illustrator

Brian described using "play" as an antidote to fear, giving his tax students a fun, low-stakes assignment to use an image generator like DALL-E to create a poster for a complex topic.

Use Case: Visualizing Abstract Ideas and Making Learning Fun

The Prompt (for an image generator like DALL-E):

Create a visually engaging and slightly humorous poster designed to explain the tax concept of "capital gains." The style should be like a vintage travel poster from the 1950s. The poster should feature a rocket ship labeled "Investment" taking off towards a planet made of gold coins. Include the tagline: "Capital Gains: Your Ticket to Financial Frontiers!"

3. The High-Stakes Practice Partner

Brian detailed how he uses his AI voice assistant on his commute to practice his lectures, asking it to be critical and interrupt him. This prompt simulates that high-stakes practice for any professional presentation.

Use Case: Role-Playing and Public Speaking Rehearsal

The Prompt:

I need to practice a 5-minute pitch for my company, Kinwise, to a group of skeptical school district superintendents. I will present my pitch, and you will play the role of a superintendent. After I am done, I want you to give me critical feedback from that perspective. Ask me 2-3 challenging follow-up questions that a real superintendent might ask, focusing on budget constraints, data privacy, and the challenges of teacher training and adoption. Let me know when you are ready for me to begin my pitch.

4. The Personalized AI Exploration Guide

Towards the end of the conversation, Brian shared a powerful method he used to help his mom: having her list her passions and challenges, and then asking ChatGPT to create a personalized guide for how its features could help her.

Use Case: Creating a Custom Onboarding Plan for a New AI User

The Prompt:

Act as a personal AI tutor. I am providing you with a list of my personal hobbies and current professional challenges. Your task is to create a personalized "AI Exploration Guide" for me. The guide should suggest 3 specific features of ChatGPT (e.g., data analysis with Code Interpreter, image generation, brainstorming) and explain how I could apply each one to either enhance my hobbies or help solve my challenges. My Hobbies: - Marathon running - Exploring new restaurants in Durham, NC - Planning international travel My Professional Challenges: - Finding new clients for my consulting business, Kinwise. - Coming up with fresh topics for my podcast, "Kinwise Conversations." - Managing my time effectively between business development and client work.

5. The Socratic Sparring Partner

Lydia and Brian discussed moving beyond AI as a simple answer machine to using it as a Socratic partner that challenges your thinking, a core component of "cognitive enhancement."

Use Case: Deepening Understanding and Critical Thinking

The Prompt:

I want to explore a belief I hold. Your role is to act as a Socratic questioner. Do not provide direct answers or opinions. Instead, respond to my statement only with questions that challenge my assumptions, ask for definitions, and force me to provide evidence for my claims. Help me examine the foundations of my belief through questioning. The belief I want to explore is: "AI will ultimately reduce, not increase, the workload for educators."

About Brian Jefferson

Brian Jefferson is a Lecturer at the University of Maryland's Robert H. Smith School of Business, where he pioneers the use of AI in accounting and business ethics education. With over 20 years of experience as a Lead Tax Partner at PwC, Brian brings a real-world perspective on how technology is reshaping professional services. His teaching is dedicated to preparing students for a human-led, technology-enhanced future by developing their critical thinking, professional judgment, and a collaborative, problem-solving mindset.
