20. How to Teach Intentionally with AI featuring Brian Jefferson

Season 2, Episode 9 of Kinwise Conversations · Hit play or read the transcript

  • Lydia Kumar: Today on Kinwise Conversations, we're joined by Brian Jefferson, a professor at the University of Maryland's Robert H. Smith School of Business. After a successful career in multinational taxation at PricewaterhouseCoopers, Brian transitioned to academia, where he pioneered the use of artificial intelligence in his ethics and accounting classes long before it became a mainstream topic.

    If you've ever wondered how to move your students from fearing AI to using it for cognitive enhancement, how to co-create a custom AI with your class to build excitement and ownership, or how to prepare the next generation for a workforce where their first job might be to audit work they've never performed themselves, you're in the right place.

    (Music Fades)

    Lydia Kumar: Hi Brian. I'm really excited to hear about how you've been navigating this evolving landscape in education. We know that AI is changing things, and I'm curious about what your journey has been like and what has drawn you to think about how AI shows up in your work as an educator at the University of Maryland.

    Brian Jefferson: Hi Lydia. First, thank you for having me on today. I'm really excited to be talking about this topic. I'm very passionate about it for the future of our students and for the future of business as well. As you said, I'm now an educator at the University of Maryland. A bit of background on me for your listeners: I was formerly a student at the University of Maryland, and I was lucky enough early on in that journey to have two internships with the firm that ultimately became my career. After leaving the University of Maryland, I worked for over 20 years for a large accounting firm called PricewaterhouseCoopers, where I eventually became a partner and an owner.

    Three years ago, I decided that I wanted a new challenge, and I left PricewaterhouseCoopers. I was so lucky in that my alma mater, the University of Maryland, had an opening in my area of technical expertise, which is multinational taxation. But I also felt very fortunate to be able to jump right in and teach ethics-related classes, which really were some of the first sparks for me of how important AI was going to be as I started to bring that into the classroom.

    Lydia Kumar: It's interesting because three years ago, the conversation about AI was just in its infancy. You mentioned ethics. Was that the reason why you started paying attention to AI—because you were teaching an ethics class—or was there another reason?

    Brian Jefferson: That's a great question. My students would probably tell you that I'm a pretty voracious reader. So this would have been January, two and a half years ago. Those of us who were not so in the know about what was going on in the AI space started to see more and more advancements, particularly coming out of OpenAI and some of the things that they were doing. In my classes, I started to use it just as a discussion point around whether we are reading the right things and gathering information correctly. I started raising questions like, "Have you read the latest breakthroughs over at OpenAI?"

    Not hearing enough back on that, I started to try to make it more practical. One of the first discussion points that I brought into my ethics class was asking them what they thought of a product called Lensa. I don't know if you'll remember this, but Lensa basically would let you upload a number of your photographs, and it would stylize them. It would give you back a hundred images of, say, Lydia on the surface of the moon. It was putting you in really interesting scenarios. Some of that seems very basic now, but two and a half years ago, it was really mind-blowing what you could do in such a short period of time. That sparked a really great conversation with my students around artistic integrity and property rights. That just started the discussion around AI. By the end of that first semester, my students were at least having discussions in ethics and doing elevator pitches on where AI might go. But it does, as you said, seem like forever ago because we've experienced so much change in business, in the educational space, and in our portfolios based on the advancement of AI.

    Lydia Kumar: It's interesting because AI has changed so much. There's so much space to talk about it. As I've chatted with different people, I have not talked about accounting or taxes as a place where AI is impacting a field. I'm very curious about how the AI conversation played out in your other classes that are not ethics-related. What does a class on taxes or accounting look like when you're considering artificial intelligence?

    Brian Jefferson: Maybe let me start with why it's interesting to our students before we even get to the ethical piece. Many of my students will go on to careers where they're going to be auditors at big accounting firms. Many will go on to be tax nerds like me, but that's probably only 20% of the future CPAs that come through my classroom. But all of them either have the concern or should be concerned about what it means for their career.

    You know, there are a lot of potential existential threats to what they do. A few weeks ago, there was a proposal that we no longer need quarterly earnings reports from companies; it would be fine if those just happened every six months. Imagine how many fewer public accountants we would potentially need if, all of a sudden, you're only reporting half as much. My students have learned that so much of the work I did starting out in 2001 is now done in pieces by AI—especially the repetitive and time-consuming work—and that this shift has been going on for 20 years. We started to figure out that we don't necessarily need students from top business schools to do the most basic, entry-level accounting, so we started to offshore that to cheaper places. The next stage of that is, what can you offshore and what can you automate?

    What that means for our students is there's less overall work to be done. So how do I become someone so attuned to the technology that I'm able to jump into a career that has changed so much? I think for my students, it's understanding how to use the aspects of AI that can enhance their ability to do work quickly and effectively, but also how they embrace it to develop new skills so that they're actually using it as a cognitive enhancement rather than just a work enhancement tool.

    One of the first questions I ask my students in all my classes is to talk about their current AI usage. In that first class, over the past year I have heard more and more of my students say some version of, "I'm trying to use AI less." As someone who is very concerned about education and its future with AI, I'm thankful that they're thinking about it so proactively. But I'm also trying to prepare them as best I can for the future workforce. When I hear that, I really want to use the semester to get them more excited about the aspects of AI that can make them better professionals and better people.

    Lydia Kumar: I think this cognitive enhancement piece is so fascinating because if you use AI as a cognitive enhancement, you can get a lot farther than if you say, "I'm scared of this technology, and I don't want anything to do with it." It sounds like your students fall somewhere on that spectrum. I'm wondering, how do you teach in a way that leads to cognitive enhancement? Do you have advice for educators or leaders in developing an environment where students are excited about that ability, and there's less fear around cheating? How do you build a culture that allows for that?

    Brian Jefferson: I think it's a couple of things. Number one, we don't have enough examples out there of people truly using it in ways that are pushing our minds forward. I still think we are a bit in the stage where people are using ChatGPT as a Google 2.0, rather than thinking about how to creatively use it more as a part of themselves—to act more like a cyborg rather than having it be "me here, machine here."

    Part of it is just teaching them that, because this "trying to do it less" comes from a very thoughtful place. They are noticing in themselves that they are working less hard and being less creative, and the science backs them up: if you use AI in certain ways and on certain tasks, we've seen cognitive decline in some of those students. So it's really saying there are uses where you'll want to use AI as an expedient, but there's also this much greater way to use it, and I want to show you how.

    One of the things we do early on is I get up on the whiteboard and I ask my students, "We are going to be using a custom GPT in class this semester. It is yours. You are going to develop it. We are going to own it together. So, I want you to tell me, what are the types of tasks we want this to be able to do? But also, what do we want it to sound like? How do we want it to work with us?" If they need more prodding, I'll ask, "What have you experienced with really great teaching assistants or professors, or in a really great study group? How did that group interact?"

    Lydia, they're rarely saying something like, "They give us the right answer as fast as possible." It's usually that a really good TA or study partner is great at creating an iterative process for learning. They're not just giving you everything at once; they're giving it to you in pieces, they're pulling you along, they're making the learner do the work. It's been really interesting to hear how all of them articulate that. Almost always, it comes back to that. You can see the faces of the students that are trying to use it less, and you can see there's something there that makes them excited, like, "Okay, I might get back into it to use it that way."

    Then we talk a lot about tone. What do they want the tone to be like? Who do they learn from best? How do we want to train this on the public domain so that they're getting the best, most accurate answers? And what kind of interactions can we program this to default to that will feel organic and fun and will help them learn? By creating that custom GPT that way, number one, it's theirs. They own it. They help develop it. They ask for the things that are in it. That just creates a different level of responsibility because we've talked about it as a group. It helps level up those people that may not have even started yet to see, "Okay, first of all, this is a professor that wants us to do this and is, in fact, asking us to do it. So I have permission."

    Lydia Kumar: So for your class, you have one custom GPT that the whole class uses together? It's not like each individual student has one. It's like, as a group, we have one for this class that we use, and we've created norms about how this looks.

    Brian Jefferson: Yeah, that's right.

    Lydia Kumar: That's very cool. I think something I have talked with a lot of educators about is that this new technology requires us to have conversations exactly like what you're talking about. I think that's the magic of teaching anyway—when you can say, "Here's what we agree to as a group. We're excited about it. We're working towards something bigger than ourselves together." When you know your students and you're able to do that, you can make incredible things happen. We've seen teachers create that kind of environment in the classroom just without this emerging technology. So it's so cool to hear your story and think about that example for other educators as a way to create norms around the technology.

    Brian Jefferson: Yeah. And I like the way that you said that. Really, what I'm trying to do—I'll just use my ethics class as an example—is ask, "Should we be teaching differently?" In a world where a lot of information is democratized and at our students' fingertips, should we be teaching differently? For my classes, they are very Socratic; they're often flipped, where students are in front of the room teaching. And so that idea that they are all teaming together to create this GPT is a consistent theme with the rest of the class. They're evaluated not only when they're the one at the front of the room presenting, but we also set the standard that when you are presenting, you are meant to be taking care of everyone in that room. You are meant to be educating them and entertaining them. And that goes both ways. As an audience member, what do you give back so that it's a great cycle for everybody involved? So teaching and looking at that custom GPT as a team exercise is very consistent with the way that we've decided we want to teach in an AI-driven world. We need to all think about ways to be better teammates.

    Lydia Kumar: That's very cool. I'm so curious about how to replicate what you have done in other classrooms. Do you have thoughts on what leadership in a university or a K-12 setting could do?

    Brian Jefferson: I do. I think it depends on the atmosphere and where you're coming from. My experience at the University of Maryland is that our leadership in the business school is very aware of AI and very focused on the way it's changing the experience of our students now and in the future. So we are devoting a ton of resources to that. There are no ideas that are too big.

    But the first question I would ask if I was sitting down with an administrator is, "What does AI competency look like for your students? What's the end goal?" Then we can decide what's the right level of exposure and play. The other thing that tends to work for me, and I think about it with administrators working with much more junior populations, is I give a lot of examples about how I use a custom GPT for things that are completely fun and unacademic. I tell them, and it's true, this is exactly what we did as a family to develop a vacation custom GPT. The goal is that I can just type in our needs—we've got a vegetarian who needs vegetables, one person who must have great coffee, one person who needs a hike every day—and if I just tell it the individuals and the timeframe, it can design an entire trip for our family based on our specific interests. Starting to give people these ideas on ways they would use it outside of academia is really fun. And if I was in a school setting, that's the type of play I would have kids start doing.

    To make it clear to my students, before even the custom GPT, one of the exercises I had my tax class do, in the spirit of play, was I said, "I want you to use any generative AI you're comfortable with, DALL-E, whatever. I want you to come up with a creative image that shows a particular tax subject in a convincing and interesting way. Develop your own poster." As I was ideating this, we had a nine- and a ten-year-old sitting with me doing their own homework, and they said, "Wait, tell me more about what you want to do." So I handed them my computer, and they both developed their own tax posters. I could show my students, "This is how easy it can be before you even start to iterate." Hopefully, that's something they take away and can apply to a project at their school.

    Lydia Kumar: Right now, there's a lot of fear around generative AI in our country. A large swath of the population feels afraid of the future, and people do not believe that this technology is going to make our lives better. I think fun and play can be an antidote to fear. I love this idea of bringing in play and figuring out how do we make this accessible, understandable, and fun? Because we're not as scared of things that are fun. The beginning is just to try, to experiment, to play, so that then you can use it to be the cognitive enhancement that you talked about earlier.

    Brian Jefferson: Yeah, awe and whimsy are phenomenal educational tools. The more that we can use technology like AI to inspire those, I think it's phenomenal.

    Lydia Kumar: Amazing. I have a side question for you. You have this background as a tax expert, and now you're an educator. From everything you've said, you're a very good educator using principles that are really building a great educational environment alongside this emerging technology. Reflecting back, how did you do that?

    Brian Jefferson: I would first give a lot of credit to our groups at Maryland that help make us better teachers. I leaned into that, and they were phenomenal, especially in those early years, in helping me understand how important building connection was with my students. It first starts with, "They've got to know how much you care before they care how much you know."

    Beyond that, coming from a professional services environment at PricewaterhouseCoopers is really about serving others. It is listening and responding with empathy and taking into account other people's KPIs. That part of it is very organic. In fact, this era of technology has probably made this an easier jump for me because it allows me to focus my students on all those critical aspects of client service that go beyond just the technical answer. It's, "How do I build a relationship? How do I have a point of view? How do I convince others of that point of view?"

    This makes me think about cognitive enhancement. One of the areas I think AI serves us best and makes us better humans is teaching us rhetorical technique, teaching us to be better conversationalists and debaters. Here's another way I like to use it to make me a better professor: I have an hour-and-a-half commute. On that commute, I'll have already prepared my entire lecture for my tax class. I will get in the car with my advanced AI voice assistant, Maple, and I will say, "Maple, I'm talking today about standard deductions and itemized deductions. You know a lot about my students—19 to 22 years old, highly educated, have been through an accounting curriculum. How do I come up with some examples on this topic that are very relevant to them?" She might give me some examples. I'll say, "That sounds great. I'm going to try to do that piece of the lecture while incorporating that new example, and I want you to be as critical as possible." I could also ask her to interrupt me as I'm going if I anticipate that might happen in class. It's almost like a way to incorporate cognitive behavioral therapy into what you're doing. AI can be really good at training you for that stuff. I'll spend a good hour of my ride doing some of that back and forth. If she brings up something I hadn't thought about, it's a great point to press pause and ask her, "I don't know that term you just used. Can you explain it to me more fully? What's the history of that?" That's what I mean when I say there are ways to use it every day for cognitive enhancement.

    Lydia Kumar: That's such a fun and useful way to use AI. The new voice aspects of AI are really incredible in how well they can hear and respond in a very human-like way. If you're listening and haven't tried it, I recommend it. For folks who are maybe hesitant, this is a great way to start—just sharing your ideas and having something debate you. I saw a great example from another leader in the AI space, Renée Lazi, who uses AI like Socrates. She has two examples: in one, a fourth-grader says, "I'm writing about sharing, can you help me write the introduction?" and it just writes it. In the other, they have an argument about whether sharing is always good or bad, and the child develops a much more nuanced concept through that back and forth. A lot of people talk about sycophancy with AI and how it will always tell you what you want, but if you instruct it to be more critical, you can end up with a sparring debate partner that makes you sharper.

    Brian Jefferson: I think that's great. I think Professor Ethan Mollick talks about that in his book—about creating four different editor personalities.

    Lydia Kumar: Mm-hmm.

    Brian Jefferson: He didn't want to completely give up the sycophancy—he likes to get a pat on the back once in a while—but then he has three other personalities, ranging up to outright churlish, that will really battle him back on things. So I really like that.

    Lydia Kumar: I was also thinking about adult learning principles, and one way adults learn is through practice. I used to do a lot of professional development for teachers who were starting to coach other teachers. They have a ton of experience educating children, but not other adults. So we would have these teacher leaders practice coaching each other, so that their first coaching conversation wasn't with the teacher they would actually support. I think one amazing use case for AI is doing that practice before you're in front of real people, so that you're better for them. It allows you to have your first at-bat in a safe environment so you can show up more prepared, more present, and more confident.

    Brian Jefferson: That's very well said. I have a few individual clients, but they're more like friends who I've offered to help if they're interested in this journey. We'll talk about friction points at work or at home. One of the questions I always ask is, "Is there a conversation with someone important that you've been avoiding?" And we can think about how to use AI to prepare you to move that conversation forward, whether it's with a romantic partner, a kid, or a coworker. How do we develop that Socrates personality so that you can have a realistic conversation and start to see around the corner? And like you said, Lydia, it's a kindness to the other person because you are putting in the work to show up more present and prepared. It's not about winning the conversation; it's about helping you be more empathetic and see more sides to it.

    Lydia Kumar: Absolutely. You have shared so many great, useful ways of using AI, but have you had missteps along the way where it didn't work out the way you wanted, or you feel like your results were worse because you used AI?

    Brian Jefferson: Well, I think it depends on your frame of reference. If you were to ask my students... I had a student last week, and we're relatively early in the semester, so we just generated our class GPT. They're using it for the first time, testing some of their homework problems on it, and one said, "It gave me the wrong answer." That is completely within my range of expectations as a professor. My student viewed this as a failure. And I said, "Listen, one of the reasons we're doing this—and I'm very confident letting you use our class GPT after you make a good-faith first effort—is because in your next job, a lot of those roles are called 'auditor,' which means your first job is reviewing someone else's work. How do you get that experience if you're not looking at someone else's work critically? This is a great chance for you to find the errors. You're going into a workplace where AI will be used pretty consistently. How are we going to find the errors in that if you're not used to looking for them?" So let's talk about what it got wrong and why, because a lot of times it's for the same reasons that humans get things wrong. I am really excited about using failures and teaching them in a way that's a success. I don't expect this to be perfect. And I like to say to my students, "The worst version of AI that you're going to use, you're using it right now. And now again, and now again." It's only going to get better.

    Lydia Kumar: Your example makes me think about this "human-in-the-loop" idea that I think Ethan Mollick coined, and you've talked a little bit online about keeping humans at the center of this work. Are there other examples or ways you think about how to keep our human judgment central when you're using technology that has so many abilities?

    Brian Jefferson: Yeah, it's a great and deep question. I think it's part of our job as educators to talk with our students about, "What did the AI produce, and then what did you add to the situation?" Or vice versa. Talk to me about your iterative process. If you used it to help you brainstorm, did you just use the ideas that it brainstormed, or did you branch off from those? Did you work together? So I think it is really about understanding the process. The more examples we give them of how it can act as a cognitive enhancement tool rather than a replacement tool, the better. And if we see our students using it as a replacement tool, that might be enough in some circumstances, but oftentimes there's an opportunity for additional learning and growth. That puts it on us as educators to be savvy on it ourselves and to play ourselves, so that we're really practicing what we preach.

    Lydia Kumar: How do you know if a student does an assignment and they have just totally offloaded their thinking to generative AI? How do you recognize that and respond to it?

    Brian Jefferson: I think it requires educators to get students on their feet more and get them talking. It's really hard to defend the work you've done unless you've done a good portion of it yourself. Maybe it gave you a really great idea that you followed through on, but once your fellow students and Professor Jefferson start really digging in, it becomes really obvious how much of this is yours and how much ownership you have.

    Lydia Kumar: Sometimes I feel like the assessments we give in education, K through college, are not necessarily authentic to the experiences students have in the workforce. A lot of the time, your work is not evaluated by the same criteria. A lot of the way it's evaluated is your ability to just talk and explain what you've done. That is a huge part of being a working adult, and that isn't always how assessment looks in a classroom. It sounds to me like you are working hard to create opportunities for students to talk about their work more often.

    Brian Jefferson: Yeah, it's true. One of the things that I found out pretty early on is that getting up on your feet and talking about something in front of your peers and your professor is a high-stakes experience for modern students. It's hard. So I want to lower the stakes of that environment so that they are free to do their best work and also to screw up. So that when they're in the room with Brian the partner, it doesn't feel like the first time. I think a lot of that is just about practice.

    So one of the things I've incorporated in all my classes, even the technical ones, is not only group presentations where they're very prepared, but I also have them do individual elevator pitches. They're very short, and oftentimes they're choosing the topic. They just give us two minutes on it, why it's important, convince us of something, and then we have a little bit of time for Q&A and feedback. It's just about taking the dread out of the experience by doing it. For accounting students, they don't have many opportunities in the current environment to really get up on their feet and talk about their work. I think that's a bit of a failure on our part as educators to prepare them for what's next.

    Lydia Kumar: We keep coming back to this practice. ChatGPT, Gemini, and generative AI are great tools to help you practice, and they enable more practice to happen in the classroom because you can do some of the mental practice beforehand and then show up more ready to take an at-bat.

    I have two more questions for you. If you're thinking about an education leader who wants to know where to begin with AI—whether they're a teacher or someone leading an educational institution—what advice would you give them?

    Brian Jefferson: I have to go back to the starting point being, "What do I want an AI-proficient student to look like?" Developing those markers will really inform the process. And then, for almost everyone, especially the younger the students are, really make it about play and fun. You're giving them a framework to analogize going forward. If you use the family vacation GPT, you start to go, "Oh my gosh, that might be really good for this group project I'm leading. Maybe we should have a custom GPT to make sure we're all on the same page." I think it is really about developing the skills where people get used to playing with it and having access, first and foremost.

    Lydia Kumar: So that dual mandate of play, but also a vision of what proficiency looks like. Do you have a short list of things that you believe an AI-proficient student looks like?

    Brian Jefferson: Well, I'll tell you, we are just starting that process as a group. I co-chair our teaching and education committee in the business school at the University of Maryland, and our first task is to figure out what we want that to look like. I think it's someone who constantly asks the question, "Is there a way that this experience can be enhanced by using some sort of automation or artificial intelligence?" No matter what the work product is, it's about developing people that are asking that question and showing that curiosity.

    Lydia Kumar: So it's a design question, almost, of how do we design our experiences to be augmented by technology? New things are possible. It's an opportunity to be really creative about how we approach different tasks because there are new possibilities that weren't there before. That can be scary, but it can also be really exciting and an opportunity to have fun.

    Brian Jefferson: Yeah, my wish for a student would be exactly what I do with my individual clients. We would sit down, talk through the list of school challenges and personal things they're thinking through, and then give them an overview of some of the very specific tools in, say, ChatGPT. We'd brainstorm: "What could we apply deep research to? What could we use the 'Brian commuting plan' type of discussion for? What would be a good use of trying agentic AI for the first time?" It just gets them used to the possibilities. "Have you written using the Canvas feature before? Could that be helpful as you're preparing your next essay?" I get excited because seeing people use the technology for the first time is a true light bulb moment. I'd love for our students to just have tried all of that stuff once so they have a framework for it.

    Lydia Kumar: And having support while trying technology can be really helpful because some folks have a disappointing experience and then write it off. If you have some guidance, you can move past that and become more creative. I think just discovering use cases is an ongoing exploration for different folks because the technology is so new and powerful. Thinking about what we do with it is a great question to explore.

    Brian Jefferson: Can I give five seconds on what I would tell those people?

    Lydia Kumar: Yeah.

    Brian Jefferson: I just did this with my mom the other day. She is excited and willing to get into it, but very easily frustrated, and it's not her primary proficiency. So I had her do something very similar to what I'm doing with custom GPTs. I said, "I want you to just write down on a piece of paper everything that is important to you, that makes you you. Then write on another piece of paper the biggest challenges you're going through right now. Take a picture of both of those, upload them into GPT, and say, 'Please create a guide for me to try a number of features of ChatGPT to help push forward my favorite hobbies or to help me solve the problems I'm currently working on.'" I had her do that in deep research, and it gave her a 10-page guide. She's more used to reading long documents. She had her guide from GPT telling her how to better use GPT.

    Lydia Kumar: I love that, Brian. I feel like you have a good understanding of people and technology. I love the question of, "How can technology make us more human and more connected to each other?" I think that's a great example of you really listening to your mom and her needs and then thinking about how to help her in the space where she's at. That's a really cool example.

    Brian Jefferson: And importantly, now she owns it, and I don't have to intercede.

    Lydia Kumar: I have one final question. I'm curious about an idea or question about AI that is sitting with you right now, sparking hope, concern, or curiosity. What is the thing that you keep thinking about?

    Brian Jefferson: As a recovering CPA, I have to start with the skeptical and the negative; it's just in our nature. I am very concerned about the future of work, particularly for new students coming out of university and what their role is in the space that I've come up working in. I don't have a great answer for how you prepare for a career where you used to start by doing the thing—preparing the financial statements, preparing the tax return—in a future where that will be in large part done by artificial intelligence or offshored, and now you're going to be reviewing work when you've never actually done it yourself. That is a question that I have not heard great, fully fleshed-out answers to from my former firm or any of its competitors yet. I think it's really worth figuring out how we create great professionals who are going to be excellent at client service when they haven't come up in exactly the same way that I did.

    Now, I am filled with a ton of hope when I think about the fact that so many of those things that I was doing felt like I was wasting brain cells and wasting time. If we can take the time we're gaining from offloading rote tasks to AI and shift that time to working on all these great client service and human elements, that excites me. But I don't think that playbook is fully written yet. I think my job right now is to keep pushing employers to tell me what that vision looks like, and from my perspective, continuing to prepare students so that they just know all the tools and are ready. For the majority of folks, I don't know if AI is going to take your job, but I do know this maxim is true: people who really understand and are open to working in a new way with artificial intelligence will be better suited for the jobs of the future. So I just want to give my students the best chance for success because the story is still in progress.

    Lydia Kumar: Absolutely. We're in a moment of change, and I really appreciate your leadership in thinking about how you help our young people navigate that. There's always uncertainty, but right now it feels like we have more uncertainty, and so being able to prepare them for work that's going to look different than it did in the past is a really important job.

    Brian Jefferson: I want them to increasingly, over the next couple of years, view that challenge as an opportunity and have the skills to be able to make that mental transition themselves.

    (Outro Music)

    Lydia Kumar: What an inspiring conversation with Brian Jefferson. He is a true innovator in business education, and we thank him so much for sharing his on-the-ground insights from the university classroom. I have a few takeaways from our discussion that I want to share.

    First, we can move from fear to fun. Brian's approach shows how awe and whimsy can be powerful antidotes to AI anxiety, creating an environment where students are excited to experiment and build their skills.

    Second, we can look beyond automation to enhancement. Brian's experiences show the potential for AI to act as a cognitive partner—a tool that helps us practice difficult conversations, debate ideas, and sharpen our own thinking before we ever walk into the classroom or boardroom.

    And finally, we must prepare students for the future of work, not the past. This means training them not just to use AI, but to audit it, critique it, and keep a human in the loop, developing the critical judgment that will be invaluable in their careers.

    To learn more about Brian's work, check out the links in today's show notes at kinwise.org/podcast. And if your own school or district is ready to build a community for ethical and effective AI use, Kinwise offers everything from educator PD pilots to leadership labs that help you draft board-ready guidelines. Details and bookings are at kinwise.org/pilot.

    And finally, if you found value in this podcast, the best way to support the show is to subscribe, leave a quick review, or share the episode with a friend. It makes a huge difference.

    Until next time, stay curious, stay grounded, and stay Kinwise.

    (Music Fades)

    • LinkedIn Profile: Brian Jefferson on LinkedIn. Follow Brian for his frequent video tutorials on using AI for professional and personal productivity.

    • Faculty Page: University of Maryland, Robert H. Smith School of Business

    • AI at Smith: Explore how the University of Maryland's Robert H. Smith School of Business is integrating AI into its curriculum and research.

    • How to Create a Custom GPT (Zapier Guide): A step-by-step beginner's guide on how to create your own custom version of ChatGPT, similar to the process Brian uses with his students.

    • Ethan Mollick's Work: Lydia and Brian discussed Wharton professor Ethan Mollick’s ideas on using AI as a sparring partner and creating different AI "personalities" to improve creative and critical work. You can explore his work at his popular Substack, One Useful Thing.

  • 1. The Classroom Co-Pilot Builder

    Brian’s most detailed example was co-creating a Custom GPT with his students to act as their teaching assistant. This prompt initiates that process by turning ChatGPT into a facilitator to help an educator design the "Instructions" for their own classroom GPT.

    Use Case: Designing a Custom AI Tool for a Specific Group

    The Prompt:

    Act as an expert in pedagogy and AI integration. I am a [Your Subject, e.g., "High School History"] teacher, and I want to build a Custom GPT to act as a 24/7 teaching assistant for my students. Your goal is to help me write the "Instructions" for this GPT. To do this, ask me a series of questions inspired by Brian Jefferson's method. These questions should help me define:

    1. The GPT's primary role and purpose (e.g., study partner, Socratic questioner, project brainstormer).
    2. Its personality and tone (e.g., encouraging, witty, formal, like a helpful peer).
    3. The specific tasks it should be able to perform.
    4. The constraints on its behavior (e.g., "Do not give direct answers; instead, guide the student with questions.").

    Start by asking me the first question.

    2. The Creative Concept Illustrator

    Brian described using "play" as an antidote to fear, giving his tax students a fun, low-stakes assignment to use an image generator like DALL-E to create a poster for a complex topic.

    Use Case: Visualizing Abstract Ideas and Making Learning Fun

    The Prompt (for an image generator like DALL-E):

    Create a visually engaging and slightly humorous poster designed to explain the tax concept of "capital gains." The style should be like a vintage travel poster from the 1950s. The poster should feature a rocket ship labeled "Investment" taking off towards a planet made of gold coins. Include the tagline: "Capital Gains: Your Ticket to Financial Frontiers!"

    3. The High-Stakes Practice Partner

    Brian detailed how he uses his AI voice assistant on his commute to practice his lectures, asking it to be critical and interrupt him. This prompt simulates that high-stakes practice for any professional presentation.

    Use Case: Role-Playing and Public Speaking Rehearsal

    The Prompt:

    I need to practice a 5-minute pitch for my company, Kinwise, to a group of skeptical school district superintendents. I will present my pitch, and you will play the role of a superintendent. After I am done, I want you to give me critical feedback from that perspective. Ask me 2-3 challenging follow-up questions that a real superintendent might ask, focusing on budget constraints, data privacy, and the challenges of teacher training and adoption. Let me know when you are ready for me to begin my pitch.

    4. The Personalized AI Exploration Guide

    Toward the end of the conversation, Brian shared a powerful method he used to help his mom: having her list her passions and challenges, and then asking GPT to create a personalized guide for how its features could help her.

    Use Case: Creating a Custom Onboarding Plan for a New AI User

    The Prompt:

    Act as a personal AI tutor. I am providing you with a list of my personal hobbies and current professional challenges. Your task is to create a personalized "AI Exploration Guide" for me. The guide should suggest 3 specific features of ChatGPT (e.g., data analysis with Code Interpreter, image generation, brainstorming) and explain how I could apply each one to either enhance my hobbies or help solve my challenges.

    My Hobbies:
    - Marathon running
    - Exploring new restaurants in Durham, NC
    - Planning international travel

    My Professional Challenges:
    - Finding new clients for my consulting business, Kinwise.
    - Coming up with fresh topics for my podcast, "Kinwise Conversations."
    - Managing my time effectively between business development and client work.

    5. The Socratic Sparring Partner

    Lydia and Brian discussed moving beyond AI as a simple answer machine to using it as a Socratic partner that challenges your thinking, a core component of "cognitive enhancement."

    Use Case: Deepening Understanding and Critical Thinking

    The Prompt:

    I want to explore a belief I hold. Your role is to act as a Socratic questioner. Do not provide direct answers or opinions. Instead, respond to my statement only with questions that challenge my assumptions, ask for definitions, and force me to provide evidence for my claims. Help me examine the foundations of my belief through questioning. The belief I want to explore is: "AI will ultimately reduce, not increase, the workload for educators."

  • Brian Jefferson is a Lecturer at the University of Maryland's Robert H. Smith School of Business, where he pioneers the use of AI in accounting and business ethics education. With over 20 years of experience as a Lead Tax Partner at PwC, Brian brings a real-world perspective on how technology is reshaping professional services. His teaching is dedicated to preparing students for a human-led, technology-enhanced future by developing their critical thinking, professional judgment, and a collaborative, problem-solving mindset.
