Episode 2: What's Ours to Hold? Travis Packer on AI, Systems, and the Future of Human Work
Episode 2 of Kinwise Conversations · Hit play or read the transcript below
-
Lydia: Welcome to Kinwise Conversations, where we explore what it means to integrate AI into our work and lives with care, clarity, and creativity. Each episode, we talk with everyday leaders navigating these powerful tools—balancing innovation with intention and technology with humanity. I’m your host, Lydia Kumar.
In today’s rapidly evolving landscape, understanding how AI is reshaping not just individual tasks but entire organizational systems is crucial. That’s why I’m so pleased to be speaking with Travis Packer. Travis is a consultant at The Ready, dedicated to helping organizations navigate change and improve how they function, with a rich background spanning law, education, and executive coaching. Travis offers a unique lens on how individuals and systems adapt—or resist—new technologies like AI. We’ll explore his journey, the real-world applications and challenges of AI he’s encountering, and some of the bigger questions about what we want to hold onto as humans in this technological wave.
One note about this episode: Travis was having some internet connectivity issues, and at times the volume cuts in and out. I really hope you’ll stick with us because this conversation is rich and important.
Lydia: Travis, thank you so much for being here with me today. To get started, I want to hear a little bit about who you are and your journey. I know your background—you went to law school, you’ve taught, you’ve done executive coaching, and now you work at The Ready. I’d like you to orient our listeners: who are you?
Travis: That’s a great question. I don’t know how to summarize it better than you just did. Maybe the through-line is that I’m someone who’s always searching for what feels most useful to myself and to the world. I love learning and trying out new things. Over my career, whenever I hit a point of frustration or boredom, I’ve been fairly willing to move on to the next thing rather than stay in one place. I guess I’m just unsatisfied with staying put.
Lydia: That makes sense. It feels clear from the fact that you’ve been in a lot of different industries over time. A lot of your work is around leadership and helping organizations function well. Do you have any thoughts about how your journey has shaped what you’ve learned about leadership or transformation? Beyond always being ready for something new, what’s been consistent for you in your perspective?
Travis: I hope I’m still learning, but what’s become clear is that there’s no silver bullet. I got into coaching because I needed coaching myself and found it so useful—you know, “I want everyone to have this.” Once I started doing executive coaching, though, I realized that coaching a leader is only part of the answer. That leader goes back into a system that also needs work; otherwise they end up banging their head against the wall again. I love watching how that intersection works: the individual within the system. It’s about asking, “Which lever do you pull, at what time, and how hard do you pull it?” That holistic view—balancing individual coaching with systemic change—has been a through-line for me.
Lydia: That’s really interesting. Since this podcast is about how people are using generative AI—a new, disruptive technology that’s changing work and interactions—when did you first begin seeing its usefulness within systems, or in how people operate within them?
Travis: At first, I was very skeptical—maybe even resistant. One of my colleagues was saying two years ago, “This is going to be part of our work soon; we have to start now.” I thought, “No way. I don’t play around with it; it’s terrible. I don’t see the utility.” I was also worried that I was training something that would eventually replace me. But over time, I saw practical use cases. For example, it’s surprisingly good at writing emails—basic stuff, but time-saving. So about a year ago, we started exploring it in our company. As AI models improve, they get better at tasks that matter. Practicing prompt engineering is like playing piano: the more you practice, the better the output becomes. Today, AI still can’t do everything, and sometimes switching between tools is inefficient. But as I practice it more, I see that it improves. It can handle routine writing, which frees me up for other work.
Lydia: Is there something you used to spend a lot of time on that AI now does for you—so you can use that time differently? Any recent examples?
Travis: Yes—on a recent sustainability project, my colleague John and I did a lot of what we call “discovery interviews.” We talked with people at all levels of the company to understand what was happening. Previously, one person would interview while another transcribed every word, then we manually combed through transcripts for themes and quotes. Now, we record on Zoom, get the transcript automatically, and feed 25 interview transcripts into ChatGPT. We ask: “Here are our big questions; pull themes and relevant quotes, and indicate whether people agree or disagree on each point.” AI compiles that in seconds, saving us dozens of hours. But we still manually verify quotes because AI can hallucinate. Also, when AI delivers themes and quotes, our client often needs time to absorb it before moving on to design principles. AI can move faster than humans process information, so part of our role is pacing the work so that clients can keep up.
Lydia: That resonates with me. Generative AI speeds up content generation, but then you need to step back, validate and evaluate—which is tiring. The tool moves faster than we do. How do you work alongside something that processes more quickly than humans?
Travis: One place where AI isn’t great yet is replicating a unique writing style. When you read something obviously produced by AI that hasn’t been refined, it all sounds the same. You can tell someone just fed a few words into ChatGPT and hit send. I don’t want to read that because it hasn’t been shaped. So I still spend time writing to ensure my voice comes through. Authorial voice is distinct from AI—at least right now.
Lydia: I agree. I’ve seen people craft prompts that sound more human, but usually you have to train the model on a particular writing style. Otherwise, it defaults to its own voice. Have you had conversations with your team or clients that felt energizing, concerning, or surprising around AI?
Travis: Yes. A couple of papers came out recently—one called “AI 2027” and a rebuttal—that are both concerning and interesting. They suggest that we might create something we can’t control. The conversation I’m most interested in is: what is the future of human endeavor? If AI can do almost everything we do—starting with knowledge work and eventually other fields—what will humans hold onto? For example, AI could compose music perfectly, but I have zero interest in AI-generated music. The story behind the artist matters to me. Even if you gave AI a backstory, I’d still prefer a human composition. Similarly, AI might coach sports teams better than any human, but I don’t want that. There’s something powerful about a human-to-human coaching relationship. In education, AI tutoring tools have potential, but there’s a difference between learning from a computer and learning from a person. That social, human element is essential.
I also wouldn’t want to read an AI-generated book. I want to know why a real person wrote it—what motivated them, what perspective they bring. AI can’t replicate that. So I think humans will need to focus on what only humans can do and what other humans prefer humans to do. It’s fine if AI writes a basic email, but I’d care if AI wrote a personal letter. Determining when to use AI and when not to is crucial.
Lydia: That reminds me that education isn’t just about subject matter; there’s a socialization aspect. AI can’t teach kids how to interact face to face. And with podcasts, I wouldn’t listen to a podcast that was obviously AI-generated. There’s no story. It’s not about the perfect sentences—it’s about the human behind it. It will be fascinating if we automate so much that people have more free time—what will they create? What will they do with that time? Do you see your clients having these conversations, or are some ahead of the curve and some lagging?
Travis: It varies by client. Some say, “We have Copilot on our computers, and we use it occasionally.” Others are obsessed, trying to automate everything. Internally, and with clients, the first question is always: “If you intend to use AI—and you probably should—how will you bring people along?” When personal computers first arrived, we didn’t just give one to everyone and say, “Figure it out.” We provided training. With AI, it’s more disorganized. Many clients have just enabled Copilot, so now they see that 10 percent of users use it regularly, 20 percent know how to use it at all, and the rest are confused. So the question becomes, “How do we modernize our workforce, not just by teaching skills, but by addressing the social side: ‘Will this tool replace me?’” Companies haven’t fully figured out that conversation yet. We’ll be working on it over the next year.
Lydia: That’s top of mind for me too. I realize I should ask you to explain what The Ready does.
Travis: Sure. I work for The Ready, a small organization that helps companies solve cross-functional problems, reduce bureaucracy, and figure out how to work better together. That includes—but isn’t limited to—helping them decide how to use AI and how to bring everyone along.
Lydia: That feels very relevant. It’ll be interesting when different companies are ready to have that conversation because it takes time and might require fundamental changes to how teams operate.
Travis: Exactly. And the answer will differ by organization, but what won’t differ is that you need to start figuring it out.
Lydia: I want to ask a couple more questions about how using this technology has changed how you show up as a leader.
Travis: I don’t think it has—yet. In an ideal world, AI would create space so I can show up the way I want to. AI can’t set and hold my intention; only I can do that. It can reduce some overwhelm—the feeling of being pulled in a thousand directions—but I still feel as busy as before. I hope one day AI helps me carve out room to be more intentional, but I haven’t found that yet.
Lydia: I’ve been reading Slow Productivity by Cal Newport. He talks about knowledge work and how we constantly push ourselves to our limit: “get as much done as possible.” With AI, if I write emails faster, I might end up responding to more emails because everyone’s moving faster. That concerns me. In a perfect world, AI gives us meaningful space to create better work and show up with intention. But our culture rewards “more is better.” Now we can do more faster than ever—will we have the self-control to slow down? If we do, we could create higher-quality work and build better relationships. If we don’t, we’ll just churn out more stuff.
Travis: I agree. AI can produce a lot of “slop” quickly—unfiltered AI writing getting passed around. In the long term, we’ll need to reorganize how work is structured. In the short term, though, we’re already experiencing a flood of AI-generated content. We’re the middle people between AI and our audiences. Companies and individuals need to be very purposeful and slow down to aim for quality over quantity.
Lydia: Right. If I flood my coworkers with AI-generated reports, they’ll probably run them through AI themselves just to process it all. It creates a loop: AI writes, humans skim, AI summarizes. How do we break that?
Travis: I think the average worker isn’t quite at that point yet. Once everyone uses the tool all the time, we’ll get flooded with AI content—hopefully we’ll pare it down to essential, high-quality work, but that requires discipline on the human side.
Lydia: Definitely. Right now, if you use AI in an organization that hasn’t discussed it, you can look like a star. There’s little motivation to be transparent. If your organization isn’t talking about it, you can get ahead. But once there’s awareness, you’ll need to be careful not to overwhelm colleagues with AI slop. Social pressure can be helpful in curbing excessive use.
Travis: Agreed. Also, we need to recognize human cognitive limits. For instance, I drafted an internal governance proposal using ChatGPT. It took our ideas from a meeting and turned them into a coherent document, but it was way too long. I had to pare it down multiple times because only a tiny fraction of a busy person’s workweek can be devoted to reading a proposal. More content doesn’t equal more value. People can only process so much.
Lydia: If you wrote a giant proposal by hand, people would admire your hard work. If you generate it with AI, they think, “Oh, Travis used ChatGPT.” In a company that hasn’t discussed AI, that pushes you to be more intentional about what you produce. You don’t want to annoy your colleagues with AI swill. A little social pressure can shape better behavior.
Travis: Exactly.
Lydia: Two last questions: First, what do you hope brave, intentional work looks like in a world increasingly shaped by AI?
Travis: I think “intentional” is the key. Organizations need to answer: what do we want AI to help with, and what do we want to keep as distinctly ours? What is our fundamental offering—what do we do best, what do we enjoy doing, and what will other humans continue to appreciate? That question requires intentionality and will evolve over time. You might have one answer today and a different answer in a year—and that’s fine.
Lydia: That’s a great question to carry forward. Finally, is there an idea or question about AI you can’t stop thinking about—something that keeps you up at night or you bring up with friends or coworkers?
Travis: I’m fascinated by whether AI is “the tech to end all tech” or just another technology—and how we treat it. What do the next two to five years look like? Human history shows that, with every transformative technology, some people benefit enormously while others get left behind. I’m concerned about AI’s environmental impact and workforce impact. If AI replaces, say, 25 percent of the workforce in the next five years, that’s a monumental shift. How do we re-employ those people? How do we take care of each other? Historically, our track record isn’t great. I hope we become more generous, but I’m not sure how. That question—how do we take care of each other through this transition—keeps me up.
Lydia: I’ll hold onto that, Travis, because you ended on a downer in a meaningful way. How we take care of each other is both a personal challenge and a societal question.
Lydia (closing): My thanks again to Travis Packer for sharing his experiences and insights on navigating AI in the workplace. His reflections on the future of human endeavor, practical applications of AI in organizational development, and crucial questions about maintaining our values and taking care of each other during this transition are incredibly timely. It leaves us with much to consider about the choices we’re making, both individually and collectively. As AI continues to evolve, the intentionality Travis spoke about will only become more critical, as will a commitment to continuous learning and adaptation.
Coming up on Kinwise Conversations, we’re diving deep into the world of education with Vera Cubero, who is leading the charge on developing AI guidelines and fostering AI fluency in North Carolina schools. If you found Travis’s thoughts on systemic change compelling, you’ll appreciate Vera’s practical approach to navigating AI in our K–12 systems. And if today’s conversation about maintaining human values and evolving workplaces resonated with you, you might also enjoy revisiting my earlier chat with Celia Jones, where we discussed discernment and human-centered communication.
I hope this discussion has encouraged you wherever you are on your own AI path. If you enjoyed this conversation, please subscribe and consider leaving a review—it truly helps other thoughtful listeners find us. You can learn more about how to approach AI with intention, explore resources, and join the Kinwise Collective by visiting kinwise.org. If you or someone you know is doing interesting work at the intersection of AI and humanity and has a story to share, we’d love to hear from you. Until next time, stay curious, stay grounded, and stay Kinwise.
-
Five prompts for putting this conversation into practice:
1. For Defining "Human Work"
"Act as a strategic consultant for a [Your Industry, e.g., marketing] team. We want to integrate AI to handle routine tasks so we can focus on higher-value, human-centric work. Based on the principle of 'holding onto what's human,' lead me through a process to identify:
Tasks that are ideal candidates for AI automation.
Core activities that should remain human-led because they require deep empathy, strategic intention, or complex relationship-building.
A set of guiding principles our team can use to make these distinctions in the future."
2. For Analyzing Qualitative Data
"I'm providing you with the transcript from an interview below. Please analyze it and perform the following tasks:
Identify the 3-5 major themes that emerge from the conversation.
For each theme, pull 1-2 direct quotes from the text that best illustrate it.
Summarize any points where the speakers are in strong agreement and any points where they seem to have tension or disagreement.
Do not invent information. If a quote cannot be found to support a theme, state that.
[Paste your transcript here]"
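If you're running this analysis across many transcripts—as Travis describes doing with 25 discovery interviews—it can help to assemble the prompt programmatically so each quote can be traced back to its source interview. The sketch below is a minimal illustration, not part of Travis's actual workflow; the labeling scheme and any API wiring are assumptions.

```python
# Sketch: batch several interview transcripts into the analysis prompt
# from section 2, labeling each transcript so pulled quotes can be
# traced back to a specific interview.

ANALYSIS_TASKS = """\
1. Identify the 3-5 major themes that emerge from the conversations.
2. For each theme, pull 1-2 direct quotes from the text that best illustrate it.
3. Summarize points of strong agreement and points of tension or disagreement.
Do not invent information. If a quote cannot be found to support a theme, state that."""

def build_analysis_prompt(transcripts):
    """Combine interview transcripts into one analysis prompt,
    labeling each so quotes remain traceable to their source."""
    labeled = [
        f"--- Interview {i + 1} ---\n{text.strip()}"
        for i, text in enumerate(transcripts)
    ]
    return (
        "I'm providing interview transcripts below. "
        "Please analyze them and perform the following tasks:\n"
        f"{ANALYSIS_TASKS}\n\n" + "\n\n".join(labeled)
    )

# To send the prompt to a model, you could use a client such as the
# OpenAI SDK (the model name here is an assumption):
#
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": build_analysis_prompt(texts)}],
# )
```

As Travis notes, the model's output still needs human verification: spot-check every quote against the original transcript before it reaches a client.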
3. For Pacing AI Integration with a Team
"My organization is introducing a new AI tool [Name of tool or its function, e.g., 'that helps write internal communications'] to our team. I'm concerned about overwhelming my colleagues and want to manage the rollout intentionally. Draft a communication plan that addresses:
How to introduce the tool without causing fear or anxiety.
A plan for training that respects that people learn at different paces.
How to create a space for open conversation about what's working and what isn't."
4. For Finding Your Voice in AI-Generated Text
"I've used AI to generate a draft of an email, but it sounds robotic and lacks a personal voice. I'm providing the draft below. My goal is to sound [Choose 3-5 adjectives, e.g., 'collaborative, insightful, and clear, but also warm and approachable']. Please help me rewrite it to better reflect this voice. Point out specific changes you made and why.
[Paste your AI-generated draft here]"
5. For Envisioning the Future of Your Role
"I am a [Your Job Title] and I want to proactively think about how my role will evolve over the next five years due to AI. Act as a future-of-work strategist and help me brainstorm:
Which parts of my current job might be automated?
What new, uniquely human skills will become more valuable?
What are 3 concrete actions I can take in the next year to start preparing for this shift and make myself an indispensable 'human-in-the-loop'?"