17. AI at Scale: Susan C. McLeod on Pilots, People, and Knowing the Problem
Season 2, Episode 6 of Kinwise Conversations · Hit play or read the transcript
-
Welcome back to Kinwise Conversations, where we explore the real crossroads of humanity and technology. Today we're joined by Susan McLeod, the newly appointed VP of Data Center Market Development at Hitachi Energy. Before stepping into the energy and sustainability space, she was an executive advisor at Envira Global, a woman-owned business accelerating sustainable infrastructure and smart cities.
If you're a leader wondering how to prep your systems, people, and data for AI, this conversation is for you. Susan shares lessons from the enterprise world that apply just as powerfully in education, including why communication is the skill that will shape our AI future.
Lydia Kumar: Hello everyone. I am so excited to be here today with Susan C. McLeod. Susan's this rare leader who can talk code with engineers and strategy with execs and make both sides feel heard. She's helped big companies actually make AI useful, and she cares about doing it in a way that's good for people and for the planet.
Susan C. McLeod: Thank you, Lydia. Thanks so much for having me, and I'm excited to talk to you and just have this discussion. My background is over 20 years in IT, specifically focusing on data applications within data centers and enterprise businesses. I really started with a focus in professional services and services delivery and then evolved over time into the support organization—the post-sale side of the business. It's been an incredible journey for me, and I'm excited to share some of the learnings and insight specifically focused on generative AI and what's happening in our space today.
Lydia Kumar: Amazing. And speaking of AI, I am really curious about how AI first showed up in the picture for you. People became aware of artificial intelligence, or generative AI specifically, at different moments. So for you, what did that look like and what was your experience?
Susan C. McLeod: That's a great question. In the last seven years, I ran global support and success for Hitachi Vantara, which was a great experience. As you can imagine with services and support, AI and generative AI specifically have been looked at as one of the low-hanging fruit areas that can really bring value to services and support call center organizations. I'll be honest, I was very heads-down in looking at bots and how we enable self-service from our customer service and support lens at Hitachi Vantara.
Generative AI came onto the scene back in 2021-22 really quickly. At the time, I was still very much focused on enabling the bot through the Salesforce CRM tool. We had to stop and pause and say, 'Okay, this is what we're working toward.' In a large organization, it takes time. You can't just go turn it on, right? You're working from established data sets and established tools, and we had to just take a break and say, 'Okay, wait a minute. Do we continue down this path of enabling this feature function when we could pause and actually leapfrog to the future state, the next generation offerings that bring in true generative value?'
And we did. We had to pause and say, 'Okay, let's level-set and figure out how we ready our data, ready our tools, and ready our solution to take advantage of this new offering through generative AI,' value that Salesforce, the CRM platform we were using, was bringing. So, I don't wanna say it caught me off guard, but it was moving so fast, and with large enterprises, you have a timeline to ready for certain rollouts. It was moving so quickly and bringing so much value, we had to pause and say, 'We're not even gonna try to do A, we're gonna wait, get ready, and jump to C,' if that's a good way to describe it.
Lydia Kumar: Yeah, it makes a lot of sense. AI in general has changed and moved so fast over the past few years. So you had the foresight to say, 'How do we prepare ourselves to take advantage of the new technology?' I think there was a recent report about how 95% of AI pilots are failing, and part of that failure rate probably has to do with the preparation that underlies them. I'm curious, as you were at Hitachi deciding to take advantage of this technology, what kind of preparation did you have to do as an organization to make that happen?
Susan C. McLeod: That's a great question, and I read that article as well. I believe what's happening in the space is everyone is just moving so quickly and there's a lot of pressure to take advantage of and roll out generative AI within enterprises because it's just the hot topic right now and it's so new.
Companies and corporations still have to do their due diligence in understanding, 'What is the problem we're trying to solve? What is the right use case for our business where we can have success?' and bite it off in small chunks. Don't just go out and say, 'We're gonna invest X dollars in generative AI,' and just start developing and testing. You still have to go through the process like you do in any solution, which is understanding the problem you're trying to solve. Understand, is it gonna bring financial savings? Is it gonna bring just time management savings? What is the return on investment gonna look like so that you can plan for it correctly and be successful?
I would say with our team and the team at Hitachi Vantara, they did a really good job at identifying use cases that we could be successful in, attacking those small use cases first versus trying to be everything to everyone. I think a lot of companies are trying to reel it back now to say, 'All right, we've invested in these tools, we've invested in this licensing. Now how do we actually use them to create value and efficiencies for our organization?'
Lydia Kumar: Right. There's been so much hype and pressure to take advantage of this technology because it seems like it can create a tremendous amount of value at the organizational level. I think, in part, this is because as individuals, it's very easy to adapt and create individual value, but at an organizational level, there's a lot more data and many more challenges in terms of identifying the right use cases, ensuring that you have the right data in place to use the technology effectively, and navigating the legal or policy components. So it's very complex at an organizational level, even though it's incredibly simple at an individual level to open up a generative AI tool and start using it to collaborate or create something that can help you work more effectively.
Susan C. McLeod: Yeah, that's a great way to describe it. Think of Microsoft Copilot or Gemini or Salesforce's agents. And you also have to think about security and compliance, the whole security wrapper around enterprises. They are creating their own environments where employees can utilize AI to do their work, but do it in a protected fashion, right? So the data they're using is still protected within their own cloud and security perimeter. You're not just releasing proprietary IP out into, say, OpenAI's ChatGPT, which I do use personally for my own creation. You can't necessarily do that through an enterprise. You have to work through your protected data sets.
And it takes time for organizations, and IT organizations especially, to ready that and make sure the security policies are in place. But there are so many different ways as organizations do that, that the individual employee can use it for efficiencies. And you know, why start from scratch in creating a document when you can feed the prompts to your enterprise AI tool and ask it to create something for you, giving it the guidelines, giving it even templates. It'll get that individual 70% of the way there, which is great. And then you just take it and you customize it. You put your own message and tone, make sure it reflects your goals, and then you've saved tremendous time for efficiency gains for individuals.
Lydia Kumar: Absolutely. And being able to set up those systems so that individuals can use them in a compliant way is very challenging. I think for people who may have relied on ChatGPT or some sort of publicly available tool, that's change management in and of itself, to move from using a tool like ChatGPT to an enterprise-based tool. And being able to understand the types of data or information that is proprietary that you don't want publicly shared versus, I don't know, maybe there are some things that you can brainstorm that are not proprietary but may be related to your work. There's this education component of helping people make choices and also the change management piece of helping people understand how important it is to protect certain types of data so that they stay inside your enterprise system.
Susan C. McLeod: Yeah, that's great. And it is change management, and just awareness and training. To your point, you were spot on. It's training. For my work now, there are still things where I'm using my personal ChatGPT or DeepSeek. I actually have three different tools, and I'll go to each and ask for different ideas, because you have to validate your sources and make sure you're not getting bad data, 'cause it's feeding you off of what's out there. And there are so many bad, incorrect sources out there as well. So you really have to validate your sources.
But I will do research on overall trends and market trends, but I can't feed it anything specific to my business. Because once you enter it into DeepSeek or ChatGPT, it's out there now. It's out and utilized in the public domain. So you do have to be very cautious of that. And I think the training that enterprises need to do for their employees is absolutely critical in understanding how to use your internal IT AI tools versus your personal and public AI tools. And it's changing every day. You and I were talking last week about how quickly this industry and this specific space is changing, and it's a challenge for organizations to keep up and be able to train and educate their teams.
Lydia Kumar: Yes, and I think understanding how the technology works, even at a very basic level—about how AI tools are trained, how data labeling works, where the data comes from—I think all of that is important too. Because if you understand at least a little bit of how generative AI or AI tools function, then you can begin to make choices that are more aligned with the way that you should use the tool. You can be more critical. You can question and say, 'Based on what I understand about how AI tools work, I wouldn't wanna put this information into DeepSeek or ChatGPT.' So I think building that understanding is... there's this compliance level of saying, 'Okay, it's really important that you don't put X, Y, and Z in a public generative AI tool.' And then I think there's also this component of, 'And this is how it works,' so that employees can make choices that are more aligned to what they hope to accomplish.
Susan C. McLeod: Yeah, absolutely, I agree with you.
Lydia Kumar: You've been leading some pretty complex initiatives at Hitachi, and you're moving into energy-focused work. I'm curious about a lesson you've learned over time about turning a complex technology into something that is usable and a win for people on the ground.
Susan C. McLeod: That is a great question. I would say the learning has been, and this gets back to one of our prior discussions, really understanding the problem you're trying to solve. And I think you have to stay true to that. Whether it's an internal rollout or solutioning for a customer's organization, you really have to understand, 'What is the problem statement that customer has that you are trying to solve?' If you can stick to that and stay true to it as you define the problem and the solutions you bring to the table, you are gonna avoid so many of the misfires. Again, to your point about the article that came out last week about pilots failing, I think you really have to stay true to what it is you're trying to solve.
And as a leader, within any organization or even just personally as you're rolling out, you have to be able to communicate and have that strong communication mindset, so that everyone on your team, the people you're working with and influencing, is really marching to the same drum. You're all on the same path toward the solution, versus everyone out there trying to do really good things with good intent but not being in sync on how they're rolling it out. That's how a lot of companies get into trouble. So those communication skills are absolutely critical, especially now with generative AI. You know, we think it's gonna take over everyone's jobs in certain roles. What I believe, in my opinion, is that it's gonna make communication even more critical for these businesses and leaders, because we have to set the tone and the strategy that the company, or the solution set, is driving toward.
Lydia Kumar: The communication piece makes me think... a couple of years ago when I first started my career, I was a teacher. And something that teachers say a lot is, 'Just because you taught it doesn't mean students learned it.' And I think there's some tie-in in communication in general. Just because you say something doesn't mean your team is on board. It doesn't mean that the message that you said one time is there. And so it's really important once you identify a problem for everyone to be working towards solving that problem in an aligned and collaborative way. That focus on communication in our increasingly complex world is so key in creating alignment between individuals. And acknowledging that people don't necessarily understand what you said the first time. Communication is an ongoing experience. And so just because you have communicated it once doesn't mean that everyone's on the same page. Being able to validate that and go back to the table, I think can be really valuable and powerful too.
Susan C. McLeod: Absolutely. And I'm not sure where I heard this throughout my career, it was in one training years ago, but it's, you know, you have to say it and reinforce it seven times. Say it, say it, say it again, seven times quite often before people truly absorb it, understand it, and then can echo it back to you. And I think as leaders, that is something we have to remember. Everyone is wired a bit differently. Everyone absorbs information differently. So as you're talking to your teams and readying your teams, especially for change management and transformation, you have to be very crystal clear in setting the direction or the North Star of the organization and where you're going, and then continue to reemphasize it to the different groups to make sure that they truly are on the same page. AI in particular has just added a level of complexity and change.
Lydia Kumar: Along with the many other things that are happening in our world right now that organizations have to navigate. And that ability to communicate when you're in a time of immense change is even more critical. A moment ago you said, 'I'm not worried as much about people's jobs being taken as how do we build that skill of communication so that we can use the technology that we have and move effectively forward.'
Susan C. McLeod: Very true. And look, the jobs are gonna change. I think that's the key, and there's a lot of fear. There's a lot of fear and uncertainty, and I understand it, especially as organizations are so quickly rolling out generative AI solutions or AI solutions, and we're seeing it. We're seeing jobs change, and we're seeing some impacts.
I think for the next generation—I know for you, your passion is education—and as you think about the next generation coming up, I think we really need to think about what those new roles of the future look like. And, you know, we are seeing it. So again, for me to say I'm not stressed about the jobs wouldn't be intellectually honest. The roles are changing. And if you're in certain roles that can be done now through tools being rolled out within organizations, you probably do need to really think about how you evolve and adapt with the new tool sets that are coming out.
AI is a tool. Generative AI is truly a tool, and companies are using it to do things more effectively and efficiently, which means they may need fewer people to do that. So as that is evolving, individuals need to think about how they modify or adapt to these new roles. And that's where, to your point about communication, we're always gonna need people to communicate with customers, to speak to customers, to determine what you want that user interface to look like. We may use AI and generative AI to build and code in the future, but those individuals are still gonna be needed to define what that solution looks like. So the jobs are evolving and the roles are evolving, and I think that's what everyone needs to really think about, especially the younger generation getting ready to go into college, right? Where people before were so excited about being coders, I think those individuals need to really think about how you put more of a business lens, a finance lens, a communication lens around the technology to ensure that you have flexibility and agility when you come out of college.
Lydia Kumar: It's making me think about something I feel like I've heard a lot recently, which is the importance of soft skills: your ability to interact with other people effectively. You're able to communicate, get your point across, and understand the situation you're in, because a lot of the analytical pieces, that kind of left-brain thinking, is something AI can do very effectively. So we're upskilling the current adults in the workforce, upskilling the college students who are about to enter it, and then we have this whole group of K-12 students in school right now. How do you become more communicative? How do you understand the macro environment you're in so you can make some of those decisions? It feels like a piece of it. What do you see for teachers right now, and maybe educators in general? What do you recommend prioritizing to help them upskill students for the changing world?
Susan C. McLeod: That's a great question, and what a challenge they have, right? First of all, this generation is gonna come up with AI there, so they're just naturally going to know how to use it. They're using it every day and may not even realize what it is. It's just their norm now. And in education, especially K through 12, of course, continuing that training, helping them understand how to use it... I believe, though, it's gonna be even more important for the teachers to find ways to delineate between the technology and the tools, 'cause it's a tool, and keep focus on overall human critical thinking.
And how do you ensure that while they've got this incredible tool set access in everyday life, everything they're doing today, they continue to focus on critical thinking and how to think through different options, decision making, and validating the data that's presented to them. That's the big thing. Have you checked the sources? How do you vet the data? How do you know the data is correct? And for me, I think that's something teachers are really gonna have to think through: how to ingrain that into this generation, that what's coming out of these tools is not just solid truth, right? It's not the source of truth from a data perspective. You have to vet it. You have to know your source.
Lydia Kumar: It's interesting because I was working with another organization, helping them develop an AI strategy and recommendations for how AI should be used within their organization. In our conversation, we talked about how they were seeing some employees give their own critical thinking a backseat and say, 'Oh, this output looks so good. I'm just gonna use this AI-generated output,' without necessarily evaluating it. They kind of stop respecting their own expertise, because in five minutes you can write some prompts and get this amazing-looking document with no grammatical errors. It's so easy to say, 'Oh, this is perfect. Why would I do my own thing?' And we even see adults doing this, at least when they first interact with the tool, which is concerning. You want to be vetting and understanding what you are creating, why it's there, and what you think and believe about what you've done. If AI can just do all of your work for you, then you're not useful anymore.
It was a very interesting conversation because, as an organization, they did not want their employees generating everything through AI. They wanted them to really put the work in and think. The work they're doing is important, and they need to carefully vet what they create before they put it out into the world. And so it is sort of seductive to see something created so fast with no obvious errors, right? All the punctuation's in the right place. So I see that with children, with K-12, but I also see it with adults who are maybe newer to the technology. Maybe as you work with a technology more, you're less susceptible to this perfect-looking creation it made.
Susan C. McLeod: It's happening everywhere. Look, it just happened to me the other day. I was writing an email, and it was a little technical. Three years ago, I would've never questioned myself writing that email. I'm very clear on what I know, what I can talk to, and the point of delineation that, 'Okay, now I need to bring somebody else into this conversation so that they can go deeper.' I started questioning myself, and I went into ChatGPT and started asking it. I had to stop. I'm like, 'What are you doing? You never did this before. You're very comfortable and confident and capable of communicating this without having to lean on a tool.'
And so it's happening to me. I know it's happening to a lot of other people. We should never devalue our experience and our expertise. Use the tool if you have questions about, 'Oh, is that technically accurate to say it that way?' Of course, then you might wanna vet it through a tool or another subject matter expert. But it is concerning that individuals with incredible experience and IP are gonna start devaluing that. And then you've got the younger generation who are just coming up and they're so used to it being there. Are they going to even be able to learn how to critically think? And how to put that down without just immediately doing what they're used to? It's gonna be a very interesting dynamic in the future and how things evolve.
Lydia Kumar: Absolutely. I feel like education about how the technology works, perpetually reinforcing the importance of critical thinking, and making sure you know what you think, why you think it, and how the world works... all of those are such important skills that can complement the technical expertise many people are already bringing to the table.
I wanna pivot us a little bit because you have done such interesting work, particularly with women leaders. I know you've done some work with WAKE, and you also have a blog where you've brought together different executives to talk about AI. What has inspired you to bring different people together to talk about AI, and to support this larger community?
Susan C. McLeod: Thank you for asking that question. This is a very important passion of mine. The blog series... when I left Hitachi Vantara last August, I reflected on what I believed had foundationally readied the organization to now start training the LLM models, which they're doing successfully. And I love seeing and hearing about it. It was such a learning for me about what needs to happen within established enterprises, large organizations with so much data, legacy tools, and multiple tools, to actually be prepared to utilize AI, and especially the LLMs, the large language models, to execute and bring those efficiencies.
After that happened, I decided, 'You know, I'm gonna sit back and use this time to document it and educate myself even more on tools, generative AI, and what it would take.' As I started, I was gonna do it all in one blog. Then I realized very quickly, 'Oh, there's so much more to this.' In speaking to really good mentors, friends of mine in this space, they recommended, 'Susan, this should be a series. There are so many different items that have to be tackled from an organizational perspective to ready and move these forward. This could be a full series.' So I took their guidance and broke it out into different articles tracing how we as an organization had readied things and moved forward. And as I was talking to these great peers and mentors of mine, I thought, 'Wait a minute, they all have their insights as well and their own experiences.'
So I decided to bring others—and they were female. It didn't start that way. It wasn't intended for it to be all female, it's just how it evolved. And I thought, 'You know, this is actually really cool.' You've got a mix of women leaders talking about generative AI in real-world experiences and how it works, what doesn't work, and lessons learned. So that's how the blog series started. I brought in what I called co-authors to help drive different messages. Renee La Londe, who's an incredible leader, CIO-level, on boards, really sharing information and data on how to talk to boards, how to ready boards, how to ready your executive team for having successful solutions and launches using this new tool. It's been a great pleasure to work with all of these women and get their viewpoints from different industries as well and launch the blog.
In parallel to that, I've done a lot of work with a group called WAKE, the Women's Alliance for Knowledge Exchange. They do some incredible work in the US and globally, bringing female industry leaders with different levels of experience and technical skill sets into groups of younger female entrepreneurs to give guidance and feedback on how to be successful in launching their businesses. Working with the WAKE organization is an absolutely great opportunity and one of the best things I've ever done, personally and professionally.
Lydia Kumar: There's this level of giving back, right? You've learned all these things over your career and given a lot to the organizations you've worked in, but you're also able to come together with a group of women, whether that's creating dialogue and education about AI adoption or supporting the next generation of women business leaders. When I think about generative AI, it's a machine, but it's human-like. There are a lot of things it can do, but there are some things it can't. One of those, I think, is mentoring and sharing real-life experience. So even though you've been a leader who prepared a large corporation for AI adoption, you've also leaned into that human side and done the things artificial intelligence can't do: mentor, guide, support, share personal experiences. I think it's really cool to hear about both, and that the balance within you as a person is being able to ready the machine but also ready the person.
Susan C. McLeod: We have to remember at the end of the day in the business that the team members that are utilizing these tools, creating, feeding the data to the tools, they're human. And we have to make sure that we're taking care of those individuals as well. And it is a balance and it's a challenging balance, especially today because the industry is changing and moving so quickly. I think leaders just have to find a way to ensure that they're taking advantage of the technologies and the tools, but don't lose sight of your people and ensuring that you protect the human capital and the IP, which you can't get back—that knowledge that is in people's brains.
You know, so many people are retiring. I've started shifting, as you mentioned earlier, into the energy sector, which I'm so excited about, starting with Hitachi Energy. So I'm taking the generative AI knowledge, the enterprise IP knowledge, into the data center focus within Hitachi Energy. One of the challenges the energy sector has is the loss of individual knowledge. So many of the people who have run these power generation substations, the OT-skilled workers, are of retirement age, and that's left a big gap in the sector. There's a very small group of people that the energy companies and utility companies need, but now the hyperscalers need them too. It's a challenge in the industry, and it's one of the things that I know Hitachi Energy, and Hitachi as a whole, is working on: continuing to educate and share knowledge. But it's something leaders have to think about. As these individuals retire, or maybe they're not happy or feel they're not getting growth opportunities within their company, if you lose that IP to a competitor or to retirement, do you have a backfill? Is there someone shadowing them, learning from them before they retire, so you can pass that knowledge on? That's really important for companies to think about.
Lydia Kumar: It's interesting because we have this new technology that everyone's trying to upskill around, generative AI, but we also have a wealth of knowledge that's been accumulated over many, many years of personal experience that you have to maintain and figure out how to pass on. And so I feel like there's this balance right now of people trying to upskill their employees, upskill themselves, but we also have this incoming wave of folks who are ready for retirement. And they have... just brilliant people who've worked for many, many years and have a lot of specialized knowledge and leadership capacity. I think that's a real challenge for companies right now, balancing those two needs at the same time.
Susan C. McLeod: It is a big challenge for corporations. You know, everyone is trying to do the right thing and find solutions to capture knowledge and IP. Knowledge management systems are absolutely critical for organizations. Think of everything that's in your head as an individual who's worked in a certain technology for 30 years and is maybe getting ready to retire. How do you capture some of that into a knowledge management system? And guess what? That knowledge management system is now feeding into the AI, into the LLMs, the big models that are being created, so others can utilize that IP. We can't lose sight of the fact that, yes, we're training it with new technology, but we also have to find a way to capture the IP of the individuals who are retiring, and not lose the IP of people a company has invested in who, for whatever reason, may be considering leaving.
Lydia Kumar: Yeah. It's definitely interesting and something that is important, and I hope that as you step into the energy sector, you're able to learn a lot and help fill some of those gaps because energy is so important for all of us and being able to act in a sustainable and responsible way is really critical. And I think losing that knowledge is a risk for a lot of people, even who don't work directly in that sector. So it's an exciting place for you to move to and a very important place as well.
Susan C. McLeod: Absolutely. I'm very excited to bring this knowledge into the energy sector and to learn. I mean, so much within the sector sits right at the intersection of IT and OT. And energy is now right in the middle of the demand from the generative AI boom of the last two to three years. So, a very exciting place to be.
Lydia Kumar: Amazing. Okay. I have one last question for you, Susan. I always end this podcast asking folks to share an idea or a question about AI that is really sitting with you right now. So it might be the thing that's keeping you up at night, or just something that you're kind of chatting about or thinking about throughout your day. So I'm curious for you, what's your question or idea related to generative AI right now?
Susan C. McLeod: The thing that I would say is keeping me up or that I'm very thoughtful about, and we touched on it earlier, is I'm very concerned about or curious—I'm gonna say curious—about the next generation and the ability to not lose that critical thinking function and capability. Again, this next generation is gonna grow up with these tools just at hand. They don't even realize it. To them, it's just gonna be part of their daily life on their iPhones or their iPads or their systems. And it's gonna come so simple to them just to ask a question and get, to your point, a really beautifully formatted response and answer. How do we ensure that this next generation doesn't lose that capability of critical thinking?
And that is something that concerns me. I know for K through 12, as you mentioned, and the education system, it's a bit of a headwind: how are they going to overcome that and preserve that ability in the younger generation? I don't wanna say laziness, but you could see, with this type of functionality and tooling, how the human race could ultimately become very lazy. And that concerns me from a critical thinking perspective, but also in how we live our day-to-day lives. I have to say I love my pool robot now. I call him T. He cleans the bottom of the pool, which used to be really challenging to do. Okay, that's actually a great thing and I don't want it to go away. But what's next? At some point, how does it go too far? That's one of those things that, if you sit down and really think about it, can be a little scary at times.
Lydia Kumar: And thinking about what do we want to continue to keep really sharp in ourselves as humans and what do we wanna lean into and develop in ourselves and in the next generation? I think there have been lots of cycles of innovation that have led to some skills becoming a little weaker and other skills becoming stronger. And so what do we wanna prioritize? I wonder about the intentional aspect of that. I do think we have a choice as people about what we wanna prioritize in terms of what we sharpen and how we sharpen it, and how we kind of mentor and encourage the next generation too, to pay attention to the skills that they can really bring up in themselves. It is kind of frightening to think about people not thinking and losing those critical skills, but also there's a lot of potential as well. And so trying to hold those two things is a perpetual balance.
That's a wrap on our conversation with Susan McLeod, tech leader, AI strategist, and connector of connectors. Three takeaways to carry forward:
First, slow down to speed up. As Susan puts it, pausing to align your systems, people, and use cases is what turns AI pilots into real value.
Second, communication is the killer app. In a world of automation, human alignment is everything, from boardrooms to classrooms.
And third, the AI shift isn't just technical, it's generational. As leaders and educators, it's on us to model curiosity, critical thinking, and care in how we teach and use these tools.
If you're ready to build your team's AI muscle, Kinwise offers everything from a 30-day teacher pilot to a one-day AI leadership lab for boards and leadership teams. Learn more or get started at kinwise.org.
And if this episode sparked something for you, the best way to support Kinwise Conversations is to subscribe, leave a quick review, or share it with someone you're building the future with. Until next time, stay curious, stay grounded, and stay Kinwise.
-
Susan C. McLeod on LinkedIn: Connect with Susan professionally to follow her latest work at the intersection of AI strategy, enterprise technology, and the energy sector.
Susan C. McLeod's Website: Explore Susan's writing, including her insightful blog series on enterprise AI readiness and collaborations with other female tech leaders.
WAKE International: Discover the Women's Alliance for Knowledge Exchange, an organization Susan works with that empowers female entrepreneurs through mentorship and guidance from experienced industry leaders. https://www.wakeinternational.org/
Hitachi Energy: Read the press release on Hitachi's historic investment in America's energy infrastructure, highlighting the critical work Susan is now helping to lead.
-
1. The "Pilot Project" Scoping Prompt
Act as an AI adoption strategist. My company wants to use AI to improve [insert business area, e.g., our marketing content creation]. Based on the principle that most AI pilots fail from being too broad, guide me through a series of questions to identify a single, high-impact, low-risk use case. Start by asking me about our biggest pain points, then help me define a clear problem statement and a measurable goal for a 90-day pilot project.
2. The "Knowledge Transfer" Prompt
Act as a knowledge management consultant. A key expert on our team, who has 30 years of institutional knowledge about [insert specific domain, e.g., our supply chain logistics], is retiring in six months. Design a structured plan to capture their expertise using AI. Your plan should include:
A list of 15 targeted interview questions about undocumented processes and critical decision-making.
A process for using AI to transcribe the interviews.
A strategy for organizing the insights into a searchable, interactive knowledge base for new team members.
3. The "Leadership Communication" Prompt
I am a leader rolling out a new internal AI tool for my team of [e.g., 50 sales representatives]. The goal is to [e.g., automate lead qualification]. Following the "say it seven times" principle for effective change management, generate a multi-channel communication plan that introduces the tool, explains the benefits for the team, and addresses potential fears. The plan should span two weeks and include an email announcement, talking points for a team meeting, a one-page FAQ, and two follow-up messages to reinforce the key information.
4. The "Critical Thinking" Educational Prompt
Generate a learning exercise for [e.g., high school history students] designed to improve critical thinking and media literacy. First, write a one-paragraph summary of [e.g., the causes of the American Revolution] that is well-written but contains three subtle factual inaccuracies or biased statements. Then, create a worksheet with five questions that guide students to identify the flaws, question the source, and use their own knowledge to correct the AI-generated text. The goal is to teach them not to trust AI output blindly.
5. The "AI Readiness" Assessment Prompt
Create a simple AI Readiness Checklist for a non-technical business leader. Organize the checklist into three sections:
Problem & Use Case (Are we solving a real, specific problem?)
People & Process (Is our team culturally ready for this change?)
Data & Tools (Is our data accessible, clean, and secure?)
For each section, provide 4-5 key questions the leader should ask to assess their organization's readiness before investing in an AI solution.
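A note on using these prompts: they're written to be pasted into whichever chat tool your organization has approved. If you'd rather run one programmatically, here is a minimal sketch in Python, assuming the OpenAI Python SDK and an illustrative model name; per Susan's caution about public tools, point it at your enterprise-approved endpoint and keep anything proprietary out of the prompt.

# Minimal sketch: running the "AI Readiness" checklist prompt (prompt 5) through an LLM API.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable; the model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Create a simple AI Readiness Checklist for a non-technical business leader. "
    "Organize the checklist into three sections: Problem & Use Case, People & Process, "
    "and Data & Tools. For each section, provide 4-5 key questions the leader should "
    "ask to assess their organization's readiness before investing in an AI solution."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whatever model your organization approves
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)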