21. AI Engineer Vihaan Nama on Privacy, Practice, and Empowered Learning
Season 2, Episode 10 of Kinwise Conversations · Hit play or read the transcript
Episode Summary: The Strategic Shift in AI Education and Policy
In this episode, we meet Vihaan Nama, an AI Engineer and researcher at the Duke Trust Lab, who brings deep expertise from both corporate tech (JP Morgan, Samsung) and academic instruction (Duke teaching assistant). Vihaan confronts the strategic challenge facing K-12 and organizational leaders: how to responsibly harvest the "gold mine" of institutional data without sacrificing privacy or ethical standards. He argues that AI's true power lies in its ability to generate rules and knowledge from data where human coding fails, making it a critical tool for personalized instruction and efficient operations. However, this power demands new policy. Vihaan provides an actionable roadmap for leaders on budgeting, data readiness, selecting between vendor tools and open-source AI, and defining the ethical guardrails necessary to ensure AI empowers students and teachers rather than replacing them.
Key Takeaways for K-12 Leaders and Mission-Driven Executives
Data is the New Scarcity: AI shifts the focus from an abundance of data to a scarcity of knowledge, making institutional data a powerful, yet often untapped, asset.
The Power of Open Source: For sensitive student data (COPPA/FERPA), open-source AI models offer the necessary strategy to maintain control, build custom values, and ensure privacy within a local, protected ecosystem.
AI's Hidden Environmental Cost: Leaders must account for the significant energy consumption of AI, advocating for efficient, compressed models and promoting concise prompting to reduce the environmental footprint.
Guardrails Define Ethics: Responsible use requires setting explicit, organizational-specific safety guardrails (e.g., teaching problem-solving instead of giving direct answers) that go beyond the default settings of public LLMs.
The Relevance Imperative: To avoid future redundancy, educators and students must focus on evolving alongside AI, concentrating on critical thinking, invention, and complex application while offloading redundant tasks.
AI, Data Strategy, and the Future of Learning
Lydia Kumar: Today we're visiting Duke University to meet Vihaan Nama: an AI engineer, graduate researcher, and teaching assistant. At Duke, he supports courses like explainable AI, AI product management, and managing AI in business, breaking down big complex systems into ideas students can actually use. If you've ever wondered how to make AI education more human, or how student learning data could bring something more meaningful than dashboards, Vihaan brings both clarity and care. Let's dive in.
Lydia Kumar: Hi Vihaan. Thank you so much for being on the podcast today. I'm so appreciative of you being here to bring your perspective and your expertise on AI to our audience. To get started, I want to give you a chance to talk a little bit about your journey and how you ended up working in the artificial intelligence space.
Vihaan Nama: Perfect. Thank you, Lydia, for having me on this. I'm quite excited to start talking about it. My journey in AI all started in undergrad, when I was on a small project with a couple of my professors, in the very nascent stages of my learning of AI. I was going through the whole research field and wanting to publish my work. I wanted to get my voice out there, but I didn't know where to start, and I was very young, very early into it. I was like 18 years old at that point. And my professor told me, "Hey, why don't you build a small AI system that shows you the advantages of using AI for a problem statement versus doing the same problem statement without AI, and see how well AI actually helps you and whether it actually makes sense." And I did. I thought this was an amazing idea for me to just get my feet wet, to just start working on it. It was a small little project on sentiment analysis, and I decided whether a rule-based system or an AI system was better. In that whole process, I noticed that the AI far outperformed the rule-based system. That's when my interest in AI got sparked, and I thought, okay, this is something that can really help us on a larger scale. I was still 18, so it was such a small, tiny little problem, but I could envision how big this could get. And that's when my interest started. As my career progressed, I worked in multiple places. I've worked at JP Morgan, I've worked at Samsung, and I'm currently working at a company called PS&S as an applied AI engineer, just diving through the field. Throughout this whole time, I've also been continuously working on research. I worked in multiple research labs in my undergrad, and I'm currently working at the Duke Trust Lab, figuring out where AI can take us. I really feel like AI is the next big thing, and I'm grateful to be able to speak with you today about this.
Lydia Kumar: It's interesting because when you talk about that first project, you were kind of looking at what is human versus machine and how this works and where to use them. Is that right?
Vihaan Nama: Yeah. So the way I like to think about it is that traditionally, all computer programs were written in a particular way where the humans would actually write the code, or in layman's terms, define the rules. Then this code would be generated, the user would give it an input, and based on those rules, you would get an output from the system. The way I think AI is different is that with AI, it makes the rules. You give it the data and you give it the expected output, and you tell it, "Give me the rules. I'm not great at understanding what rules should be made for this data. You figure it out." And that's where I feel AI is different, because for the small problem of sentiment analysis, it was basically tiny movie reviews, and I was trying to analyze whether they were good or bad. I went through a particular process of doing it, but then I realized the AI had other ways of thinking about it. And that was my first look into the field.
Lydia Kumar: That's really helpful, for you to flesh out what that means and what that looks like. It reminds me, Ethan Mollick posted recently about the Bitter Lesson, and how if you just feed a lot of data into the machine, the computer is going to be able to solve the problem better than when we try to code all of these rules into how it works. And you learned that lesson pretty early on, when you were just 18. That's kind of incredible, that you were thinking about that so early in your professional career.
Vihaan Nama: Yeah, it was amazing. After that, I think all my professional experiences just strengthened that lesson: today there is so much data out there, and so little knowledge available from that data, that I think having machines that actually define these rules, generate patterns, and understand ways of representing data and recognizing patterns within these large corpora of information is the next big gold mine. And we're already seeing that happen. Especially when we look at larger companies: for example, during my time at JP Morgan, I noticed that a lot of data that was once considered obsolete, which had just been thrown away into an archive somewhere, was actually being retrieved. Everyone was pulling out records from long ago, because there was so much information they hadn't wanted to sift through. But now that you have AI to help you go through that information, they were bringing it all back and asking, what can we gain from this? It's something we threw away a long time ago, but actually there might be a lot of gold in here that we can learn from and train these models on. And I think we're just at the tip of the iceberg of where it could be.
AI in Curriculum and Personalized Learning
Lydia Kumar: That's so exciting, because I think a lot of our listeners are people in schools and in education, and education institutions have tons of data. Even without AI, there's a lot of interest in something called data-driven instruction, where you think about what students know and then tailor your teaching around what they actually need, instead of just teaching word for word what's in the curriculum. You can say, okay, I have this information, and I'm going to tailor what I teach to the needs of my students, or these subsets of students. That's an example of how teachers have been using data in the past. But AI could take all the loads of data that exist (and obviously there could be some privacy concerns here), all of this data in a school setting, and you could learn something about students, the way people learn, and the trends that have happened over time in a totally new way.
Vihaan Nama: Yeah, it really unlocks so many possibilities that just a few years ago, even because of a lack of funding in general, were considered such far-fetched goals. But now there's so much funding, so much research, so much going on in the field, especially in generative AI, which is the new buzzword people are throwing out there. I think it's unprecedented, and I think we can gain a huge amount of value just from these past stores of information that we pushed aside thinking they might be useless. There's actually so much information there. I think we can gain a lot.
The Student as Data Owner: Custom Tools
Lydia Kumar: I'm curious. You said something a minute ago about a student, or a teacher, having access to their own information. If you were a student and you had access to your own information, what does that mean, and how do you see it as a value add? What kind of information might someone want?
Vihaan Nama: So, as a student, even looking at a micro use case of just a single student: me as a student, because I'm currently a master's student, I have so many notes available on my iPad, on my laptop. I have nearly a hundred PDFs every semester that are just stored over there. And this information is actually quite useful for me... I just wish I had it in an instant. Like if I could just say, "Hey, can you please tell me how to build this system based on my notes?" and my particular note came up, because I wrote it in a way that I understood. If that process became so much easier, that would be such a big win, because it's not just me. I think every student out there writes their own notes in the way that they understand them... Imagine having a small chat interface where I could chat with my own notes and be like, "Hey, I spoke about topic X a few months ago. I'm building this kind of a project. Do you think I could use that topic? Tell me how to build a system." Having an LLM in between... I think that would be such a big win in the space of education. And even when it comes to revising before semester-end or mid-semester examinations, I think having a kind of AI study buddy to help you go through your notes and quiz you really helps as a student.
Vihaan Nama: I feel that so much brainpower can now be used for so many better things, because the redundant heavy lifting can be done by the AI, and you can just focus on discovering, inventing, creating something new, something that people can use and people can love, without having to worry about the technical complexities.
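The "chat with my own notes" idea Vihaan describes is, at its core, retrieval plus grounding. Here is a minimal sketch of that loop in Python; the toy notes, the word-overlap scoring, and the `build_llm_prompt` helper are all illustrative assumptions, not a description of any real product (a production system would use embeddings and an actual LLM call).

```python
# Sketch of "chat with your notes": find the most relevant note,
# then ground a hypothetical LLM query in that note's text.

def score(question: str, note: str) -> int:
    """Count how many words from the question appear in the note."""
    note_words = set(note.lower().split())
    return sum(1 for w in question.lower().split() if w in note_words)

def retrieve_best_note(question: str, notes: dict[str, str]) -> str:
    """Return the title of the note that best matches the question."""
    return max(notes, key=lambda title: score(question, notes[title]))

def build_llm_prompt(question: str, note_text: str) -> str:
    """Ground the question in the student's own note before asking an LLM."""
    return f"Using only this note:\n{note_text}\n\nAnswer: {question}"

# Toy stand-ins for a semester's worth of PDFs.
notes = {
    "RAG lecture": "retrieval augmented generation grounds a model in documents",
    "SHAP notes": "shap explains model predictions with shapley values",
}

best = retrieve_best_note("how does retrieval augmented generation work?", notes)
print(best)  # → RAG lecture
```

The design point is that the model answers from the student's own words, which is exactly why "my particular note came up because I wrote it in a way that I understood" matters.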
Strategic Policy: Data Privacy and Open Source AI
Lydia Kumar: I'm curious, because as you know, I think all of this data is so useful, but on the flip side there's this privacy element... I'm wondering, from your studies or your perspective on data privacy, how AI uses that data, and what risks are involved for people who may be interested in taking that information and finding insights that may be specific to individuals?
Vihaan Nama: Yep. So for data privacy, I would say it's a well-known fact among the AI community that anything you put onto ChatGPT, Claude, Perplexity, any of these major services, unless you explicitly opt out of their retraining or data retention, they are going to retain it and they are going to train their model on it. Sometime in the future, maybe not immediately, maybe even 10 years down the line, your data could be reused. So it's a well-known fact within the AI community, but outside the AI community, I think it's still not as well known... the minute you put it onto their platforms, it automatically becomes their property. And once it's their property, they have the right to use it when and how they want... To fight this, the open-source field of AI has really started booming, especially with a lot of organizations... they want the power of AI, but they don't want to give out their information, because their information is where their secret sauce is... So nowadays there's something called open-source AI, or open-source LLMs... which companies can invest in. Even education institutes, anyone, can invest in that. All the difference is that you have to set up your own infrastructure... but doing this one-time capital investment is going to allow you to bring in these large language models, or any kind of AI model, locally, and then train and run them without having this information go onto the internet for the companies to retrain on, because it's all going to be within your own ecosystem.
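Concretely, "keeping it within your own ecosystem" means the application talks to a model served on your own hardware instead of a public API. A minimal sketch, assuming an Ollama-style local server at `localhost:11434` and a model tag of `llama3` (both are assumptions; swap in whatever your institution actually deploys):

```python
# Sketch: package a prompt for a locally hosted open-source LLM.
# The endpoint and model name are assumptions (Ollama-style server);
# the request never has to leave your own network.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build an HTTP request to a self-hosted model server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Summarize this student's reading progress.")
# urllib.request.urlopen(req)  # only run once your local server is up
print(req.full_url)  # → http://localhost:11434/api/generate
```

The one-time capital investment Vihaan mentions is the hardware behind that endpoint; once it exists, student data stays on premises by construction.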
The Open Source Advantage
Lydia Kumar: If I was a superintendent or someone at a district and I was looking to invest in an AI tool... What are things that I would need to pay attention to or questions that I should ask to make sure that my data would be protected in that system?
Vihaan Nama: I think your first step in that field would be to do your own research, firstly into any machine that you're buying... and actually understand what the system is doing, not just from the customer's point of view, but how they're using your data on the backend... I think you should always have an AI expert on hand, where you hire someone who's well versed in the field of AI... If that doesn't work out for you because of the data privacy regulations... I would say, again, open source is a great way to go, because what happens then is that you're just getting the raw model... You're able to hire a couple of developers and tell them what you value, and build an AI system around this model that you pull down, which is open source and free to use. You're able to build an AI system that embodies your organization's cultural values and what you require it to be.
Lydia Kumar: What are tasks or things that school districts could be doing, or preparing to do, if they wanted to build something custom like this for their institution?
Vihaan Nama: I think number one is getting your data ready, because so many times... we get stuck in a place where the data is not ready, and our machines can't progress without that data... Number two would be budget allocation: speaking to people, understanding, even in different domains and fields, how expensive this is looking, and then trying to draw parallels... I think your planning should be really good before you start.
Building Trust and Responsible AI
Lydia Kumar: What have you learned about building trust and clarity for teams or organizations who are interested in using AI, particularly considering there are a lot of people in the United States right now who are very nervous about AI, the technology, and what it means for the future?
Vihaan Nama: So when we talk about building trust and clarity, I think we need to understand that these AI systems should not be blindly relied upon, because they can make mistakes... And that's very important to know, because at the end of the day, AI is a... a single file which contains the world's knowledge in it... there could be chances where it gives you the wrong information, mixes facts up, or even makes things up entirely... This is known as LLM hallucination... I think the first question you should ask... is basically, where is this information grounded?... Actually asking AI to cite its sources, telling it, "Hey, you're going to give me this information, but I also want you to cite where you got it from," actually helps the model a lot... The trust comes down to understanding where the information comes from in the first place... you can trace the entire supply chain of the data that was given to you.
Ethical Guardrails and Climate Impact
Lydia Kumar: For organizations who want to adopt AI, what does responsible use look like? And then for students or people teaching students, what does it look like for a student to responsibly use AI?
Vihaan Nama: Responsibility comes in many facets here. Firstly, there is responsible use in the sense that it should be used only for ethical use cases... we need to put in safety guardrails saying that, hey, if a student is trying to cheat on an examination, or asking questions that are trying to get them the answer, instead of giving them the direct answer, teach them how to think, teach them how to go down this process of problem-solving. So I think that's very important... When it comes to the climate, as much as AI is helping us, it's also hurting us, because there's so much harm happening with the increasing use of energy... by 2028, estimates suggest that 6.7 to 12% of the entire United States' electricity is going to be consumed by data centers... we need to understand the macro impact of this.
Vihaan Nama: So when it comes to building these systems, I think the first step is understanding that: make sure your manners are there when you're talking to humans, but when you're talking to AI, be very efficient with the way you speak. Make sure your prompts, the stuff that you're telling it, are to the point, concise, and only what is required... I also think you need to keep updating these AI systems... you're then going to not just bring your own electricity bill down, but also help out the environment as a whole.
Staying Relevant in the AI Workforce
Lydia Kumar: What do you keep thinking about in the AI space?
Vihaan Nama: These models are improving, their capabilities are improving, and it's making us as human beings more redundant, because they're going to be able to do a lot of the heavy lifting... I would like AI to do my dishes so that I could focus on art, but the way it's going right now, AI is doing the art while I'm focusing on my dishes. And I think that's quite... startling to me... What I think about is, as this field keeps progressing, how do I stay relevant? How do I make sure that I am collaborating with AI rather than being replaced by AI?... As AI grows incrementally, you grow with it, you keep evolving with it, because it's still growing.
Connect and Resources
LinkedIn Profile: Connect with Vihaan on LinkedIn to follow his work at the intersection of AI, education, and engineering.
Personal Website: Explore Vihaan’s portfolio, project write-ups, and teaching philosophy in greater depth.
Google Scholar Page: Browse Vihaan’s academic contributions and research citations across AI, explainability, and systems design.
Pratt Energy & Sustainability Club: Learn more about Vihaan’s leadership role in Duke’s campus-wide effort to examine the environmental impact of AI systems.
YouTube: How GenAI Is Reshaping Education: Watch Vihaan’s insights on AI in the classroom from a Duke panel discussion, including open-source tools, student empowerment, and responsible innovation.
Prompts Inspired by Vihaan
Student-specific Knowledge Retrieval and Synthesis: Access all my uploaded notes, lecture transcripts, and submitted assignments from the 'Explainable AI' course. Summarize the core differences between LIME and SHAP methods, and then generate five practice quiz questions, complete with answers and citations to the specific documents where the information was found.
Addressing Learner Deficiencies: Analyze the attached PDF of my 'Managing AI in Business' midterm exam results. Based on the topics I scored lowest on, create a personalized study plan consisting of three detailed readings and five application-based case scenarios to improve my understanding of regulatory compliance in AI deployment.
Educational Content Brainstorming and Development: I need to develop a one-hour introductory lecture on Retrieval Augmented Generation (RAG) for a class of non-technical business students. Generate an outline for the presentation, suggest three relevant real-world use cases, and propose a concise, non-jargon-heavy definition for RAG.
Data Sourcing and Trust: I am preparing a policy on student data privacy for our district's new AI tutoring tool. I need to understand the current legal framework. Find and summarize all key excerpts from the Children's Online Privacy Protection Act (COPPA) and the Family Educational Rights and Privacy Act (FERPA) that relate to educational technology and minor student data. For each summary, include the source citation and document location.
Ethical Guardrail Development: I am building a custom AI model for student homework assistance. Design three different safety guardrail responses for when a student directly asks the AI to solve a complex math problem for an upcoming exam. Each response should decline to provide the direct answer but instead use a different pedagogical approach (e.g., Socratic questioning, breaking the problem into simpler steps, or providing a link to a relevant instructional video).
About Vihaan Nama
Vihaan Nama is an Applied AI Engineer at PS&S, a graduate researcher at Duke University's Trust Lab, and a teaching assistant for leading AI courses including Explainable AI, AI Product Management, and Managing AI in Business.
From his early experiments in sentiment analysis to his current work designing retrieval systems and open-source tools, Vihaan is driven by a passion for making AI both understandable and empowering, especially in education. His leadership in Duke’s Energy and Sustainability Club also reflects his commitment to ethical, environmentally conscious AI development.

