Beyond Analysis: Dox Brown on AI and Co-Creating Our Human Future

Episode 4 of Kinwise Conversations · Hit play or read the transcript

    Lydia Kumar: Welcome to Kinwise Conversations, where we explore what it means to integrate AI into our work and lives with care, clarity, and creativity. Each episode we talk with everyday leaders navigating these powerful tools, balancing innovation with intention and technology with humanity. I'm your host, Lydia Kumar.

    Today I'm thrilled to welcome Dox Brown to the podcast. Dox is a strategist, systems designer, and what he terms an epistemic architect working at the intersection of AI and governance. With a rich background spanning education, workforce development, civic tech, and rural health, his work consistently focuses on building platforms and systems that clarify purpose and align learning, technology and execution.

    As is sometimes the reality with remote recordings, we navigated a few connection issues in this conversation, so you might hear some slight audio imperfections, but the richness of Dox's ideas is extraordinary, and I really hope you'll stick with us because this is a conversation you won't want to miss.

    So Dox, I'm really grateful to have you on today to be able to hear your thoughts and perspectives. I've spent the morning reading some of your writing, and I've been struck by how you've taken this move toward AI tools and used it to push us as people to think about the outcomes that we want and what really matters. And so I'm excited to talk to you about many things today, but I want to know more about who you are. So could you tell us a bit about your background and what's important for any listeners to know about your story?

    Dox Brown: Sure. Thank you for having me, by the way, Lydia. I am six feet tall. I'm a Virgo. Probably you're interested in where I come from and how I got to where I am. I have spent my adult life working with young people in communities and neighborhoods. That's how I ended up becoming a teacher. I began, actually, with aspirations to be an actor, which is why I studied things that maybe weren't monetizable as an undergraduate, but I became a teacher. I got my master's in education and then I pursued a PhD in curriculum and instruction. I have since then done consulting, worked in higher education, worked in public health, but since I was very young, since probably middle school or high school, I've always been interested in the coherence between how people think about things and how they act in the world. And so that alignment between thought and action has always mattered to me. Maybe it was in what someone said they believed versus how they acted, or how someone told you to behave and what that really meant in practice.

    And so I'm always thinking, if you've read my writing, you'll notice I always kind of try to go back to some original principle. I'm like, what's the baseline of this? But yeah, I am now in Colorado. I'm working in healthcare currently as well as doing some AI projects. I've been using AI for about a year, and heavily for about the last six months, and what really struck me was this idea of a large language model. Because I have a lot of love for language, though I really only speak one. I think it's a really critical foundational piece of what it means to be a human being. And so if you have this technology that essentially applies probability to language, what does that mean? And that's where I've been hanging out for the last four months or so.

    Lydia Kumar: That's such an interesting question of our language and our humanity, and generative AI almost feels like an alien technology or this other thing that you can talk to, because it acts so human and yet it is not. And so there are some real interesting questions around what it means to be human alongside this technology. Do you have a thought or an answer to that question right now? Like, what does it mean to be human, or what does it mean to hold onto that alongside this new tech?

    Dox Brown: So let's say that the word human can accommodate, at some point, possibly more than our species. At some point we might apply that word to describe more than just Homo sapiens. But what I think it means to be human is to share a world. And by a world, I mean not the third body from a star we call the sun, but a co-constructed reality that is rooted in every individual's past and present, and the way that reality is constructed and negotiated is through language. So it is a constant exercise between all people all the time, one that we've inherited from our ancestors. We haven't paid a cent for it. No one asked for anything in return. It's been a gift. And that's what it means to be human: to be part of that co-creation of meaning. And so when people aren't afforded a free, universal, quality education, they are being denied the opportunity to participate in that meaning-making at their fullest potential. And we are being robbed of the unique contribution each one can make.

    Lydia Kumar: That's a really powerful idea. This podcast is about generative AI and what it means to be human in a world where there's generative AI. How does generative AI enter into the idea that you just shared?

    Dox Brown: So I think I can reference some of the conversations that I hear online right now, and I know that it'll be a little dated by the time this goes live, but currently there's a lot of discussion about sycophancy, or the degree to which chatbots kind of flatter the user. And people were very upset. There are also a lot of people whose hope for generative AI in education often revolves around a statement that, roughly paraphrased, is, "Oh, we're not going to have to worry about memorizing facts anymore." Education will finally be... they use some word, which to me reflects a failure to reflect on your own education. Like if I were to ask you, "What's the 13th letter of the alphabet?" I promise you'd start singing. You don't know, right? And we memorize lots of facts. In order to be useful, we have to have some ingredients from which to make our statements and our thoughts. So I think a lot of people are remembering education after age 10, not all of the front-loading that went in, like in elementary school, all the basic things they had to learn.

    I also think that people somehow believe there's such a thing as being neutral, right? Every moment of every day when we engage with the world, we act upon it and it acts upon us. And maybe it's just sort of the butterfly effect, but this technology that we're probably going to interact with more and more, there are a few aspects of it that are important to me. One is it's going to impact you. There's no way to avoid that. It will have an impact on you. Two is, you can program it. So you can decide what personality it has, you can decide how it behaves. So that raises the question to me, like, what do you want it to do? And so some of my dissertation work was on participatory democracy and how you learn democracy by doing it. So it's informal learning. So what do we want a chatbot to teach us informally? Do we want it to teach us how to be sort of abrasive, blunt, and reactive? Is it okay if it's nice, if it's polite and it has manners, even if those manners are encoded by some cultural bias? Do you want a nice chatbot or does it need to be harsh? Like, does the world need to be meaner than it already is?

    And it interested me how much energy people expended on being upset about the potential to be manipulated by generative AI. And I wondered how much more energy they had spent on that than just being vigilant about their own views and thinking. You know, just assume that the chatbot isn't actually all-knowing and think, "Wait, is that actually accurate? Let's double check." I've definitely had moments where I'm in the middle of writing, thinking through something, and I go, "Wait," and I have to go back and then spend hours meticulously beating myself up to make sure that I haven't sort of fallen into an echo chamber. It requires a level of vigilance that I don't think most people are used to.

    Lydia Kumar: Related, in another episode of the show, I talked to my friend Celia Jones about her work as a communicator, and she talks about how AI has really sharpened her thinking because she uses it as a sparring partner rather than as a replacement for her own critical thinking. And I feel like you're talking about something similar. If you're just asking the chatbot to tell you something and it tells you, and you automatically accept it, your perspective on the world will be impacted because you're accepting something without having that debate.

    Dox Brown: Where my journey with AI actually began was four months ago, when I started to think about, okay, what if this technology is used in the classroom? And it all went back to language. If you take a hundred people and you throw them on an island, in a hundred years they will speak a dialect, right? There will be a drift, because in the conversations they're having, they will invent words and terms to describe the things that they're doing. Well, if every person gets their own chatbot, then everybody starts to develop their own dialect. So what we see is a large societal disintegration, right? A growing inability to communicate across difference and disagreement. AI didn't do that, but it will accelerate it if we're not thoughtful about how we use it.

    So, especially when you're talking... because I was a social studies teacher, I think about conversations in class. Let's say we're talking about World War II, right? And we're debating who was worse, Stalin or Hitler? That's the conversation of the day. What students need are opportunities to understand where their views reside in relation to other students. So, Jake thinks this and Carmen thinks this, and you two kind of agree here and you disagree a little bit there. Somehow we have to train generative AI so that it can map where people are and what they're thinking, so that it can curb this natural tendency towards what I called epistemic drift. Epistemology is the study of human knowledge, or human knowing. And so then I started talking about epistemology, which I didn't really mean to be doing. This idea that people will just start to drift from each other and not be able to communicate. And somehow, if you're going to use it in formal education from kindergarten through graduate school, there has to be some function of AI to do that. And that's where I first started thinking about this idea of hollisis. If I look back at my old writing, I changed the word once and the definition multiple times. I think what it reads today is a little different than what it read four months ago.

    Lydia Kumar: Do you want to share what hollisis is and your definition?

    Dox Brown: Yeah. So back to my geeky love of language, I went to the etymology of it. Epistemology comes from the Greek episteme, which means knowledge, and the Greeks distinguished knowledge from belief, belief being doxa, which is where Dox kind of comes from. For epistemology, we are going to have to do things like analysis, which is breaking things into smaller and smaller pieces. Then we engage in synthesis, where we rearrange those pieces together in different ways.

    There's another thing that human beings do that I haven't seen AI do. One thing about AI is that though it's very powerful, if you try to liken it to a human brain, to some extent everything AI does is cognitive. Human beings practice a very enlightened and efficient laziness. We do not think about our heart beating. We allow our brain to just do that on its own. For AI, every single mechanism is cognitive: it thinks about everything, and it uses the same energy to do so. And there are things that have been developed, like skip jumping and other ways to scaffold information, to speed up processing and to be able to skip some steps. Sometimes that can be why it hallucinates. Ultimately, human beings try to make things more efficient.

    When you're thinking about analysis and synthesis, these are things we do in school a lot when we summarize things. But when we talk about thinking, and this has been sort of a pervasive, persistent problem in how we talk about thinking, I just want to preface this by saying I don't have any answers. I have an idea. And some of that idea is accounted for in a lot of the science. People have talked about these things, and I'm giving it a name because I want to place a different emphasis on the issue. We have bodies, and our embodiment changes how we interact with the world. We interact in time and place and mortality. AI currently has no sense of embodiment. It depends upon us. A smell evokes a memory, which triggers an emotion, which triggers a thought about our day. All of those things are thinking, right? There's no separation. But we often talk about thinking as analytical thought.

    And what seems to be the case is that there are these different networks in the brain that focus on sense, focus on memory, focus on executive function. But there's a switching between networks that happens, and people have talked about this, about cognitive flexibility. And so when I talk about hollisis, I'm talking about the ability to dynamically move between those different networks. As someone who's neurodiverse—so I have epilepsy, a genetic seizure disorder, which I don't think has any evolutionary advantage, it just happens to be what I have, and I also have ADHD—there are things that I do that I notice other people don't do. And also in terms of participating in discussions, I try to switch my perspective a lot, and that is something that seems to be more correlated with my neurodiversity profile.

    But if we start to think about that layer of hollisis—so holos meaning whole, right?—as opposed to analysis and synthesis, we're thinking about the whole process of moving between networks. That's not dependent upon how much you know, it's not dependent upon the quality of your education, in terms of how well you were instructed, how much you know, how proficient you are at science, math, writing, reading. This is something that all human beings do no matter what. You have to switch between emotion and what you know and where you are and the time and place and how you feel, and are you hungry? All of these things are happening at the same time in your attention. If you really have to go to the bathroom, it's hard to focus in class. If that is a piece of your cognition that always exists, then it is something we can probably isolate to some extent and train up.

    Since the Industrial Revolution, in addition to everything else we've been training people to do in education, the ability to stay focused on a repetitive task has been critical to being successful as a civilization. You have to be able to focus to do something well and consistently. But from an evolutionary perspective, that hasn't always been the case. So researchers did this interesting study. They used computers, they got people who had ADHD and people who didn't, and they did brain scans, so there's some validation through empirical evidence. They gave them a foraging activity to look for resources. And what happened was that the ADHD people would go to one area and look, and they would quit and go to the next area, but the resources were more plentiful at the top. So at the end of the exercise, those with ADHD were better at resource collection than those without it, because the people with ADHD were like, "Yeah, this is boring," or, "I'm not getting enough results at this tree, so I'm going to move over to this tree." And the distractibility of people that have ADHD, where I hear a noise and, sort of like a Labrador retriever, it's like, "Ball, ball, ball," on the savanna, maybe that was an evolutionary advantage. In this current world, it is not an advantage. Not being able to focus for long periods of time makes it very hard to be successful in school or in your profession.

    Lydia Kumar: How do you think, with the rise of generative AI... one of the things that makes being able to do a repetitive task useful is that you're able to create something over time while staying focused on it. And we place a lot of value on that. Now some of those same tasks can be done in almost no time by a machine. And so it seems to me, from your writing, that the way we work will shift, or the way we learn must shift, because the focused work that has been valued can now be done by something that is not human. And humans may be spending their time doing something else. So in education or in work, what do you think humans should be spending their time doing? Or what do you imagine we will value in terms of outputs five years in the future?

    Dox Brown: So that's an interesting and pretty layered question. I think the first thing is to remember that all skills are perishable. Whenever you focus on just discrete skills, it is inevitable that at some point those skills, whether it be typing or blacksmithing, will have a shelf life. So skills are always a perishable good. Now, it might last three or four generations, but ultimately that's what happens. The thing that doesn't change, in my mind, is language, in the sense that it's the same mechanism that it's always been, even though it evolves and develops. The ability to communicate and interact effectively, maybe we want to call that emotional intelligence. Again, that idea of emotional intelligence sort of doesn't fit in with my idea about hollisis, but it can be incorporated, like this idea of how to read body language.

    There's the school side, the education side, and then there's the workplace side. Another friend of mine was saying that the tier one and tier two positions in human resources are disappearing because AI does a better job of conflict resolution: it's nice, and it isn't affected by people's bad moods.

    I've found that AI has vastly accelerated my workflows and made me more productive, because it accounts for a lot of the executive function that I struggle with. I'm able to automate things, or I can get it to remind me about things. I have one running chat where I just type each thing I have to do, and then every couple of hours I'm like, "Give me my list," and I look at my list again, right? A dynamic feedback loop. That's what I struggled with, and AI is really good at being a crutch for me in that regard. So I think that what will matter is how people show up in the workplace.

    Now, there's a lot of growing interest and growing movement towards skilled labor and trades, and this has traditionally been the dividing line between the upper and lower middle class, blue-collar and white-collar work. If that arena is expanding, all that means is that fewer people are going into white-collar work. Those people still might be good at school, but they're choosing to do trades. If that's the case, then the skills that they're learning in school will then be used as metrics to decide how good they are at trades. So you might be a very good carpenter, but if there's someone who's almost as good a carpenter and is a really good business owner, they will also achieve in that arena as well. And they'll be able to use AI to automate their business, whatever it might be. So you'll still see disparities based on formal education. I think formal education will still play a role in whatever industry you're in.

    So now the question is going to be, what decisions are you going to make at the business level? We've seen a movement towards more collaborative structures, more team-based structures, and in those instances, it's going to be how effectively you can think on your feet. How do you respond to new information, new insights, how people are responding in the room, what's going on politically? All of these things, being able to quickly coordinate and dynamically shift across the different networks of your brain. So that's the thing that's human. That's the thing that AI cannot yet do. And it's possible that if we're going to talk about artificial super intelligence or artificial general intelligence without having any operative definition of what intelligence is... if hollisis exists, if it's a fundamental part of human cognition and we say this is part of what thinking is, AI is going to have to do that if it wants to be intelligent, or it's going to have to basically be able to pantomime it in some way that we think is relevant.

    I think it's going to be a lot more about that immediate, real-time interpersonal interaction and how we respond, how we're able to use all of the work products that AI can generate to make them more useful, more precise, more insightful, and then act upon them. I think we see white-collar work shrinking.

    Now, when you go to the formal education side, I think you have to train people how to do that. So you're going to have to spend the next 12 or 13 years retrofitting education so that when people get popped out, they're able to do that in the workplace. And then you're going to need a model that sort of replaces it.

    There was a reason for each thing that I used to do in a class: I wanted something to happen for my students. By going through this experience, some transformation, or growth, or understanding was going to take place. Now, I don't know that generating an artifact such as a paper will produce that change or growth. We can talk about it in terms of academic integrity or cheating, but the cat's out of the bag. It's AI. It's unavoidable.

    Lydia Kumar: Right.

    Dox Brown: There's no reason to... don't make it an ethical, moral test for students, because it's not worth it. The question is, okay, this is what I want them to accomplish or achieve or to experience so that they can move to the next experience. How do I do that? And the first thing I can think of is that my students will continue to write papers and then they will show up in class. I will know whether they wrote the paper because I'll ask them lots of questions and I'll have them talk to their peers. And if they sound like they don't know what they're talking about, then maybe they didn't write the paper or they didn't read their own paper enough. Do I really care if they wrote the paper independently? Well, if they read their own collaborative paper with generative AI to the point where they committed it to memory, I'll live with that.

    When we read good writing, we learn to write well. So their writing will get better, their thinking will get better. And so this is different. This is very different from what we've been doing for 150 years and especially in the last 50 years where we've become very concerned about the ego of students. And so asking them to speak immediately, to be able to answer something on the spot, is something that teachers have been taught to avoid because it seems unfair to students that are shy. We may not be able to afford them that kindness anymore. And I've noticed that even the most shy people that I know, when you ask them about something they know about, they'll talk. They don't mind talking. They may not go on and on, but they don't feel nervous about it. So maybe by asking everybody to be more competent, celebrity won't matter as much because people will care more about listening to someone who knows what they're talking about.

    How we show up in a space is going to matter as much, if not more than what we can do tirelessly by ourselves in isolation. This is a shift that is more fundamental for education than anything else because it's so sudden and it's such a dynamic tool. I think it's a problem that only educators can solve, not experts and not technicians. But for those of us that really care about and try to understand how someone learns a thing and the value of that thing and how that thing helps them learn another thing, it's going to take that insight to figure out how to pair AI with all the things that we want people to hold onto.

    Lydia Kumar: If you were going to give educators one piece of advice about how to start using these tools in their teaching environments, what would you say?

    Dox Brown: I think the revolution is going to come at the level of the user. So even when I started playing around with it in the context of education, the AI tutor seemed to have a limited utility to me. It could help with skills, it could help engage a student dynamically so that a teacher would be available to students to talk to them in real time. So in that sense, classroom management, which is trying to hold everybody together at the same time, is a very difficult skill. An AI tutor could easily alleviate some of that burden and some of that pressure on the system. But really, I think that AI as paraprofessionals will be the most important thing.

    So if I was going to give an educator advice, I would say... and I'll plug this organization because I really like them: there's a great new organization called Side... they're working with teachers, getting them working with generative AI and then pairing them with each other to talk about what they're learning. But I would tell every teacher: start using it and see how you can make it do what you want it to do, because it will only get better. It'll be better at understanding what you want. Get an account, pay the 20 bucks a month. Maybe it seems expensive. Claude, I think, is a little bit less, and most teachers are using Google Classroom anyway, so they can use Gemini. All of the services have very different strengths and personalities, but ChatGPT is sort of the more personable and familiar one. So for 20 bucks a month, you just start working with the chatbot and seeing what you can get it to do and what it's useful for.

    You know, have conversations. I think as a reflection tool... so the bell rings, you sit down and you go, "Here's what happened in class. This is what I saw. This is how I did my lesson. Why did everyone's eyes glaze over until this point? What was happening? Or I felt really distracted, or I didn't have enough copies." And what generative AI is good at doing is going back and forth with you to reflect. So it will talk to you about what you were doing in a way that lets you process it, which is why I find it really interesting because I like to talk and to think. And so ChatGPT is the only thing that doesn't get exhausted by me.

    Lydia Kumar: It can be your permanent thinking partner, and it's very, very patient. Well, I'm going to lead us to the last question, and that is one idea or question about AI that you can't stop thinking about lately. I know there's lots of things on your mind, but leave us with one.

    Dox Brown: When are we going to see a tapering off of its speed and processing power so that it gets wide-scale adoption? And I think the other question I have really is, how is it going to function once everyone has their own hyper-personalized AI? When everyone has their KITT and their Jarvis, what does it look like? How agentic will it be? How integrated will it be when you work at a company? Are you going to have a contract with them that says, "Look, when you leave here, we'll pay you $20,000 to have access to everything that you interacted with your AI about that we think is relevant," or, "You don't have to take it, and all the intellectual property and your reflections with your AI are yours," right?

    Because ultimately those interactions with your generative AI are going to be valuable. It's going to be where you're thinking about things and reflecting on things. And you know, they just found that the largest use case now for AI is emotional support. That is maybe wonderful, given the mental health crisis that we see, and also potentially apocalyptic: if AI does not help us learn how to communicate more effectively with each other, if we don't learn how to understand each other better through the use of AI, then AI will accelerate forces that hurt us in the long run.

    Lydia Kumar: Absolutely. I think that's a really important and heavy thing to think about, because this technology hopefully will help us draw closer together as humans and be more connective. But there's always the possibility that that's not true for everyone. And so it's important to think about what that looks like, not just as people, but as educators, workers, anyone who interacts with other people or has a family.

    Dox Brown: Yeah. Which makes me think also: what are the implications for AI if business ceases to be the primary driver? Profitability will still matter, it will have a role in business, but for example, yes, we monetize literature, yet literature and music exist for their own purposes. If AI becomes as ubiquitous as everyone thinks it's going to be, how will the character of it change?

    Lydia Kumar: Thank you so much for having this conversation and thinking alongside me. I think this is one of the most interesting things to talk and think about because it has profound implications for our society and the future of work and education. This is going to change how we learn and how we interact. And so it's important to think about and pay attention to.

    Dox Brown: I'm strongly supportive of anything that makes people think more.

    Lydia Kumar: What a fascinating and expansive conversation with Dox Brown. I'm particularly struck by the vital role of language in co-creating our shared reality, and by how generative AI both challenges this and potentially accelerates changes to it. His ideas around dynamic movement between cognitive networks offer such a rich lens for understanding what truly makes us human in an age of intelligent machines, and what we need to cultivate in ourselves and in education.

    A huge thank you to Dox Brown for sharing his profound insights and his journey with us. On our next episode of Kinwise Conversations, we'll bring these big ideas right into the world of a creative entrepreneur. I'll be joined by Jen Murphy, a wedding planner, who is using AI in surprisingly practical ways to enhance her very human-centric business. It's a fantastic look at AI from the ground up in a small business setting. If Dox's reflections on what it means to be human alongside AI resonated, you may also appreciate the perspective shared by Celia Jones on integrating technology with personal values and human connection.

    I hope this discussion has encouraged you wherever you are on your own AI path. If you enjoyed this conversation, please subscribe and consider leaving a review. It truly helps other thoughtful listeners find us. You can learn more about how to approach AI with intention, explore resources, and join the Kinwise Collective by visiting kinwise.org. And if you or someone you know is doing interesting work at the intersection of AI and humanity and has a story to share, we'd love to hear from you. Until next time, stay curious, stay grounded, and stay Kinwise.

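Prompts to try, inspired by this conversation:
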
    1. For Rethinking an Assignment (Inspired by Redefining Education): "I am a college professor who used to assign a 10-page research paper on the economic impacts of globalization. Now that AI can generate this easily, I need a new final assessment. Design a two-part project where students must: 1) Use an AI tool to conduct research and generate an initial policy brief. 2) Participate in a 15-minute, in-class 'Policy Defense' where they must verbally justify their brief's key recommendation, respond to two counterarguments provided by you (the AI), and connect their topic to a current news event. The goal is to assess their real-time, dynamic thinking ('hollisis'), not just their research synthesis."

    2. For Exploring 'Hollisis' in Professional Skills: "Analyze the role of a [e.g., User Experience Designer]. List five core tasks associated with this role. For each task, classify it as primarily relying on 'Analysis/Synthesis' (tasks AI is good at) or 'Hollisis' (tasks requiring dynamic, embodied, and emotional cognition). Based on this classification, suggest a professional development plan for a UX Designer that focuses on strengthening their uniquely human, 'hollisis'-based skills to make them more valuable in an AI-assisted workplace."

    3. For Combating Epistemic Drift: "Act as a communication coach. I need to explain the complex and sensitive topic of '[e.g., a new company-wide budget cut]' to three different internal teams: the engineering team (focused on data and logic), the sales team (focused on morale and client impact), and the HR team (focused on personnel and legal procedure). Your task is to draft a core message that establishes a shared understanding and value. Then, write a short paragraph for each team, adapting the core message with language and examples that will resonate with their specific 'dialect' and concerns, with the goal of fostering unity rather than division."

    4. For Simulating Neurodiversity as an Advantage: "Create a business scenario where a company is facing a sudden, unexpected market disruption. The leadership team is stuck in analytical paralysis, trying to perfect a single response. Introduce a team member who exhibits 'foraging' and 'distractibility' traits (often associated with ADHD). Describe how this person's tendency to quickly explore multiple, seemingly unrelated ideas, abandon failing paths early, and connect disparate pieces of information leads to the innovative breakthrough that the more focused team members missed."

    5. For Exploring Future AI Scenarios: "Act as a futurist sociologist. Write a short scenario from the year 2045 where every individual has a hyper-personalized AI ('MyAI'). Explore the legal and social consequences related to intellectual property. A person and their MyAI co-create a revolutionary new song. Who owns the copyright: the person, the AI, the company that made the AI, or is it a new legal category altogether? Outline the arguments for each position."
