
Today, during a group meeting with colleagues who facilitate professional development on generative AI for faculty, our conversation took an unexpected turn. Usually, we focus on practical applications—how to help faculty use generative AI to support student success, streamline workflows, or improve engagement. But today felt different. It felt bigger. We found ourselves in a more philosophical space, wrestling with what it means to exist in a world where AI is shaping knowledge and decision-making in ways that most people don’t fully grasp.
And as we talked, I kept thinking about Octavia Butler’s Patternist series.
In Butler’s world, power isn’t just about strength—it’s about access. The Pattern is a vast mental network that connects the actives—those who have the ability to shape reality—while leaving others, the mutes, outside the system. The mutes aren’t just powerless; they don’t even recognize the full extent of what they’re missing. Their world is controlled by forces they can’t see, and they function within a structure they didn’t create.
It’s an unsettling parallel to the way generative AI is rapidly dividing people into those who understand and shape it and those who will be shaped by it.
I’ve been thinking a lot about what that means for the people I work with—teachers, parents, psychologists, and students, particularly those with disabilities and cognitive impairments. Many of the students I evaluate have IQs in the 70–85 range: not high enough for abstract reasoning to come easily, but high enough to function independently in daily life. Many of the teachers and families I work with lack exposure to emerging technologies and struggle with digital literacy. None of this is about intelligence or effort—most are doing the best they can with the skills and resources available to them.
But what happens when critical thinking and AI fluency become the dividing line between those who can navigate the world and those who are at the mercy of it?
This is why I keep coming back to Robert Kegan’s Constructive-Developmental Theory, which focuses not just on intelligence but on how people develop the ability to process complexity. AI isn’t just a tool—it’s shifting the landscape of how meaning is made. And right now, many people are engaging with AI passively—consuming whatever it generates without question, like someone driving a car with an automatic transmission. They can operate within the system, but they don’t truly understand what’s happening under the hood.
Then there are those who approach AI manually—people who know how to guide it, refine responses, challenge bias, and recognize where the system is leading them. These people have more options, more adaptability. They aren’t just using AI—they’re shaping it.
Neither approach is inherently better. But a problem arises when the world is designed only for those who can work in that manual mode. If generative AI continues to shape education, communication, and decision-making, those who don’t develop AI literacy will be forced into passive dependence, unable to tell whether what they’re being fed is real, useful, or even ethical. And I’m not convinced that everyone will gain these skills on their own.
This is especially true for students with learning disabilities. Generative AI has the potential to be an equalizing force—a tool that can scaffold complex ideas, assist with communication, and provide access to information in ways traditional methods can’t. But it’s only an equalizing force if it is explicitly taught. Without guidance, it’s just another confusing technology. With structured, intentional instruction, it becomes a tool for autonomy and empowerment.
I keep coming back to Clay Dana, a character in Butler’s Patternist series, who stands at the intersection of the actives and the mutes. He isn’t fully aligned with the elite, nor is he disconnected from those without power. He is both inside and outside the system, navigating both worlds, trying to bridge the gap.
And I realize—that’s where I am, too.
I work with the most vulnerable learners. I see how AI will widen the gap between those who question and those who accept. I understand that generative AI is more than a tool—it’s a new form of literacy. And I have the ability to translate complexity, to help those at risk develop just enough awareness to hold onto their agency.
So now I’m asking myself: What’s the best way to do that? Where do I start?
Because if we do nothing, the Pattern will form without us.
And those without AI fluency?
They won’t even realize they’ve lost control of the wheel.