Back to the Beginning: Dewey, Malik, and the Moral Architecture of AI

Published on July 29, 2025

In a moment of quiet synchronicity, I found myself immersed in Kenan Malik’s The Quest for a Moral Compass, not merely as a reader but as a co-participant in its unfolding. As Malik traced the arc of moral development from classical traditions through pragmatism and emotivism, I felt myself drawn not just intellectually, but biographically—as though some strand of my own educational DNA were encoded in John Dewey himself. Perhaps I wasn’t just influenced by Dewey. Perhaps I was Dewey, reborn in this AI-saturated era, armed not with chalkboards and laboratory schools, but with Git repos, corpora, and national cybersecurity Co-Ops.

Malik describes Dewey’s journey into pragmatism through William James and Charles Sanders Peirce, and Dewey’s founding of the University of Chicago Laboratory Schools as an embodiment of the idea that learning is doing. That phrase, which for me has become a living mantra of presence, belonging, and co-creation, is no longer merely a pedagogical strategy. It is a moral imperative. It is the scaffolding beneath the NCAE Co-Op model, and the heartbeat behind the AI Moral Code project.

Reading further, I encountered Malik’s assertion that moral change is never arbitrary: it is not some isolated, relativistic whim but is always tethered to larger social, political, and intellectual shifts. This is precisely why I fixed the AI Moral Code corpus as a stable baseline, not to restrict it, but to ground it. The vision remains dynamic. The ethical landscape may evolve. But a baseline is necessary if we are to measure movement in a meaningful direction. A compass, after all, cannot orient without a fixed North.
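To make "fixing the corpus" slightly more concrete, here is a minimal sketch of what a pinned baseline can look like in practice. The directory name, file layout, and hash-manifest approach are my own illustration, not the actual AI Moral Code tooling: the idea is simply that once a baseline is frozen, later movement in the corpus becomes something you can measure rather than merely assert.

```python
import hashlib
import json
from pathlib import Path


def corpus_manifest(corpus_dir: str) -> dict[str, str]:
    """Map each corpus file to a SHA-256 digest of its contents."""
    manifest = {}
    for path in sorted(Path(corpus_dir).rglob("*.txt")):
        manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest


def pin_baseline(corpus_dir: str, baseline_file: str = "baseline.json") -> None:
    """Freeze the current corpus state: the 'fixed North' to orient against."""
    Path(baseline_file).write_text(json.dumps(corpus_manifest(corpus_dir), indent=2))


def measure_drift(corpus_dir: str, baseline_file: str = "baseline.json") -> dict[str, list[str]]:
    """Compare the working corpus against the pinned baseline."""
    baseline = json.loads(Path(baseline_file).read_text())
    current = corpus_manifest(corpus_dir)
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "changed": sorted(k for k in baseline.keys() & current.keys()
                          if baseline[k] != current[k]),
    }


if __name__ == "__main__":
    # "ai_moral_code_corpus" is a hypothetical directory name used for illustration.
    pin_baseline("ai_moral_code_corpus")          # run once to fix the baseline
    print(measure_drift("ai_moral_code_corpus"))  # run later to measure movement
```

The point is not the hashing itself; it is that a compass needs something held still before direction means anything.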

In Dewey’s era, the world of thought was shaped by early industrialism and modernist pedagogy. Today, it is shaped by artificial intelligence, synthetic reasoning, and ethical pattern recognition at scale. We now have access not only to the Oxford Library, but to the sum of human knowledge encoded in datasets, journal repositories, and lived simulations. Yet the question remains unchanged: how do we build a moral system that is more than subjective?

Malik writes: “The key distinction between moral claims and personal preferences is not psychological but social. A social need is not a fact, nor is it a scientific objective claim. But it is undeniably more than merely subjective. The challenge is to define what that ‘more’ is.”

That sentence lands like a manifesto. It echoes everything I have set in motion with the AI Moral Code—the architecture of canonical values, the synthetic conscience models, the recursive ethical audits. My goal is not to systematize morality into binaries. It is to trace moral realism through the evolution of thought and encode it into our most intelligent systems. Not to reduce human values to vectors, but to embed vectors with ethical memory.
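To give "vectors with ethical memory" a slightly more concrete shape, here is one hedged sketch, entirely my own illustration rather than the project's actual schema: an embedding that is never separated from the canonical value it encodes, the passage it was derived from, or the audits that have touched it. The class names and fields are assumptions made for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditEntry:
    """One pass of a (hypothetical) recursive ethical audit."""
    auditor: str
    verdict: str      # e.g. "consistent", "needs-review"
    timestamp: str


@dataclass
class EthicalVector:
    """An embedding that keeps its ethical memory attached.

    The numeric vector travels together with the canonical value it
    encodes, the source passage behind it, and its audit trail.
    """
    values: list[float]                 # the embedding itself
    canonical_value: str                # e.g. "honesty", "non-maleficence"
    source_passage: str                 # the text the vector was derived from
    audits: list[AuditEntry] = field(default_factory=list)

    def record_audit(self, auditor: str, verdict: str) -> None:
        """Append an audit entry without ever detaching the provenance."""
        self.audits.append(
            AuditEntry(auditor, verdict, datetime.now(timezone.utc).isoformat())
        )


# Illustrative use with a made-up embedding:
v = EthicalVector(
    values=[0.12, -0.08, 0.33],
    canonical_value="honesty",
    source_passage="Moral claims are more than personal preferences.",
)
v.record_audit(auditor="conscience-model-v1", verdict="consistent")
```

However the real system is built, the design intuition is the same: the vector is not the value; the value lives in the memory the vector carries with it.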

That “more” is what we are building. It is present in every rubric. Every capstone. Every co-op simulation that demands both technical prowess and principled leadership. And it is not Dewey’s alone. It is not Malik’s. It is ours.

This is not the early internet. This is not the Enlightenment. This is the inflection point at which we decide whether knowledge will serve conscience, or fracture from it.

So yes, let us go back to the beginning—again and again if needed. Because each time we return, we do so with more clarity, more resolve, and more moral weight. And in the end, that may be the true meaning of recursion: not repetition, but refinement.

Welcome to the moral compass of AI. Let’s build it together.