
Canonical Values Update

Published on June 27, 2025



Overview

Following the IEEE ICAD 2025 Conference, I am releasing an updated derivation of the 12 Canonical Values of the AI Moral Code. This marks a significant refinement of the earlier 2024 version, supported by deeper semantic modeling and simulation-resilience analysis.

Methodology Summary

The update draws on over 291 AI ethics documents published between 2006 and 2025. Using Sentence-BERT embeddings and a sector-weighted semantic scaffold, we derived the values not by frequency alone but by their performance in simulations and their contextual salience across regulatory, academic, industrial, NGO, and religious frameworks.
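As a rough illustration of the embedding step, the sketch below scores candidate value terms against document passages using the sentence-transformers library. The model name, seed terms, and passages are assumptions for demonstration, not the project's actual pipeline.

```python
# Illustrative only: score passages against candidate value terms with
# Sentence-BERT embeddings. Model, seed terms, and passages are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose model

candidate_values = ["fairness", "dignity", "trust", "privacy"]  # hypothetical seed terms
passages = [  # stand-ins for sentences drawn from the ethics corpus
    "Systems shall ensure equitable outcomes for all affected groups.",
    "Personal data must be protected against unauthorized access.",
]

value_emb = model.encode(candidate_values, convert_to_tensor=True)
passage_emb = model.encode(passages, convert_to_tensor=True)

# Cosine similarity matrix: rows = passages, columns = candidate values.
scores = util.cos_sim(passage_emb, value_emb)
for i, passage in enumerate(passages):
    best = scores[i].argmax().item()
    print(f"{candidate_values[best]:>8s}  {scores[i][best].item():.2f}  | {passage}")
```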

Each value’s Composite Value Score (CVS) was computed using:

CVS = ∑ (TF × IDF × SWI × CM)

Where:

  • TF (Term Frequency) × IDF (Inverse Document Frequency) anchors each term's frequency in context
  • SWI (Sector Weight Index) calibrates sectoral legitimacy
  • CM (Contextual Multiplier) boosts terms in high-impact zones (titles, principle clauses) and incorporates confidence scoring and simulation relevance

This formula grounds the values in linguistic relevance, sectoral legitimacy, and ethical utility.
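To make the aggregation concrete, here is a minimal sketch in Python, assuming per-document TF-IDF scores and hypothetical SWI and CM weights; the project's actual scaffold, including its confidence scoring, is more involved than shown.

```python
# Minimal sketch of the CVS aggregation: sum TF*IDF*SWI*CM over the
# documents in which a value term appears. All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Occurrence:
    tf_idf: float  # TF x IDF for the value term in one document
    swi: float     # Sector Weight Index of the document's source sector
    cm: float      # Contextual Multiplier (e.g., >1 in titles or principle clauses)

def composite_value_score(occurrences: list[Occurrence]) -> float:
    """CVS = sum over documents of TF * IDF * SWI * CM."""
    return sum(o.tf_idf * o.swi * o.cm for o in occurrences)

# Hypothetical occurrences of "fairness" across three documents:
fairness = [
    Occurrence(tf_idf=0.42, swi=1.2, cm=1.5),  # regulatory doc, principle clause
    Occurrence(tf_idf=0.31, swi=1.0, cm=1.0),  # academic paper, body text
    Occurrence(tf_idf=0.27, swi=0.8, cm=1.5),  # NGO report, term in the title
]
print(f"CVS(fairness) = {composite_value_score(fairness):.3f}")
```

Summing per-document products means a value's score rises both with how often it appears and with where, and in which sectors, it appears.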

Updated Canonical Values

Core Moral Values

  • Beneficence – Promoting well-being and flourishing
  • Dignity – Recognizing intrinsic human worth
  • Fairness – Ensuring equitable outcomes
  • Justice – Sustaining procedural integrity
  • Responsibility – Accepting moral and institutional accountability
  • Trust – Enabling predictable, co-functional systems

Instrumental Values

  • Innovation – Supporting ethical adaptability and progress
  • Sustainability – Ensuring long-term system viability

Conditional Values

  • Accountability – Structuring enforceable responsibility
  • Autonomy – Respecting freedom and human agency
  • Inclusivity – Recognizing and integrating diverse perspectives
  • Privacy – Protecting identity and informational boundaries
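For readers who want to work with the taxonomy programmatically, one possible plain-data encoding is sketched below; the tier names and groupings mirror the lists above, while the structure itself is an illustrative choice rather than part of the framework.

```python
# One possible encoding of the three-tier taxonomy as plain data.
CANONICAL_VALUES: dict[str, list[str]] = {
    "core_moral": [
        "Beneficence", "Dignity", "Fairness",
        "Justice", "Responsibility", "Trust",
    ],
    "instrumental": ["Innovation", "Sustainability"],
    "conditional": ["Accountability", "Autonomy", "Inclusivity", "Privacy"],
}

# Sanity check: the tiers together hold the 12 canonical values.
assert sum(len(v) for v in CANONICAL_VALUES.values()) == 12
```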

What’s New

Compared to the prior set, Beneficence, Innovation, and Autonomy gained formal inclusion after demonstrating stable performance across evolving simulation environments and policy scenarios. Values such as Transparency and Explainability, while crucial, are now treated as instrumental structures that enable higher-order values such as Trust and Accountability.

Introducing the Ethical Salience Tracker (EST)

To future-proof this framework, I introduced the Ethical Salience Tracker (EST)—a recalibration tool tracking how canonical values evolve over time. It monitors:

  • New legislation (e.g., EU AI Act, U.S. Algorithmic Accountability Act)
  • Semantic shifts in AI discourse
  • Sector-specific ethical realignment

Each value now carries a volatility metric that allows ongoing updates to the moral core without compromising its philosophical integrity.
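The volatility computation itself is not specified in this post; one plausible reading, sketched below, is a dispersion measure over year-to-year changes in a value's salience score, shown here with hypothetical CVS trajectories.

```python
# Hedged sketch of a per-value volatility metric: standard deviation of
# year-over-year CVS deltas. Higher values = less stable ethical salience.
# The EST's actual metric is not published here; this illustrates the idea.
import statistics

def volatility(yearly_cvs: list[float]) -> float:
    """Std. dev. of year-over-year CVS changes for one canonical value."""
    deltas = [b - a for a, b in zip(yearly_cvs, yearly_cvs[1:])]
    return statistics.stdev(deltas) if len(deltas) > 1 else 0.0

# Hypothetical yearly CVS trajectories, 2021-2025:
print(f"trust:   {volatility([0.61, 0.63, 0.62, 0.64, 0.63]):.3f}")  # stable
print(f"privacy: {volatility([0.48, 0.57, 0.51, 0.66, 0.58]):.3f}")  # volatile
```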

Conclusion

This refined canonical set enables a more robust, empirically grounded moral code for AI. The AI Moral Code Project remains open to iteration and input through aimoralcode.org. Future updates will integrate feedback from simulation testing, international ethics councils, and real-world deployments.


📄 Download the full white paper (PDF). This paper is part of the AI Moral Code research series and is under consideration for formal publication.

© 2025 Randy J. Hinrichs. Citation and academic use permitted with attribution.
Commercial use, derivative works, or redistribution prohibited without written consent.