From Framework to Field Test: The AI Moral Code in Action at IEEE ICAD 2025
Published on July 01, 2025
The AI Moral Code Evaluates Seven Breakthrough AI Papers
By Ran Hinrichs, Norwich University, Vermont
At this year's IEEE ICAD Conference in Boston, where adaptive systems and ethical AI design took center stage, we conducted a real-time field test of the AI Moral Code. For the first time, a moral framework was applied not retrospectively but during the conference sessions themselves. Across four tracks, seven papers were evaluated for ethical salience, structural embedding, and design foresight.
In my presentation, Advancing Ethical AI: A Methodological and Empirical Approach to the AI Moral Code, I introduced a values-based framework drawn from nearly two decades of AI ethics literature. But what marked a turning point was the use of the Code not as post hoc critique, but as a live instrument for analyzing technical artifacts as ethical test cases.
Full Presentation Now Available
You can access the slides from my IEEE ICAD 2025 talk here:
Download the Full Presentation (PDF)
The Method: Applying the AI Moral Code
Each paper was analyzed using the NRBC framework (Normative, Regulatory, Behavioral, and Conceptual) to assess:
- Whether moral values are structurally embedded in system design
- Whether regulatory logic aligns with enforceable ethical practices
- Whether behavior reflects clear operational norms
- Whether conceptual framing supports transparent ethical reasoning
This model allowed us to treat each system as a value-laden construct, diagnosing not only what it does but also what it morally assumes. Abstracts were analyzed for ethical intent, value implementation, and long-range societal impact. Below are the findings from seven representative papers.
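To make the method concrete, here is a minimal sketch, in Python, of how an NRBC reading of a paper could be captured as structured data rather than free-form notes. The class names (`Dimension`, `NRBCAssessment`) and the example findings are illustrative assumptions of mine, not artifacts of the AI Moral Code or of any paper assessed below.

```python
# Hypothetical sketch: recording an NRBC reading as structured data.
# Names and example findings are illustrative, not part of the published framework.
from dataclasses import dataclass


@dataclass
class Dimension:
    """One NRBC dimension: the question asked and whether the value is embedded in design."""
    question: str
    finding: str
    embedded: bool


@dataclass
class NRBCAssessment:
    paper: str
    normative: Dimension
    regulatory: Dimension
    behavioral: Dimension
    conceptual: Dimension

    def gaps(self) -> list:
        """Dimensions where a value is named or implied but not structurally embedded."""
        dims = {
            "normative": self.normative,
            "regulatory": self.regulatory,
            "behavioral": self.behavioral,
            "conceptual": self.conceptual,
        }
        return [name for name, dim in dims.items() if not dim.embedded]


# Illustrative reading of a hypothetical feedback-analysis workflow.
example = NRBCAssessment(
    paper="Example: automated customer-feedback analysis",
    normative=Dimension(
        "Are moral values structurally embedded in system design?",
        "Accountability is implied but not encoded in the pipeline.", False),
    regulatory=Dimension(
        "Does regulatory logic align with enforceable ethical practices?",
        "No reviewable rule ties theme weighting to a standard.", False),
    behavioral=Dimension(
        "Does behavior reflect clear operational norms?",
        "Scoring follows a repeatable, documented workflow.", True),
    conceptual=Dimension(
        "Does conceptual framing support transparent ethical reasoning?",
        "Epistemic trust is discussed but not operationalized.", False),
)

print(example.gaps())  # -> ['normative', 'regulatory', 'conceptual']
```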
Ethical Evaluation Using the AI Moral Code
Framework: Normative, Regulatory, Behavioral, Conceptual (NRBC)
Method: NRBC Structure • Design Evaluation • Ethical Embedding
Source: IEEE ICAD 2025 Proceedings
1. Automating Voice of Customer Analysis with AI Workflows Built on GPT-4o-Mini
Authors: Kenneth Crowther & Pietro Aldo Refosco (Xylem Inc.)
Track: AI for Business Intelligence and Analytics
- Ethical Focus: Accountability and Epistemic Trust
- Embedding: Automates sentiment analysis with GPT-4o-Mini, reducing human bias but obscuring how importance is assigned to feedback themes.
- Enhancement: A Value Rationale Log, annotating cluster weightings with fairness, urgency, and well-being, would bridge technical precision with ethical transparency (a minimal sketch follows the paper list below).
2. Algorithmic Literacy and Digital Privacy in the US: An Exploratory Study Using Data Visualization
Authors: Haijing Tu (Indiana State University), Rahul Devajji (Indiana University Bloomington), Tyler Horan (UMass Amherst)
Track: Algorithmic Literacy
- Ethical Focus: Autonomy and Inclusivity
- Embedding: Exposes disparities in algorithmic understanding across demographic lines. Links education to digital control.
- Enhancement: Embedding value-aligned literacy modules in AI systems would operationalize epistemic justice and support conceptual fairness at scale.
3. CB-RML: Dynamic Regret Minimization via Coin-Betting Regularization and Meta-Learning
Author: Sourav Dutta (Ramapo College of New Jersey)
Track: Algorithmic Literacy
- Ethical Focus: Adaptability and Transparency
- Embedding: A parameter-free system that adapts to concept drift without clear stakeholder awareness.
- Enhancement: An Ethical Drift Log, tracking shifts with reason codes, would enhance stakeholder trust and system accountability.
4. Trends in US Healthcare Data Breaches
Author: Li Xu (The University of Arizona)
Track: Data Analytics
- Ethical Focus: Privacy, Trust, System Resilience
- Embedding: Shows breach escalation and AI's partial role in mitigation; human error remains dominant.
- Enhancement: Value-tagging breach responses and reframing data loss as ethical failure would align response mechanisms with moral transparency.
5. ResNet-Enhanced DFSA: A Time-Efficient UHF RFID Inventory System for Large-Scale Applications
Authors: Heyi Li (MIT), Sobhi Alfayoumi (UOC), Marta Gatnau Sarret (UOC), Rahul Bhattacharyya (MIT), Joan Melia Segui (UOC), Sanjay Sarma (MIT)
Track: AI in Finance and FinTech
- Ethical Focus: Efficiency, Fairness, System Reliability
- Embedding: Enhances slot allocation in dense RFID environments using AI classifiers, saving significant time.
- Enhancement: A Traceability Index would log slot decisions and classifier confidence, enabling fairness audits in logistical and financial ecosystems.
6. Iterative Updating of Digital Twins Using Convolutional Neural Networks: A Framework for Robust Structural Behavior Prediction
Authors: Zahra Zhiyanpour, Zhidong Zhang, Devin Harris (University of Virginia)
Track: AI Edge
- Ethical Focus: Reliability, Traceability, Human Oversight
- Embedding: Uses CNNs to update infrastructure models based on experimental ground truth, enabling real-time refinement.
- Enhancement: A Model Evolution Ledger could record each iteration's ethical rationale, ensuring civic accountability in automated decision pipelines.
7. Natural Language Interface for Queries on Databases with Sensitive Information
Authors: Suli Adeniye, Faisal Al-Atawi, Arunabha Sen (Arizona State University)
Track: Health Care
- Ethical Focus: Privacy, Accessibility, Institutional Trust
- Embedding: LLMs translate natural queries into Cypher while preserving user privacy via schema-only access and entity masking.
- Enhancement: A Consent Verification Layer would document what was queried and why, maintaining regulatory clarity and user comprehension.
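Several of the enhancements above (the Value Rationale Log, Ethical Drift Log, Traceability Index, and Model Evolution Ledger) share the same basic shape: an append-only record that pairs an automated decision with the values it touches and a plain-language rationale. As one illustration, here is a minimal sketch of what the Value Rationale Log proposed for the customer-feedback workflow might look like. The class names (`ValueRationaleEntry`, `ValueRationaleLog`), the JSONL storage choice, and the example data are assumptions of mine, not details from the paper or the Code.

```python
# Hypothetical sketch of a Value Rationale Log: each cluster weighting produced
# by a feedback-analysis workflow is annotated with the values it touches
# (fairness, urgency, well-being) and a reviewable rationale for later audit.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ValueRationaleEntry:
    cluster: str              # feedback theme produced by the workflow
    weight: float             # importance assigned to the theme
    values_considered: dict   # value -> short note on how it influenced the weight
    rationale: str            # why this weight, in plain reviewable language
    timestamp: str


class ValueRationaleLog:
    """Append-only log pairing every weighting decision with its ethical rationale."""

    def __init__(self, path: str = "value_rationale_log.jsonl"):
        self.path = path

    def record(self, entry: ValueRationaleEntry) -> None:
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(entry)) + "\n")


# Illustrative usage with made-up data.
log = ValueRationaleLog()
log.record(ValueRationaleEntry(
    cluster="billing complaints",
    weight=0.82,
    values_considered={
        "fairness": "complaints span all customer segments, not just high-value accounts",
        "urgency": "recurring charges affect customers until resolved",
        "well_being": "financial stress reported in several comments",
    },
    rationale="High weight because the theme is widespread and time-sensitive.",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

An Ethical Drift Log or Model Evolution Ledger would differ mainly in what triggers an entry (a drift adaptation or a model update) rather than in its structure.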
Why This Matters
This marks the first live deployment of the AI Moral Code as a systematic evaluation framework across real-world AI implementations. Rather than relying on principle-stating or ethics-washing, the Code enables traceable moral logic, visible within technical decisions, design filters, and system behaviors.
It demonstrates what many theorists only speculate about: that fairness, trust, and responsibility are not abstract values. They are implementable, auditable, and scalable. Whether you are an engineer, policymaker, or educator, this field test shows that ethical design is not a luxury; it is a necessity.
AI Moral Code Challenge
Review the full ICAD 2025 proceedings and apply the AI Moral Code to one of the above, or another paper of your choice. Consider:
- Which values from the Code appear or are missing?
- What design modifications would enhance moral resilience?
- Where does the Code reorient your interpretation of the work?
We invite you to share your mapping in the comments or via the New Blog Post form. Let's build ethical design together.