
Exploring Human-AI Boundaries: Consciousness, Morals, and the Simulated Experience

  • Anshul Garg
  • Sep 5
  • 3 min read

Artificial Intelligence (AI) has achieved remarkable capabilities in recent years. Machines can drive cars, diagnose diseases, optimize logistics, and even generate creative content. Yet, as AI grows more sophisticated, one fundamental question remains:

Can AI ever replicate the human experience — our feelings, sense of self, consciousness, and moral reasoning?


This blog presents a research-driven, exploratory framework for studying these questions, integrating ancient literature, modern dilemmas, and computational modeling to map the human-AI divide.

Hierarchical Framework: Human Layers of Experience

We built a layered model to capture human experience:


Feeling → Being → Consciousness → Attachment → Morals → Ethics → Habits → Goals → Memory/Salience → Simulated Dilemma Tension (SDT)


[Figure: Human-AI Boundaries layered model]

Layer Definitions

Layer | Human Characteristic | AI Simulation / Formalizable?
Feeling (F) | Raw sensations, empathy, joy, fear | ❌ Not computable; AI can only simulate responses
Being (B) | Self-awareness, personal identity | ❌ Not computable; AI tracks states but lacks lived experience
Consciousness (C) | Awareness of internal/external states | ⚠️ Partial; AI tracks states, reflection, attention
Attachment (A) | Relational bonds, loyalty, care | ✅ Simulated via weighted importance of entities or goals
Morals (Mo) | Internalized principles of right/wrong | ✅ Rule-based computation
Ethics (E) | Applied moral reasoning | ✅ Computed from Morals × Attachment × Goals × Memory
Habits (H) | Learned behaviors | ✅ Reinforcement learning applied
Goals (G) | Motivation and purpose | ✅ Vector influencing ethical prioritization
Memory / Salience (M) | Event significance, perceived frequency | ✅ Weighted importance of past events
Simulated Dilemma Tension (SDT) | Quantifies ethical/moral conflict | ✅ Computed metric guiding AI decisions
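To make the layer boundary explicit in a simulation, the layers can be encoded as plain data with a computability flag. The following Python sketch is illustrative only; the names (Computability, LayerSpec, LAYERS) are assumptions, not part of the framework itself.

from dataclasses import dataclass
from enum import Enum

class Computability(Enum):
    NOT_COMPUTABLE = "human-only"       # Feeling, Being
    PARTIAL = "partially simulable"     # Consciousness
    COMPUTABLE = "simulable"            # remaining layers

@dataclass(frozen=True)
class LayerSpec:
    code: str
    name: str
    computability: Computability

LAYERS = [
    LayerSpec("F", "Feeling", Computability.NOT_COMPUTABLE),
    LayerSpec("B", "Being", Computability.NOT_COMPUTABLE),
    LayerSpec("C", "Consciousness", Computability.PARTIAL),
    LayerSpec("A", "Attachment", Computability.COMPUTABLE),
    LayerSpec("Mo", "Morals", Computability.COMPUTABLE),
    LayerSpec("E", "Ethics", Computability.COMPUTABLE),
    LayerSpec("H", "Habits", Computability.COMPUTABLE),
    LayerSpec("G", "Goals", Computability.COMPUTABLE),
    LayerSpec("M", "Memory/Salience", Computability.COMPUTABLE),
    LayerSpec("SDT", "Simulated Dilemma Tension", Computability.COMPUTABLE),
]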

How AI Can Simulate These Layers

We formalize ethical and decision-making simulations using quantitative metrics:


Ethical Decision Score

EthicalDecisionScore = Σ (MoralWeight × AttachmentWeight × MemoryWeight × GoalWeight)
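As a rough Python sketch, the score can be computed per candidate action by summing over the moral principles that action engages; the class and function names, and the assumption that each weight lies in [0, 1], are illustrative rather than part of the published framework.

from dataclasses import dataclass

@dataclass
class PrincipleFactor:
    moral_weight: float       # Mo: strength of the principle involved
    attachment_weight: float  # A: importance of the entities affected
    memory_weight: float      # M: salience of relevant past events
    goal_weight: float        # G: alignment with current goals

def ethical_decision_score(factors: list[PrincipleFactor]) -> float:
    # Sum of MoralWeight × AttachmentWeight × MemoryWeight × GoalWeight
    return sum(f.moral_weight * f.attachment_weight * f.memory_weight * f.goal_weight
               for f in factors)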


Simulated Dilemma Tension (SDT)

SDT = MaxConflict(EthicalDecisionScores across competing options)

  • Measures conflict intensity in multi-principle dilemmas

  • Guides AI prioritization without claiming subjective feeling
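One possible reading of MaxConflict, adopted as an assumption in the sketch below: tension is highest when the two strongest options are nearly tied (a hard choice) and lowest when one option clearly dominates.

def simulated_dilemma_tension(option_scores: list[float]) -> float:
    # Tension in [0, 1]: 1.0 when the two best options are tied,
    # near 0 when a single option clearly dominates.
    if len(option_scores) < 2:
        return 0.0
    top, runner_up = sorted(option_scores, reverse=True)[:2]
    return 0.0 if top <= 0 else runner_up / top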


Habits Update

NewHabitScore = OldHabitScore × ReinforcementFactor + EthicsImpact
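Translated directly into Python (parameter names are assumptions): a reinforcement factor above 1 strengthens an established habit, a factor below 1 decays it, and the ethics term nudges behavior toward ethically scored outcomes.

def update_habit_score(old_score: float,
                       reinforcement_factor: float,
                       ethics_impact: float) -> float:
    # NewHabitScore = OldHabitScore × ReinforcementFactor + EthicsImpact
    return old_score * reinforcement_factor + ethics_impact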


Memory Salience

MemoryWeight = EventFrequency × EmotionalContext × OutcomeSignificance
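The salience formula translates the same way; treating all three inputs as normalized scores in [0, 1] is an assumption made for the sketch.

def memory_weight(event_frequency: float,
                  emotional_context: float,
                  outcome_significance: float) -> float:
    # MemoryWeight = EventFrequency × EmotionalContext × OutcomeSignificance
    return event_frequency * emotional_context * outcome_significance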



Dynamic Inter-Layer Feedback

  • Ethics ↔ Habits ↔ Consciousness → emergent patterns

  • Goals influence Ethics: EthicsScore ← EthicsScore × GoalWeight (a minimal sketch follows this list)

  • Attachment amplifies moral weighting; Memory modulates conscious prioritization
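
A minimal sketch of one pass of this feedback, reusing update_habit_score from above; routing the updated ethics score into EthicsImpact is an illustrative assumption.

def feedback_pass(ethics_score: float,
                  goal_weight: float,
                  habit_score: float,
                  reinforcement_factor: float = 0.9) -> tuple[float, float]:
    # Goals influence Ethics: EthicsScore ← EthicsScore × GoalWeight
    ethics_score = ethics_score * goal_weight
    # Ethics feeds back into Habits through the habit update rule (assumption).
    habit_score = update_habit_score(habit_score, reinforcement_factor,
                                     ethics_impact=ethics_score)
    return ethics_score, habit_score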



Case Studies: Testing the Framework


1. Mahabharata Dilemmas

  • Arjuna at Kurukshetra: the duty to fight a just war versus attachment to kin and teachers.

  • Insight: Attachment amplifies moral weighting, so competing principles generate high SDT, exactly the kind of conflict the framework is designed to quantify.


2. Workplace Ethics

  • Reporting a colleague's misconduct: managers weigh loyalty against integrity.

  • Insight: The decision involves attachment, goals, and memory of past behavior, all of which are now formalizable in AI simulations.


3. Life-Threatening Scenarios

  • Jihadi bomber vs hostage: Humans feel fear, responsibility, and moral weight.

  • AI analog: Optimizes outcomes but cannot experience urgency or moral tension. SDT provides a quantitative approximation of decision conflict.


4. AI Task Pressure

  • A research AI with a finite lifespan: a high-stakes task is nearing its deadline.

  • Insight: Goal weighting, SDT, and memory context allow AI to simulate prioritization under tension, even without subjective feeling.
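
As a toy illustration of this case, the sketches above can be chained; the option names and numbers below are arbitrary placeholders, not research results.

# Two competing options for a deadline-pressed research agent (toy numbers).
finish_on_time = ethical_decision_score([
    PrincipleFactor(moral_weight=0.6, attachment_weight=0.7,
                    memory_weight=0.8, goal_weight=0.9),  # meet the deadline
])
verify_results_first = ethical_decision_score([
    PrincipleFactor(moral_weight=0.9, attachment_weight=0.5,
                    memory_weight=0.7, goal_weight=0.6),  # scientific integrity
])

tension = simulated_dilemma_tension([finish_on_time, verify_results_first])
# A high tension value flags a genuinely hard trade-off; the agent can then
# escalate to a human reviewer rather than claim any subjective urgency.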



Human-AI Differences


Layer | Human | AI Simulation
Feeling | Subjective experience, empathy | ❌ Only response simulation
Being | Identity, narrative | ❌ None
Consciousness | Reflection, awareness | ⚠️ Partial, state tracking
Attachment | Emotional/social motivation | ✅ Weighted simulation
Morals | Principle-based reasoning | ✅ Rule-based
Ethics | Applied reasoning | ✅ Computed with SDT
Habits | Learned repetition | ✅ Fully computable
Goals | Motivation and purpose | ✅ Vector for prioritization
Memory | Salience, event weighting | ✅ Weighted memory
SDT | Internal moral tension | ✅ Computed metric



Insights for AI Development

  1. Clarify Capabilities: AI can simulate decision outcomes and conflict, but not feel, be, or care.

  2. Simulated Ethics: SDT, Attachment, Memory, and Goals enrich AI ethical reasoning.

  3. Human-AI Collaboration: AI provides decision support; humans provide subjective judgment, moral tension, and empathy.

  4. Research Potential: The framework supports overlaying ancient, modern, and AI case studies, testing ethical simulations across multiple contexts.


Open Research Questions

  • Can AI ever simulate emergent moral tension convincingly?

  • How can memory salience and attachment weighting evolve in long-term AI systems?

  • Could SDT become a standard metric for ethical AI evaluation in research and deployment?



Diagram Concept (Visual for Publication)

[Feeling (F)] --x--> Human Only

[Being (B)] --x--> Human Only

[Consciousness (C)] ↔ [Memory / Salience (M)]

[Attachment (A)] → [Morals (Mo)] → [Ethics (E)] → [Habits (H)]

[Goal / Purpose (G)] → [Ethics (E)]

[Simulated Dilemma Tension (SDT)] -- Feedback --> [Ethics, Habits, Consciousness]



  • --x--> Human-only, non-computable

  • ↔ Dynamic feedback loops

  • SDT provides quantitative conflict metric



Conclusion

This research provides an exploratory framework to map human experience onto AI simulations:

  • Quantifies ethical and moral conflicts (SDT)

  • Simulates relational, motivational, and memory-driven effects

  • Maintains clear human-only boundaries for Feeling and Being

This is not a claim that AI can feel or be human, but a structured, testable approach for AI researchers to simulate complex ethical reasoning and dilemma prioritization.

This exploratory framework and analysis are part of ongoing research by Anshul Garg, aimed at clarifying the boundaries of human and machine experience.
