NHI Forum
Watch the full video from Stefan H. Farr here: https://www.youtube.com/watchv=957LoA5G58s&ab_channel=TheTrustProtocolwithStefan/?source=nhimg
The video explains one of the most persistent misconceptions in cybersecurity — that what we currently call "digital identity" is actually identity. In reality, the systems we use today do not manage true identities at all, but rather credentials — artifacts that reference a service or account, not the entity (human or machine) behind it.
1. The Core Misconception: Identity vs. Credentials
- Identity in the real world is a unique, internally generated phenomenon: your experiences, attributes, and existence; only you can provide it.
- Credentials (usernames, passwords, tokens, etc.) are external references tied to a point of service (like the address and key to a PO box).
- This means that in digital systems:
  - Authentication = Validating credentials (about the service).
  - Identification = Confirming the agent itself (about the entity).
- Credentials validate access but do not create trust; identification does. This difference becomes critical when events are chained over time (see the sketch below).
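
To make the authentication/identification distinction concrete, here is a minimal Python sketch (an illustration, not code from the video). It assumes the third-party `cryptography` package, and the credential, request, and function names are all hypothetical. Authentication only confirms that a presented secret matches what the service stored, while identification verifies a signature that only the holder of a specific agent's private key could have produced, which is what makes the action attributable.

```python
# Illustrative sketch only, not code from the video. Requires the third-party
# "cryptography" package; the credential and request values are hypothetical.
import hashlib
import hmac

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Authentication: a statement about the service ---
# The service only learns that a matching credential was presented,
# not who presented it; anyone holding the secret passes the check.
STORED_CREDENTIAL_HASH = hashlib.sha256(b"po-box-key-123").hexdigest()


def authenticate(presented_credential: bytes) -> bool:
    presented_hash = hashlib.sha256(presented_credential).hexdigest()
    return hmac.compare_digest(presented_hash, STORED_CREDENTIAL_HASH)


# --- Identification: a statement about the agent ---
# Only the holder of this agent's private key can produce a valid signature,
# so a verified request is attributable to that agent (non-repudiable).
agent_private_key = Ed25519PrivateKey.generate()   # held only by the agent
agent_public_key = agent_private_key.public_key()  # published as the agent's identity


def identify(request: bytes, signature: bytes) -> bool:
    try:
        agent_public_key.verify(signature, request)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    print(authenticate(b"po-box-key-123"))           # True: the credential matches
    request = b"transfer 10 credits to account 42"
    signature = agent_private_key.sign(request)
    print(identify(request, signature))              # True: this specific agent signed it
    print(identify(b"tampered request", signature))  # False: not what the agent signed
```

Note that in the identification path the service stores only the agent's public key; the private key never leaves the agent, which is what makes a verified action hard to repudiate.
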
2. The Trust Problem in Cyberspace
- In human society, trust is built through identification and accountability, forming a "chain of trust."
- In cyberspace, most interactions rely purely on authentication, which is repudiable (you cannot prove who did something, only that a credential was used).
- Current trust in digital space is artificially patched with extra controls (2FA, biometrics, KYC, CAPTCHAs, rate limits, etc.), all premised on the agent being human.
3. The Coming Disruption: Agentic AI
- Historically, security models assumed every agent was a person, even if systems and tools were in the middle.
- The rise of agentic AI and robotic process automation breaks this assumption:
  - Many security controls fail or are bypassed because they are incompatible with non-human agents (e.g., enterprises disabling 2FA for automation).
  - We are moving toward a synthetic society where humans and machine agents interact socially and economically.
- In such a society, security needs to solve ownership, delegation, and trust, not just access control (a delegation sketch follows this list).
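
As a rough illustration of what solving ownership and delegation could look like once both humans and agents hold verifiable identities, the sketch below has a human principal sign a grant naming an agent's public key, a scope, and an expiry. Field names, the scope format, and the use of the third-party `cryptography` package are all illustrative assumptions, not part of Stefan's framework.

```python
# Illustrative sketch only; field names and scope semantics are assumptions,
# not part of Stefan's framework. Requires the third-party "cryptography" package.
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Each party holds its own key pair; the public key acts as its identity.
owner_key = Ed25519PrivateKey.generate()  # the human principal
agent_key = Ed25519PrivateKey.generate()  # the machine agent acting on their behalf


def make_delegation(owner: Ed25519PrivateKey, agent_public_hex: str,
                    scope: list[str], ttl_s: int) -> dict:
    """The owner signs a grant naming the agent identity, a scope, and an expiry."""
    grant = {"agent": agent_public_hex, "scope": scope, "expires_at": time.time() + ttl_s}
    payload = json.dumps(grant, sort_keys=True).encode()
    return {"grant": grant, "owner_signature": owner.sign(payload).hex()}


def verify_delegation(delegation: dict, owner_public, action: str) -> bool:
    """A service checks that the owner issued the grant, that it has not expired,
    and that the requested action falls inside the delegated scope."""
    payload = json.dumps(delegation["grant"], sort_keys=True).encode()
    try:
        owner_public.verify(bytes.fromhex(delegation["owner_signature"]), payload)
    except InvalidSignature:
        return False
    grant = delegation["grant"]
    return time.time() < grant["expires_at"] and action in grant["scope"]


if __name__ == "__main__":
    agent_public_hex = agent_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw
    ).hex()
    delegation = make_delegation(owner_key, agent_public_hex, ["read:calendar"], ttl_s=3600)
    print(verify_delegation(delegation, owner_key.public_key(), "read:calendar"))  # True
    print(verify_delegation(delegation, owner_key.public_key(), "send:payment"))   # False
```

In a fuller system the agent would also sign each of its requests with its own key and attach the grant, so a service can verify both which agent acted and on whose behalf.
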
4. The Inadequacy of Credentials in Multi-Agent Environments
- In a credential-based model:
  - An AI accessing a service with a user's credentials is invisible as an agent; the system has no way to identify which AI acted.
  - Trust breaks because there is no link to the real actor across complex multi-hop interactions.
- In an identity-based model:
  - Each agent has a persistent, verifiable identity.
  - Every hop in an interaction chain can be tied to a specific agent with non-repudiation, enabling an end-to-end chain of trust (see the sketch below).
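
The sketch below shows one way such an end-to-end chain of trust could be recorded: each hop is signed by the acting agent's private key and bound to a hash of the previous hop, so any verifier can later establish who acted at every step and detect tampering. The record layout, the agent roles, and the use of the third-party `cryptography` package are illustrative assumptions, not the design proposed in the video.

```python
# Illustrative sketch only; the record layout and agent roles are assumptions,
# not the framework from the video. Requires the third-party "cryptography" package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def pub_hex(agent_key: Ed25519PrivateKey) -> str:
    """The agent's serialized public key stands in for its persistent identity."""
    raw = agent_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return raw.hex()


def add_hop(chain: list[dict], agent_key: Ed25519PrivateKey, action: str) -> None:
    """Append a hop signed by the acting agent and bound to the previous hop's hash."""
    prev_blob = json.dumps(chain[-1], sort_keys=True).encode() if chain else b""
    body = {
        "agent": pub_hex(agent_key),
        "action": action,
        "prev": hashlib.sha256(prev_blob).hexdigest(),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    chain.append({**body, "signature": agent_key.sign(payload).hex()})


def verify_chain(chain: list[dict]) -> bool:
    """Check each hop's signature and its binding to the hop before it."""
    prev_digest = hashlib.sha256(b"").hexdigest()
    for hop in chain:
        if hop["prev"] != prev_digest:
            return False
        body = {k: hop[k] for k in ("agent", "action", "prev")}
        try:
            Ed25519PublicKey.from_public_bytes(bytes.fromhex(hop["agent"])).verify(
                bytes.fromhex(hop["signature"]),
                json.dumps(body, sort_keys=True).encode(),
            )
        except InvalidSignature:
            return False
        prev_digest = hashlib.sha256(json.dumps(hop, sort_keys=True).encode()).hexdigest()
    return True


if __name__ == "__main__":
    # Three distinct agents, each with its own identity key pair.
    user, assistant, booking_bot = (Ed25519PrivateKey.generate() for _ in range(3))
    chain: list[dict] = []
    add_hop(chain, user, "request: book a flight")
    add_hop(chain, assistant, "delegate: search and reserve")
    add_hop(chain, booking_bot, "reserve: flight XY123")
    print(verify_chain(chain))  # True: every hop is attributable to a specific agent
    chain[1]["action"] = "delegate: transfer funds"  # tampering breaks the chain
    print(verify_chain(chain))  # False
```

Because every hop carries its own signature, no single credential is shared along the chain, and responsibility for each action remains attributable to exactly one agent.
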
5. The Path Forward: Synthetic Identity for Digital Agents
- Machines can have identity: not human identity, but a synthetic identity modeled to replicate the trust conditions of real-world identity.
- This shift from "credential describing service" to "identity describing agent" is foundational to enabling trust in a world of mixed human and machine agents.
- Stefan proposes 12 Principles of Agentic Identity as non-functional requirements for designing such a framework, enabling AI and agents to be first-class, accountable members of digital society.
Bottom Line
The current security paradigm — credential-based authentication — cannot support the emerging reality of AI-driven interactions. As machine agents become more autonomous and integrated into everyday transactions, digital space must transition to a true identity framework that treats agents (human or machine) as identifiable, accountable entities. Only then can trust, delegation, and secure multi-party interactions be sustained in a synthetic society.