The Internet was not designed for identity. This was not an oversight - it was a deliberate choice made under the constraints of the time. The early architects of the network prioritised interoperability and efficiency. They assumed that security and accountability would be added later, at the application layer, by the people building things on top.
They were not wrong about the engineering trade-off. They were wrong about whether anyone would ever come back to fix the foundation.
Nobody did. Instead, thirty years of security tooling was built on top of an anonymous substrate - each layer compensating for the absence of the one below it. Passwords to compensate for the lack of identity. Firewalls to compensate for the lack of perimeter. Multi-factor authentication to compensate for the weakness of passwords. Secrets managers to compensate for the proliferation of credentials. Each tool making the previous tool's problem slightly more manageable. None of them removing the condition that created the problem.
The security industry does not have a tooling problem. It has an architectural problem that tooling cannot solve. You cannot bolt accountability onto a system that was built without it.
The cost of this architectural debt is now visible everywhere: credential sprawl that grows geometrically with every new system, service, and actor added to a network. Breaches that are inevitable not because defenders are incompetent but because the geometry of the problem guarantees eventual failure. A cybersecurity industry that is almost exclusively reactive - detecting harm after it has occurred, never preventing the conditions that made it possible.
The industry has accepted this as a natural characteristic of digital space. It is not. It is the consequence of a foundation that was never finished.
Every security tool built for the Internet rests on one assumption: that the actor is a human. Passwords require memory. Multi-factor authentication requires a face, a device, a biometric. CAPTCHA requires pattern recognition. KYC requires a body to verify. WAF rate limiting assumes legitimate traffic moves at human speed.
The Internet is no longer primarily human.
Machine agents - bots, services, autonomous systems, AI workers - now represent the overwhelming majority of Internet traffic. The ratio of machine to human actors is approaching 80:1 and accelerating. Every new AI deployment, every microservice, every API integration adds actors to the network that have no face, no memory, no liveness to verify. The assumption on which thirty years of security tooling is built is disappearing in real time.
The response from the industry has been predictable: extend the human model to machines. Non-Human Identity platforms give machines credentials - API keys, service accounts, managed certificates. They apply the same credential management infrastructure to a population of actors for whom credential management was already failing at human scale. The geometry of the problem is N×M×R - actors, services, rotation points - multiplicative by design. Making N larger does not change the geometry. It amplifies it.
NHI did not solve machine identity. It renamed the actors and reproduced the architecture.
The question nobody has adequately answered is the foundational one: what does identity mean for an entity that has no body, no memory, no biometric signature - that can be copied, restarted, instantiated in parallel, and destroyed without trace? The answer is not a credential. The answer requires a different model entirely.
In physical reality, accountability is not a system; it is a topological property - a property of the space, not of any one object in that space. It emerges from the structure of the space itself. Identities are immutable and unique. Actions require physical proximity. Attribution is continuous. These are not features of human society. They are properties of physical space that human society is built on top of.
Digital space is different in one profound way: we created it. We are not subject to its rules - we choose its rules. The topological properties that make accountability possible in physical reality are absent from digital space by default, but they are not absent by necessity. They are absent because we have not yet chosen to build them in.
For accountability to function - in any domain - certain conditions must be present:
Individuality
Actors must be uniquely and persistently identifiable - not by what they hold, but by what they are.
Pre-interaction identification
Identification must precede action. A recipient must know who is acting before any data is exchanged.
Choice of business
Recipients must be able to decline interaction with unknown actors before a connection is established.
Non-repudiation
Actions must be attributable to their origin in a manner that cannot be denied by the actor.
Continuity
Identity must outlast individual actions, sessions, and the digital constructs that carry it.
Ownership
Every digital actor must be traceable to a responsible owner through an unbroken chain bridging digital space and physical reality.
These are not product requirements. They are the necessary conditions for any accountability system to function. Physical reality satisfies them by default. Digital space satisfies none of them by default - but it can, because unlike physical reality, we can choose what rules it operates under.
The implications of these conditions, worked through rigorously, point to a single architectural conclusion: identity in digital space cannot be a credential. It must be an authority.
A credential is something external - issued, managed, rotated, shared, and revocable by someone other than the actor it represents. A credential is a key. It says nothing about the hand holding it.
An identity is something intrinsic. It is not given. It is not managed by someone else. It is the persistent, immutable characteristic of the actor itself - the thing that makes it that actor and not any other.
The model we have built - which we call self-authority - reframes the question entirely. Instead of asking "what credential does this actor hold?" it asks "what is the issuing body of this actor's identity certificates?" The actor does not depend on any single certificate to represent it. The actor is the authority that issues its certificates. When a certificate expires or rotates, the identity is unchanged - because the identity is not the certificate, it is the authority behind it.
Self-authority is a digital certificate authority whose sole purpose is to issue certificates that represent itself. The identity is the issuing body. The certificates are its ephemeral projections into digital space.
This resolves the ephemerality problem that undermines every certificate-based identity system: certificates expire, but identities persist. The certificates issued by a self-authority can rotate continuously - daily, hourly, on every connection - without any impact on the identity itself or on the relationships that identity has built. Rotation becomes a property of the architecture, not an operational burden.
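The persistence-through-rotation property can be sketched in a few lines. The toy model below uses only the Python standard library; an HMAC over a private root secret stands in for the asymmetric keys and X.509 certificates a real self-authority would use, and every name in it is illustrative, not any actual API:

```python
# Toy sketch of a self-authority: the identity is the issuing body,
# certificates are its ephemeral projections. HMAC is a stand-in for
# real public-key signatures; do not use this pattern in production.
import hashlib
import hmac
import os
import time

class SelfAuthority:
    def __init__(self):
        # The root secret is the authority itself. It never leaves
        # the actor and is never shared with any service.
        self._root = os.urandom(32)

    @property
    def identity(self) -> str:
        # The persistent identity: a stable fingerprint of the
        # authority. It does not change when certificates rotate.
        # (A real system would fingerprint a public key instead.)
        return hashlib.sha256(self._root).hexdigest()

    def issue_certificate(self, ttl_seconds: int = 3600) -> dict:
        # An ephemeral projection of the identity into digital space.
        expiry = int(time.time()) + ttl_seconds
        payload = f"{self.identity}:{expiry}".encode()
        return {
            "identity": self.identity,
            "expires": expiry,
            "signature": hmac.new(self._root, payload, "sha256").hexdigest(),
        }

actor = SelfAuthority()
first = actor.issue_certificate(ttl_seconds=1)
second = actor.issue_certificate(ttl_seconds=3600)  # rotation

# Certificates differ, but the issuing identity is unchanged.
assert first["signature"] != second["signature"]
assert first["identity"] == second["identity"] == actor.identity
```

The point of the sketch is the last two assertions: rotation replaces the certificate without touching the thing the certificate projects.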
It also resolves the scale problem. In a credential model, every actor-service relationship requires its own credential - N×M×R, a geometry that multiplies at every dimension. In an identity model, each actor has one identity valid across all its service relationships. The complexity collapses from N×M×R to N×1. Not through better management. Through a different architecture.
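The collapse is easy to make concrete. The arithmetic below is purely illustrative - the fleet size, service count, and rotation schedule are hypothetical numbers, not measurements:

```python
# Illustrative arithmetic for the essay's N×M×R vs N×1 geometry.
# N = actors, M = services, R = rotation points per year.

def credential_count(actors: int, services: int, rotations: int) -> int:
    """Secrets to manage in a credential model: one per actor-service
    relationship, re-issued at every rotation point."""
    return actors * services * rotations

def identity_count(actors: int) -> int:
    """Identities in an identity model: one per actor, valid across
    all of its service relationships; rotation is architectural."""
    return actors

# A modest hypothetical fleet: 500 machine actors, 40 services,
# monthly credential rotation.
print(credential_count(500, 40, 12))  # 240000 managed secrets per year
print(identity_count(500))            # 500 identities
```

Doubling the number of services doubles the managed-secret count in the first model and leaves the second unchanged - that is what "multiplicative by design" means in practice.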
The self-authority is not a central authority issuing identity to actors - it is the actor itself, functioning as its own authority. There is no certificate authority that can revoke your identity, no platform you depend on to exist in digital space, no provider that can alter what the identity represents. The identity is self-asserted, self-governed, and cryptographically inescapable. Like a domain name that exists independently of any registrar yet requires the registrar ecosystem to be reachable, a self-authority is genuinely self-governed while operating within a brokered infrastructure that no single provider controls.
The most counterintuitive property of this architecture - and the one that took longest to fully articulate - is what we call anonymous accountability.
The prevailing assumption in security and identity is that accountability requires identification: you must know who someone is in order to hold them responsible. This is why every identity system demands personal information. The demand for personal data is treated as a necessary condition for security.
It is not.
Consider how accountability functions in physical reality. When a person acts in society, they are held accountable not because the group has catalogued their personal information, but because they have something to lose. A business that has built relationships, reputation, and dependencies over decades is careful about not behaving badly because the cost of losing those things is prohibitive - not because anyone is watching their personal data. The accountability emerges from the stakes, not from surveillance.
The same mechanism emerges naturally in an identity-oriented digital space. As a self-authority accumulates relationships with services - banking, communication, commerce, infrastructure - the value of that identity compounds. A new self-authority has nothing to lose. An established one has everything to lose. The cost of malicious behaviour is not punishment after the fact - it is the immediate loss of every relationship that identity has built. Civil behaviour becomes the only rational choice.
In a mature identity-oriented digital space, crime becomes structurally difficult not because we stop it, but because we remove the conditions it requires. The attack surface for anonymous malicious actors does not exist.
And crucially: none of this requires personal information. The self-authority contains no personal data. A service can hold personal information about its users as part of its business function, but that personal information plays no role in the security model. Identity and personal information are separate instruments, by design.
Privacy and accountability - not in tension, but as properties of the same architecture. Security without surveillance. Attribution without exposure.
Everything described above was reasoned from first principles about identity and accountability in digital space - before artificial intelligence made the problem urgent. The architecture was not designed for machines. It turns out to be the only architecture that works for them.
A machine agent has no body, no memory independent of its instance, no biometric signature. It can be copied, replicated, restarted. It operates autonomously - without a human present to verify or take responsibility for each action. Every assumption of human-centric identity fails for machine actors.
The self-authority model requires none of those assumptions. A machine agent can be its own authority, issuing certificates that represent it - the agent, independent of any service or relationship - rotating those certificates continuously without coordination overhead. The agent has one identity across all the services it interacts with. Its ownership is traceable to a human or organisation through the delegation chain. Its actions are non-repudiable at every step.
This is what it means for identity to be delegation native. The ownership chain from autonomous agent to human owner is not an add-on - it is a structural property of the architecture. An AI agent acting on behalf of a human carries cryptographic proof of that delegation in every connection it makes. The human owner is always traceable. The agent cannot act outside its mandate without that fact being observable.
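The ownership chain can be sketched as code. In the toy model below, a registry of HMAC keys stands in for the public keys that real X.509 delegation certificates would carry; every name and helper is hypothetical, not the mtls.id implementation:

```python
# Toy model of a delegation chain from autonomous agent to human
# owner. HMAC keys stand in for public keys purely to keep the
# example self-contained and stdlib-only.
import hmac
import os

keys = {}  # identity -> verification key (stand-in for a public key)

def make_identity(name: str) -> str:
    keys[name] = os.urandom(32)
    return name

def delegate(issuer: str, subject: str, mandate: str) -> dict:
    """Issuer signs a statement delegating a mandate to subject."""
    payload = f"{issuer}->{subject}:{mandate}".encode()
    return {"issuer": issuer, "subject": subject, "mandate": mandate,
            "sig": hmac.new(keys[issuer], payload, "sha256").hexdigest()}

def trace_owner(chain: list) -> str:
    """Walk the chain, checking that every link connects and that
    every signature verifies, then return the root issuer."""
    for link, nxt in zip(chain, chain[1:]):
        assert link["subject"] == nxt["issuer"], "broken chain"
    for link in chain:
        payload = f"{link['issuer']}->{link['subject']}:{link['mandate']}".encode()
        expected = hmac.new(keys[link["issuer"]], payload, "sha256").hexdigest()
        assert hmac.compare_digest(link["sig"], expected), "forged link"
    return chain[0]["issuer"]  # the human owner at the root

owner = make_identity("alice")          # human owner
orchestrator = make_identity("orch-1")  # service she operates
agent = make_identity("agent-7")        # autonomous agent

chain = [delegate(owner, orchestrator, "operate agents"),
         delegate(orchestrator, agent, "book travel")]

print(trace_owner(chain))  # alice
```

Verification fails the moment a link is forged or the chain is broken - which is the structural sense in which the agent "cannot act outside its mandate without that fact being observable".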
We have described the 12 principles required for accountable autonomous systems at mtls.id. Those principles are the application of this architecture to the agentic era - not a new framework, but the same framework applied to a domain it was always equipped to handle.
This argument was first formalised in 2015, years before non-human identity was a category, before agentic AI was a market, before any of the regulatory frameworks now mandating machine identity existed. It was not early because we anticipated a trend. It was early because the argument demanded it. If the foundational problem of digital space is the absence of identity as a topological property, the solution has to be architectural - and architectural solutions take time to build correctly.
The network effect has not compounded yet. A self-authority with no relationships has nothing to lose, and the accountability dynamic requires stakes to function. The end-game requires a critical mass of identities that have accumulated enough value to make civil behaviour the only rational choice.
That critical mass begins with the wedge: the concrete, immediate problems that self-authority solves better than any credential system. B2B API security without shared secrets. Legacy services gated at the connection layer without code changes. Machine agents authenticated at the protocol level with cryptographic delegation chains. Microservices communicating without secrets managers or rotation schedules. Corporate Zero Trust that reaches TCP, not just HTTP.
Each deployment builds the network. Each identity that accumulates relationships raises the baseline. The end-game is not a product launch - it is a network effect that compounds until the conditions for anonymous malicious behaviour no longer exist.
We are building the foundation. The foundation takes time. We have been building it for ten years and we are not done. But the direction is set, the architecture is correct, and the world is finally arriving at the problem we have been solving.
There is a version of the Internet that most people have never experienced but everyone instinctively understands - because they have experienced its analogue in reality.
In a stable, civil society, people do not barricade their doors because they are afraid of every stranger. They do not demand identification before every conversation. They do not employ private security to accompany them through the streets. They move freely, transact openly, and extend reasonable trust to people they have never met - not because they are naive, but because the social architecture of a civilised society makes civil behaviour the norm and malicious behaviour structurally costly. The threat is not absent. The conditions that make it easy have been removed.
The Internet has never had this. It has had the digital equivalent of a war zone - every service behind walls, every actor assumed hostile until proven otherwise, every interaction a potential attack surface. The industry calls this the threat landscape and treats it as a permanent condition. It is not permanent. It is the consequence of building a space without the social architecture that makes civil behaviour rational.
An accountable Internet looks like a city, not a fortress. Not because the bad actors are gone - they never entirely are - but because the conditions that make anonymous malicious behaviour viable have been removed. Identity is inescapable. Attribution is continuous. The cost of acting badly exceeds the potential gain. Civil behaviour is not enforced - it is the only rational choice.
That is what we are building. It will take time. It requires a network effect that has not yet compounded. It begins with every deployment that replaces a credential with an identity, every service that gates at the connection layer, every agent that carries a delegation chain to its human owner.
If that vision resonates - if you have been thinking about the same problem from any direction, building something that needs this infrastructure, or simply believe the Internet should be better than it is - you know where to find us.
Stefan Harsan Farr
Copyright © 2026,
Identity Plus, Inc., New Hampshire, USA,
All rights reserved