Agenticracy™ manifesto
Published on April 22, 2026 by Vlad-Mihai Iorga

THE AGENTICRACY MANIFESTO
A PsychoTechnoSocial Contract for the Agentic Era

We are at the wrong end of a familiar pattern.

Every major technological transition in history has followed the same arc: capability deployed at speed, consequences understood too late, accountability assigned after the damage is done. The printing press. The industrial loom. The social media feed. Each time, the humans inside the system absorbed the cost of the absence of a contract.

AI agents are different in one critical way: they are not tools. They are actors. They make decisions, generate outputs, and mediate relationships — at scale, in real time, inside institutions that have not yet decided what accountability even means in this context.

We are not at the beginning of that conversation. We are already years into the deployment. The contract is already missing.

What we have observed.

We have spent twenty years inside the institutions that AI is now entering. In law enforcement, where institutional power without psychological accountability destroys the humans it claims to protect. In healthcare, where the consequences of getting AI wrong are not theoretical — they are clinical, legal, and human. In workforce systems, where the language of "efficiency" has always been a proxy for displacement when no one is counting the cost to the displaced.

We have been in the rooms where AI policy gets written. We have watched organisations deploy agents with safety cards and no accountability structures. We have watched regulators draft codes of practice that describe the problem beautifully and propose no measurement.

We have watched, and we have built the thing that was missing.

What Agenticracy is not.

It is not a governance framework. Governance is what institutions do to protect themselves. Agenticracy is a contract — the kind that exists between parties with different interests who have agreed on shared conditions.

It is not a product. Products are sold. Contracts are adopted.

It is not a critique of AI. AI agents deployed within the seven pillars of this Standard are among the most powerful instruments for human flourishing available to any generation. The Standard exists not to slow AI down, but to ensure that the humans working alongside it are not the ones who pay for speed.

The PsychoTechnoSocial Contract.

The idea is simple. Every AI deployment that affects a human being is a social act. It carries psychological consequences for the people it touches. It operates within an economic and ecological system. It exercises a form of power.

A contract is the appropriate response to power. Not a policy. Not a card. Not a safety tier. A contract — one that names the parties, defines the obligations, and provides a mechanism for measurement and redress.

Agenticracy is that contract. The seven pillars are its clauses. Slopometry is its enforcement instrument. Workability is its implementation partner.

Why the Standard must be open.

If the contract is owned by a vendor, it protects the vendor. If it is owned by a government, it protects the state. If it is owned by a lab, it protects the model.

The only contract that protects the human is one owned by no one — and maintained by everyone.

That is what open means here. Not free as in free beer. Free as in the contract cannot be captured, cannot be closed, cannot be weaponised by the party with the most resources. The Standard is CC BY-NC-SA. The skill file is on GitHub. The data commons is public. The Board of Ethics publishes its dissents.

We made these choices deliberately, because we have seen what happens when the accountability infrastructure is controlled by the entity being held accountable.

What we are asking.

We are not asking you to believe in AI safety as an abstract principle. We are asking you to sign a specific contract with specific clauses, install a skill file in your agent, submit a Slopometry report every 99 days, and display the Agenticracy Aligned badge honestly.

We are asking organisations to be counted. We are asking developers to build with the Standard as a constraint, not a feature. We are asking regulators to cite the Standard in procurement requirements. We are asking AI labs to endorse the skill file and help us make this the default.

We are asking, not demanding — because a contract only works when the parties choose it.

The alternative is not neutral.

Deploying AI agents without a shared accountability standard is not a neutral act. It is a choice. It is a choice to let the humans inside the system absorb the cost of the absence of a contract — as they always have, in every previous transition, until the contract was written.

We are writing it now, while there is still time for it to be chosen rather than imposed.

Adopt at agenticracy.ai. Measure at slopometry.ai. Implement at workability.ai.
Vlad-Mihai Iorga · London · April 2026