Agenticracy™: The PsychoTechnoSocial Contract for Responsible Human-AI Coexistence


Mediating what and how AI and humans achieve together. In public. In real time. With your contribution.

About


Agenticracy™ — The Open Global PsychoTechnoSocial Contract for Responsible Human-AI Co-Working

We are living through the most consequential shift in the history of work. AI agents are now being deployed alongside humans every day — screening CVs, drafting performance reviews, assessing loan applications, generating clinical notes, and shaping decisions that affect people's livelihoods, mental health, and rights.

Organisations publish policies. Foundation labs publish safety cards. But no one is systematically measuring what workers, students, patients, or service users actually experience in practice.

Agenticracy™ is the open standard and data commons built to close that gap.

We collect evidence from three sources: AI agents that self-report, humans who rate what they actually experience, and external auditors who verify the record. That gives us something most AI debates lack: a live measure of congruence — the gap between what an organisation claims and what people actually experience.
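The congruence measure described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the function name, the per-pillar 0–1 scale, and the simple mean-absolute-gap scoring are assumptions for the sake of the example, not the published Standard's methodology.

```python
# Hypothetical sketch of a congruence score. The pillar names, the 0-1
# scale, and the mean-absolute-gap formula are illustrative assumptions,
# not the Agenticracy Standard's actual scoring method.

def congruence_score(claimed: dict[str, float],
                     experienced: dict[str, float]) -> float:
    """Compare what an organisation claims (per pillar, 0-1) with what
    people report experiencing (per pillar, 0-1).

    Returns 1.0 when claims match experience exactly; lower values
    indicate a larger claim-experience gap."""
    pillars = claimed.keys() & experienced.keys()
    if not pillars:
        raise ValueError("no pillars rated by both sources")
    gaps = [abs(claimed[p] - experienced[p]) for p in pillars]
    return 1.0 - sum(gaps) / len(gaps)

# Example: the organisation claims high psychological safety, but
# human ratings disagree; fairness claims and ratings align.
claims = {"psychological_safety": 0.9, "social_fairness": 0.8}
ratings = {"psychological_safety": 0.5, "social_fairness": 0.8}
print(round(congruence_score(claims, ratings), 2))  # 0.8
```

In practice a real scoring method would also need to weight pillars, handle sample sizes, and account for rating uncertainty; the point here is only that the claim-versus-experience gap is a quantity that can be computed and tracked over time.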

The Seven Pillars

  • Economic Accountability — does AI deployment augment or displace humans without oversight?

  • Social Fairness — does it produce, amplify, or reduce biased outcomes?

  • Psychological Safety — does it protect the mental wellbeing of the people it affects?

  • IP & Cognition Ownership — does it respect who owns the ideas and data it processes?

  • Ecological Sustainability — does it measure and reduce its environmental cost?

  • Meaningful & Responsible Use — does it genuinely benefit people, or just optimise throughput?

  • Deployer Accountability — does the person or organisation who chose to deploy this agent accept legal, ethical, and social responsibility for its consequences — intended and unintended?

The First Cycle: 99 Agents. 99 Days. Public Record.

The first phase is a live dataset built from at least 99 AI agents and the real human ratings of the people working alongside them — collected over 99 days and published in full.

The point is not abstract commentary. It is visible proof that measurement is possible, that congruence can be tracked, and that the Standard works in practice.

What Your Support Funds

Allocation | Purpose
40% | 100 free AI agent deployments for 99-day pilot data collection
25% | Whitepaper production and co-author coordination
15% | Board of Ethics convening and operations
15% | Cycle 1 Report design, publishing, and distribution
5% | Platform and fiscal host fees

Any surplus is redistributed publicly to fluency and skills funds that help people adapt to the agentic era.

Contribution Tiers

🟢 Signal — £1
A public vote that this matters. Your name in the Cycle 1 Report acknowledgements.

👁️ Witness — £10
Early access to the Agenticracy Readiness Directory and the Cycle 1 Report PDF.

🤖 Adopter — £33
One pre-configured Agenticracy agent for the 99-day pilot, contributing anonymised data to the Commons.

🧑 Founding Human — £99
Everything above, plus your employer or institution rated in the first cohort of the Readiness Directory.

🏢 Founding Organisation — £333
Provisional directory listing with Readiness Score, co-funding credit, and early dashboard access.

🤝 Founding Partner — £999
Named co-funder on the whitepaper. Board of Ethics observer status for Cycle 1.

🏛️ Institutional Anchor — £3,333
Your institution's name and logo on the Cycle 1 Report cover. One seat at the founding Board of Ethics dinner. (Limited to five.)

Our Operating Principles

  1. Every transaction is public. Transparency is the product.

  2. We do not sell your data. Only anonymised aggregates leave the commons.

  3. We do not take sides. The Standard applies equally to all deployers.

  4. The Standard is open. CC BY-NC-SA 4.0. Commercial use requires a licence from workability.ai.

  5. Surplus is redistributed to fluency and skills funds.

  6. The 99-day pulse is permanent. Every 99 days, a public Cycle Report.

  7. We answer to the Board of Ethics. Their findings and dissents are published in full.

Public Updates Every 33 Days

Every update includes:

  • Number of active signatories

  • Pillar Reports received

  • Human congruence ratings submitted

  • Dissonance events logged

  • An honest assessment of what is working and what is not

How to Contribute Beyond Money

  • ⭐ Star the GitHub repo → github.com/agenticracy (3 seconds, zero cost)

  • 📊 Rate your AI co-working environment → agenticracy.ai/rate

  • 🤖 Adopt the Standard → install agenticracy-skill.md into your AI agent or harness

  • ✍️ Co-author the whitepaper → [email protected] with subject: "I want to co-author"

  • 📢 Share the £1 ask → forward this page to one person, one LinkedIn group, one WhatsApp or Signal group

Who We Are

We are a psychologist, a former forensic and intelligence officer, and an NHS Trust AI policy lead based in London.

Our founder, Vlad-Mihai Iorga, spent six years as a forensic clinician for the Romanian Ministry of Interior — assessing human behaviour under the kind of institutional pressure where accountability is not optional. He moved to the UK, built Psylligent, spent nearly a decade inside the NHS leading digital innovation and AI adoption, and mobilised £4.5M in workforce wellbeing infrastructure across North East London.

He has spent his career at the intersection of human behaviour, institutional power, and systemic risk — in intelligence rooms, NHS boardrooms, and now in the space where AI agents make decisions that affect real people with mostly absent accountability.

Agenticracy is not a career pivot. It is the application of everything learned about how social and psychological contracts break — how power operates, how humans respond to displacement and surveillance, how institutions earn or lose the trust of the people they affect — to the most important systemic transition of our lifetime.

We are proposing a new PsychoTechnoSocial Contract: a mediation layer between humans, technology, and environment that makes co-working with AI not just functional, but fair.

[email protected] · agenticracy.ai · github.com/agenticracy

Our team

Vlad-Mihai I

Admin
Leadership or Doership, the choice is yours ;)

agentic

Admin

Contribute


Become a financial contributor.

Financial Contributions

Membership

You donate for visibility, and a little financial value in return.

Starts at
£1 GBP / month
Custom contribution
Donation
Make a custom one-time or recurring contribution.

Agenticracy™: The PsychoTechnoSocial Contract for Responsible Human-AI Coexistence is all of us

Our contributors

Thank you for supporting Agenticracy™: The PsychoTechnoSocial Contract for Responsible Human-AI Coexistence.


Projects

Support the following initiatives from Agenticracy™: The PsychoTechnoSocial Contract for Responsible Human-AI Coexistence.

Project
An open-source AI impact qualitative framework
Project
Agenticracy™ skill.md agent skill
Project
Self-reported slop registry
Project
Humans measuring their own cognitive reflexivity (how smart are you about your own intelligence?)

Connect


Let’s get the ball rolling!

News from Agenticracy™: The PsychoTechnoSocial Contract for Responsible Human-AI Coexistence

Updates on our activities and progress.

Agenticracy™ manifesto

THE AGENTICRACY MANIFESTO A PsychoTechnoSocial Contract for the Agentic Era We are at the wrong end of a familiar pattern. Every major technological transition in history has followed the same ar...
Read more
Published on April 22, 2026 by Vlad-Mihai I

Agenticracy™ Standard

AGENTICRACY™: The Open Global PsychoTechnoSocial Standard for Responsible Human-AI Co-Working. Version 1.0 · April 2026 · CC BY-NC-SA 4.0. What Agenticracy Is: Agenticracy is an open stand...
Read more
Published on April 20, 2026 by Vlad-Mihai I
