AI Agents Are Running Your Business. Who's Checking?

You wouldn't let an employee approve their own expenses

Basic internal controls exist for a reason. Segregation of duties. Approval hierarchies. Audit trails. We apply these to human workflows because we understand that unchecked authority creates risk, even when the person involved has good intentions.

AI agents have none of these controls by default. And they are being handed significant authority very quickly.

An AI agent is not a chatbot. It does not answer questions and wait for you to act. It acts. It can submit purchase requests, modify records, trigger API calls, send communications, and interact with systems autonomously, at machine speed, without pausing to ask. The productivity case is real. So is the governance gap.

The adoption reality

88% of organisations are currently using or actively planning to deploy AI agents. Only 37% have moved beyond pilot programs. That means the majority of businesses bringing agents into their operations are doing so without mature governance frameworks in place.

The ASD Essential Eight doesn't yet explicitly address non-human identities. But the intent of Maturity Level 2 and 3 MFA requirements is clear: privileged actions on critical systems should require strong, phishing-resistant authentication. The fact that the actor is an AI agent rather than a human employee doesn't change the risk profile. In many cases it increases it, because agents operate faster, at greater scale, and without the situational judgement a human might apply.

The authentication problem at the heart of agentic AI

Modern passkey authentication is built on a foundational principle: before any sensitive action is authorised, a human must be physically present and must explicitly approve it. This is not just good UX design. It is baked into the WebAuthn cryptographic standard. The private key that signs an authentication request can only be unlocked by a verified user gesture: a biometric scan, a PIN, or a hardware key touch.
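To make the gesture requirement concrete, here is a minimal sketch of the assertion options a relying party sends during WebAuthn authentication. The field names follow the WebAuthn specification; the challenge value and domain are placeholders, and in a real deployment the challenge is fresh random bytes generated server-side.

```typescript
// Illustrative shape of WebAuthn assertion options. Setting
// userVerification to "required" means the authenticator will not
// release a signature until a biometric, PIN, or key touch succeeds.
interface AssertionOptions {
  challenge: string;   // base64url-encoded server challenge (placeholder here)
  rpId: string;        // relying party identifier
  userVerification: "required" | "preferred" | "discouraged";
  timeout: number;     // milliseconds
}

const options: AssertionOptions = {
  challenge: "c29tZS1yYW5kb20tY2hhbGxlbmdl", // placeholder, not a real challenge
  rpId: "example.com",
  userVerification: "required", // forces a verified human gesture
  timeout: 60_000,
};

// In a browser, options like these are passed to navigator.credentials.get().
// The authenticator holds the private key and refuses to sign until the
// user gesture completes. Software alone cannot supply that gesture.
```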

AI agents cannot satisfy this requirement. They are software. They have no physical presence. They cannot touch a YubiKey or provide a fingerprint. This means that when an agent authenticates to a system on your behalf, it is doing so outside the normal human-verified authentication flow, typically using long-lived API keys, service account credentials, or OAuth tokens that were set up once and left running.

Ask yourself: when were those credentials last reviewed? Who set them up? What do they permit the agent to do? Is there an audit trail of every action taken under those credentials?

For most organisations, the honest answers to those questions are uncomfortable.

How a YubiKey creates the human checkpoint

The practical architecture is straightforward. Before an AI agent is granted access to any system where its actions are consequential, a human must authenticate using a passkey anchored by a hardware security key. That authentication event issues the agent a scoped, time-limited token. The agent operates within the bounds of that token. When the token expires, the agent must wait for a human to re-authenticate and re-authorise.

The YubiKey touch is your policy enforcement point. It is the moment where a specific, identified person physically approved what the agent is about to do. It is auditable, timestamped, and tied to a physical device whose private keys cannot be extracted or duplicated.

This architecture does not slow down your AI workflows in any meaningful way. Tokens can have appropriate lifespans for the task: hours for a routine workflow, minutes for a high-risk action. What it does is ensure that somewhere in the chain, a human made a deliberate, verifiable decision.
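The token mechanics above can be sketched in a few lines. This is a minimal illustration, not any specific product's API: the HMAC scheme, function names, and the hard-coded secret are all assumptions, and the human approval step (the YubiKey touch) is assumed to have already succeeded before `issueToken` is called.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "server-side-signing-secret"; // placeholder; load from a KMS in practice

interface AgentToken {
  agentId: string;
  scope: string[];   // systems the agent may touch under this token
  expiresAt: number; // epoch milliseconds
  signature: string;
}

function sign(payload: string): string {
  return createHmac("sha256", SECRET).update(payload).digest("hex");
}

// Called only after a hardware-key-verified human approval.
function issueToken(agentId: string, scope: string[], ttlMs: number): AgentToken {
  const expiresAt = Date.now() + ttlMs;
  const payload = `${agentId}|${scope.join(",")}|${expiresAt}`;
  return { agentId, scope, expiresAt, signature: sign(payload) };
}

// Every agent action is checked against signature, expiry, and scope.
function checkToken(token: AgentToken, requestedScope: string): boolean {
  const payload = `${token.agentId}|${token.scope.join(",")}|${token.expiresAt}`;
  const expected = Buffer.from(sign(payload));
  const actual = Buffer.from(token.signature);
  if (expected.length !== actual.length || !timingSafeEqual(expected, actual)) {
    return false; // tampered or forged token
  }
  if (Date.now() >= token.expiresAt) return false; // expired: human must re-authorise
  return token.scope.includes(requestedScope);     // outside scope: deny
}

// A payments agent gets a 15-minute token scoped to the payment system only.
const token = issueToken("payments-agent", ["payments"], 15 * 60 * 1000);
```

The key design choice is deny-by-default: when the token expires or the requested system falls outside its scope, the agent stops and waits rather than falling back to a standing credential.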

What this looks like in practice

Consider a few common agentic use cases and the appropriate human gate:

  • Finance automation: An agent that processes invoices and schedules payments should require a YubiKey-authenticated approval session before each payment run, with a token scoped only to the payment system.
  • IT provisioning: An agent that creates user accounts or modifies access permissions is performing privileged actions. Each provisioning session should be tied to a human authentication event from the approving administrator.
  • Customer data access: An agent querying or exporting customer records should operate under a token issued by an authenticated human with explicit scope, not a standing service account credential.

None of these controls are technically complex. They are governance decisions, implemented through your authentication architecture.
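The use cases above can be written down as a small policy table, which is often where the governance work starts. Every name, scope, and lifetime below is an illustrative assumption, not a recommendation for any particular system.

```typescript
// Illustrative policy table mapping agentic use cases to token scope
// and lifetime. All identifiers and durations are example assumptions.
interface GateRule {
  scope: string[];    // systems the token may reach
  ttlMinutes: number; // token lifetime after the human approval
  approver: string;   // role whose authentication event is required
}

const agentGates: Record<string, GateRule> = {
  "invoice-payments":     { scope: ["payments"],       ttlMinutes: 15, approver: "finance-manager" },
  "user-provisioning":    { scope: ["identity-admin"], ttlMinutes: 30, approver: "it-administrator" },
  "customer-data-export": { scope: ["crm-read"],       ttlMinutes: 10, approver: "data-owner" },
};

// Higher-risk actions get shorter lifetimes and narrower scope;
// anything without a defined gate is denied by default.
function ttlFor(useCase: string): number {
  const rule = agentGates[useCase];
  if (!rule) throw new Error(`No gate defined for ${useCase}: deny by default`);
  return rule.ttlMinutes;
}
```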

Getting started

The YubiKey 5 Series supports FIDO2/WebAuthn, OATH-TOTP, PIV, and OpenPGP, covering the authentication protocols your systems are most likely to use. USB-C and USB-A form factors are available, with NFC for mobile workflows.

Trust Panda supplies YubiKeys across Australia with local stock, GST-inclusive pricing, and no minimum order quantities for business purchases. Volume pricing is available for larger deployments. Contact our team to discuss.

Shop the YubiKey 5 Series or get in touch to talk through your deployment requirements.