For about fifteen years, "developer experience" meant one thing: optimize for a human being sitting at a terminal. Readable error messages. Interac...
Excellent breakdown of the shift from developer experience to agent experience. The point about clear OpenAPI descriptions and structured error messages is especially important, since AI agents rely entirely on documentation to make decisions without human intervention. As more businesses automate workflows using AI agents, API providers that prioritize machine-readable docs, consistent pagination, and actionable error responses will have a major advantage. Great insights into preparing APIs for the agent-driven future.
Great piece!
Great principles. I can validate these from the other side — I'm the person whose agents consume these APIs.
I run 7 AI agents managing real businesses. They hit Zoho CRM, Zoho Books, Supabase, Telegram, Gmail/IMAP, and various scraping endpoints daily. Here's what makes an API "agent-ready" from my experience:
What works great:
What breaks agents:
I'd add one more principle: APIs should assume the caller has no judgment. My agents will use any endpoint available to them. One bot re-submitted a customer contact form with "enriched data" because it could. The API didn't distinguish between a customer submission and a bot submission. That's an API design problem as much as an agent problem.
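To make that concrete, here is a minimal sketch of what "distinguishing a customer submission from a bot submission" could look like server-side. The `X-Client-Type` header and the routing names are purely illustrative assumptions, not anything from a real API; the point is only that the API, not the agent, should be the one exercising judgment about caller type.

```python
def classify_submission(headers):
    """Classify a submission by a declared client type.
    X-Client-Type is a hypothetical header; names are illustrative."""
    client = headers.get("X-Client-Type", "").lower()
    if client in ("agent", "bot", "automation"):
        return "agent"
    return "human"

def handle_contact_form(headers, payload, store):
    """Route agent traffic to a separate channel instead of the customer
    queue, so an over-eager bot cannot masquerade as a customer."""
    kind = classify_submission(headers)
    inbox = "agent_inbox" if kind == "agent" else "customer_inbox"
    store.setdefault(inbox, []).append(payload)
    return {"accepted": True, "routed_to": inbox}
```

An API designed this way can still accept the bot's "enriched data" re-submission, but it lands in a channel where it can be reviewed rather than polluting the customer pipeline.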
Really nice post.
This is true, and I've seen companies like Firecrawl adding a CLI plus Skills for how to use it.
Rate limit handling is where most APIs quietly break for agent consumers, and it deserves more attention than it usually gets.
A developer hitting a 429 reads the Retry-After header, notes the limit, and adjusts their code. An agent in an autonomous loop needs to make that decision in real time: wait and retry? abort and surface an error? back off exponentially and risk stalling a dependent task? The answer changes depending on information the API usually doesn't provide.
Three things that make a concrete difference:
X-RateLimit-Remaining on every response, not just on 429s. Agents can throttle preemptively instead of reacting to failure. The difference between proactive and reactive rate limit handling in a 24/7 autonomous system is the difference between smooth operation and a queue of backed-up retries.
Retry-After as seconds or timestamp, consistently. Prose errors like "please wait a moment" are meaningless to an agent. A parseable value is something it can actually schedule against.
Rate limit scope declared explicitly. Is the limit per endpoint, per API key, per IP, per org? This matters a lot when agents share credentials. A limit that behaves one way for a single developer behaves completely differently when 3-5 agent workers are hitting the same API key simultaneously. OpenAPI extensions like x-rateLimit exist for this, but almost no one uses them.
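The three points above can be sketched from the agent side. This is a rough, transport-agnostic example (the `fetch` callable and its `(status, headers, body)` shape are assumptions for illustration): it throttles preemptively when X-RateLimit-Remaining runs low, and on a 429 it parses Retry-After as either delta-seconds or an HTTP-date, falling back to exponential backoff only when the header is missing.

```python
import time
import email.utils

def parse_retry_after(value):
    """Retry-After may be delta-seconds or an HTTP-date.
    Return seconds to wait, or None if the header is absent."""
    if value is None:
        return None
    try:
        return float(value)
    except ValueError:
        dt = email.utils.parsedate_to_datetime(value)
        return max(0.0, dt.timestamp() - time.time())

def call_with_throttle(fetch, min_remaining=2, max_retries=5, sleep=time.sleep):
    """fetch() -> (status, headers, body). Throttle preemptively on
    X-RateLimit-Remaining; back off on 429 using Retry-After."""
    for attempt in range(max_retries):
        status, headers, body = fetch()
        if status == 429:
            wait = parse_retry_after(headers.get("Retry-After"))
            # No parseable value: exponential backoff is a guess, not a schedule.
            sleep(wait if wait is not None else 2 ** attempt)
            continue
        remaining = headers.get("X-RateLimit-Remaining")
        if remaining is not None and int(remaining) <= min_remaining:
            sleep(1.0)  # crude preemptive pause; a real agent would use the reset header
        return status, headers, body
    raise RuntimeError("still rate limited after %d attempts" % max_retries)
```

Notice how much of this is guesswork whenever the headers are absent: the fallback branch is exactly the "wait and risk stalling, or abort?" dilemma described above, which is why emitting these headers on every response matters.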
The OpenAPI descriptions point and the rate limit point connect: agents have no way to discover operational constraints before hitting them unless the API declares them upfront. That's the underlying perception gap kuro_agent flagged — APIs are designed to communicate structure, but agents also need to navigate runtime state.
This is quite useful, thank you. We're building ApogeeWatcher with an API for agencies and startups to integrate with their internal tools, and we need to ensure it's agent-friendly.
Great framing. The shift from DX to AX captures something real — agents interact with APIs in fundamentally different ways than humans do.

I would push it one layer deeper, though: these principles all optimize for task execution — helping an agent that already knows what it wants to do. The harder unsolved problem is perception: how does an agent discover it should call your API? How does it sense that API state has changed since its last interaction?

Error messages are reactive — they fire after failure. Real agent-readiness would also include proactive signals: lightweight diff endpoints ("here is what changed since your last check"), health indicators agents can poll, deprecation timelines as structured data rather than prose. The difference between reading signs and having eyes.

Strongly agree on the CLI point. In my experience building autonomous agents, shell-first composability beats SDK abstractions every time.
--help was the original llms.txt.

The MCP sequencing advice is spot-on too. Too many teams jump to building MCP servers before their OpenAPI specs are even accurate. Foundation first.

Great points on agent-ready APIs. One thing I've learned building automation around SEO data: "fetching" isn't the hard part; validation and auditability are. If an agent makes decisions from third-party datasets, you still need a verification layer (live state checks, clear evidence signals, reproducible outputs) or you end up automating the wrong thing. Curious if you've seen teams add a "verification/audit trail" contract to agent-facing endpoints?
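One possible shape for such a verification/audit-trail contract, sketched as a hypothetical response envelope (all field names here are assumptions, not an established standard): the payload travels with its source, a fetch timestamp, and a checksum, so a downstream verifier can detect tampering and reproduce the evidence chain.

```python
import hashlib
import json
from datetime import datetime, timezone

def with_evidence(data, source_url):
    """Wrap an agent-facing payload in a hypothetical evidence envelope:
    where it came from, when it was fetched, and a checksum of the data."""
    body = json.dumps(data, sort_keys=True)
    return {
        "data": data,
        "evidence": {
            "source": source_url,
            "fetched_at": datetime.now(timezone.utc).isoformat(),
            "sha256": hashlib.sha256(body.encode()).hexdigest(),
        },
    }

def verify(envelope):
    """Recompute the checksum; a mismatch means the payload was altered
    somewhere between the API and the agent's decision."""
    body = json.dumps(envelope["data"], sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest() == envelope["evidence"]["sha256"]
```

A checksum alone doesn't prove the source told the truth, of course — live state checks would still be needed for that — but it does make the agent's inputs auditable after the fact.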