I'm a senior product manager who's spent the last decade building API platforms, developer tools, and B2B SaaS products. Recently, I've been using Claude Code — Anthropic's agentic coding tool — to build a SaaS email client from scratch. It's a learning project, but it's given me a front-row seat to what AI-assisted development actually looks like in practice, not in theory.

Here's what I think product leaders should be paying attention to.

1. AI Will Speed Up the Build-Test-Learn Loop — But Not as Much as You Think

Agentic AI tools compress development cycles. I can spin up features, test ideas, and iterate faster than ever before. That means more experiments at lower cost, faster requirements drafting, and quicker QA cycles. For product strategy, this is significant: you can validate more hypotheses before committing resources.

But "faster" doesn't mean "instant," and it really doesn't mean "unsupervised." More on that below.

2. AI Still Needs a Competent Human in the Loop

Here's the reality check. I'm a seasoned PM, and the best consumer-grade agentic coding tool available today is roughly as capable as a junior developer who works extremely fast. I've built my email client in about 10–12 hours of hands-on work, which is genuinely impressive, but it hasn't been easy.

Claude Code still produces bugs. It's overconfident that it has fixed those bugs. Sometimes it only finds the fix when someone tells it where to look.

It misunderstands requirements.

It loses context across long sessions.

A random person off the street couldn't replicate what I've done, because they wouldn't know what the tool needs to succeed. "Set it and forget it" is a long way off. This puts real brakes on any strategy that assumes AI-driven pivots will be instantaneous.

I'll write more about what it's actually like to build a product with agentic AI in a future post.

3. Features Will Become Agents

This is the big product shift. We're moving from products that let users do things to products that do things for users. Instead of a search engine that helps you find cheap airfare, imagine an AI travel agent that finds flights within your parameters, shows you curated options, and completes the purchase.

For PMs, this changes how we write requirements. We're defining outcomes and guardrails, not screens and flows. PRDs start looking less like interaction specs and more like policy documents.
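To make "outcomes and guardrails, not screens and flows" concrete, here's a minimal sketch of what a policy-style requirement for the travel-agent example might look like in code. Every field name here is invented for illustration; the point is the shape, not the schema.

```python
# A hypothetical "requirements as policy" spec for an AI travel agent:
# the PRD defines the outcome and the guardrails, and the agent is free
# to choose its own path within them. All names are illustrative.

AGENT_POLICY = {
    "outcome": "book a round-trip flight the user accepts",
    "guardrails": {
        "max_price_usd": 800,           # never exceed the user's budget
        "requires_confirmation": True,  # a human approves before purchase
        "allowed_actions": ["search", "hold", "purchase"],
    },
}

def action_allowed(action: str, price_usd: float, policy: dict = AGENT_POLICY) -> bool:
    """Return True only if the proposed agent action satisfies every guardrail."""
    g = policy["guardrails"]
    if action not in g["allowed_actions"]:
        return False
    if price_usd > g["max_price_usd"]:
        return False
    return True
```

Notice what's absent: no screens, no flows, no click paths. The spec constrains behavior, which is exactly what makes it read like a policy document.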

4. Products Will Need an Interface for Other AIs

Here's one that not enough people are talking about. As AI agents start acting on behalf of users across multiple products, your product needs to be consumable by machines, not just humans.

Think about accessibility features: they're invisible to most users, but when a visually impaired person opens your app, those features feed their screen reader everything it needs. We're heading toward something similar for AI — a semantic layer in our products that's invisible to humans but gives agents the context they need to interact on a user's behalf.
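One plausible shape for that semantic layer already exists: structured metadata in the style of JSON-LD and schema.org, which search engines read today and agents could read tomorrow. Below is a rough sketch for the email-client example. The JSON-LD conventions are real; the specific action names and endpoints are invented for illustration.

```python
import json

# A sketch of a machine-readable "agent manifest": metadata invisible to
# human users that tells an AI agent what this product can do and how to
# invoke it, much as accessibility metadata feeds a screen reader.
# Borrows JSON-LD / schema.org conventions; endpoints are hypothetical.

AGENT_MANIFEST = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "Example Email Client",
    "potentialAction": [
        {"@type": "SearchAction",
         "target": "https://example.com/api/search?q={query}"},
        {"@type": "CreateAction", "name": "draft_email",
         "target": "https://example.com/api/drafts"},
    ],
}

# Serialized, this could be served alongside the human-facing UI.
manifest_json = json.dumps(AGENT_MANIFEST, indent=2)
```

An agent that can parse this knows, without scraping your UI, that it can search mail and create drafts on a user's behalf. That's the distribution implication: products that publish this layer become reachable by agents; products that don't become invisible to them.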

This has major implications for API strategy, structured data, and how we think about distribution.

5. Per-Seat Pricing Is on Borrowed Time

You can't sustainably charge per user while building technology designed to help each user do 10x more. The math doesn't work for customers, and they'll figure that out fast.

Expect a shift toward metered, usage-based pricing — paying for what the AI does, not how many people have access. LLM providers already operate this way. The rest of SaaS will follow, and PMs will need to instrument value delivered, not just feature adoption.
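The seat math is easy to work through with toy numbers (all invented for illustration): if AI makes each user 10x more productive, customers consolidate seats and per-seat revenue collapses, while metered revenue tracks the work actually done.

```python
# Toy numbers, invented for illustration. A team needs 1,000 units of
# work done per month; AI makes each user 10x more productive.

WORK_UNITS = 1_000
UNITS_PER_SEAT_BEFORE = 100     # pre-AI productivity per user
UNITS_PER_SEAT_AFTER = 1_000    # 10x with AI assistance
PRICE_PER_SEAT = 50             # $/seat/month

seats_before = WORK_UNITS // UNITS_PER_SEAT_BEFORE  # 10 seats needed
seats_after = WORK_UNITS // UNITS_PER_SEAT_AFTER    # 1 seat needed

revenue_before = seats_before * PRICE_PER_SEAT      # $500/month
revenue_after = seats_after * PRICE_PER_SEAT        # $50/month once the
                                                    # customer consolidates

# Metered pricing charges for what the AI does, not who has access:
RATE_PER_UNIT = 0.40                                # $/unit, invented
metered_revenue = WORK_UNITS * RATE_PER_UNIT        # $400/month regardless
                                                    # of headcount
```

The vendor's per-seat revenue drops 10x even though the value delivered is unchanged. That's the unsustainability in one calculation.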

6. Senior PMs Get More Strategic; Junior PMs Face a Real Problem

If AI can draft requirements, synthesize research, and propose prioritization frameworks, the PM's value shifts up the stack — toward problem selection, stakeholder alignment, and judgment. Senior PMs who operate as strategic integrators become more valuable.

But here's the uncomfortable part: the entry-level tasks that train junior PMs — data analysis, spec writing, competitive research — are exactly what AI handles well. If we want to keep developing the next generation of product leaders, we need to figure out a mentorship model that isn't just make-work. I don't think our industry has grappled with this yet.

7. Explainability Becomes a Product Feature, Not a Compliance Checkbox

When an AI agent makes decisions on behalf of your users — prioritizing their inbox, approving a workflow, recommending a path — those users need to understand why. "The AI decided" isn't good enough for trust, and it's definitely not good enough for compliance.

Explainability, data transparency, and user control over AI behavior aren't afterthoughts. They're the product. The companies that make "how did we get this result?" a first-class experience will win trust and, ultimately, market share.
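What does it mean in practice to make "how did we get this result?" first-class? One approach is to treat every agent decision as a record that carries its own inputs and reasons, so the UI can always answer "why?" Here's a minimal sketch, using the inbox-prioritization example; all field names are invented for illustration.

```python
from dataclasses import dataclass, field

# A minimal sketch of an explainability record: each agent decision
# carries the inputs and human-readable reasons that produced it,
# so "why?" is answerable by design. Field names are illustrative.

@dataclass
class DecisionTrace:
    action: str
    inputs: dict
    reasons: list[str] = field(default_factory=list)

    def explain(self) -> str:
        """Render the decision as a user-facing explanation."""
        return f"{self.action} because: " + "; ".join(self.reasons)

trace = DecisionTrace(
    action="moved message to Priority",
    inputs={"sender": "boss@example.com", "thread_replies": 4},
    reasons=["sender is in your VIP list",
             "you replied to this thread 4 times"],
)
```

The design choice is that the explanation is captured at decision time, not reconstructed after the fact. A product that stores these traces can surface them, audit them, and let users correct them, which is where trust actually comes from.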

What's Next

These are top-level observations. Several of them — especially the human-in-the-loop reality of building with AI, and the junior PM mentorship problem — deserve deeper dives. I'll be writing follow-ups on the ones that generate the most discussion.

What's your take? Which of these feels most urgent in your world?