By Jack

From Pull Requests to Prompt Requests: What Product Teams Need to Learn Next

There is a moment happening right now in product teams across the industry. Engineers who spent years mastering Git workflows, code review etiquette, and pull request discipline are watching those habits quietly become less central. Peter Steinberger put it plainly: we are moving from pull requests to prompt requests. The unit of software production is shifting. What used to take days of implementation now takes minutes of instruction. Teams that once measured velocity in story points are measuring it in prompts per sprint. The atmosphere is electric. The dashboards are green. Shipping has never felt this fast.

This is not hype, exactly. The capability is real. Autonomous code generation has crossed a threshold that most people quietly expected to arrive later, or not at all. Models can write a working API endpoint, a database schema, a test suite. They can pick up a codebase they have never seen and navigate it with alarming fluency. The craft that once differentiated a senior engineer, the ability to hold a system in one's head and reason about consequence, is no longer locked behind years of apprenticeship. A competent prompt is now the entry point.

So product teams are naturally asking: how do we move faster? How do we scale? How do we use this?

Those are reasonable questions. They are just not the interesting ones anymore.

The Mountain Metaphor That Changes Everything

Peter has a metaphor that I keep returning to. He talks about climbing a mountain. The path to the summit is never a straight line. You cannot see the route from base camp. Switchbacks, false summits, unexpected terrain, the map you drew in advance will be wrong in the ways that matter most. Product development is like this. You cannot know at the beginning what the final product will be. You cannot know which features users will love, which assumptions will break on contact with reality, which early bets will become the core of something valuable. Discovery is not a phase you complete before building. It is the building.

This metaphor does not get easier when AI enters the picture. It gets harder to ignore.

When building was slow and expensive, you were forced to be selective. The constraint itself imposed a kind of discipline. You killed weak ideas not out of wisdom but out of necessity: you simply could not afford to build everything. The filter was economic. Now that constraint is loosening. Code is cheaper. The cost of attempting something has dropped dramatically. That should free teams to experiment faster, and in some ways it does.

But the economics do not flatten uniformly. Building gets cheaper; serving does not. Old SaaS was expensive to build and relatively cheap to run at scale. AI products invert that. The marginal cost of inference is real. The infrastructure costs of ML pipelines are real. You can build a wrapper in a weekend, but the unit economics of running it at scale will find you. The teams discovering this are learning that velocity into the wrong thing is not a shortcut; it is a longer detour.

The Asymmetry Nobody Talks About

There is another asymmetry worth sitting with. AI lowers the cost of attack more than it lowers the cost of defense. When a project receives over a thousand security advisories in a year, roughly three per day, something structural is happening. The cost of generating a plausible-looking vulnerability report has dropped to near zero. The cost of reading one, triaging it, tracing it through a dependency tree, validating whether it is exploitable in context, and deciding how to respond has not dropped at all. The flood is asymmetric. This is not a problem you can prompt your way out of. It requires judgment, context, and someone who understands what actually matters in the specific system they are protecting.

Maintainers are not drowning in code. They are drowning in noise that looks like signal.

The Wrong Question Product Teams Keep Asking

Here is where I want to stop and say something that will feel counterintuitive.

The question everyone is asking, "can AI write code fully autonomously?", is almost the wrong question. Yes, it largely can. That capability is here, and it will keep improving. But the harder thing, the more valuable thing, is not writing code. It is deciding what not to build.

Judgment has always been the expensive part of product development. Not effort. Not craft. Judgment. The ability to look at ten plausible ideas and correctly identify the two that deserve to exist. The ability to kill a feature that is well-built but wrong. The ability to say no to the stakeholder with the confident hypothesis and the persuasive mockup and the early user quote, and be right. That skill did not get cheaper. It got more expensive, because the cost of acting on a bad idea dropped while the cost of shipping the wrong thing stayed exactly where it was.

Where the Real Bottleneck Lives

Product development will not be fully automated. Not because code generation is missing (it is not), but because the work that precedes code generation is not a technical problem. It is a comprehension problem. The real bottleneck was never code production; it was comprehension. Understanding what users actually experience. Distinguishing what they say from what they mean. Knowing which pattern in your feedback is signal and which is just noise. None of that is solved by faster execution. Faster execution actually amplifies the cost of getting it wrong.

System design still matters. Taste still matters. Saying no still matters. These are not romantic notions from an earlier era of software. They are the things that compound into durable products. Thin wrappers are fragile not because the technology is weak but because the judgment layer is absent. The moats that survive are the ones built on workflow depth, on proprietary understanding of a user population, on decisions about specificity made deliberately rather than by drift.

The path to the mountain still has switchbacks. The mountain has not moved.

What Changes Practically for AI Product Teams

So what changes, practically, for product teams?

The role of the product person shifts from gating execution to governing comprehension. If the bottleneck was once getting code written, and that bottleneck is dissolving, then the new bottleneck is the one that was always there but obscured: deciding what to build, and why, and for whom, and what it means if you are wrong.

That means the inputs to decision-making matter more now, not less. Before you write a prompt, you need a clear understanding of the problem you are solving. Before you know the problem, you need honest signal from users, not surveys, not NPS scores, not filtered summaries from whoever is closest to the customer this quarter. Real signal: the unvarnished, often inconvenient, sometimes contradictory evidence of what people actually experience and need.

There is a layer of work that sits before execution. Before prompt quality, there is signal quality. Before shipping, there is understanding. That layer has always existed, but most teams have treated it as manual overhead instead of strategic infrastructure. The teams that win in an agentic product world will not just generate faster. They will understand faster. That is the layer Wingman is built for.

The New Moat Is Speed of Understanding

The teams that figure this out will not look slower to outsiders. They will look more deliberate. They will build fewer things. Their things will land. They will not mistake the ease of execution for validation of the idea being executed. They will hold the mountain metaphor seriously: you cannot know the route in advance, which means the intelligence you gather as you move is not overhead, it is the product.

Product judgment, in other words, is not a soft skill that AI makes obsolete. It is the hard constraint that AI makes more consequential. Every organization will have access to the same generation capabilities. What they will not share is the understanding of their users, the accumulated signal, the wisdom about which problems deserve to be solved and which should be declined.

That is where the work is now. Not in the prompts. In what comes before them.


Want to go deeper on signal quality and AI product management? Download our guide below or request early access to Wingman.