Evidence-backed analysis across 20 specific tasks. Capability claims sourced from peer-reviewed research, independent benchmarks, and industry data. Adoption rates tracked by industry and company size.
AI Exposure
Defensibility
Avg Capability
20/20 tasks with evidence
Avg Deployment
143 evidence sources
Market Context
Senior PM demand remains strong — AI is amplifying output, not replacing strategic judgment. Entry-level PM/APM hiring is down ~73% (Veritone Q1 2025) as AI handles first-draft PRDs, user stories, and meeting notes. Senior PMs who can direct AI and make strategic product bets are increasingly valuable. Vibe-coding tools now allow PMs to prototype directly, reducing engineering dependency for MVP scoping.
Source: Based on Veritone Q1 2025 Labor Market Analysis, LinkedIn Talent Insights 2025, and analysis of job description shifts in PM roles across US tech companies.
Role Defensibility Profile
Higher = harder to automate
Task-Level Analysis — 20 Tasks
Evaluate competing feature requests, technical investments, and strategic bets to sequence the product roadmap based on impact, effort, and alignment with business objectives.
Capability Evidence
Highest Exposure Areas
Meetings / Coordination / Scheduling
Calendar AI and agentic scheduling tools already handle meeting coordination. The coordination value that remains human is the nuanced political navigation — and that erodes as AI gains organisational context.
Writing / Summarising / Documentation
GPT-5 Deep Research and Claude already produce publication-quality reports, emails, and documentation. By 2027, AI writing assistants are projected to handle first-draft creation for virtually all standard business documents with minimal human input.
Customer / Stakeholder Communication
AI agents are now handling routine customer communication autonomously. The protection in this task comes from novel relationship context and trust — which erodes when your client interactions become standardised or when AI gains sufficient context to replicate the pattern.
Strongest Defenses
Decision-Making Under Uncertainty
This remains one of the most defensible task categories — AI struggles with genuine novelty and accountability. The erosion condition: as AI decision-support tools become standard, the bar for what counts as 'genuine uncertainty' rises, and roles that mostly execute defined playbooks lose this protection.
Customer / Stakeholder Communication
AI agents are now handling routine customer communication autonomously. The protection in this task comes from novel relationship context and trust — which erodes when your client interactions become standardised or when AI gains sufficient context to replicate the pattern.
Relationship Management / Trust Building
This is the false moat most people rely on. Relationship trust is real protection today — it erodes when: (a) clients become comfortable trusting AI-mediated interactions, (b) your relationship context becomes standardisable, or (c) your firm deploys AI account management tools that clients prefer for speed.
Live signals
Real-time AI signals affecting this role
Compare roles
See how other roles compare
This is the average. What about you?
The average Product Manager has a risk score of 38/100. But your specific role, environment, and task allocation could score higher or lower. Get your personalised score in ~4 minutes.
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Roadmap Prioritisation, ...
The IMF finds that approximately 40% of global employment is exposed to AI, with up to 60% in advanced economies. For knowledge work tasks like Roadmap Prioritisation, the study estimates 10% of task ...
Dell'Acqua et al. found that consultants using GPT-4 completed analytical tasks 25.1% faster with 40% higher quality for tasks inside the AI capability frontier. Roadmap Prioritisation contains analyt...
Deployment by Industry
Navigate competing priorities across engineering, design, sales, marketing, and executive leadership to build consensus on product direction and trade-off decisions.
Capability Evidence
Fabric IQ's approach to context fragmentation should help ensure stakeholders work from consistent information and a shared understanding
Anthropic's study of real-world Claude usage across millions of professional conversations found that tasks related to Stakeholder Alignment & Buy-In represent a significant category of AI-augmented w...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Stakeholder Alignment & Buy-In represent a meaningful share of professional LLM usage. The study i...
Deployment by Industry
Translate business needs and user problems into detailed product requirements documents, functional specifications, and acceptance criteria that engineering can build against.
Capability Evidence
AI systems can filter content based on regional requirements
AI agents can assist in writing tech journalism stories
AI agent personas can assist in writing and structuring requirements by providing domain-specific expertise across 17 different departments
Deployment by Industry
Define success metrics and KPIs for product features and initiatives, set up tracking instrumentation, and monitor metric performance against targets.
Capability Evidence
Tableau AI and Pulse enable natural language data querying and automated insight generation. For tasks like Metric Definition & Tracking, AI tools achieve approximately 48% quality on routine data exp...
Cognizant and Oxford Economics analysed 18,000+ tasks across industries and found that Gen AI will impact 90% of jobs but fully displace very few. For tasks like Metric Definition & Tracking, the stud...
The IMF finds that approximately 40% of global employment is exposed to AI, with up to 60% in advanced economies. For knowledge work tasks like Metric Definition & Tracking, the study estimates 27% of...
Deployment by Industry
Analyse qualitative and quantitative user research — interviews, surveys, usability tests, behavioural data — and distil findings into actionable product insights.
Capability Evidence
AI systems can adapt research workflows and toolsets to changing scientific tasks
Automated verification of document-centric responses can assist researchers in validating citations and document references during research synthesis
ScienceClaw can synthesize research findings as a self-evolving AI research colleague
Deployment by Industry
Write user stories with clear acceptance criteria, edge cases, and context that enable engineering teams to implement features without ambiguity.
Capability Evidence
Automate vehicle imagery creation process for marketing
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For User Story Creation, Claude demonstrates ap...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to User Story Creation represent a meaningful share of professional LLM usage. The study indicates 51...
Deployment by Industry
Facilitate sprint planning sessions, define sprint goals, negotiate scope with engineering leads, and ensure the team commits to a realistic and valuable set of work.
Capability Evidence
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Sprint Planning & Ceremony Facilitation represent a meaningful share of professional LLM usage. Th...
GitHub's updated impact study shows 46% of all code is now AI-generated among Copilot users, with 82% developer satisfaction. For tasks like Sprint Planning & Ceremony Facilitation, AI coding assistan...
OpenAI's o1 system card demonstrates significant advancement in complex reasoning tasks, achieving 83rd percentile on Codeforces and 93rd percentile on AMC math competitions. For analytical aspects of...
Deployment by Industry
Define quarterly and annual product objectives and key results aligned to company strategy, negotiate targets with leadership, and track progress throughout the period.
Capability Evidence
MIT Sloan Management Review's annual survey of 3,000+ managers found that only 10% of organizations report significant financial value from AI deployment, despite widespread experimentation. For tasks...
OpenAI's o1 system card demonstrates significant advancement in complex reasoning tasks, achieving 83rd percentile on Codeforces and 93rd percentile on AMC math competitions. For analytical aspects of...
Salesforce reports that AI-using sales teams achieve 83% revenue growth vs 66% without AI. For tasks like OKR & Goal Setting, Einstein AI provides automated data entry, intelligent prioritization, and...
Deployment by Industry
Identify, track, and resolve dependencies between product, engineering, design, data, and infrastructure teams to prevent blockers and ensure coordinated delivery.
Capability Evidence
Extract and reconcile design data for nuclear engineering applications
AI agents that handle interdependent work tasks can better manage dependencies across different functional areas
A Level-4 autonomous optical network can manage cross-domain, cross-layer dependencies for distributed AI training with 3.2x higher performance than single agents
Deployment by Industry
Analyse product usage data, funnel metrics, retention cohorts, and feature adoption patterns to identify opportunities, diagnose problems, and validate hypotheses.
Capability Evidence
Enhanced shopping features provide new data streams and analytics capabilities for interpreting product performance and customer engagement
The model can interpret visual analytics dashboards and charts while applying reasoning to understand product performance patterns
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Product Analytics Interp...
Deployment by Industry
Coordinate product launches across marketing, sales, support, and documentation teams — defining launch tiers, messaging, enablement materials, and rollout timelines.
Capability Evidence
AI agents can reduce coordination overhead in enterprise workflows
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For Go-to-Market Coordination, Claude demonstra...
GitHub's updated impact study shows 46% of all code is now AI-generated among Copilot users, with 82% developer satisfaction. For tasks like Go-to-Market Coordination, AI coding assistants demonstrate...
Deployment by Industry
Continuously refine the product backlog — re-prioritise items, split large epics into implementable stories, remove stale items, and ensure the top of the backlog is always ready for engineering.
Capability Evidence
GitHub's updated impact study shows 46% of all code is now AI-generated among Copilot users, with 82% developer satisfaction. For tasks like Backlog Grooming & Refinement, AI coding assistants demonst...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Backlog Grooming & Refinement represent a meaningful share of professional LLM usage. The study in...
Cognizant and Oxford Economics analysed 18,000+ tasks across industries and found that Gen AI will impact 90% of jobs but fully displace very few. For tasks like Backlog Grooming & Refinement, the stu...
Deployment by Industry
Monitor competitor products, features, pricing, and positioning to identify market gaps, inform differentiation strategy, and anticipate competitive threats.
Capability Evidence
Perform geospatial analysis beyond vector-only limitations
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Competitive Analysis, cu...
Dell'Acqua et al. found that consultants using GPT-4 completed analytical tasks 25.1% faster with 40% higher quality for tasks inside the AI capability frontier. Competitive Analysis contains analytic...
Deployment by Industry
Write release notes, changelog entries, and internal announcements that clearly communicate what shipped, why it matters, and what users or teams need to know.
Capability Evidence
Anthropic's study of real-world Claude usage across millions of professional conversations found that tasks related to Release Communication represent a significant category of AI-augmented work. The ...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Release Communication represent a meaningful share of professional LLM usage. The study indicates ...
Noy & Zhang found in a controlled experiment that AI assistance reduced professional writing task completion time by 40% and improved output quality by 18%. Tasks similar to Release Communication fall...
Deployment by Industry
Evaluate whether to build capabilities in-house, integrate third-party tools, or partner — weighing cost, time-to-market, strategic control, and long-term maintenance burden.
Capability Evidence
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Build vs Buy Decisions, ...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Build vs Buy Decisions represent a meaningful share of professional LLM usage. The study indicates...
MIT Sloan Management Review's annual survey of 3,000+ managers found that only 10% of organizations report significant financial value from AI deployment, despite widespread experimentation. For tasks...
Deployment by Industry
Collect, categorise, and prioritise customer feedback from support tickets, sales calls, NPS surveys, and user interviews to inform product decisions.
Capability Evidence
AI systems can handle initial customer interactions in support contexts
Gemini 3.1 Flash Live's improved audio AI with better precision and lower latency can help process and categorize voice-based customer feedback more effectively
More natural and reliable audio AI capabilities improve the ability to process and categorize customer feedback delivered through voice channels
Deployment by Industry
Work with engineering to assess technical complexity, architectural implications, and implementation risks of proposed features before committing them to the roadmap.
Capability Evidence
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Technical Feasibility Assessment represent a meaningful share of professional LLM usage. The study...
Dell'Acqua et al. found that consultants using GPT-4 completed analytical tasks 25.1% faster with 40% higher quality for tasks inside the AI capability frontier. Technical Feasibility Assessment conta...
AI coding agents can automate part of advanced technical job functions in AI development roles
Deployment by Industry
Define product tiers, packaging structure, and pricing strategy based on value analysis, competitive positioning, and willingness-to-pay research.
Capability Evidence
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For Pricing & Packaging Decisions, Claude demon...
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Pricing & Packaging Deci...
OpenAI's o1 system card demonstrates significant advancement in complex reasoning tasks, achieving 83rd percentile on Codeforces and 93rd percentile on AMC math competitions. For analytical aspects of...
Deployment by Industry
Coach junior product managers, provide feedback on their work, and help develop product thinking skills across the organisation.
Capability Evidence
The IMF finds that approximately 40% of global employment is exposed to AI, with up to 60% in advanced economies. For knowledge work tasks like Team Mentoring & Development, the study estimates 5% of ...
AI can generate design elements for web development
The WEF Future of Jobs Report 2025 projects that employers expect 83 million jobs displaced and 69 million created by 2030, with analytical thinking and creative thinking remaining the most valued hum...
Deployment by Industry
Design and run product experiments — A/B tests, feature flags, beta programmes — to validate hypotheses with data before committing to full rollouts.
Capability Evidence
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Product Experimentation ...
Anthropic's study of real-world Claude usage across millions of professional conversations found that tasks related to Product Experimentation & A/B Testing represent a significant category of AI-augm...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Product Experimentation & A/B Testing represent a meaningful share of professional LLM usage. The ...
Deployment by Industry