Evidence-backed analysis across 17 specific tasks. Capability claims sourced from peer-reviewed research, independent benchmarks, and industry data. Adoption rates tracked by industry and company size.
AI Exposure
Defensibility
Avg Capability
17/17 tasks with evidence
Avg Deployment
92 evidence sources
Market Context
Generative AI tools — including GitHub Copilot for requirements, Jira AI, and purpose-built tools like Elicit — are automating the production of BRDs, user stories, and data dictionaries at pace. A Forrester 2025 survey found that 44% of enterprises had reduced BA headcount for routine process documentation roles while increasing investment in senior BAs who bridge business and technical domains. The 'translating business needs into technical specs' function remains a human stronghold, but entry-level BA roles face the steepest attrition. BAs who develop AI prompt engineering and process mining skills are significantly more resilient.
Source: Based on Forrester 'The Future of Business Analysis' (2025), IIBA Global State of Business Analysis Report (2025), and LinkedIn Hiring Trends (Q3 2025).
Role Defensibility Profile
Higher = harder to automate
Task-Level Analysis — 17 Tasks
Conduct stakeholder interviews, workshops, and observation sessions to gather, clarify, and document business requirements for projects and initiatives.
Capability Evidence
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For Requirements Elicitation, Claude demonstrat...
Google reports that Gemini integration in Workspace automates email responses, generates document drafts, and creates spreadsheet formulas from natural language. For tasks like Requirements Elicitatio...
Google DeepMind reports Gemini Ultra achieved state-of-the-art results on 30 of 32 benchmarks, including 90% on MMLU and strong multimodal reasoning. For tasks like Requirements Elicitation that invol...
Highest Exposure Areas
Analysis / Reporting
Standard analysis and reporting is already being absorbed by AI at the enterprise level. McKinsey notes analysis tasks among the sharpest automation increases. The defensible remainder is interpretation requiring proprietary context — that window is closing.
Writing / Summarising / Documentation
GPT-5 Deep Research and Claude already produce publication-quality reports, emails, and documentation. By 2027, AI writing assistants will handle first-draft creation for virtually all standard business documents with minimal human input.
Customer / Stakeholder Communication
AI agents are now handling routine customer communication autonomously. The protection in this task comes from novel relationship context and trust — which erodes when your client interactions become standardised or when AI gains sufficient context to replicate the pattern.
Strongest Defences
Customer / Stakeholder Communication
AI agents are now handling routine customer communication autonomously. The protection in this task comes from novel relationship context and trust — which erodes when your client interactions become standardised or when AI gains sufficient context to replicate the pattern.
Decision-Making Under Uncertainty
This remains one of the most defensible task categories — AI struggles with genuine novelty and accountability. The erosion condition: as AI decision-support tools become standard, the bar for what counts as 'genuine uncertainty' rises, and roles that mostly execute defined playbooks lose this protection.
Analysis / Reporting
Standard analysis and reporting is already being absorbed by AI at the enterprise level. McKinsey notes analysis tasks among the sharpest automation increases. The defensible remainder is interpretation requiring proprietary context — that window is closing.
This is the average. What about you?
The average Business Analyst carries a risk score of 48/100, but your specific role, environment, and task allocation could place you higher or lower. Get your personalised score in ~4 minutes.
Deployment by Industry
Write detailed business requirements documents, user stories, acceptance criteria, and functional specifications that translate stakeholder needs into implementable requirements.
Capability Evidence
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For Requirements Documentation, Claude demonstr...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Requirements Documentation represent a meaningful share of professional LLM usage. The study indic...
GitHub's updated impact study shows 46% of all code is now AI-generated among Copilot users, with 82% developer satisfaction. For tasks like Requirements Documentation, AI coding assistants demonstrat...
Deployment by Industry
Document current-state and future-state business processes using flowcharts, swim-lane diagrams, and process models to identify inefficiencies and improvement opportunities.
Capability Evidence
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For Business Process Mapping, Claude demonstrat...
The Anthropic Economic Impact Report found that AI systems achieve 19% human-competitive quality on routine knowledge tasks related to Business Process Mapping, though significant quality gaps persist...
Google reports that Gemini integration in Workspace automates email responses, generates document drafts, and creates spreadsheet formulas from natural language. For tasks like Business Process Mappin...
Deployment by Industry
Analyse business data using SQL, Excel, and BI tools to identify trends, anomalies, and insights that inform decision-making, and present findings in reports and dashboards.
Capability Evidence
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For Data Analysis & Reporting, Claude demonstra...
OpenAI's o1 system card demonstrates significant advancement in complex reasoning tasks, achieving 83rd percentile on Codeforces and 93rd percentile on AMC math competitions. For analytical aspects of...
Noy & Zhang found in a controlled experiment that AI assistance reduced professional writing task completion time by 40% and improved output quality by 18%. Tasks similar to Data Analysis & Reporting ...
Deployment by Industry
Compare current-state processes, systems, and capabilities against desired future-state to identify gaps, quantify impacts, and recommend solutions for closing them.
Capability Evidence
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For Gap Analysis, Claude demonstrates approxima...
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Gap Analysis, current AI...
OpenAI's o1 system card demonstrates significant advancement in complex reasoning tasks, achieving 83rd percentile on Codeforces and 93rd percentile on AMC math competitions. For analytical aspects of...
Deployment by Industry
Plan, coordinate, and oversee user acceptance testing by defining test scenarios, managing test execution, tracking defects, and obtaining stakeholder sign-off on deliverables.
Capability Evidence
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like User Acceptance Testing ...
GitHub's updated impact study shows 46% of all code is now AI-generated among Copilot users, with 82% developer satisfaction. For tasks like User Acceptance Testing Coordination, AI coding assistants ...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to User Acceptance Testing Coordination represent a meaningful share of professional LLM usage. The s...
Deployment by Industry
Develop business cases for proposed initiatives including cost-benefit analysis, ROI projections, risk assessment, and strategic alignment to support investment decisions.
Capability Evidence
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Business Case Developmen...
Dell'Acqua et al. found that consultants using GPT-4 completed analytical tasks 25.1% faster with 40% higher quality for tasks inside the AI capability frontier. Business Case Development contains ana...
Brynjolfsson, Li & Raymond found that AI assistance increased customer service worker productivity by 14% on average, with 34% gains for novice workers, in a study of 5,179 agents. For tasks like Busi...
Deployment by Industry
Assess the impact of proposed changes on people, processes, and technology across affected business units, identifying risks and developing mitigation strategies.
Capability Evidence
The Anthropic Economic Impact Report found that AI systems achieve 28% human-competitive quality on routine knowledge tasks related to Change Impact Assessment, though significant quality gaps persist...
MIT Sloan Management Review's annual survey of 3,000+ managers found that only 10% of organizations report significant financial value from AI deployment, despite widespread experimentation. For tasks...
Dell'Acqua et al. found that consultants using GPT-4 completed analytical tasks 25.1% faster with 40% higher quality for tasks inside the AI capability frontier. Change Impact Assessment contains anal...
Deployment by Industry
Evaluate potential solutions — software products, process changes, or organisational restructuring — against requirements, constraints, and evaluation criteria to recommend the best option.
Capability Evidence
GitHub's updated impact study shows 46% of all code is now AI-generated among Copilot users, with 82% developer satisfaction. For tasks like Solution Evaluation & Recommendation, AI coding assistants ...
MIT Sloan Management Review's annual survey of 3,000+ managers found that only 10% of organizations report significant financial value from AI deployment, despite widespread experimentation. For tasks...
Dell'Acqua et al. found that consultants using GPT-4 completed analytical tasks 25.1% faster with 40% higher quality for tasks inside the AI capability frontier. Solution Evaluation & Recommendation c...
Deployment by Industry
Maintain and prioritise the product or project backlog by grooming user stories, clarifying requirements, estimating effort with development teams, and sequencing work based on business value.
Capability Evidence
The WEF Future of Jobs Report 2025 projects that employers expect 83 million jobs displaced and 69 million created by 2030, with analytical thinking and creative thinking remaining the most valued hum...
MIT Sloan Management Review's annual survey of 3,000+ managers found that only 10% of organizations report significant financial value from AI deployment, despite widespread experimentation. For tasks...
Cognizant and Oxford Economics analysed 18,000+ tasks across industries and found that Gen AI will impact 90% of jobs but fully displace very few. For tasks like Backlog Management & Prioritisation, t...
Deployment by Industry
Create low-fidelity wireframes, mockups, and interactive prototypes to visualise proposed solutions and validate requirements with stakeholders before development begins.
Capability Evidence
Adobe reports that Firefly generated over 6.5 billion images in its first year, with Generative Fill and Expand reducing iterative editing time significantly. For tasks like Wireframing & Prototyping,...
Tableau AI and Pulse enable natural language data querying and automated insight generation. For tasks like Wireframing & Prototyping, AI tools achieve approximately 52% quality on routine data explor...
Google DeepMind reports Gemini Ultra achieved state-of-the-art results on 30 of 32 benchmarks, including 90% on MMLU and strong multimodal reasoning. For tasks like Wireframing & Prototyping that invo...
Deployment by Industry
Define key performance indicators and success metrics for business processes and projects, establishing baselines, targets, and measurement methodologies.
Capability Evidence
The Anthropic Economic Impact Report found that AI systems achieve 36% human-competitive quality on routine knowledge tasks related to KPI & Metrics Definition, though significant quality gaps persist...
MIT Sloan Management Review's annual survey of 3,000+ managers found that only 10% of organizations report significant financial value from AI deployment, despite widespread experimentation. For tasks...
Tableau AI and Pulse enable natural language data querying and automated insight generation. For tasks like KPI & Metrics Definition, AI tools achieve approximately 42% quality on routine data explora...
Deployment by Industry
Create logical and conceptual data models, map data flows between systems, and define data transformation rules to support system integration and migration projects.
Capability Evidence
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Data Modelling & Mapping...
Anthropic's study of real-world Claude usage across millions of professional conversations found that tasks related to Data Modelling & Mapping represent a significant category of AI-augmented work. T...
The Anthropic Economic Index analysis of real-world Claude usage patterns found that tasks related to Data Modelling & Mapping represent a meaningful share of professional LLM usage. The study indicat...
Deployment by Industry
Draft requests for proposal, evaluate vendor responses against requirements and scoring criteria, coordinate demos, and produce recommendation reports for vendor selection.
Capability Evidence
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Vendor Assessment & RFP ...
Anthropic's study of real-world Claude usage across millions of professional conversations found that tasks related to Vendor Assessment & RFP Management represent a significant category of AI-augment...
Dell'Acqua et al. found that consultants using GPT-4 completed analytical tasks 25.1% faster with 40% higher quality for tasks inside the AI capability frontier. Vendor Assessment & RFP Management con...
Deployment by Industry
Create user guides, training materials, and standard operating procedures for new or changed systems and processes to support adoption and knowledge transfer.
Capability Evidence
The Frontier Model Forum benchmarks document consistent year-over-year improvement across reasoning, coding, and professional knowledge tasks. For Training Material Development, frontier models demons...
Deployment by Industry
Investigate operational problems and process failures using structured techniques such as fishbone diagrams, 5-whys, and Pareto analysis to identify underlying causes and recommend corrective actions.
Capability Evidence
The Claude system card reports near-expert performance on graduate-level reasoning (GPQA), professional coding (SWE-bench), and document analysis tasks. For Root Cause Analysis, Claude demonstrates ap...
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Root Cause Analysis, cur...
OpenAI's o1 system card demonstrates significant advancement in complex reasoning tasks, achieving 83rd percentile on Codeforces and 93rd percentile on AMC math competitions. For analytical aspects of...
Deployment by Industry
Conduct post-implementation reviews to evaluate whether delivered solutions meet business objectives, capture lessons learned, and identify remaining gaps or enhancement opportunities.
Capability Evidence
The Stanford HAI AI Index Report 2025 documents AI systems achieving expert-level performance on graduate-level science questions and professional coding tasks. For tasks like Post-Implementation Revi...
GitHub's updated impact study shows 46% of all code is now AI-generated among Copilot users, with 82% developer satisfaction. For tasks like Post-Implementation Review, AI coding assistants demonstrat...
Cognizant and Oxford Economics analysed 18,000+ tasks across industries and found that Gen AI will impact 90% of jobs but fully displace very few. For tasks like Post-Implementation Review, the study ...
Deployment by Industry