AI disruption doesn't happen at the job-title level. It happens task by task. Runway decomposes your role into the specific work you actually do, maps each task against verified AI capability evidence, and tracks how fast that capability is being adopted in practice. Three separate signals — never collapsed into one.
Wrong unit of analysis
Most AI impact assessments operate at the job-category level — "Marketing" or "Finance". But AI doesn't automate jobs. It automates tasks. A marketing strategist and a marketing coordinator share a title family but have fundamentally different task profiles and exposure. Category-level analysis is noise.
Capability and adoption conflated
The fact that AI can do something is not the same as companies actually deploying it. A task being technically automatable, that automation being adopted at scale, and workers being displaced are three separate signals at three different confidence levels. Most tools collapse them into a single score. That produces confident wrong answers.
Static snapshots
One-time reports are stale within weeks. AI capabilities shift constantly — what matters is not just where you stand, but how fast things are moving and in which direction. Without velocity tracking, assessments are already outdated by the time you read them.
Reassuring by default
Career tools are incentivised to tell you things are fine — it reduces churn. Runway is built on the opposite principle: you deserve the truth, even when it is uncomfortable. Every claim is sourced. Every uncertainty is surfaced.
Every Runway assessment passes through five distinct analytical layers. Each addresses a failure mode that simpler tools ignore.
1. Task decomposition
Your role is broken into atomic tasks — the specific, discrete units of work that constitute what you actually do. Not "content creation" but the specific subtasks within it. Task profiles are anchored to empirical occupational data and calibrated by your input. Two people with the same job title will get different profiles based on how they actually spend their time.
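As a minimal sketch of what such a profile might look like in code (the class names and fields here are illustrative, not Runway's production schema), each atomic task carries both its empirical baseline share and the share calibrated from your input:

```python
from dataclasses import dataclass, field

@dataclass
class AtomicTask:
    name: str                  # e.g. "draft campaign briefs", not "content creation"
    baseline_share: float      # time share from occupational data, 0..1
    calibrated_share: float    # share after calibration by your input, 0..1

@dataclass
class TaskProfile:
    role_title: str
    tasks: list[AtomicTask] = field(default_factory=list)

    def divergence(self) -> float:
        """Total gap between self-reported and empirical time shares."""
        return sum(abs(t.calibrated_share - t.baseline_share) for t in self.tasks)
```

The `divergence` helper hints at how a self-report that drifts far from the empirical baseline could be surfaced rather than silently accepted.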
2. Capability evidence
For each task, we track what AI systems can demonstrably do today — at what quality level, with what evidence. Every capability claim carries a confidence level: Confirmed (multiple independent sources), Plausible (single credible source), or Early Signal (preliminary). Vendor marketing claims are weighted differently from peer-reviewed benchmarks. We track the evidence, not the hype.
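A plausible shape for a single evidence record, with invented source weights to illustrate how vendor claims could be discounted relative to peer-reviewed benchmarks (the weights, names, and fields are assumptions for this sketch, not published parameters):

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    CONFIRMED = "confirmed"        # multiple independent sources
    PLAUSIBLE = "plausible"        # single credible source
    EARLY_SIGNAL = "early_signal"  # preliminary, explicitly unconfirmed

# Hypothetical weights: peer-reviewed benchmarks count for more than
# vendor marketing when evidence is aggregated per task.
SOURCE_WEIGHTS = {
    "peer_reviewed": 1.0,
    "independent_benchmark": 0.9,
    "analyst_report": 0.6,
    "vendor_benchmark": 0.5,
    "vendor_announcement": 0.2,
}

@dataclass
class CapabilityEvidence:
    task: str
    quality_vs_human: float    # demonstrated quality relative to humans, 0..1
    source_type: str           # key into SOURCE_WEIGHTS
    confidence: Confidence
```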
3. Adoption tracking
Technical capability is not the same as real-world deployment. We separately track what percentage of companies are actually using AI for each task in practice — by industry and company size — and how fast that adoption rate is changing quarter over quarter. This is the signal most tools miss entirely.
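One straightforward way to define that velocity, sketched here as the mean quarter-over-quarter change in a segment's adoption series (the function and its inputs are illustrative, not the production definition):

```python
def adoption_velocity(quarterly_rates: list[float]) -> float:
    """Mean quarter-over-quarter change in adoption rate.

    quarterly_rates: fraction of companies in a segment using AI for a
    task in production, one entry per quarter, oldest first.
    """
    if len(quarterly_rates) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(quarterly_rates, quarterly_rates[1:])]
    return sum(deltas) / len(deltas)

# e.g. adoption_velocity([0.08, 0.11, 0.15]) ≈ 0.035 per quarter
```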
4. Personalised exposure
Your exposure profile reflects your specific situation — not a generic average. Task importance, work environment, industry, regulatory context, existing tool usage, and organisational factors all shape your individual results. Two people in the same role at different companies can have meaningfully different exposure profiles.
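A deliberately simplified sketch of how per-task signals might roll up into a personal score. The multiplicative form and the field names are assumptions standing in for the richer situational factors listed above:

```python
def personal_exposure(tasks: list[dict]) -> float:
    """Importance-weighted exposure across a calibrated task profile.

    Each task dict carries, on a 0..1 scale:
      importance:    how central the task is to this specific role
      capability:    verified AI capability for the task
      adoption:      deployment rate in this industry / company size
      defensibility: situational resistance (higher = harder to automate)
    """
    weighted = sum(
        t["importance"] * t["capability"] * t["adoption"] * (1 - t["defensibility"])
        for t in tasks
    )
    total_importance = sum(t["importance"] for t in tasks) or 1.0
    return weighted / total_importance
```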
5. Trajectory and velocity
A snapshot tells you where you are. Runway tells you where you are heading. We project how your exposure will change over 12 to 36 months based on capability advancement rates and adoption velocity for the specific tasks in your profile. The rate of change is often a more critical signal than the current score.
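As a minimal illustration of a scenario point along the current trajectory: the linear form below is our assumption for this sketch, and real adoption curves are often S-shaped, which is one reason these are scenarios rather than predictions:

```python
def project_exposure(current: float, drift_per_quarter: float, months: int) -> float:
    """Scenario point: extrapolate exposure along the current trajectory.

    drift_per_quarter combines capability advancement and adoption velocity
    for the tasks in a profile; the result is clamped to the 0..1 scale.
    """
    return max(0.0, min(1.0, current + drift_per_quarter * months / 3))

# 12- and 36-month scenario points on the same trajectory:
# project_exposure(0.35, 0.02, 12) ≈ 0.43
# project_exposure(0.35, 0.02, 36) ≈ 0.59
```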
Most tools give you a single “AI risk” number. That number conflates three fundamentally different questions. Runway keeps them separate, because they have different confidence levels and different implications for what you should do; a sketch of that separation follows the three signals below.
Capability
Can AI do this task?
What AI systems have demonstrated against this specific task, at what quality threshold relative to human performance, verified by independent evidence. This is the technical ceiling — not a prediction about your job.
Adoption
Are companies actually deploying it?
What percentage of organisations are running AI against this task in production — not what is technically possible, but what is actually happening. Tracked by industry and company size, with quarterly velocity.
Exposure
What does that mean for you?
Your personal exposure, shaped by how central each task is to your role, your specific work environment, and the defensibility factors unique to your situation. This is where generic analysis becomes personal intelligence.
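Put together, a per-task result can be pictured as three fields that travel side by side, each with its own confidence tier and with no combined score to collapse them (an illustrative structure, not the actual output format):

```python
from dataclasses import dataclass

@dataclass
class TaskSignals:
    """Per-task result: three signals side by side, each with its own
    confidence tier. Deliberately no single combined 'risk' field."""
    task: str
    capability: float           # technical ceiling (0..1)
    capability_confidence: str  # Confirmed / Plausible / Early Signal
    adoption: float             # production deployment rate (0..1)
    adoption_confidence: str
    exposure: float             # personal, situational exposure (0..1)
    exposure_confidence: str
```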
Not all sources are equal. Every claim in the system carries a confidence level based on evidence quality, and that confidence level is visible to you. A sketch of how such tiers can be assigned follows the three levels below.
Confirmed
Multiple independent sources corroborate the claim. Includes peer-reviewed research, independent benchmarks with published methodology, and large-sample labour market data.
Plausible
Single credible source, or multiple sources that are not fully independent. Includes analyst reports, vendor benchmarks with published methodology, and smaller-sample market signals.
Early Signal
Preliminary or unverified. Includes preprints, vendor announcements without independent validation, and early-stage market trends. Explicitly marked as unconfirmed in your results.
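The tier assignment itself reduces to a simple rule over source counts. The function below restates the definitions above in code and is illustrative only:

```python
def confidence_tier(independent_sources: int, credible_sources: int) -> str:
    """Assign a tier from the evidence behind a single claim.

    independent_sources: corroborating sources that are fully independent
    credible_sources:    credible sources overall, independent or not
    """
    if independent_sources >= 2:
        return "Confirmed"
    if credible_sources >= 1:
        return "Plausible"
    return "Early Signal"
```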
We draw on multiple categories of data to ground every assessment in evidence rather than opinion.
Occupational task data
Empirical task distributions from occupational research databases and workforce analytics. This is the baseline for what people in your role actually spend time on.
AI capability research
Model evaluations, independent benchmarks, system cards, and peer-reviewed research on AI performance across specific task domains. Tracked per-task, not per-job.
Enterprise adoption data
Tool deployment rates, enterprise AI adoption surveys, and workforce analytics tracking what companies are actually running in production — not what vendors claim.
Labour market signals
Job posting analytics, skill demand trends, and employment data from government statistical agencies and workforce research institutions.
Economic research
Labour economics papers, AI economic impact studies, and task-based automation research from leading research institutions.
Not all tasks are equally automatable, even when AI can technically do them. We assess the defensibility of each task in your profile based on structural factors that affect real-world automation resistance; a sketch of how the four factors below might combine follows them.
Organisational context
Tasks that require deep knowledge of a specific organisation's processes, politics, or proprietary systems are harder to hand to AI — even capable AI.
Relationship value
When the value of a task comes from trust, rapport, or human connection — not just the output — automation faces a different kind of barrier.
Consequence stakes
High-consequence tasks (where errors are costly or irreversible) resist automation longer, because organisations require human accountability.
Output verifiability
Tasks where quality can only be judged by someone who already knows the answer are harder to delegate to AI confidently — there's no easy way to check the output.
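One way those four factors could combine, using a plain average as a placeholder for whatever weighting a real model would apply (the function, names, and combining rule are all assumptions for illustration):

```python
def defensibility(org_context: float, relationship_value: float,
                  consequence_stakes: float, verification_difficulty: float) -> float:
    """Combine the four structural factors into one resistance score (0..1).

    Each input rates one factor for one task on a 0..1 scale. A plain
    average is used here; a real model would weight factors per task type.
    """
    factors = (org_context, relationship_value,
               consequence_stakes, verification_difficulty)
    return sum(factors) / len(factors)

# High defensibility dampens the raw pressure on a task:
# effective_pressure = capability * adoption * (1 - defensibility(...))
```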
We are direct about what Runway does well and where its limits are. Overpromising is the failure mode we are most determined to avoid.
What it is
A structured analytical system that decomposes your role into tasks, maps each task against verified capability and adoption evidence, and computes your personal exposure profile with confidence intervals.
A continuously updated intelligence layer — not a one-time report. New evidence is ingested, validated, and reflected in your results as AI capabilities and enterprise adoption evolve.
A decision-support tool that tells you which parts of your work are changing, how fast, and what the evidence quality is — so you can make informed career decisions.
What it is not
Not a crystal ball. Projections are scenario-based extrapolations from current evidence, not predictions. They show where current trajectories lead — not what will definitely happen. Regulatory shifts, market disruptions, or capability breakthroughs could change the picture.
Not an LLM wrapper. The intelligence comes from a structured data layer that tracks tasks, capabilities, and adoption independently. AI is used for synthesis and explanation — the analytical engine is deterministic.
Not career astrology. Every claim is tied to a specific evidence source with a stated confidence level. There are no personality-style insights, no vague "you should upskill" platitudes, and no reassurance that is not evidence-grounded.
Not based on self-report alone. Your inputs calibrate an empirical baseline anchored to occupational research — they do not replace it. When your self-report diverges significantly from the baseline, we surface that explicitly.
Continuous signal updates
Capability evidence and adoption data are updated as new research, benchmarks, and market signals are validated — not on a fixed quarterly cycle.
Version-tracked scoring
Every assessment records which scoring version was used. Your results are reproducible and comparable across time.
Outcome tracking
We follow up at 30, 90, and 180 days to measure whether our assessments predicted real-world outcomes accurately.
Confidence scoring
Every assessment includes a confidence grade. When data quality is low or evidence is sparse, score ranges widen and limitations are stated explicitly. The sketch below shows one way version tracking and range widening could fit together.
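A sketch of these two mechanisms side by side: a version identifier pinned to every assessment, and a score range whose width is driven by evidence quality. The constant, names, and linear widening are invented for illustration:

```python
from dataclasses import dataclass

SCORING_VERSION = "2025.1"  # hypothetical identifier for reproducibility

def score_range(point: float, evidence_quality: float) -> tuple[float, float]:
    """Sparse or weak evidence widens the reported range.

    evidence_quality: 0 (none) to 1 (strong); the half-width of the
    range shrinks linearly as quality rises. The 0.25 cap is invented.
    """
    half_width = 0.25 * (1 - evidence_quality)
    return max(0.0, point - half_width), min(1.0, point + half_width)

@dataclass
class Assessment:
    exposure: float           # point estimate (0..1)
    exposure_range: tuple[float, float]
    confidence_grade: str     # reflects underlying evidence quality
    scoring_version: str = SCORING_VERSION
```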
Most career tools are static — the analysis you get today is the same analysis you would have got six months ago. Runway is built differently. The underlying intelligence layer gets more accurate over time.
Every signal processed improves coverage
Each new capability benchmark, adoption survey, or labour market report is validated, mapped to specific tasks, and integrated into the evidence base. Coverage gaps narrow continuously — especially for roles and industries where early data was sparse.
Every assessment refines the baselines
Aggregate assessment data (fully anonymised) reveals where empirical task baselines are accurate and where they diverge from how people actually work. This tightens the accuracy of archetype profiles over time.
Outcome tracking closes the loop
Follow-up surveys at 30, 90, and 180 days measure whether our exposure assessments corresponded to real-world outcomes. Where they did not, we identify why — and the model adjusts. This is the feedback loop that separates a living system from a static report.
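One standard way to quantify that check is the Brier score, which measures how well probabilistic assessments matched what actually happened; the sketch below is an illustration, not a claim about the exact metric in use:

```python
def brier_score(predicted: list[float], occurred: list[bool]) -> float:
    """Mean squared gap between predicted probabilities and outcomes.

    predicted: assessed probability that a task-level change materialises
               within the follow-up window (0..1)
    occurred:  whether it did, from the 30/90/180-day follow-ups
    Lower is better; uninformative 50/50 guesses score 0.25.
    """
    return sum((p - float(o)) ** 2
               for p, o in zip(predicted, occurred)) / len(predicted)
```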
The result is an intelligence layer that is measurably better at assessing exposure today than it was three months ago — and will be better again in another three. That compounding accuracy is the core of what we are building.
Scores reflect task-level structural exposure — not individual capability, work quality, or adaptability.
Adoption data varies in quality by industry and region. Some segments have stronger evidence than others — your confidence grade reflects this.
AI capability is advancing faster than any monitoring system can fully track. We mitigate this with continuous signal processing, but gaps exist.
Projections assume current trajectory. Regulatory changes, market shifts, or breakthrough capabilities could change the picture.
This tool does not constitute professional career, legal, or financial advice.
Runway's methodology draws on established research in labour economics and task-based automation analysis — the same frameworks used by leading economists studying technological displacement.
Task-based automation analysis
The principle that automation risk is best understood at the task level, not the occupation level. Pioneered by labour economists studying how technology reshapes work.
Routine vs. non-routine task frameworks
The distinction between routine cognitive, non-routine cognitive, routine manual, and non-routine manual tasks — and how each category responds differently to AI advancement.
Capability-adoption gap research
Economic research showing that technical capability consistently outpaces enterprise adoption, and that the gap between them varies by industry, regulation, and task type.
This is a model, not a prophecy. The value is in what it reveals about the structure of your work — and what that structure means as AI capabilities evolve.
Start your assessment →