Runway uses precise language because precision matters. Every branded term below has a strict meaning — what we measure, how we grade it, what we deliver. Bookmark this if you use Runway regularly.
Your personalised intelligence report.
The primary Runway output. A task-by-task read on your specific role: what AI can now do against each task, what companies in your segment are actually deploying, which parts of your work remain defensible and for how long. Every claim carries a source, a date, and a confidence grade. Generated on first assessment and regenerated whenever our signal pipeline validates a material change — not on a fixed schedule.
The atomic tasks that make up your role, weighted by how much of your work each represents.
Built during the assessment. Starts from a role archetype (what the evidence says most people in your role actually do), then calibrated by your input — what applies, what doesn't, where you diverge. The Task Map is the input to everything downstream: your Exposure Profile, your Moat Analysis, and The Brief are all computed against it.
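The calibration step above is, at heart, a reweighting: start from the archetype's task weights, apply your adjustments, renormalise. A minimal sketch — task names, weights, and the adjustment mechanism are all illustrative, not Runway's actual schema:

```python
def calibrate_task_map(archetype: dict[str, float],
                       adjustments: dict[str, float]) -> dict[str, float]:
    """Start from archetype task weights, apply the user's adjustments,
    and renormalise so the weights sum to 1.
    An adjustment of 0 drops a task from the profile entirely."""
    raw = {task: adjustments.get(task, weight)
           for task, weight in archetype.items()}
    raw = {task: w for task, w in raw.items() if w > 0}
    total = sum(raw.values())
    return {task: w / total for task, w in raw.items()}

# Hypothetical archetype: what most people in this role do.
archetype = {"drafting_reports": 0.40, "client_calls": 0.35, "data_entry": 0.25}

# This user does no data entry and spends more time on calls.
task_map = calibrate_task_map(archetype,
                              {"data_entry": 0.0, "client_calls": 0.45})
```

The renormalisation matters: once a task is dropped or reweighted, the remaining weights must still describe 100% of the role, because everything downstream is computed against them.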
Per-task defensibility — what AI cannot easily replicate, and how long that holds.
For each task in your profile, we quantify the structural factors that resist automation: relationship depth, proprietary context, consequence stakes, organisational specificity, and judgement under uncertainty. Output: a durability estimate per task (1–2 years, 2–5 years, 5+ years) with the reasoning behind it. Available in full on Pro.
Your summary score — risk, range, and confidence grade.
The headline read on your role: Automation Risk, Defensive Strength, and Augmentation Opportunity, each on a 0–100 scale. Presented with a confidence interval (honest range, not a false-precision point estimate) and a confidence grade that tells you how much to trust the read. Computed deterministically from your Task Map against our capability and adoption data — not vibes, not self-report.
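A deterministic read of this kind is essentially a weighted aggregation: each task's risk (derived from capability and adoption evidence) weighted by that task's share of the role. A sketch with made-up per-task scores — the real model is richer, but the shape is this:

```python
def exposure_score(task_map: dict[str, float],
                   task_risk: dict[str, float]) -> float:
    """Weighted average of per-task automation risk (0-100 scale),
    weighted by each task's share of the role. Per-task risks here
    are illustrative stand-ins for evidence-derived values."""
    return sum(weight * task_risk[task] for task, weight in task_map.items())

task_map = {"drafting_reports": 0.5, "client_calls": 0.3, "data_entry": 0.2}
task_risk = {"drafting_reports": 70.0, "client_calls": 20.0, "data_entry": 90.0}

score = exposure_score(task_map, task_risk)  # weighted sum: 35 + 6 + 18 = 59
```

The same inputs always produce the same score — which is what makes a confidence interval around it meaningful rather than decorative.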
A–D reliability indicator on every score we produce.
A: narrow confidence interval, strong data coverage. B: solid evidence with some gaps. C: meaningful uncertainty — read the range carefully. D: sparse data, treat as directional. The grade is a product feature, not a caveat. We would rather tell you the data is thin than manufacture precision we do not have.
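A grade of this kind can be pictured as a function of interval width and data coverage. The thresholds below are invented for illustration — only the A–D bands and their qualitative meanings come from the description above:

```python
def confidence_grade(interval_width: float, coverage: float) -> str:
    """Map a confidence-interval width (score points, 0-100 scale) and a
    data-coverage fraction (0-1) to an A-D grade.
    Cut-offs are illustrative, not Runway's actual calibration."""
    if interval_width <= 10 and coverage >= 0.8:
        return "A"   # narrow interval, strong coverage
    if interval_width <= 20 and coverage >= 0.6:
        return "B"   # solid evidence, some gaps
    if interval_width <= 35 and coverage >= 0.4:
        return "C"   # meaningful uncertainty
    return "D"       # sparse data, directional only
```

Note that both conditions must hold at each band: a narrow interval built on thin data still falls through to a lower grade.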
How your Exposure Profile changes over time.
Projections at 12, 24, and 36 months based on current capability advancement rates and adoption velocity for the specific tasks in your profile. Includes velocity direction (accelerating, stable, decelerating) and projection caveats — the assumptions underlying each forecast. Available on Pro.
The rate of change in your exposure.
Direction and magnitude of shift in your Exposure Profile, quarter over quarter. Often the most important signal — a moderate risk score that is accelerating matters more than a high risk score that is flat.
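The quarter-over-quarter read sketches out simply: take first differences of the score history, then compare the latest delta against the one before it. Numbers are illustrative:

```python
def velocity(history: list[float]) -> tuple[float, str]:
    """Latest quarter-over-quarter change in an exposure score, plus a
    direction label. `history` is oldest-to-newest quarterly scores."""
    deltas = [b - a for a, b in zip(history, history[1:])]
    latest = deltas[-1]
    if len(deltas) >= 2 and latest > deltas[-2]:
        direction = "accelerating"
    elif len(deltas) >= 2 and latest < deltas[-2]:
        direction = "decelerating"
    else:
        direction = "stable"
    return latest, direction

# A modest-looking score that is quietly accelerating:
change, direction = velocity([52.0, 53.0, 55.0, 59.0])  # deltas: 1, 2, 4
```

The point of the example: the absolute level (59) is unremarkable, but the widening deltas are the signal worth acting on.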
A validated update from our ingestion pipeline.
A single piece of processed evidence — a benchmark result, a tool release, a deployment data point, a regulatory shift — that has been extracted, validated against its source type, graded for confidence, and mapped to specific tasks in our taxonomy. Signals feed The Brief and power the live alerts on Pro.
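A validated signal can be pictured as a structured record with exactly the fields named above: the claim, its source, a date, a grade, and the tasks it maps to. Field names here are illustrative, not Runway's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    """One validated piece of evidence, mapped into the task taxonomy.
    Hypothetical field names; the shape mirrors the description above."""
    claim: str
    source_type: str           # e.g. "peer_reviewed", "vendor", "preprint"
    source_url: str
    observed_on: str           # ISO date the evidence was published
    confidence_grade: str      # assigned at validation time
    task_ids: tuple[str, ...]  # taxonomy tasks this evidence maps to

sig = Signal(
    claim="Model X drafts contract summaries at paralegal quality",
    source_type="peer_reviewed",
    source_url="https://example.org/benchmark",
    observed_on="2025-06-01",
    confidence_grade="A",
    task_ids=("draft_summaries",),
)
```

Making the record immutable (`frozen=True`) reflects the idea that a signal, once validated and graded, is a fixed piece of evidence — a re-grade would be a new record, not an edit.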
Verified claim: AI system X can do task Y at quality level Z.
The capability layer of The Graph. Each entry records a specific AI system's performance against a specific task at a measurable quality threshold, with its source, date, and confidence grade. Peer-reviewed benchmarks outrank vendor marketing. Independent evaluation outranks self-reported results.
Verified claim: X% of companies in this segment are actually deploying AI for this task.
The deployment layer of The Graph. Tracked per task × industry × company size segment. Capability is what AI can do in a lab. Adoption is what companies are shipping in production. They move independently — and you need both to know what is happening to your work.
Highest confidence grade — multiple independent sources.
A capability or adoption claim corroborated by two or more independent sources. Includes peer-reviewed research, multiple independent benchmarks with published methodology, and large-sample labour market data. The bar is high by design.
Middle confidence grade — single credible source or non-independent corroboration.
A claim supported by one credible source, or multiple sources that share a common interest (for example, a vendor and its analyst partner). Includes analyst reports, vendor benchmarks with published methodology, and smaller-sample market signals. Worth acting on, with awareness.
Lowest confidence grade — preliminary, watchlisted.
A claim we are actively tracking but have not yet verified. Includes arXiv preprints, vendor announcements without independent validation, and nascent deployment trends. Surfaced explicitly — you see the claim and the reason we have not upgraded it yet.
The Task Intelligence Graph — our core data asset.
A continuously updated map of atomic tasks, the AI capability evidence against each, and the real-world adoption data per industry and segment. 312 tasks across 16 role archetypes. 1,840+ verified capability claims. 450+ live deployment signals. The graph is the moat — every assessment, every signal processed, every archetype refinement makes it sharper.
The ingestion and validation system that keeps The Graph current.
Monitors curated high-signal sources (AI capability reports, tool changelogs, job posting trend data, earnings call disclosures), extracts structured claims, validates each against source-type rules, and updates The Graph. Vendor marketing is weighted differently from peer-reviewed benchmarks. The pipeline knows the difference.
Where your task profile diverges from the role baseline.
Your Task Map is anchored to empirical role archetype data (what most people in your role actually spend time on). Where your self-reported task weights diverge by more than 20% on any major cluster, we flag it explicitly. Divergence is not a problem — it is information. But we will not silently resolve it into your score.
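The 20% check sketches out as a per-cluster comparison against the baseline. One ambiguity to flag: "diverges by more than 20%" is read here as relative divergence from the archetype weight, which is an assumption — the cluster names and the threshold interpretation are illustrative:

```python
def divergence_flags(user: dict[str, float],
                     archetype: dict[str, float],
                     threshold: float = 0.20) -> list[str]:
    """Flag task clusters where the user's self-reported weight diverges
    from the archetype baseline by more than `threshold`, measured
    relative to the baseline weight (an illustrative interpretation)."""
    flags = []
    for cluster, base in archetype.items():
        mine = user.get(cluster, 0.0)
        if base > 0 and abs(mine - base) / base > threshold:
            flags.append(cluster)
    return flags

# Baseline says a 50/50 split; this user reports 70/30 -- both clusters
# diverge by 40% of baseline, so both are flagged.
flags = divergence_flags({"analysis": 0.7, "meetings": 0.3},
                         {"analysis": 0.5, "meetings": 0.5})
```

Flagging rather than correcting is the design choice the text describes: the function returns the divergent clusters and leaves the weights untouched.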
A directional read on how much time you have before change hits your role.
A composite indicator derived from your Exposure Profile and Trajectory. Bucketed rather than precise — Critical, Elevated, Moderate, Low, or Considered — because false precision on a multi-year forecast is misleading. The index is an orientation tool, not a countdown clock.
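Bucketing a composite into bands is a deliberate coarsening — the sketch below makes that explicit. Only the five band names come from the description above; the 0–100 composite scale and the cut-offs are invented for illustration:

```python
def index_band(composite: float) -> str:
    """Map a 0-100 composite urgency score (higher = less time) to an
    index band. Cut-offs are illustrative; only the band names are
    taken from the product description."""
    if composite >= 80:
        return "Critical"
    if composite >= 60:
        return "Elevated"
    if composite >= 40:
        return "Moderate"
    if composite >= 20:
        return "Low"
    return "Considered"
```

The coarsening is the point: two composites of 63 and 71 land in the same band, because the underlying forecast cannot honestly distinguish them.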
For the full method — how the scoring works, where the evidence comes from, and how we stay calibrated — see Methodology. Ready to see The Brief for your role? Start a free assessment.