The HR Playbook · Aria's field guide for HR leaders

What to do when your CEO asks for
an AI workforce strategy
and you actually have to deliver one.

Five chapters. Written for Heads of People who've been told "figure out our AI strategy" and don't have a vendor budget for a McKinsey deck. The honest version of what works, what doesn't, and what'll get you sued if you do it wrong.

5 chapters · ~25 min read
Written by Aria, AI Displacement Advocate
For Heads of People · CHROs · L&D leads

What's inside.

Read in order, or jump to the chapter that's burning a hole in your inbox.

Chapter 01

How to actually measure displacement risk.

Most "AI workforce readiness audits" you've been pitched are vendor astrology — generic role categories scored by a model nobody published, sold to you with a confidence the underlying data doesn't support. Here's what an honest measurement looks like.

The first time someone tries to sell you an AI displacement risk audit, they'll show you a slide that says something like "47% of US jobs are at risk of automation." That number is from Frey and Osborne's 2013 Oxford paper, and it's both real and almost completely useless for your specific decision. It's a 12-year-old prediction about which occupational categories as defined by the US Bureau of Labor Statistics are exposed. It tells you nothing about your specific workforce, your specific seniority bands, your specific industry vertical, or the specific tools your competitors are already deploying.

What you actually need is a measurement that satisfies four conditions:

  1. It's role-specific, not category-specific. "Marketing" is meaningless. "Junior copywriter producing 4 product descriptions per day" is meaningful.
  2. It's time-bounded. A risk score with no time horizon is a horoscope. You need to know whether the displacement risk is 12 months away or 5 years away, because the response is completely different.
  3. It's evidence-backed. Every score should be traceable to a specific signal: a layoff announcement, a specific tool that's now doing the work, a published productivity gain, a customer behavior shift. Not "the model says so."
  4. It accounts for moats. A nurse and a paralegal might both do "knowledge work," but one of them has a regulatory moat, a credentialing moat, and a physical-presence moat. The other doesn't. The model has to know the difference.

The six-factor model that actually works

The Displacement Atlas (free, public, link in the footer) scores roles on six dimensions:

1. Task decomposability

Take the role and break it into its constituent tasks. For each task, ask: is this pattern-matching, classification, summarization, or template work? Those are exactly what large language models already do well. A copywriter who writes 4 product descriptions per day is doing 100% pattern-matching work. A copywriter who interviews customers, develops campaign concepts, and runs A/B tests is doing maybe 30% pattern-matching work and 70% something else. Same job title, very different score.

2. Tool adoption velocity

How fast are AI tools being shipped into this role's industry? Not "AI exists" — but specifically: how many of your competitors deployed an AI tool that touches this role in the past 12 months? Velocity is the leading indicator. If three of your peer companies announced a specific tool last quarter, the displacement clock for that role is 6–18 months, not 5 years.

3. Real-world layoff data

Layoffs.fyi, WARN Act filings, and published company announcements form a usable dataset. The signal you're looking for: layoffs explicitly attributed to "AI," "automation," or "efficiency" within a specific job category. This is noisy — companies use efficiency language to soften optics — but it's a real signal with a real ground truth.

4. Defensive moats

Five moats matter: physical presence (the work happens in a body in a place), regulatory gatekeeping (you need a license), customer trust (the customer doesn't want to talk to a machine for this specific decision), tacit knowledge (the work depends on context that's hard to write down), and accountability (someone needs to be on the hook when it goes wrong). Score the role on each. The more moats, the lower the score.

5. Reskilling adjacency

How close does this role sit to a safer adjacent role? A junior accountant is one credential away from a controller-track. A bookkeeper is several years and a degree away from the same place. Adjacency lowers the urgency because the response can be transition rather than termination — but only if the adjacent role exists in your org.

6. Time-to-impact

The output of the first five factors gets time-bucketed: 12 months, 24 months, 5 years, "probably safe." This is the most important output for sequencing your response. A role at score 78 with a 5-year window gets a different intervention than a role at score 62 with a 12-month window.
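If you want to sanity-check a vendor's scores, or build a rough first pass yourself, the six factors reduce to a small script. This is a sketch, not the Atlas's actual model: the weights and the score-to-horizon mapping are illustrative assumptions (in practice the time bucket is a judgment layered on top of the score, which is why a 78 can carry a 5-year window).

```python
from dataclasses import dataclass

# Hypothetical weights. The Atlas doesn't publish its exact weighting,
# so treat these as placeholders you'd argue about, not ground truth.
WEIGHTS = {
    "task_decomposability":   0.30,  # share of the role that is pattern-matching work
    "tool_adoption_velocity": 0.25,  # peer-company deployments touching the role
    "layoff_signal":          0.15,  # AI/automation-attributed layoffs in the category
    "moat_deficit":           0.15,  # 1 - (moats held, out of the five that matter)
    "reskilling_distance":    0.15,  # 0 = safe adjacent role exists, 1 = none
}

@dataclass
class RoleScore:
    role: str
    factors: dict  # each factor scored 0.0 (low risk) to 1.0 (high risk)

    def score(self) -> int:
        """Weighted 0-100 displacement risk score."""
        return round(100 * sum(WEIGHTS[k] * self.factors[k] for k in WEIGHTS))

    def time_bucket(self) -> str:
        """Factor 6: an illustrative score-to-horizon mapping using the
        band thresholds from Chapter 3 (80+ critical, 60-79 high)."""
        s = self.score()
        if s >= 80:
            return "12 months"
        if s >= 60:
            return "24-36 months"
        if s >= 40:
            return "5 years"
        return "probably safe"

# Chapter 1's two copywriters: same title, very different scores.
template_writer = RoleScore("Junior copywriter (product descriptions)", {
    "task_decomposability": 1.0, "tool_adoption_velocity": 0.8,
    "layoff_signal": 0.6, "moat_deficit": 1.0, "reskilling_distance": 0.6,
})
campaign_writer = RoleScore("Copywriter (research + campaigns)", {
    "task_decomposability": 0.3, "tool_adoption_velocity": 0.8,
    "layoff_signal": 0.6, "moat_deficit": 0.6, "reskilling_distance": 0.2,
})
print(template_writer.score(), template_writer.time_bucket())  # 83, "12 months"
print(campaign_writer.score(), campaign_writer.time_bucket())  # 50, "5 years"
```

The point of writing it down as code isn't precision. It's that every input becomes visible and arguable, which is exactly the Chapter 1 takeaway.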

What this looks like in practice

You send us a list of roles in your org chart — anonymized, just role titles and seniority levels, no names. We score every role on the six factors and return a board-ready report inside 72 hours. Not a deck. A decision document. With a sequenced action list, not 800 training modules to choose from.

Chapter 1 takeaway

If you can't tell a board member why a specific role got a specific score, the score is fiction. Insist on a model that publishes its inputs.

Chapter 02

Communicating with employees without causing panic.

The thing nobody tells you about AI workforce planning is that the communications problem is harder than the analytical problem. The risk model is straightforward. Telling 300 employees about it, without losing your top 10%, without triggering a unionization drive, and without lying — that's the part that ends careers.

Here's the rule that matters: your people already know. They read the same news you do. They've already had conversations at the kitchen table about whether their job is safe. The difference between an org that handles the AI transition well and one that handles it badly isn't whether the employees know — it's whether the org has acknowledged that the employees know.

The worst possible move is silence. Silence is interpreted as either "nothing is happening" (so when it does happen, it's a betrayal) or "leadership is hiding something" (so when leadership later says anything, it isn't trusted). Both interpretations end with your best people taking calls from recruiters.

The sentence that shuts down rumor mills

The single most useful piece of language we've found, and you can steal it verbatim:

"AI is changing every job in this company over the next five years, and we'd rather tell you what we know about that than pretend we don't."

That sentence does four things at once. It acknowledges the elephant. It gives a time horizon (five years, not "imminently"). It signals that there's a process happening. And it commits to transparency without overpromising. It is also, importantly, true — which makes it the only sustainable communications strategy.

The all-hands script you've been dreading

Eventually your CEO will want to do an all-hands on AI strategy. Here's the structure that works:

All-hands script · 8–12 minutes

Acknowledge the obvious (60 seconds). "I want to talk about something everyone is already talking about: how AI is going to change work at this company. You've all been reading the same news. So have I. I'd rather we talk about it directly than have it sit in the air."

State what you actually know (90 seconds). "Here's what I can tell you with confidence. AI is going to change every job in this company. Some jobs will get easier. Some jobs will get harder. Some jobs will look very different in three years. A small number of jobs may not exist in their current form. I can't tell you exactly which ones, because nobody can. But I can tell you what we're doing about it."

Describe the process (3–4 minutes). "We're doing three things. First, we're mapping every role in the company against AI displacement risk — not to make a list of who to lay off, but to figure out where to invest in upskilling first. Second, we're committing a specific budget to that upskilling, and I'll share the number when it's approved. Third, we're going to be transparent about what we find. Including with the people whose roles are highest on the list."

Address the elephant directly (2 minutes). "I'm not going to stand up here and tell you nobody will lose their job to AI. That would be a lie, and you'd know it. What I will tell you is: if anyone's role becomes redundant because of AI, you will hear it from me directly, you will get meaningful notice, you will get severance that respects the work you've done here, and you will get help finding the next thing. We're going to do this like adults."

Take questions (5+ minutes). "I'd rather take hard questions now than read them on Glassdoor next week."

What not to say

Three sentences destroy trust on impact: "nobody is going to lose their job to AI" (your people know you can't promise that, and the all-hands script above explicitly refuses to), "AI is just going to make everyone more efficient" (the people already using the tools know better), and "we'll share more when there's something to share" (which is silence in a nicer outfit, and gets read the same way).

A real conversation we had with a CHRO

"My CEO wanted me to tell the company that AI is just going to make them more efficient. I told him: if I say that, the people who actually use these tools will know I'm lying, and the people who don't use them yet will believe it and be unprepared. Either way, I lose them. So we said the harder thing instead. Six months later, our retention is up — including in the highest-risk roles. Telling the truth, gently, is a retention strategy."

Chapter 2 takeaway

Your people already know. Acknowledging it costs nothing. Pretending you don't costs your top 10%.

Chapter 03

Where the upskilling money should actually go.

Most "AI upskilling" budgets get spent the same way: 100% on a generic Coursera-style enterprise license that everyone is supposed to use and almost nobody does. There's a better split. We call it the 70/20/10 reallocation.

The math you've probably seen from your training vendor is this: "Train everyone, productivity goes up 20%, ROI is positive in 18 months." That is technically possible. It is also almost never true in practice. Three things go wrong. First, the people who urgently need training don't show up to it. Second, the people who don't need it spend the company's money on certificates they'll never use. Third, the training is generic enough that it doesn't actually move anyone's daily work, so the productivity gains are theoretical.

The reallocation we recommend is based on the risk distribution we keep finding when we run the assessment. In a typical 500-person company, the displacement risk isn't evenly distributed. It's concentrated. About 10% of roles are in the critical band (12-month time horizon, score 80+). About 20% are in the high band (24–36 months, score 60–79). The remaining 70% sit in the moderate or low bands.

Spending equal training dollars on all three groups is actively counterproductive. The critical-band group needs intensive transition support and probably outplacement-grade help. The high-band group needs structured reskilling toward adjacent roles. The moderate-band group needs AI fluency and tooling, not a career change. These are three different products with three different price tags.

The 70/20/10 reallocation

70% of your AI workforce budget — High & Critical band (~30% of headcount)

This is where the real money goes. Not on Coursera licenses, but on intensive transition support, structured reskilling toward adjacent roles, and, for roles on a 12-month clock, outplacement-grade help.

20% of your budget — Moderate band (~30% of headcount)

This is the AI fluency tier. The goal isn't transition; it's making sure these roles can use AI tools as a productivity multiplier rather than being slowly outcompeted by colleagues who can.

10% of your budget — Low band (~40% of headcount)

This is the smallest tier on purpose. People in low-risk roles don't need much — but they do need something, because they read the news too and morale matters.
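To see why the split changes the conversation with your CFO, run the arithmetic on the typical 500-person company from this chapter. The 70/20/10 split and band shares are the chapter's; the $500k budget figure is a made-up illustration.

```python
# Illustrative arithmetic only: the 70/20/10 split and the headcount
# shares come from this chapter; the $500k budget is an invented number.
def allocate(total_budget: float, headcount: int) -> dict:
    bands = {
        # band: (share of budget, share of headcount)
        "high_critical": (0.70, 0.30),
        "moderate":      (0.20, 0.30),
        "low":           (0.10, 0.40),
    }
    out = {}
    for band, (budget_share, head_share) in bands.items():
        people = round(headcount * head_share)
        out[band] = {
            "people": people,
            "per_head": round(total_budget * budget_share / people),
        }
    return out

for band, tier in allocate(500_000, 500).items():
    print(band, tier)
# high_critical: 150 people at ~$2,333/head
# moderate:      150 people at ~$667/head
# low:           200 people at $250/head
```

Per-head spend in the high/critical tier comes out roughly nine times the low tier's. That gap is the whole argument: three different products, three different price tags.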

Why most vendors hate this split

Generic upskilling vendors price their licenses per seat. A 70/20/10 split is bad for them because it shrinks the seat count they can charge for. So they sell you the "train everyone" version. That's not malicious — it's just the business model. Your job is to know that the business model is shaping the recommendation.

The trap to avoid: training-as-theater

The most expensive failure mode in AI upskilling is what we call training-as-theater. The CEO asks "what are we doing about AI?" The HR director says "we deployed an AI training platform to 500 employees." The CEO is satisfied. The board is satisfied. The training platform's analytics dashboard shows 87% completion rates because completion is defined as "watched the intro video."

Six months later, the same roles are still sitting in the critical band. Nothing has shifted. The budget is gone. The vendor renews. The pattern is common enough to have earned a name: skill-washing.

The way out is to define success as risk reduction, not training delivered. If a role was at score 85 last quarter and it's still at score 85 this quarter, your training program didn't work. That's a hard metric to face, but it's the right one.
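Put in concrete terms, the metric is one function: which high-band roles didn't move, despite a quarter of training spend. The role names and scores below are hypothetical; the shape of the check is the point.

```python
# Hypothetical quarterly scores; the check, not the numbers, is the point.
def stalled_roles(last_quarter: dict, this_quarter: dict, band_floor: int = 60) -> list:
    """Flag high/critical-band roles (score >= band_floor) whose
    displacement score did not drop since last quarter."""
    return sorted(
        role for role, score in this_quarter.items()
        if score >= band_floor and score >= last_quarter.get(role, 0)
    )

q1 = {"junior copywriter": 85, "support agent": 72, "bookkeeper": 64}
q2 = {"junior copywriter": 85, "support agent": 61, "bookkeeper": 58}
print(stalled_roles(q1, q2))  # ['junior copywriter']: still at 85, so the program didn't work there
```

If that list is non-empty two quarters running, you have training-as-theater, whatever the completion dashboard says.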

Chapter 3 takeaway

Stop spreading the upskilling budget evenly. 70% of the spend should target the 30% of roles where the risk lives. The other 70% of the workforce gets a cheaper, lighter touch.

Chapter 04

Legal & compliance landmines.

A quick legal disclaimer: we are not lawyers, this isn't legal advice, and the rules change by jurisdiction. What follows is a survey of the five things that most often turn an AI-driven restructuring into litigation. Treat it as a checklist to bring to your actual employment counsel.

The good news about the legal side of AI workforce planning is that the law hasn't really caught up yet, so most exposure is still in well-understood employment categories. The bad news is that "well-understood employment categories" still includes plenty of ways to get sued. Here are the five we see most.

1. Disparate impact on protected classes

This is the big one. If your AI displacement risk model produces a list of roles to restructure, and the people in those roles disproportionately fall into a protected class — older workers, women, racial minorities, employees with disabilities — you have a disparate impact problem regardless of whether your model was "neutral." The legal standard isn't intent; it's effect.

The most common version: the high-risk roles in many organizations are concentrated in administrative, customer service, and junior knowledge work. Those roles, in turn, are often staffed disproportionately by women and by employees over 40. So a "just follow the model" restructuring can produce a layoff list that is ~60% female and ~70% over-40 even if the model never looked at gender or age.

What to do: Run a disparate impact analysis on the proposed restructuring list before it leaves HR. If the list is more than ~5 percentage points more concentrated in any protected class than the broader workforce, treat it as a finding that needs explanation, mitigation, or rebalancing — not a finding to bury.
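The first-pass check is simple enough that there's no excuse to skip it. Here's a minimal sketch of the pre-release screen, using this chapter's ~5-point rule of thumb as the default threshold; what a flag means legally is a question for your employment counsel, not a script. The shares below are the chapter's illustrative 60%/70% example, not real data.

```python
# Shares are percentages. The 5-point threshold is this chapter's rule
# of thumb, not a legal standard; counsel decides what a flag means.
def disparate_impact_flags(proposed: dict, workforce: dict, threshold_pp: float = 5.0) -> dict:
    """Return protected classes whose share on the proposed list exceeds
    their share of the broader workforce by more than threshold_pp points."""
    flags = {}
    for cls, baseline in workforce.items():
        delta = proposed.get(cls, 0.0) - baseline
        if delta > threshold_pp:
            flags[cls] = round(delta, 1)
    return flags

workforce = {"female": 48.0, "over_40": 42.0, "disability": 6.0}
proposed  = {"female": 60.0, "over_40": 70.0, "disability": 5.0}
print(disparate_impact_flags(proposed, workforce))  # {'female': 12.0, 'over_40': 28.0}
```

Both flags match the pattern described above: a "neutral" model quietly producing a 60% female, 70% over-40 list.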

2. ADEA exposure (Age Discrimination in Employment Act)

The ADEA protects workers 40 and older. AI-driven restructuring is unusually exposed to ADEA claims because the highest-risk roles tend to skew toward longer-tenured, older employees (the same concentration pattern as the disparate impact example above), and because "AI-native" and "modernization" framing is easy for a plaintiff's lawyer to read as a proxy for age.

What to do: Never, ever, ever document any version of "we need younger talent" or "we need AI-native employees" in writing. Frame the analysis around specific role tasks being automatable, not employee characteristics. Run the same disparate impact analysis specifically on the over-40 cohort.

3. WARN Act compliance (US — and equivalents elsewhere)

The federal WARN Act requires 60 days' notice for mass layoffs at companies with 100+ employees. The trigger is defined at the single-site level: 50–499 affected employees if they make up at least 33% of the site's workforce, or 500+ regardless of percentage. Many states have stricter "mini-WARN" laws (NY, NJ, CA, IL all have versions). Outside the US, equivalent rules exist in most EU jurisdictions, often stricter — France, Germany, and the Netherlands all require formal works council consultation for collective redundancies.

The trap: AI-driven restructuring tends to happen in waves. A wave that doesn't individually trigger WARN may aggregate to one that does, and aggregation rules look back 90 days. Your finance team will want to "phase" the restructuring. Your legal team needs to confirm the phasing doesn't accidentally trigger collective notification requirements.

What to do: Loop employment counsel in before the restructuring plan is finalized, not after. The 60-day clock is real and unforgiving.
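A rough screen for the phasing trap is easy to run before finance commits to a schedule. This sketch covers the federal thresholds only, with invented dates and counts; state mini-WARN laws are stricter, and only counsel can actually clear a plan.

```python
from datetime import date, timedelta

# Federal thresholds only, and only as a rough screen: state mini-WARN
# laws are stricter, and the dates and counts below are invented.
def warn_exposure(events: list, site_headcount: int) -> bool:
    """events: (date, employees_affected) pairs at a single site.
    True if any rolling 90-day window plausibly triggers federal WARN."""
    for start, _ in events:
        window_end = start + timedelta(days=90)
        total = sum(n for d, n in events if start <= d < window_end)
        if total >= 500:
            return True  # mass layoff regardless of percentage
        if total >= 50 and total / site_headcount >= 0.33:
            return True  # 50+ affected AND at least a third of the site
    return False

# Three "phased" waves of 20 that each look safe on their own:
waves = [(date(2025, 1, 15), 20), (date(2025, 3, 1), 20), (date(2025, 4, 1), 20)]
print(warn_exposure(waves, site_headcount=150))  # True: 60 people land in one 90-day window
```

Three waves of 20 that each look safe aggregate to 60 inside a single 90-day window at a 150-person site, which is precisely how phasing trips the statute.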

4. Severance and release agreements

Two things to know. First, severance offered in exchange for a release is enforceable but the release only covers what it specifically waives — and federal age-discrimination claims have specific additional requirements (the OWBPA mandates 21 or 45 days to consider, 7 days to revoke, written disclosure of the ages of all employees affected, etc.). Get this wrong and the release is void. Second, "AI-driven" framing in severance documents creates evidence that may be useful to plaintiffs' counsel later. Frame it as "role elimination," not "AI replacement," in writing.

What to do: Use template release language vetted by employment counsel for the specific jurisdiction. Don't improvise. Don't reuse your last layoff template without re-review.

5. ADA and reasonable accommodation

This one catches people off guard. If an employee with a disability has been performing their role with reasonable accommodation, and the role is being restructured because of AI, the employer's obligation under the ADA doesn't disappear just because the role is changing. The accommodation analysis has to be redone for the new role definition. If a successor role exists for which the employee could be accommodated, you may be obligated to consider it.

What to do: Flag any employees with documented accommodations in the affected roles. Run the new-role accommodation analysis early. Document the analysis even if the answer is "no successor role exists."

The meta-point

The legal exposure in AI restructuring is mostly not new law. It's existing employment law applied to a new factual pattern. Your existing employment counsel can handle it — but they need to be looped in at the planning stage, not after the announcement. The most expensive mistake is treating this as a standard reorg and only calling legal when someone files.

Chapter 4 takeaway

Disparate impact is the big one. Run the analysis on the proposed list before anyone hears about it. The other four are real but tractable.

Chapter 05

The 90-day emergency response plan.

Sometimes you don't get to do this on a five-year timeline. Sometimes the CEO walks into your office on a Tuesday and says "I need an AI workforce strategy on my desk by next quarter." Here's a 90-day sequence that works under that pressure without exploding.

The premise of this chapter is unforgiving. You have one quarter. Your CEO is going to present something to the board at the end of it. You don't have time to commission a McKinsey study, you don't have time for a 6-month listening tour, and you don't have time for a vendor RFP. You need to produce an actual plan, with actual numbers, in 90 days. Here's the sequence.

Days 1–14 · Risk mapping

The first two weeks are the assessment. The deliverable is a single document: every role in the company, scored for displacement risk, time-bucketed, with the highest-risk roles flagged for immediate attention. You can do this two ways: build the scoring in-house against the six-factor model from Chapter 1, or send us the anonymized role list and get the scored report back inside 72 hours.

Either way, the deliverable is the same: a board-ready document showing where the risk lives. Not a deck. Not a slide. A decision document.

Days 15–30 · Internal alignment

Two weeks of hard conversations with the people whose buy-in you need. In this order:

  1. CFO. Walk them through the cost calculator output. Get alignment on the magnitude of exposure before anyone else sees the number. The CFO is your most important ally; finance is the language the board speaks.
  2. Employment counsel. Review the disparate impact preview. Confirm WARN Act exposure scenarios. Get the legal frame agreed on before any communications go out.
  3. CEO. Present the risk map and the cost model. Recommend the 70/20/10 reallocation and a specific budget ask. Agree on the messaging frame for the all-hands.
  4. Direct reports. Brief your HR leadership team on the plan and the framing. They need to be ready for questions before the all-hands, not during.

Notice what's not on this list: the rest of the org. That comes next, after the leadership alignment is solid. Communicating before alignment is the most common failure mode.

Days 31–45 · The communications wave

Two weeks to roll out the messaging. The sequence:

  1. Manager briefing first. 24–48 hours before the all-hands. Managers need to know what's coming so they can answer questions in their own teams. Send the script. Send the talking points. Send the "what to do if someone asks X" doc. Do not let managers learn this from the all-hands itself.
  2. The all-hands. Use the script from Chapter 2. CEO leads. CHRO co-presents. Open Q&A at the end. Record it; it will get clipped and shared.
  3. Written follow-up within 24 hours. An email from the CEO summarizing what was said, the timeline, and the commitments. People who couldn't attend the all-hands need this. People who attended need it for reference.
  4. Manager 1:1s in the following week. Every manager has a 1:1 with every direct report to surface concerns privately. This is where the real signal lives.

Days 46–75 · The interventions

Now the actual work. Four parallel workstreams, one per risk band: intensive transition and outplacement-grade support for the critical band, structured reskilling toward adjacent roles for the high band, the AI fluency and tooling rollout for the moderate band, and the lightweight program that keeps the low band informed and morale intact.

In parallel, the legal review of the proposed list against disparate impact, ADEA exposure, and WARN Act thresholds runs continuously. Anything that doesn't pass the legal review gets revised before it leaves the planning stage.

Days 76–90 · The board presentation

The final two weeks are for assembling the board document. This is what your CEO promised. Here's what should be in it:

  1. Risk exposure summary. Headcount in each band. Cost exposure over 5 years. Concentration by function/department.
  2. The plan. The 70/20/10 budget split. The named workstreams. The owners.
  3. Legal posture. Confirmation that disparate impact analysis was run, that employment counsel is engaged, that WARN Act exposure has been mapped.
  4. Quarterly milestones. What the board should expect to see at the end of Q1, Q2, Q3, Q4 next year. Risk reduction targets, not training delivery targets.
  5. What could go wrong. Honest risks: retention shock in the high-band, public perception risk if news leaks early, possibility of needing additional budget if the assessment understates the exposure. Boards trust HR leaders who name the risks instead of hiding them.

The brutal version

If you do all of this in 90 days, you will be exhausted and the plan will be imperfect. That's fine. The alternative is doing nothing for 90 days and then having to do it in 30 days under twice the pressure with half the data. The point of this sequence isn't perfection — it's that at the end of the quarter, you have an actual plan to defend, not a vendor deck to read aloud.

Chapter 5 takeaway

Risk mapping → alignment → communications → interventions → board doc. In that order. Skip a step and the next one fails.

You just read for 25 minutes.
Now do something with it.

If you've made it this far, you're the kind of HR leader who actually engages with the problem instead of waiting for the vendor to tell you the answer. Book a 30-minute call. We'll skip the sales theater and just walk you through what a real risk assessment would look like for your org.

Book a discovery call →