What you get
This is not "prompt tips." It's an adoption system: training + workflow design + supervision guardrails + measurable pilots.
- Consistency: teams use AI in the same way, with shared checklists and quality gates.
- Safety: clear data boundaries, tool permissions, and escalation rules reduce "shadow AI."
- Accountability: RACI + approvals + audit trail expectations make oversight real.
- Evidence: pilots produce before/after metrics you can take to leadership and audit.
LLDF provides a practical language-layer risk lens (Prevent / Detect / Respond) that we translate into workforce behavior, workflows, and supervision.
- Use-case triage by risk: data sensitivity, tool access, autonomy, and failure impact (a scoring sketch follows this list)
- Controls mapped into real work: "what to do" before use, while using, and after outputs are produced.
- Repeatable evaluation: lightweight scorecards + ongoing evidence collection
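Below is a minimal sketch of how such a triage rubric could be encoded. The four factors mirror the list above, but the scales, weights, and tier cutoffs are illustrative assumptions you would calibrate during Diagnose, not part of the framework.

```python
# Illustrative use-case triage: score the four risk factors named above and map
# the total to an allowed / gated / prohibited tier. All scales and cutoffs are
# assumptions to calibrate during Diagnose.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    data_sensitivity: int  # 0 = public ... 3 = regulated/PII
    tool_access: int       # 0 = read-only ... 3 = can act on external systems
    autonomy: int          # 0 = human drafts everything ... 3 = unattended
    failure_impact: int    # 0 = cosmetic ... 3 = legal/financial/safety harm

def risk_tier(uc: UseCase) -> str:
    score = uc.data_sensitivity + uc.tool_access + uc.autonomy + uc.failure_impact
    if uc.failure_impact == 3 or score >= 9:
        return "prohibited (or redesign to reduce autonomy/access)"
    if score >= 5:
        return "gated: approval + evidence required"
    return "allowed: standard checklist applies"

print(risk_tier(UseCase("contract summarization", 2, 0, 1, 2)))  # gated
```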
Program architecture
Four phases that match how organizations actually adopt AI.
Diagnose: baseline your AI maturity and risk by function + role, and select high-ROI workflows.
- AI maturity & risk assessment (by function + role)
- Use-case inventory + top 10 workflows to augment
- Role map: who will build, approve, supervise (oversight model)
- Initial governance: "what's allowed, what's gated, what's prohibited"
Train: separate cohorts with hands-on practice in your real workflows (not generic demos).
- Executives: strategy, risk, governance, KPI design
- Managers: workflow redesign, QA, human-in-loop controls
- Practitioners: prompt-to-spec, verification, data handling, tool safety
- Builders/SMEs: evals, documentation, incident handling for AI
Operationalize: convert training into operating procedures, approvals, and evidence requirements.
- AI Supervision Playbook: when to trust, escalate, stop (with examples)
- Workflow Design Kit: SOP templates, checklists, handoff points
- Accountability model: RACI, approval gates, audit trail expectations
- AI usage policy: data boundaries, tool permissions, unacceptable use
Pilot: run pilots to prove value and control quality before scaling.
- 2–3 piloted workflows (selected during Diagnose)
- Before/after metrics dashboard (quality, time saved, error rate, escalations)
- Lessons learned + rollout plan (next 90 days)
Role-based tracks (separate cohorts)
Each cohort gets a "how-to" playbook, templates, and a quality standard they can apply daily.
Executives
- AI portfolio strategy: where to automate vs. augment
- Risk appetite + KPI design (quality, compliance, productivity)
- Oversight model: escalation rules + decision rights
- Board-ready narrative: "value with control"
Managers
- Workflow decomposition: inputs → transformations → outputs
- Quality gates: verification checks and review thresholds
- Human-in-loop controls: what must be approved, by whom
- Team routines: daily/weekly AI supervision cadence
Practitioners
- Write prompts as specifications (constraints, sources, format); see the sketch after this list
- Verification: citations, cross-checks, failure modes
- Data handling: sensitivity tiers + safe redaction patterns
- Tool safety basics: permissions, scoping, logging expectations
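To make "prompts as specifications" concrete, here is a hedged sketch: a spec object carrying constraints, allowed sources, and an output format, plus a trivial verification pass. All field names and checks are illustrative assumptions, not a prescribed template.

```python
# Illustrative "prompt as specification": the request carries explicit
# constraints, allowed sources, and an output format, so a reviewer verifies
# the output against the spec instead of eyeballing it. Fields are assumptions.
from dataclasses import dataclass

@dataclass
class PromptSpec:
    task: str
    constraints: list[str]      # e.g. "no client names", "UK spelling"
    allowed_sources: list[str]  # the only documents the model may cite
    output_format: str          # e.g. "5-bullet summary", "markdown table"

def verify_output(spec: PromptSpec, output: str, cited: list[str]) -> list[str]:
    """Return verification failures; an empty list means the output passes."""
    failures = [f"cites unapproved source: {s}" for s in cited
                if s not in spec.allowed_sources]
    if not output.strip():
        failures.append("empty output")
    return failures
```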
Builders/SMEs
- Evaluation design: test sets, acceptance thresholds, regression (sketched after this list)
- Documentation standards: model cards, workflow cards, changelogs
- AI incident handling: triage, containment, rollbacks, comms
- Operational telemetry: what to measure, how often, why
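The builder-track evaluation loop can be as small as the sketch below: a fixed test set, an acceptance threshold, and a regression check against the previous release. The threshold, scoring function, and test-set shape are per-workflow assumptions.

```python
# Illustrative evaluation loop: score a generator against a fixed test set,
# gate on an acceptance threshold, and flag regressions vs. the last release.
def run_eval(generate, test_set, score, threshold=0.9, baseline=None):
    """generate: fn(input) -> output; score: fn(output, expected) -> 0..1."""
    scores = [score(generate(case["input"]), case["expected"])
              for case in test_set]
    mean = sum(scores) / len(scores)
    return {
        "mean": mean,
        "accepted": mean >= threshold,                          # release gate
        "regressed": baseline is not None and mean < baseline,  # vs. last run
    }
```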
Operationalize deliverables
You leave with artifacts you can enforce, not just slideware.
AI Supervision Playbook
- Trust rules: "green/yellow/red" output categories by workflow (routing sketched after this list)
- Escalation: when to involve a manager, legal, security, or SME
- Stop rules: when the tool must be paused or gated
- Evidence: what logs/screenshots/approvals must exist for audit
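A minimal sketch of how the green/yellow/red trust rules might be enforced: green releases with logged evidence, yellow holds for review, red stops and escalates. The workflow names and category assignments here are illustrative; the real mapping lives in the playbook.

```python
# Illustrative enforcement of green/yellow/red trust rules. The per-workflow
# category map is an assumption; real assignments belong in the playbook.
TRUST_RULES = {
    "meeting-notes": "green",
    "client-facing-draft": "yellow",
    "regulatory-filing": "red",
}

def route(workflow: str) -> str:
    category = TRUST_RULES.get(workflow, "red")  # unknown workflows fail closed
    if category == "green":
        return "release after self-check; log evidence"
    if category == "yellow":
        return "hold for manager/SME review before release"
    return "stop: escalate (manager + legal/security); do not release"
```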
Workflow Design Kit
- AI-enabled SOP template (inputs, prompts/specs, review, output)
- Checklist pack (data handling, verification, release gates)
- Handoff points (human responsibilities and approvals)
- "Definition of Done" per workflow (quality + compliance)
Accountability model
- RACI for build/approve/supervise/monitor
- Approval gates (what requires sign-off and at which risk tier)
- Audit trail expectations (retention, access, who can view)
AI usage policy
- Data boundaries and sensitivity tiers (a policy-as-data sketch follows this list)
- Tool permissions (what tools are allowed for which roles)
- Unacceptable use + enforcement path
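The usage policy can also be encoded as data so tooling can check it, not only people. A minimal sketch, assuming hypothetical roles, tools, and sensitivity tiers:

```python
# Illustrative policy-as-data: roles map to permitted tools, and each tool has
# a maximum data-sensitivity tier. Roles, tools, and tiers are all hypothetical.
POLICY = {
    "tiers": ["public", "internal", "confidential", "restricted"],
    "tools_by_role": {
        "practitioner": {"assistant-chat"},
        "builder": {"assistant-chat", "api-sandbox"},
    },
    "max_tier_by_tool": {"assistant-chat": "internal",
                         "api-sandbox": "confidential"},
}

def is_allowed(role: str, tool: str, data_tier: str) -> bool:
    if tool not in POLICY["tools_by_role"].get(role, set()):
        return False  # tool not permitted for this role
    tiers = POLICY["tiers"]
    return tiers.index(data_tier) <= tiers.index(POLICY["max_tier_by_tool"][tool])

print(is_allowed("practitioner", "assistant-chat", "confidential"))  # False
```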
Engagement options
Pricing-ready structure. Replace placeholders with your rate card.
Option 1
- Diagnose (1 week) + Exec/Manager training (1 cohort each)
- Draft policy + basic supervision checklist
- 1 pilot workflow + measurement template
Option 2
- Diagnose (2 weeks) + all 4 role cohorts
- Full Supervision Playbook + Workflow Design Kit
- 2–3 pilot workflows + dashboard
- Rollout plan for next 90 days
Option 3
- Multi-function assessment + governance workshops
- Custom evals + evidence pipeline recommendations
- Train-the-trainer + quarterly supervision reviews
- Policy + audit readiness support
Choose your path
Qualify your fit, pick a package, and send a pre-filled email; everything runs entirely in your browser.
What we measure in pilots
Metrics are chosen during Diagnose based on workflow risk and success criteria; a sketch of the before/after rollup follows the list.
- Defect rate and rework rate
- Acceptance rate by reviewer
- Time-to-complete per workflow
- Cycle time reduction vs. baseline
- Policy violations and escalations
- Sensitive data exposure attempts
- Usage by role and workflow coverage
- Training completion rate
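A minimal sketch of the before/after comparison behind the pilot dashboard. The metric names echo the list above, and the sample values are invented for illustration.

```python
# Illustrative before/after rollup for the pilot dashboard. Negative delta_pct
# is an improvement for defect/time metrics, positive for acceptance rate.
def before_after(baseline: dict, pilot: dict) -> dict:
    report = {}
    for metric, base in baseline.items():
        new = pilot[metric]
        report[metric] = {"before": base, "after": new,
                          "delta_pct": round(100 * (new - base) / base, 1)}
    return report

print(before_after(
    {"defect_rate": 0.12, "time_to_complete_min": 95, "acceptance_rate": 0.70},
    {"defect_rate": 0.08, "time_to_complete_min": 60, "acceptance_rate": 0.85},
))
```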
Statement of Work
A week-by-week plan you can lift into a proposal. Adjust duration based on scope and number of cohorts.
| Week | Sessions (live) | Key activities | Deliverables | Client prerequisites |
|---|---|---|---|---|
| 0 | Kickoff (60–90m) | Stakeholder alignment, scope confirmation, comms plan, identify data/tool constraints. | Project charter + schedule + stakeholder map. | Named sponsor + point-of-contact; access to AI policy (if any). |
| 1 | Interviews (6–10×30m) | Role-based interviews; workflow discovery; current-state maturity & risk snapshot. | AI maturity & risk assessment (by function + role). | Interview roster; sample artifacts (SOPs, docs, QA checklists). |
| 2 | Workshop (2h) | Use-case inventory + ranking; choose top workflows; define success metrics. | Use-case inventory + top 10 workflows to augment; pilot shortlist. | List of systems/tools used in selected workflows; baseline metrics (if available). |
| 3 | Exec cohort (2h) | Strategy, governance, KPI design, decision rights, risk appetite. | Executive AI operating brief (1–2 pages) + KPI set draft. | Executive attendance; current risk/compliance requirements. |
| 4 | Manager cohort (2h) | Workflow redesign, QA gates, escalation rules, supervision routines. | Workflow QA checklist pack + escalation decision tree v1. | Named managers for pilot workflows; review/approval pathway. |
| 5 | Practitioner cohort (2h) | Prompt-to-spec, verification habits, data handling, safe tool usage patterns. | Practitioner templates: prompt spec, verification checklist, safe-output rubric. | Representative tasks & sample inputs (sanitized if sensitive). |
| 6 | Builder/SME cohort (2h) | Evals, documentation, incident handling, monitoring basics. | Evals starter kit + documentation standard + incident mini-runbook. | Access to test environment (or agreed simulation); logging expectations. |
| 7 | Ops workshop (2h) | Operationalize: policy, RACI, approval gates, audit trail, evidence requirements. | AI Supervision Playbook + Workflow Design Kit + RACI v1. | Policy owner attendance; legal/security review availability. |
| 8 | Pilot reviews (2×60m) | Run 2–3 pilots; collect evidence; measure before/after; refine gates. | Metrics dashboard + lessons learned + rollout plan (next 90 days). | Pilot owners; baseline metrics; agreement on measurement method. |
Included:
- Facilitated workshops + role-based cohorts (live sessions)
- Templates, checklists, and draft policy artifacts
- Pilot selection + measurement + rollout plan
- Lightweight governance structure (RACI + approvals)
Add-ons: train-the-trainer, quarterly supervision reviews, custom evaluations, audit readiness packages.
Assumptions:
- Client provides a sponsor, POC, and pilot owners.
- Client provides access to workflow artifacts and baseline measures.
- Where sensitive data exists, client provides sanitized examples or a controlled environment.
- Final policy approval remains with client leadership.
Replace placeholders with your real legal/contract language for proposals.
FAQ
Is this tool/vendor specific?
No. The program is workflow-first. It adapts to your tools, data boundaries, and governance requirements.
Will teams actually change behavior?
That's why Operationalize exists: the output is SOPs, checklists, and supervision routines that become the "new normal."
How do you prevent unsafe use?
We define allowed/gated/prohibited use, permissions by role, escalation rules, and evidence requirements for higher-risk workflows.
What should we have ready before kickoff?
A sponsor, a POC, an initial list of candidate workflows, and any existing AI policy (even if it's a draft).