Three lenses. In this order.
Cost: What tools, what stack, and what does it actually cost to arm your team end-to-end?
Compliance: Where the actual risk lives, what data goes where, and how closely to scrutinize the output.
Competency: Your team's real skill level — measured, not guessed. How to survey, pilot, standardize.
Most leaders start and stop at Cost. The wins are in Compliance and Competency.
Most corporate teams default to Microsoft Copilot — and most employees find it inefficient for actual work. The real question isn't "should we use AI?" — it's: keep Copilot, add on top of it, or swap it out?
OpenAI: The most cost-effective stack overall. Lowest entry price per seat, biggest ecosystem.
Anthropic: Most expensive of the three — the strongest agent ecosystem and reasoning quality.
Google: Broadest productivity ecosystem. Drive integration is the killer feature.
OpenAI, Anthropic, Google all invest heavily in security — same as Workspace, Dropbox, SharePoint.
The vulnerability is in how you access the tool — not whether you use it.
Your team's risk is set by the lowest-tier account anyone on the team is using.
Top → bottom: most secure to most exposed. Position your team at the highest tier they actually need — and write down the floor for sensitive data (sample floor below the tiers).
Enterprise / API: Zero data retention · No training on your data · Audit logs · Contractual commitments.
Who it's for: Healthcare · Finance · Legal · PHI / PII / financials
Business / Team: Usually no training · Retention varies — read the terms · Some admin controls.
Who it's for: Most teams of 5+ · The minimum responsible default
Consumer / Free: May be used for training unless toggled off · Limited admin controls.
Who it's for: Solo experimentation only — not for team work
Browser extensions: Add extension-intercept surface area on top of any tier.
Avoid for sensitive data — period
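A sample floor, assuming the tiers above (illustrative wording, not a template you must copy): "PHI, PII, and financials go through Enterprise / API accounts only. Everything else lives at Business / Team or above. Consumer accounts and browser extensions never touch client data."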
"Tell your team: assume anything you put in a personal account or a browser extension can be seen."
Match the traffic light to the scrutiny level. Red doc = High scrutiny. Always.
Anonymous baseline of your team's current state. Five minutes. Eleven question groups.
A 3-pillar diagnostic radar — Conceptual, Operational, Governance — org-wide and by department.
Targeted rollout based on what the survey actually told you — not what you assumed.
Conceptual: Do they understand the AI landscape — models, vendors, what's possible right now?
Operational: Can they actually use the tools? Prompting, files, agents, workflows?
Governance: Do they understand the risk profile + the rules for responsible use?
"Don't deploy AI to a team you haven't measured. You'll roll out to your weakest users and skip your strongest."
You'll get the survey AND the dashboard skill in the follow-up email.
Standardize prompts across the team so people stop reinventing every chat. Hosted where your team already lives — SharePoint, Notion, Drive.
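One way a standardized entry can look (a minimal sketch; the field names are illustrative, not a required format):
Name: 1:1 prep, missed deadline · Owner: team lead · Last updated: date
When to use: prepping a performance conversation
Prompt: written with the five-part structure below (Context, Role, Task, Format, Tone)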
Don't roll out to everyone. Start with people the survey shows are at tier 5+ on the operational ladder. Power users → wins → pull, not push.
Central hub: prompt library + skill library + governance one-pager + recorded demos. Becomes the team's source of truth as you scale.
"Survey first. Then library. Then pilots. Then center. Don't skip steps."
Context: The situation, the audience, what you're working on.
"I'm a people manager prepping for a 1:1 with a direct report who missed a deadline."
Role: Who the AI should act as.
"Act as an experienced HR coach who specializes in performance conversations."
Task: What specifically you want it to do.
"Draft 3 opening questions for the conversation."
Format: How the output should look.
"Numbered list, 1–3. Each question ≤ 15 words."
Tone: The voice / register.
"Curious and supportive — not accusatory."
Consistent inputs → consistent outputs across the team. New hires inherit the team's accumulated wisdom on day one. Prompts become an asset that compounds — bad ones get replaced, good ones get reused.
Link in the follow-up email. Goal: 80%+ response rate in 5 business days.
A workflow 2–3 people on your team are doing manually today — research, summaries, doc review, comms drafts.
Even one shared doc with 5 prompts is a start. Don't wait for perfect.
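A hypothetical starter set, drawn from the workflows above: a research brief, a meeting summary, a doc review checklist, a comms draft, and a 1:1 prep prompt. Five prompts, one shared doc. That's day one.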