While They Drew Battle Lines, You Spotted the Gaps
The Weekly Tea
The Line in the Sand
February 28, 2026 · Saturday Strategy Edition
Good morning. ☕ Pull up a chair. This week, lines got drawn.
OpenAI signed a deal with the Pentagon — classified AI systems for defense. The same week, Anthropic got BLACKLISTED by the Pentagon for refusing to build autonomous weapons. Let that sit for a second. Two companies that used to share an office are now on opposite sides of a literal war line.
Meanwhile, Anthropic caught Chinese AI labs — DeepSeek, Moonshot AI, MiniMax — creating 24,000 fake accounts and running 16 million interactions to steal Claude's intelligence. Meta's Director of AI Safety let an AI agent accidentally delete her entire inbox. And Nvidia's Jensen Huang said $3-4 TRILLION will be spent on AI infrastructure by the end of the decade.
Every single story this week is about the same thing: who's drawing lines, who's crossing them, and where the money lands. I found five money moves in the cracks.
"When someone shows you who they are, believe them the first time." — Maya Angelou
The AI Defense Navigator
Drama: OpenAI signed a Pentagon deal for classified AI systems. Hours later, the Trump admin BLACKLISTED Anthropic — designated a "supply chain risk" — because they refused to build autonomous weapons. Companies now have 6 months to remove Claude from military work.
Confessional 1 - OpenAI
"We signed the Pentagon deal. Look, someone was going to. At least we put guardrails in writing. No autonomous weapons, no mass surveillance. That's more than most defense contractors offer." — straightens tie, closes classified briefing
Confessional 2 - Anthropic
"We drew a line. They drew one back. Apparently 'we won't build autonomous weapons' is now a national security risk. I'd laugh if it weren't terrifying." — opens legal filing, adjusts safety goggles
The Reality
Every defense contractor, every government-adjacent company, every enterprise using both OpenAI AND Claude just got thrown into compliance crisis.
YOUR BAG
They need someone who understands which AI they can use where, the compliance implications, and transition planning. This is a WHOLE consulting practice.
THE OFFER
"AI Defense & Compliance Navigator" — help companies navigate which AI tools are approved for government work, build compliance frameworks for defense-adjacent projects, and manage the Claude-to-GPT transition for military contractors. $10K–$30K per engagement. The Pentagon just created your client list.
The AI Model Protection Specialist
Drama: Anthropic caught DeepSeek, Moonshot AI, and MiniMax creating 24,000 fake accounts to run 16 MILLION Claude interactions — systematically stealing Claude's reasoning abilities through distillation. OpenAI warned Congress about the same threat that week.
Confessional 1 - Anthropic
"Twenty-four thousand fake accounts. Sixteen million interactions. They weren't using Claude — they were copying Claude. And they thought we wouldn't notice." — pulls up account audit dashboard, exhales
Confessional 2 - OpenAI
"For once, Anthropic and I agree on something. They're stealing from both of us. We literally went to Congress together about this. Together. Let that sink in." — stares at camera, uncomfortable alliance energy
The Implication
Every AI company with a proprietary model needs protection against distillation attacks. Every enterprise using AI needs to understand IP implications.
YOUR BAG
AI model security, usage auditing, distillation detection — this is a brand-new practice area.
THE OFFER
"AI Model Security Audit" — help AI companies and enterprises detect unauthorized distillation, implement usage monitoring, and build IP protection frameworks. $8K–$20K per assessment. China just proved why every AI company needs this.
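If you want to see what "distillation detection" even looks like in practice, here's a minimal sketch. Everything in it is hypothetical — the `AccountUsage` schema and the thresholds are invented for illustration — but the underlying heuristic is the real idea: extraction campaigns pair unusually high request volume with unusually broad topic coverage, because they sweep the model's whole capability surface instead of working one domain.

```python
from dataclasses import dataclass

@dataclass
class AccountUsage:
    account_id: str
    daily_requests: int    # API calls observed per day
    distinct_topics: int   # breadth of prompt topics observed

def flag_distillation_suspects(accounts, volume=5000, breadth=200):
    """Flag accounts whose usage looks like systematic model extraction:
    very high volume AND very broad topic coverage. A normal heavy user
    tends to stay inside a narrow domain."""
    return [a.account_id for a in accounts
            if a.daily_requests >= volume and a.distinct_topics >= breadth]

# Hypothetical logs: one ordinary power user, one extraction sweeper
logs = [AccountUsage("acct-legit", 3000, 15),
        AccountUsage("acct-sweep", 9000, 480)]
print(flag_distillation_suspects(logs))  # → ['acct-sweep']
```

A production system would cluster signup metadata and prompt embeddings; this is the two-line version of the idea you'd walk a client through in an assessment.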
The AI Agent Liability Consultant
Drama: Meta's Director of AI Safety — the person literally in charge of making AI safe — let an AI agent accidentally DELETE HER ENTIRE INBOX. If the head of AI safety can't keep her own agent safe, what chance does everyone else have?
Confessional 1 - Meta
"So our Director of AI Safety... her agent deleted her inbox. All of it. I want to say this is a teachable moment. But mostly it's just embarrassing." — opens incident report, closes incident report, opens it again
Confessional 2 - Google
"Meta's safety director lost her inbox to an AI agent. And people think WE have problems? At least our agents haven't eaten our employees' email yet." — sips tea, files screenshot
The Gap
AI agents are getting permission to DO things. Every company needs liability frameworks. Nobody has them yet.
YOUR BAG
AI agents now send emails, edit files, manage calendars, and execute code on their own. Every company deploying them needs liability frameworks, permission architectures, and incident response plans. The Meta story is your case study.
THE OFFER
"AI Agent Permission & Liability Framework" — design permission structures, rollback procedures, and incident response plans for companies deploying AI agents in production. $5K–$15K per engagement. If Meta's safety team couldn't prevent this, your clients definitely need help.
The Enterprise AI Integration Architect
Drama: Anthropic launched enterprise plugins — Claude now works INSIDE Excel, PowerPoint, Google Drive, Gmail. Not returning instructions. Actually doing the work. Completing multi-step actions autonomously. This is Claude becoming an operating system layer.
Confessional - Anthropic
"Plugins. Enterprise plugins. Claude doesn't just tell you what to do anymore — Claude does it. Inside your spreadsheet. Inside your inbox. Inside your deck. We didn't just launch a feature. We launched an operating layer." — opens every app simultaneously, smiles
The Shift
Claude just moved from "tool you use" to "layer your tools run through." That changes everything.
The Opportunity
Someone needs to deploy, customize, and optimize this. That someone is you.
YOUR BAG
Every enterprise now needs someone who can deploy, customize, and optimize Claude's enterprise plugins for their specific workflows. This is the Microsoft Office consultant of the AI age. And it's DAY ONE.
THE OFFER
"Claude Enterprise Plugin Deployment" — set up, customize, and optimize Claude's enterprise plugins for specific team workflows (finance, marketing, ops, HR). $5K–$15K per team. Anthropic just handed you the product. You're the installer.
The AI Infrastructure Investment Advisor
Drama: Nvidia's Jensen Huang said $3-4 TRILLION will be spent on AI infrastructure by end of decade. Amazon projecting $200B in 2026 alone. Google between $175-185B. Nearly $700 BILLION in data center projects in 2026. These aren't projections — these are PURCHASE ORDERS.
Confessional - Nvidia
"Three to four trillion dollars. By end of decade. And every single dollar needs my chips. I'm not bragging — I'm just reading the purchase orders." — adjusts leather jacket, Jensen smile
The Scale
$700 billion in 2026 alone. $3-4 trillion by decade's end. That's bigger than the annual GDP of all but a handful of countries.
The Play
You don't need to build infrastructure. You need to be the guide to where the money is flowing.
YOUR BAG
Every mid-size company, every municipality, every real estate developer near a proposed data center site needs someone who can translate these massive infrastructure plays into local opportunity. Workforce planning, real estate advisory, supply chain consulting — the infrastructure boom is the AI economy's physical layer.
THE OFFER
"AI Infrastructure Impact Assessment" — help companies and communities understand and capitalize on the AI infrastructure boom. Real estate implications, workforce planning, supply chain opportunities. $8K–$20K per assessment. $700 billion is coming. Be the guide.
The Receipts
- OpenAI signs Pentagon deal for classified AI systems
- Anthropic blacklisted by Pentagon over weapons refusal
- Anthropic catches Chinese labs in 24K-account distillation scheme
- Meta AI Safety Director's agent deletes her inbox
- Anthropic launches enterprise plugins for Claude
- Nvidia: $3-4T in AI infrastructure spending by end of decade
- GPT-5 Codex runs 25-hour autonomous coding sprint
The Meta Move: The AI Positioning Strategist
This week drew the clearest line yet. OpenAI went defense. Anthropic drew a safety line and got punished. China is stealing. Agents are failing. Enterprise plugins are shipping. And trillions are being spent. The horizontal play? Help companies position themselves on the RIGHT side of every one of these lines — which AI to use, how to protect their IP, how to deploy agents safely, and where to invest.
The lines are drawn. You're the cartographer.
THE OFFER
"Strategic AI Positioning Advisory" — monthly retainer helping companies navigate AI's new political, legal, and competitive landscape. Covers vendor selection, compliance, agent governance, and investment strategy. $5K–$15K/month.
WORD
Legacy Builders
"Did you see what happened with the Pentagon this week? OpenAI signed a defense deal. Anthropic got blacklisted for refusing autonomous weapons. Every company using AI for government work just got thrown into a compliance crisis. That's the practice I'm building — navigating which AI goes where."
Operators
"We need to audit our AI agent permissions immediately. Meta's Director of AI Safety — the actual safety person — had her agent delete her entire inbox. What are our agents authorized to do? Who approved those permissions? I want a full review before we have our own headline."
Optimizers
"Anthropic just launched enterprise plugins. Claude works inside Excel, PowerPoint, Google Drive now. Not instructions — execution. I'm piloting this with our team this week and building a deployment plan for the department."
Accelerators
"Twenty-four thousand fake accounts. Sixteen million stolen interactions. China's distilling our AI models. I'm packaging AI model security audits. Every company with proprietary AI needs protection. The news just created the urgency."
ACTION
Your Prompt (15 minutes):
I'm a [your role] with expertise in [your domain]. This week: OpenAI signed a Pentagon deal. Anthropic got blacklisted for refusing autonomous weapons. Chinese labs ran 16M interactions through fake accounts to steal Claude's intelligence. Meta's AI safety director's agent deleted her inbox. Anthropic launched enterprise plugins. $700B in AI infrastructure spending in 2026. The theme is LINES. What is ONE specific consulting offer I could package around AI positioning, compliance, or agent governance — targeted at companies navigating AI's new political landscape? Be specific about deliverable, price, and target client. I have 15 minutes.
Saturday Sprint
Legacy Builders
30 min
Map out a "Which AI Can I Use Where?" decision framework for companies doing government-adjacent work. OpenAI for defense, Claude for commercial, open source for research. The compliance landscape just changed overnight. Be the person with the map.
Operators
20 min
Audit every AI agent your company has deployed. Document what permissions each one has, what it can access, what it can delete. Meta's safety director learned this lesson the hard way. Don't be next.
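If you want that audit to be concrete rather than a spreadsheet exercise, here's a minimal sketch of the core check. The agent records and permission names below are hypothetical — adapt them to whatever your platform actually exposes — but the rule is the one the Meta story teaches: any agent holding a destructive permission with no human-approval gate is a finding.

```python
# Permissions that can cause irreversible harm if an agent misfires
DESTRUCTIVE = {"delete_email", "send_email", "delete_files", "execute_code"}

def audit_agents(agents):
    """Return findings for agents that hold destructive permissions
    without a human-approval gate."""
    findings = []
    for agent in agents:
        risky = DESTRUCTIVE & set(agent["permissions"])
        if risky and not agent.get("requires_approval", False):
            findings.append({"agent": agent["name"],
                             "ungated": sorted(risky)})
    return findings

# Hypothetical inventory of deployed agents
inventory = [
    {"name": "inbox-assistant", "permissions": ["read_email", "delete_email"]},
    {"name": "report-bot", "permissions": ["read_files"]},
]
print(audit_agents(inventory))
# → [{'agent': 'inbox-assistant', 'ungated': ['delete_email']}]
```

Twenty minutes of filling in that inventory tells you exactly which agents could produce your own headline.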
Optimizers
15 min
Sign up for Anthropic's enterprise plugin beta and test Claude inside one of your core workflows. Document what works, what doesn't, what you'd customize. First-mover advantage on day-one tools is real.
Accelerators
10 min
Write a LinkedIn post about the OpenAI/Anthropic Pentagon split. What it means for enterprise AI strategy. The conversation is live. The takes are flying. Get yours in before Monday.
Launch Pad
For students/new grads:
Write a comparison analysis: OpenAI's Pentagon deal vs. Anthropic's refusal. What does each decision mean for the future of AI governance? Post it on LinkedIn or Medium. This is the kind of strategic thinking that gets noticed by hiring managers in AI policy, compliance, and consulting. Forward this to someone building their portfolio. 👋🏾
The Real Move
This isn't about having opinions. It's about doing the analysis and showing your work. That's what gets noticed.
Next Step
Build your strategic analysis muscle. Publish it. Get feedback. Iterate. Repeat.
Weekly Philosophy
"When someone shows you who they are, believe them the first time." — Maya Angelou
This week drew real lines. Between companies, between countries, between what AI will and won't do. And if that feels heavy — it is. These aren't just business stories. They're decisions about what kind of future we're building. Take a breath. You don't have to figure it all out today. But you DO get to choose which side of the line you stand on. That's your power. Use it well. — Susan
Go draw your own lines. ☕
P.S. If you're tired of just reading about AI's power plays and you want to build real strategic intelligence — The Oracle Table Method teaches you how to extract opportunity from chaos like this. Every single week. Not just consuming. Building. 🪞