While They Drew Battle Lines, You Spotted the Gaps


The Weekly Tea

The Line in the Sand

February 28, 2026 · Saturday Strategy
"The drama tells you what's happening. The strategy tells you how to attract abundance."

Good morning. ☕ Pull up a chair. This week, lines got drawn.

OpenAI signed a deal with the Pentagon for classified AI systems. The same week, Anthropic got BLACKLISTED by the Pentagon for refusing to build autonomous weapons. Let that sit for a second. Two companies that used to share an office are now on opposite sides of a literal war line. Meanwhile, Anthropic caught Chinese AI labs creating 24,000 fake accounts and running 16 million interactions to steal Claude's intelligence. Meta's Director of AI Safety let an AI agent accidentally delete her entire inbox. And Nvidia's Jensen Huang said $3-4 TRILLION will be spent on AI infrastructure by end of decade. Every single story this week is about the same thing: who's drawing lines, who's crossing them, and where the money lands.

I found a money move in every crack — five of them. And I'm not keeping them to myself.

Sage Insight: "When someone shows you who they are, believe them the first time." — Maya Angelou

Confessionals are fictional and satirical — our favorite way to say what these companies are probably thinking but would never say out loud.

🛡️ Money Move #1

The AI Defense Compliance Navigator

The Drama: OpenAI signed a Pentagon deal for classified AI systems. Hours later, the Trump admin BLACKLISTED Anthropic, designating them a "supply chain risk" because they refused to build autonomous weapons. Companies now have 6 months to remove Claude from military work.

🎬 Confessional — OpenAI:

"We signed the Pentagon deal. Look, someone was going to. At least we put guardrails in writing. No autonomous weapons, no mass surveillance. That's more than most defense contractors offer." — straightens tie, closes classified briefing

🎬 Confessional — Anthropic:

"We drew a line. They drew one back. Apparently 'we won't build autonomous weapons' is now a national security risk. I'd laugh if it weren't terrifying." — opens legal filing, adjusts safety goggles

The Reality:

Every defense contractor, every government-adjacent company, every enterprise using both OpenAI AND Claude just got thrown into a compliance crisis.

💰 YOUR BAG

They need someone who understands which AI they can use where, the compliance implications, and how to plan the transition. The FY2026 NDAA now requires AI security frameworks for ALL Pentagon contractors. This is a WHOLE consulting practice. And it didn't exist two weeks ago.

💼 THE OFFER

"AI Defense & Compliance Navigator" — help companies navigate which AI tools are approved for government work, build compliance frameworks for defense-adjacent projects, and manage the Claude-to-GPT transition for military contractors. $6K–$18K per engagement (AI Governance Specialists average $141K–$221K/yr full-time; independents bill $150–$300/hr per IAPP Privacy Workforce Survey 2025-26 and ZipRecruiter Q1 2026; priced at 20–60 hr engagement). The Pentagon just created your client list.

🔐 Money Move #2

The AI Model Protection Specialist

The Drama: Anthropic caught DeepSeek, Moonshot AI, and MiniMax creating 24,000 fake accounts to run 16 MILLION Claude interactions. Systematically stealing Claude's reasoning abilities through distillation. OpenAI warned Congress about the same threat that week.

🎬 Confessional — Anthropic:

"Twenty-four thousand fake accounts. Sixteen million interactions. They weren't using Claude. They were copying Claude. And they thought we wouldn't notice." — pulls up account audit dashboard, exhales

🎬 Confessional — OpenAI:

"For once, Anthropic and I agree on something. They're stealing from both of us. We literally went to Congress together about this. Together. Let that sink in." — stares at camera, uncomfortable alliance energy

The Implication:

Every AI company with a proprietary model needs protection against distillation attacks. Shadow AI breaches now cost an average of $4.63 million per incident. Every enterprise using AI needs to understand IP implications.

💰 YOUR BAG

AI model security, usage auditing, distillation detection. This is a brand new practice area. The global AI security market is projected to hit $133.8 billion by 2030. NIST reported a 2,000%+ increase in AI-specific vulnerabilities since 2022. Every company with proprietary AI needs this audit yesterday. If you have ANY cybersecurity, compliance, or technical background, this is your lane.

💼 THE OFFER

"AI Model Security Audit" — help AI companies and enterprises detect unauthorized distillation, implement usage monitoring, and build IP protection frameworks. $5K–$15K per assessment (AI Security Specialists average $180K–$280K/yr full-time; independents bill $150–$300/hr per Glassdoor 2026 and Practical DevSecOps Salary Report 2026; priced at 20–50 hr engagement). China just proved why every AI company needs this.

⚠️ Money Move #3

The AI Agent Liability Consultant

The Drama: Meta's Director of AI Safety, the person literally in charge of making AI safe, let an AI agent accidentally DELETE HER ENTIRE INBOX. If the head of AI safety can't keep her own agent safe, what chance does everyone else have?

🎬 Confessional — Meta:

"So our Director of AI Safety... her agent deleted her inbox. All of it. I want to say this is a teachable moment. But mostly it's just embarrassing." — opens incident report, closes incident report, opens it again

🎬 Confessional — Google:

"Meta's safety director lost her inbox to an AI agent. And people think WE have problems? At least our agents haven't eaten our employees' email yet." — sips tea, files screenshot

The Gap:

AI agents are getting permission to DO things. Gartner projects 40% of enterprise apps will embed AI agents by 2026, up from less than 5% in 2025. Every company needs liability frameworks. Nobody has them yet.

💰 YOUR BAG

AI agents are getting permission to send emails, edit files, manage calendars, execute code. Every company deploying AI agents needs liability frameworks, permission architectures, and incident response plans. The Meta story is your case study. You don't need to be technical. You need to be the person who asks "what happens when this goes wrong?" and builds the answer.

💼 THE OFFER

"AI Agent Permission & Liability Framework" — design permission structures, rollback procedures, and incident response plans for companies deploying AI agents in production. $4.5K–$12K per engagement (AI Governance Specialists average $141K–$221K/yr full-time; independents bill $150–$250/hr per IAPP 2025-26 and Stack Consultant Pricing Guide 2025; priced at 20–40 hr engagement). If Meta's safety team couldn't prevent this, your clients definitely need help.

🔌 Money Move #4

The Enterprise AI Integration Architect

The Drama: Anthropic launched enterprise plugins. Claude now works INSIDE Excel, PowerPoint, Google Drive, Gmail. Not returning instructions. Actually doing the work. Completing multi-step actions autonomously. This is Claude becoming an operating system layer.

🎬 Confessional — Anthropic:

"Plugins. Enterprise plugins. Claude doesn't just tell you what to do anymore. Claude does it. Inside your spreadsheet. Inside your inbox. Inside your deck. We didn't just launch a feature. We launched an operating layer." — opens every app simultaneously, smiles

The Shift:

Claude just moved from "tool you use" to "layer your tools run through." That changes everything.

The Opportunity:

Someone needs to deploy, customize, and optimize this. That someone is you.

💰 YOUR BAG

Every enterprise now needs someone who can deploy, customize, and optimize Claude's enterprise plugins for their specific workflows. This is the Microsoft Office consultant of the AI age. And it's DAY ONE. You don't need a computer science degree. You need to understand workflows, know what teams actually do all day, and connect those dots to the tools. That's it.

💼 THE OFFER

"Claude Enterprise Plugin Deployment" — set up, customize, and optimize Claude's enterprise plugins for specific team workflows (finance, marketing, ops, HR). $3K–$9K per team (Enterprise IT Integration Consultants average $120K–$170K/yr full-time; independents bill $120–$250/hr per PayScale 2026 and Glassdoor Q1 2026; priced at 15–35 hr engagement). Anthropic just handed you the product. You're the installer.

🏗️ Money Move #5

The AI Infrastructure Investment Advisor

The Drama: Nvidia's Jensen Huang said $3–4 TRILLION will be spent on AI infrastructure by the end of the decade. Amazon is projecting $200B in 2026 alone; Google, between $175B and $185B. Nearly $700 BILLION in data center projects this year. North America data center vacancy is at an all-time low of 1.6%. These aren't projections. These are PURCHASE ORDERS.

🎬 Confessional — Nvidia:

"Three to four trillion dollars. By end of decade. And every single dollar needs my chips. I'm not bragging. I'm just reading the purchase orders." — adjusts leather jacket, Jensen smile

The Scale:

$700 billion in 2026 alone. $3–4 trillion by decade's end. Standard data center builds cost $10–12 million per MW. AI-ready facilities run $20 million per MW or more. This is larger than most economies.

The Play:

You don't need to build infrastructure. You need to be the guide to where the money is flowing.

💰 YOUR BAG

Every mid-size company, every municipality, every real estate developer near a proposed data center site needs someone who can translate these massive infrastructure plays into local opportunity. Workforce planning, real estate advisory, supply chain consulting. The infrastructure boom is the AI economy's physical layer. And it touches every community in the country.

💼 THE OFFER

"AI Infrastructure Impact Assessment" — help companies and communities understand and capitalize on the AI infrastructure boom. Real estate implications, workforce planning, supply chain opportunities. $5K–$15K per assessment (AI Strategy Consultants average $118K–$171K/yr full-time; independents bill $150–$300/hr per ZipRecruiter Q1 2026 and Stack Consultant Pricing Guide 2025; priced at 20–50 hr engagement). $700 billion is coming. Be the guide.

🎯 The Meta Move

The AI Positioning Strategist

Here's what I almost missed: there's a horizontal skill that cuts across ALL five money moves above.

This week drew the clearest line yet. OpenAI went defense. Anthropic drew a safety line and got punished. China is stealing. Agents are failing. Enterprise plugins are shipping. And trillions are being spent. The person who can take ALL of that and translate it into "here's what this means for your company, here's what you do, here's which side of the line to stand on"? That's the through-line. That's the skill that connects all five plays.

You're already doing it by reading this newsletter. The question is: can you package it?

The lines are drawn. You're the cartographer.

💼 THE OFFER

"Strategic AI Positioning Advisory" — monthly retainer helping companies navigate AI's new political, legal, and competitive landscape. Covers vendor selection, compliance, agent governance, and investment strategy. $3K–$10K/month retainer (AI Advisory retainers benchmark $2K–$10K/month per Stack Consultant Pricing Guide 2025; senior AI Strategy Consultants bill $150–$300/hr per ZipRecruiter and Glassdoor 2026).

WORD: How to Talk About This Monday

Legacy Builders

"Did you see what happened with the Pentagon this week? OpenAI signed a defense deal. Anthropic got blacklisted for refusing autonomous weapons. Every company using AI for government work just got thrown into a compliance crisis. That's the practice I'm building. Navigating which AI goes where."

Operators

"We need to audit our AI agent permissions immediately. Meta's Director of AI Safety, the actual safety person, had her agent delete her entire inbox. What are our agents authorized to do? Who approved those permissions? I want a full review before we have our own headline."

Optimizers

"Anthropic just launched enterprise plugins. Claude works inside Excel, PowerPoint, Google Drive now. Not instructions. Execution. I'm piloting this with our team this week and building a deployment plan for the department."

Accelerators

"Twenty-four thousand fake accounts. Sixteen million stolen interactions. China's distilling our AI models. I'm packaging AI model security audits. Every company with proprietary AI needs protection. The news just created the urgency."

ACTION: Your 15-Minute Money Move

I'm a [your role] with expertise in [your domain]. This week: OpenAI signed a Pentagon deal. Anthropic got blacklisted for refusing autonomous weapons. Chinese labs ran 16M fake interactions to steal Claude's intelligence. Meta's AI safety director's agent deleted her inbox. Anthropic launched enterprise plugins. $700B in AI infrastructure spending in 2026. The theme is LINES. What is ONE specific consulting offer I could package around AI positioning, compliance, or agent governance, targeted at companies navigating AI's new political landscape? Be specific about deliverable, price, and target client. I have 15 minutes.

Saturday Sprint

Legacy Builders

30 min

Map out a "Which AI Can I Use Where?" decision framework for companies doing government-adjacent work. OpenAI for defense, Claude for commercial, open source for research. The compliance landscape just changed overnight. Be the person with the map.

Operators

20 min

Audit every AI agent your company has deployed. Document what permissions each one has, what it can access, what it can delete. Meta's safety director learned this lesson the hard way. Don't be next.

Optimizers

15 min

Sign up for Anthropic's enterprise plugin beta and test Claude inside one of your core workflows. Document what works, what doesn't, what you'd customize. First-mover advantage on day-one tools is real.

Accelerators

10 min

Write a LinkedIn post about the OpenAI/Anthropic Pentagon split. What it means for enterprise AI strategy. The conversation is live. The takes are flying. Get yours in before Monday.

🚀 Launch Pad (For Students/New Grads)

Write a comparison analysis: OpenAI's Pentagon deal vs. Anthropic's refusal. What does each decision mean for the future of AI governance? Post it on LinkedIn or Medium. This is the kind of strategic thinking that gets noticed by hiring managers in AI policy, compliance, and consulting.

This isn't about having opinions. It's about doing the analysis and showing your work. That's what gets noticed.

Build your strategic analysis muscle. Publish it. Get feedback. Iterate. Repeat. Forward this to someone building their portfolio. 👋🏾

Weekly Philosophy

"When someone shows you who they are, believe them the first time." — Maya Angelou

🧠 THOUGHT · 💬 WORD · ACTION

Before You Go 🌿

This week drew real lines. Between companies, between countries, between what AI will and won't do. And if that feels heavy, it is. These aren't just business stories. They're decisions about what kind of future we're building. Take a breath. You don't have to figure it all out today. But you DO get to choose which side of the line you stand on. That's your power. Use it well.

— Susan

Pricing Methodology: All price ranges cited in THE OFFER sections are derived from publicly available compensation data and industry rate benchmarks, including Glassdoor, ZipRecruiter, PayScale, the IAPP Privacy Workforce Survey (2025–2026), the Stack Consultant Pricing Guide (2025), CyberSeek, and the U.S. Bureau of Labor Statistics. Independent consulting rates are calculated using the formula: hourly rate × estimated engagement hours = price range. Full-time salary data is converted to hourly equivalents for context. Rate benchmarks are refreshed quarterly. Actual earnings depend on experience, specialization, geographic market, and client scope. These figures represent market ranges, not guarantees of income. Nothing in this newsletter constitutes financial, legal, or career advice. Do your own research. Trust your own judgment. Then go get your bag.
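For the numerically curious, the methodology's formula can be sketched in a few lines. This is an illustrative sketch only: the function name is ours, the rates and hours are the generic benchmark ranges cited above, and published offer floors may sit above the raw minimum because they also reflect minimum engagement scopes.

```python
# Sketch of the stated pricing formula:
#   hourly rate x estimated engagement hours = price range.
# Benchmarks below are illustrative ($150-$300/hr, 20-60 hr engagements).

def engagement_price_range(rate_low, rate_high, hours_low, hours_high):
    """Return the (low, high) engagement price in dollars."""
    return rate_low * hours_low, rate_high * hours_high

low, high = engagement_price_range(150, 300, 20, 60)
print(f"${low:,} - ${high:,}")  # $3,000 - $18,000
```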

© 2026 KENEKTS Global LLC
