

The Weekly Tea

The Week The Receipts Wrote Themselves

April 4, 2026 · Saturday Strategy
"The drama tells you what's happening. The tea tells you how to attract abundance."

Good morning. ☕ Pull up a chair. I've got tea. And this week? The receipts wrote themselves.

Anthropic accidentally leaked Claude Code's entire source code to the public. 512,000 lines. Thousands of devs had copies before anyone blinked.

Microsoft is openly threatening to sue OpenAI over a $50B deal with Amazon that may have broken their cloud-exclusivity contract.

OpenAI bought a tech talk show for low hundreds of millions, betting big on the future of how AI stories get told. "Editorial independence" is doing some heavy lifting though.

Utah is letting an AI chatbot prescribe psychiatric medication without a doctor. Prozac. Zoloft. Wellbutrin. $19 a month.

A jury found Meta and YouTube liable for deliberately designing addictive platforms that harmed a young woman's mental health. 2,000 more lawsuits are waiting.

An AI-powered tractor startup burned through $240 million and fired every employee. The autonomous tractors kept hitting vines.

If you've been scrolling past AI headlines thinking "this doesn't affect me," friend, I need you to sit down. Because there's money and opportunity hiding in every single one of these stories. And some of them have your name on them.

Sage Insight
"No one was ever injured by the truth, whereas he is injured who continues in his own self-deception and ignorance."
— Marcus Aurelius, Meditations (Book 6, Section 21)

Confessionals are fictional and satirical — our favorite way to say what these companies are probably thinking but would never say out loud.

🔒 Money Move #1

The AI Security Consultant

The Drama:

Two of the biggest names in AI had security nightmares in the same week. Anthropic accidentally shipped Claude Code's entire source code to the public npm registry via a packaging error. We're talking 512,000 lines of TypeScript, 44 unreleased feature flags, internal model codenames, and a secret autonomous daemon feature called "KAIROS." Thousands of developers mirrored the code within hours. Anthropic scrambled to file copyright takedowns on GitHub, accidentally nuking more repos than intended, then had to walk it back.

Meanwhile, Meta paused all work with Mercor, a leading AI data vendor, after a security breach that may have exposed proprietary data about how major AI labs train their models. Multiple companies are now investigating.

🎬 Confessional — Anthropic: "We've spent three years telling everyone we're the responsible ones. The grown-ups. The ones who do AI the RIGHT way. And then we shipped our entire source code to npm like an intern's first deploy." — quietly unpins "Safety First" banner from Slack channel

🎬 Confessional — Meta: "Look, we didn't leak OUR code. Our vendor leaked EVERYONE's code. That's a totally different kind of messy. We're the victim here. For once." — updates vendor risk spreadsheet, adds seventeen new rows

💰 YOUR BAG:

Every AI company that just read these headlines is now panic-reviewing their supply chain security, their code deployment processes, and their vendor agreements. And most of them don't have anyone dedicated to this. That's the gap. If you have ANY background in cybersecurity, IT governance, compliance, or even risk management from non-tech industries? You have transferable skills that companies are desperate for RIGHT NOW. You don't need to know how to write TypeScript. You need to know how to audit a process, review vendor contracts, and build a security checklist that prevents a $57 million source code file from hitting the public internet. That's operations. That's governance. That's YOUR lane.

💼 THE OFFER:

AI Security & IP Protection Audit — a focused engagement where you review a company's AI code deployment pipeline, vendor data agreements, and IP exposure points. You deliver a risk report with specific recommendations. $5K–$15K per engagement (AI Security Engineers average $152K/yr full-time per ZipRecruiter Q1 2026; independents bill $150–$300/hr per Stack Consultant Pricing Guide 2025 and Glassdoor 2026; priced at 20–40 hr engagement). Every company with a model, an API, or a data vendor needs this done yesterday.

📋 Money Move #2

The AI Contract & Vendor Strategist

The Drama:

One of the biggest partnerships in tech is growing up. And growing pains look different when billions of dollars are involved. Microsoft is openly discussing legal action against OpenAI over a $50 billion deal with Amazon on AWS, claiming it may conflict with their cloud-exclusivity agreement. Here's the thing though. OpenAI isn't being shady. They're doing what any company does when it outgrows a single partnership. They're diversifying. Microsoft helped build them. And now they're big enough to want options. That's business. But the contract says what the contract says, and this is about to get very expensive for somebody's legal team.

Meanwhile, Elon Musk is requiring every bank, law firm, auditor, and advisor working on the SpaceX IPO to subscribe to Grok. Not suggesting. Requiring. As a condition of getting the deal.

🎬 Confessional — Microsoft: "We invested $13 billion. We believed in the vision. And now they're expanding to other clouds. I'm not saying it's personal. But I am saying our lawyers are reviewing the paperwork." — opens Azure dashboard, stares at it meaningfully

🎬 Confessional — xAI: "It's simple. You want to work on the biggest IPO of the decade? You use our product. That's not coercion. That's market alignment." — adjusts Grok subscription settings, adds mandatory field

💰 YOUR BAG:

Every company signing an AI vendor contract right now is watching the Microsoft-OpenAI situation and thinking "could that be us?" Not because anyone did anything wrong. Because partnerships evolve. Companies grow. And the contracts that made sense at $1 billion might not make sense at $100 billion. Cloud exclusivity clauses. Data portability terms. What happens to your fine-tuned models if your vendor changes pricing, gets acquired, or decides you're a competitor? These are contract questions with real money attached. And most legal teams don't have anyone who understands both the contract language AND the AI technology enough to spot the gaps before they become headlines. If you have a background in procurement, vendor management, legal ops, or contract negotiation, and you understand what "cloud lock-in" actually means? You're the person every general counsel needs on speed dial right now.

💼 THE OFFER:

AI Vendor Contract Review — a flat-fee engagement where you review a company's AI vendor agreements for lock-in risk, data portability gaps, and exclusivity traps. You deliver a redlined contract with recommendations. $4K–$12K per engagement (AI Governance Specialists average $141K–$240K/yr full-time per ZipRecruiter and Glassdoor 2026; independents bill $150–$300/hr per IAPP Privacy Workforce Survey 2025-26 and Stack Consultant Pricing Guide 2025; priced at 15–40 hr engagement). Position it as: "I help companies make sure their AI partnerships grow WITH them, not against them."

⚖️ Money Move #3

The AI Risk & Liability Consultant

The Drama:

A jury just found Meta and YouTube liable for deliberately designing addictive platforms that harmed a young woman's mental health. The verdict: $6 million. But here's the number that matters more. 2,000 similar lawsuits are waiting in the wings. This is the first time a jury has said "yes, the way you built this product caused real harm, and you owe money for it." That changes the math for every company shipping AI-powered features to consumers.

Now zoom out to Monarch Tractor. An AI-powered tractor startup that raised $240 million and was once valued at half a billion dollars. Their autonomous tractors were hitting vines. The self-driving system didn't work. Dealerships sued them for selling defective equipment. They had to fire every single employee. Quarter of a billion dollars, gone. Not because the idea was bad. Because the product didn't work and nobody stopped the train.

🎬 Confessional — Meta: "One verdict. $6 million. That's a rounding error on our Q1 revenue. But 2,000 more lawsuits? That's not a rounding error. That's a line item." — opens calculator, closes calculator, opens it again

🎬 Confessional — Monarch Tractor (guest appearance): "We raised a quarter billion dollars. We had the vision. We had the deck. We had the press coverage. What we didn't have was a tractor that could drive straight without hitting a vine. Details." — stares at empty parking lot where 200 employees used to park

💰 YOUR BAG:

Every company shipping AI features to real people just watched a jury say "you're liable for what your product does." And every AI startup that raised on a pitch deck just watched $240 million evaporate because nobody in the room could honestly assess whether the product actually worked. The demand for people who can walk into a boardroom and say "here's your liability exposure, here's where your product claims don't match reality, and here's what it costs if you ship anyway"? That's not theoretical anymore. That's a jury verdict. If you have any background in risk management, insurance, product liability, legal ops, quality assurance, or compliance? You already know how to assess exposure. Now pair that with enough AI fluency to understand what these products actually do, and you're the person companies need BEFORE the lawsuit, not after.

💼 THE OFFER:

AI Product Liability & Risk Assessment — a focused engagement where you evaluate a company's AI-powered products for liability exposure, user harm potential, and regulatory risk. You review the product claims, the actual capabilities, the user impact data, and deliver a risk report with specific mitigation recommendations. $6K–$18K per engagement (AI Governance Specialists average $141K–$240K/yr full-time per ZipRecruiter and Glassdoor 2026; independents bill $150–$300/hr per IAPP Privacy Workforce Survey 2025-26 and Stack Consultant Pricing Guide 2025; priced at 25–50 hr engagement). The buyer is the General Counsel, the Chief Product Officer, or the VP of Risk. And after this week? They're picking up the phone.

📡 Money Move #4

The AI Media & Narrative Strategist

The Drama:

Let me tell you what's happening to our media, marketing, and entertainment people right now. Because it's a LOT.

OpenAI just made a bold move into media, acquiring TBPN, a buzzy tech talk show popular with Silicon Valley founders, for low hundreds of millions. The show had 58,000 YouTube subscribers but was on track for $30 million in ad revenue this year. That's a bet on the future of how tech stories get told. Sam Altman called it his favorite tech show. The hosts, John Coogan and Jordi Hays, will maintain editorial independence. And honestly? An AI company investing in long-form tech media is interesting. "Editorial independence" is doing some heavy lifting in that press release. But the play itself? Smart. They see where media is going.

Meanwhile, HarperCollins just partnered with Toonstar, an AI animation studio, to turn book franchises into digital shows. The first project is Lisa Greenwald's "Friendship List" series. No mention of human animators in the announcement.

And Meta's latest AI-powered ad tools? Creators are publicly calling out the gap between what Meta promised and what the tools actually deliver. Overpromising performance. Under-delivering on creator partnerships. It's the "build fast, apologize later" loop, and the people getting burned are the marketers and creators who trusted the pitch.

🎬 Confessional — OpenAI: "We see where media is going and we're investing early. Yes, we also happen to need better press. But this is about owning a format, not controlling a narrative. Those are different things. Mostly." — bookmarks "editorial independence" Wikipedia page, just in case

🎬 Confessional — Meta: "Our AI ad tools are revolutionary. They're performing beyond expectations. And if a few creators are experiencing 'inconsistencies,' that's a feature, not a bug. We're iterating." — refreshes creator complaints dashboard, minimizes window

💰 YOUR BAG:

Here's what I need you to hear. If you've been in marketing, media, publishing, entertainment, PR, or content creation? Your industry is being reshaped THIS WEEK. Not next year. Right now. And the companies doing the reshaping? They don't have people who understand both sides. They have tech people who don't understand media. They have media people who don't understand AI. The person who bridges that gap? There is almost no competition for that role right now.

You already know how narratives work. You already know what audiences trust. You already know why "editorial independence" under a political operative sounds like a punchline. That media literacy, that instinct? That's not something an AI model can replicate. But pair it WITH AI fluency? Now you're the person who can advise a publisher on whether AI animation is the right move for their brand. You're the one who can tell a company their AI ad tools are eroding creator trust and here's how to fix it. This isn't about abandoning your industry. It's about being the person your industry can't do without in the new landscape.

💼 THE OFFER:

AI Media & Narrative Strategy Audit — a focused engagement where you evaluate a company's AI-driven content, advertising, or PR strategy for trust gaps, creator friction, and narrative risk. You deliver a report with specific fixes. $3K–$10K per engagement (AI Strategy Consultants average $118K–$171K/yr full-time per ZipRecruiter 2026; independents bill $150–$250/hr per Stack Consultant Pricing Guide 2025; priced at 15–30 hr engagement). Position yourself as: "I help companies tell AI stories that audiences actually believe."

🏥 Money Move #5

The AI Healthcare Governance Advisor

The Drama:

Utah just became the first state to let an AI chatbot prescribe psychiatric medication without a doctor. A Y Combinator-backed startup called Legion Health is running a 12-month pilot where patients can renew prescriptions for Prozac, Zoloft, Wellbutrin, and Lexapro through a chatbot. Cost: $19 a month. The first 250 prescriptions will be monitored by a physician. After that? If the system hits a 98% approval rate, it prescribes on its own.

Meanwhile, Tennessee's governor signed a bill PROHIBITING AI systems from representing themselves as mental health professionals. And across the country, 78 chatbot bills are alive in 27 states. The regulatory map is a patchwork of "go ahead" and "absolutely not."

And here's the part that should keep you up at night: a study published last year found that large language models used in healthcare are "extremely susceptible to jailbreak attacks." Which is exactly what you want from a tool that can prescribe medication without a human in the room.

🎬 Confessional — Legion Health (guest appearance): "We're not replacing doctors. We're... augmenting access. At scale. For $19 a month. With a chatbot. That prescribes psychiatric medication. Okay I hear how that sounds. But the data is solid." — checks 98% approval threshold, nervously

🎬 Confessional — Tennessee (guest appearance): "We saw what Utah did and said 'not in our state.' Somebody has to be the adult. We wrote a whole law about it." — signs bill with visible relief

💰 YOUR BAG:

Twenty-seven states. Seventy-eight bills. And counting. Every state health department, every hospital system, every telehealth startup, every insurance company is now staring at a regulatory landscape that changes by the zip code. They need people who can map AI healthcare regulations across jurisdictions, build compliance frameworks, assess clinical AI tools for safety, and advise on risk exposure. If you have ANY background in healthcare compliance, nursing, health IT, public health policy, or even pharma operations? You already understand the regulatory environment. Now pair that with AI fluency and you're the person standing between a health system and a lawsuit.

💼 THE OFFER:

AI Healthcare Governance & Compliance Review — a multi-session engagement where you map a healthcare organization's AI tool usage against current and pending state regulations, assess patient safety risk, and deliver a compliance roadmap. $6K–$18K per engagement (AI Governance Specialists average $141K–$240K/yr full-time per ZipRecruiter and Glassdoor 2026; healthcare compliance premium pushes independent rates to $175–$350/hr per IAPP Privacy Workforce Survey 2025-26; priced at 20–50 hr engagement). The buyer is the General Counsel, the Chief Compliance Officer, or the VP of Digital Health. They're already worried. You're the call they need to make.

🎯 The Meta Move

The AI Risk Translator

Here's what I almost missed until I laid all five Money Moves side by side: every single story this week is about someone who didn't see the risk until it was too late.

Anthropic didn't see the npm packaging risk. Meta didn't see the vendor data risk. Monarch Tractor didn't see the product risk until $240 million was gone. Utah didn't fully see the jailbreak risk in AI prescriptions. Microsoft didn't see the contract risk in letting OpenAI get too independent. And OpenAI bought a media company without seeing the narrative risk of putting it under their political fixer.

The horizontal skill that connects ALL of this? The ability to walk into a room full of executives who just read the same headlines you did, and calmly say: "Here's what this means for us. Here's where we're exposed. And here's what we do about it."

That's not a tech skill. That's a translation skill. You take chaos and turn it into a decision framework. You're already doing it by reading this newsletter. The question is: can you package it?

💼 THE OFFER:

"AI Risk Translation" Monthly Briefing — a recurring retainer where you synthesize the week's AI developments into a 1-page risk and opportunity brief for an executive or leadership team. You become their AI intelligence layer. Not reacting to headlines. Anticipating them. $2K–$8K/month retainer depending on frequency and depth (per Stack Consultant Pricing Guide 2025 and IAPP Advisory Rate Benchmarks 2025-26). This is exactly where consulting demand is exploding right now.

WORD: How to Talk About This Monday

Legacy Builders — The Fractional Expert

"Did you see that Anthropic leaked their entire source code? Here's what most people are missing. It wasn't a hack. It was a packaging error. An internal debug file shipped to a public registry. That means the process broke, not the security system. Which honestly? That's scarier. Because process failures mean it can happen to anyone who doesn't have someone specifically reviewing their deployment pipeline. That's the consulting practice being born right now."

The Operators — The AI Translator

"Here's how I'd frame it. Oracle is making a bet that AI infrastructure will generate more revenue than 20,000 employees. That bet might be right. But the transition? That's where companies destroy themselves. The people who survive these pivots are the ones who can speak both languages. They understand the old workflows AND they can map the new AI capabilities. If that's you, you're not getting replaced. You're getting promoted. But you have to make that visible. Now. Not after the layoff memo."

The Optimizers — The Productivity Architect

"It's a pilot program. 15 medications. Patients have to be stable. And the first 250 prescriptions get reviewed by a human doctor. But here's the real story. 78 AI chatbot bills are moving in 27 states right now. Every healthcare org is going to need someone who can map this regulatory patchwork and keep them compliant. That's not a doctor's job. That's a compliance and operations job. And most health systems don't have anyone doing it yet."

The Accelerators — The Speed Specialist

"It's not weird. It's actually smart. They see that AI companies need to own how their stories get told, so they invested early in a media format. Spent low hundreds of millions on a show with 58K subscribers but $30M in projected ad revenue. Which tells you something important. Companies will pay serious money for people who can shape how AI stories land with real audiences. If you can write, produce, or consult on AI narratives? That skill is about to be very, very valuable."

ACTION — Your 15-Minute Money Move

Copy this prompt. Paste it into Claude or ChatGPT. Let it help you pick your lane from this week's opportunities.

I just read about five AI income opportunities:

1. AI Security & IP Protection Consulting (helping companies audit deployment pipelines and vendor agreements after the Anthropic code leak)
2. AI Vendor Contract Strategy (reviewing AI partnerships for lock-in risk after the Microsoft-OpenAI contract dispute)
3. AI Product Liability & Risk Assessment (evaluating AI products for harm exposure after the Meta/YouTube verdict and Monarch Tractor collapse)
4. AI Media & Narrative Strategy (advising companies on AI storytelling trust after OpenAI bought a talk show and Meta's ad tools disappointed creators)
5. AI Healthcare Governance & Compliance (mapping state-by-state AI regulations after Utah approved AI prescriptions and Tennessee banned them)

My professional background is in [INSERT YOUR INDUSTRY/ROLE].

Based on my background, which ONE of these five opportunities is the best fit for me? Tell me:
- Why it matches my existing skills
- What I'd need to learn in the next 30 days
- One specific first step I can take this weekend
- How to describe this service in one sentence on LinkedIn

Be specific. Be direct. Don't hedge.

Done is better than perfect. Paste it. Run it. Screenshot the answer. That's your blueprint for the week.

Saturday Sprint

Legacy Builders
20 min

Pick ONE of the five Money Moves from today. Write a LinkedIn post about it. Not a pitch. An observation. "Here's what I noticed about the Anthropic code leak that most people missed." Post it. Let the recruiters find you.

The Operators
15 min

Map your company's top 3 AI vendors. For each one, write down: What's the contract term? What happens to our data if we leave? Who owns the fine-tuned models? If you can't answer these questions, that's the problem. And the opportunity.

The Optimizers
10 min

Search LinkedIn Jobs for "AI governance" or "AI compliance" in your metro area. Screenshot 3 postings. Note the salary ranges. That's your market validation. Now you know the demand is real.

The Accelerators
15 min

Go to the Anthropic code leak coverage on VentureBeat or Wired. Read one article. Then write a 3-sentence summary as if you were explaining it to your boss who doesn't follow AI news. That's the skill. Practice it.

Launch Pad 🚀

For Students, New Grads, and Career Starters:

This week's portfolio project: The AI Risk Digest. Pick any TWO of this week's stories. Write a 500-word briefing as if you were presenting to a CEO who has 5 minutes and needs to understand what happened and what it means for their business. Use plain language. No jargon. Include a "What This Means For Us" section with 3 specific recommendations.

Post it on LinkedIn or Medium with the hashtag #AIRiskDigest. Why this works: You're demonstrating the exact skill the Meta Move describes. Translating AI chaos into executive decisions. That's a portfolio piece that gets you hired.

Forward this to someone who's job hunting right now. They'll thank you. 👋🏾

From Susan's World

The Essential AI Table Method

Stop reacting to AI news. Start building strategic intelligence. The method that teaches you how to extract opportunity from chaos like this every single week.

The Sip & Click

AI news, real opportunities, and the strategy the headlines won't give you. New editions drop when the tea is hot.

Before You Go 🌿

The world is heavy right now. Iran. Lebanon. A defense budget that makes your eyes water. I know you're carrying all of that when you open this newsletter.

So if the Sprint feels like too much today, skip it. The opportunities will be here Monday. But if building is how you breathe when things get heavy? I'm right here with you.

Take care of yourself first. Always.

— Susan

Pricing Methodology: All price ranges cited in THE OFFER sections are derived from publicly available compensation data and industry rate benchmarks, including Glassdoor, ZipRecruiter, PayScale, the IAPP Privacy Workforce Survey (2025–2026), the Stack Consultant Pricing Guide (2025), CyberSeek, and the U.S. Bureau of Labor Statistics. Independent consulting rates are calculated using the formula: hourly rate × estimated engagement hours = price range. Full-time salary data is converted to hourly equivalents for context. Rate benchmarks are refreshed quarterly. Actual earnings depend on experience, specialization, geographic market, and client scope. These figures represent market ranges, not guarantees of income. Nothing in this newsletter constitutes financial, legal, or career advice. Do your own research. Trust your own judgment. Then go get your bag.
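The pricing formula above can be sketched in a few lines of Python. This is purely illustrative: the function name and the rate and hour figures are placeholders, not benchmarks from any of the sources cited.

```python
def engagement_range(rate_low, rate_high, hours_low, hours_high):
    """Price range for a consulting engagement:
    hourly rate x estimated engagement hours = price range."""
    return (rate_low * hours_low, rate_high * hours_high)

# Example: $150-$300/hr at a 20-40 hour engagement
low, high = engagement_range(150, 300, 20, 40)
print(f"${low:,} - ${high:,}")  # $3,000 - $12,000
```

Actual quotes in THE OFFER sections are rounded market ranges, so they won't always match this multiplication exactly.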

© 2026 KENEKTS Global LLC
