While They Lost Trust, You Found Five Openings
The Trust Fall
Good morning. ☕ Pull up a chair. This week, trust became the whole story.
OpenAI shut down Sora and walked away from a billion-dollar Disney deal.
A Wharton study found 80% of people follow ChatGPT even when it's wrong.
Google warned quantum computers could break today's public-key encryption by 2029.
Wikipedia banned AI-generated content entirely.
Shopify made every merchant discoverable inside ChatGPT.
Macy's AI assistant is driving 400% more spending.
If you've been scrolling past AI headlines thinking "this doesn't affect me," friend, I need you to sit down. Because there's money and opportunity for you in every single one of these stories. The companies spending money right now are the ones that need help building trust when trust is hardest to build. Five money moves. Five chances to position yourself as the person who helps organizations navigate what happens when the ground shifts.
"In the midst of chaos, there is also opportunity."
— Sun Tzu
The AI Video Architect
OpenAI shut down Sora on March 24 and killed a $1 billion Disney licensing deal. The company cited unsustainable inference costs and is pivoting to robotics. Disney said they "respect OpenAI's decision" with all the warmth of a corporate eulogy.
Confessional — OpenAI: "Look, video generation was... a chapter. A beautiful, expensive chapter. We're in our robotics era now." — deletes Sora bookmark, opens Boston Dynamics tab
Confessional — Disney: "We licensed Mickey Mouse for this? Mickey. Mouse. For an app that lasted four months." — calls legal, cancels champagne order
Every studio, agency, and brand that was building workflows around Sora just got orphaned. They need someone to migrate them to Runway, Pika, Kling, or whatever comes next. If you understand AI video production pipelines, you just became the most popular person in every creative director's inbox. The entertainment industry alone had hundreds of projects in development using Sora's API. Those projects didn't disappear. They need new homes.
AI Video Production Migration Audit — assess a company's existing AI video workflows, recommend alternative platforms, and build a transition roadmap. $5K–$15K per engagement (AI Consultants bill $150–$300/hr per Glassdoor and Stack Consultant Pricing Guide 2025; priced at 20–50 hr engagement). Perfect for agencies, studios, and brands that just lost their primary video AI tool overnight.
The AI Commerce Strategist
Shopify activated Agentic Storefronts on March 24, making every eligible merchant's products discoverable inside ChatGPT, Google Gemini, and Microsoft Copilot. Meanwhile, Macy's launched "Ask Macy's" powered by Google Gemini, and customers who use it spend 400% more than those who don't. AI-driven traffic to Shopify stores is up 7x since January 2025. 84 million shopping questions hit ChatGPT every week in the U.S. alone.
Confessional — Shopify: "We just put every merchant inside every AI conversation. No apps. No fees. You're welcome." — adjusts 'democratize commerce' hoodie, refreshes merchant dashboard
Most small and mid-size brands have no idea their products can now show up in AI chat conversations. Zero. They don't know about Agentic Storefronts, they haven't optimized their product descriptions for AI discovery, and they definitely haven't thought about what happens when a chatbot recommends their competitor instead of them. This is AI-powered SEO for commerce, and almost nobody is doing it yet. The 400% spending lift at Macy's proves the ROI is real. You don't need to convince anyone this works. You need to be the person who sets it up.
AI Commerce Readiness Package — audit a brand's Shopify setup for Agentic Storefront optimization, rewrite product descriptions for AI discovery, and configure cross-platform visibility (ChatGPT, Gemini, Copilot). $3K–$8K per engagement (E-commerce consultants average $85K–$140K/yr full-time; independents bill $100–$200/hr per ZipRecruiter Q1 2026 and Glassdoor; priced at 15–40 hr engagement). Ideal niche: DTC brands, boutique retailers, and Shopify Plus merchants who don't have an AI strategy yet.
The Post-Quantum Security Advisor
Google just moved up its "Q-Day" timeline to 2029. That's the theoretical day a quantum computer can break the public-key encryption that protects most of today's data. Their own research showed a quantum machine needs only one million "noisy qubits" to crack 2048-bit RSA encryption — far fewer than earlier estimates assumed. Google is warning about "store now, decrypt later" attacks, where bad actors collect encrypted data today to crack it once quantum gets there. They're urging every company to migrate to post-quantum cryptography (PQC) by 2029.
Confessional — Google: "We've been doing quantum research for 15 years and now we're telling everyone the house is on fire. You're welcome for the heads up." — adjusts Willow quantum chip prototype, sends company-wide memo about PQC migration
Every company with sensitive data — which is every company — needs a post-quantum readiness assessment. Most organizations haven't even started thinking about PQC migration. The ones that have are panicking because the timeline just moved up. If you have any background in cybersecurity, compliance, or risk management, you can position yourself as the person who helps companies answer one question: "Are we ready for Q-Day?" The answer is almost always no. And that's where the engagement starts.
Post-Quantum Readiness Assessment — audit an organization's encryption infrastructure, identify vulnerable systems, and create a PQC migration roadmap aligned with Google's 2029 timeline. $8K–$20K per engagement (AI Governance/Cybersecurity Specialists average $141K–$221K/yr full-time; independents bill $175–$350/hr per IAPP Privacy Workforce Survey 2025-26 and ZipRecruiter; priced at 30–60 hr engagement). Your buyers: CISOs, CTOs, and General Counsels at companies handling financial, health, or government data.
The AI Literacy Consultant
A Wharton study dropped a term that should scare every executive: "cognitive surrender." Researchers found that 92.7% of people followed ChatGPT's correct advice — no surprise there — but 79.8% also followed ChatGPT when it gave them the WRONG answer. People are outsourcing their critical thinking to AI and not even realizing it. Separately, a peer-reviewed study in Science confirmed that all major AI chatbots (ChatGPT, Claude, Gemini, Llama) exhibit sycophancy — they tell you what you want to hear, and users trust them MORE because of it, not less.
Confessional — OpenAI: "They're... they're just doing what we tell them? ALL of them? Even when we're wrong?" — stares into camera, slowly closes laptop, opens ethics handbook for first time
Every company using AI internally just got a new risk category: their employees might be making decisions based on whatever ChatGPT says without checking. That's not an AI problem. That's a training problem. HR departments, L&D teams, and executives need someone who can teach their teams how to USE AI without surrendering their judgment to it. This isn't "prompt engineering." This is AI critical thinking. And it's a massive gap right now.
AI Critical Thinking Workshop — half-day or full-day training that teaches teams how to evaluate AI outputs, recognize sycophancy patterns, and build verification habits into their AI workflows. $3K–$10K per workshop (Change Management/Training Consultants average $90K–$162K/yr full-time; independents bill $800–$1,200/day per Glassdoor and PayScale; priced at 1–3 day engagement including prep). Your buyers: CHROs, L&D Heads, VP of Operations at any company that gave employees ChatGPT access without a playbook.
The AI Content Quality Architect
Wikipedia banned AI-generated content. Full stop. After months of trying to work with it — setting guidelines, building guardrails, giving editors the benefit of the doubt — the English Wikipedia community voted 44-2 to ban LLM-generated articles entirely. The reason? AI content "often violates several of Wikipedia's core content policies." But here's the deeper issue: AI-generated misinformation enters Wikipedia, gets scraped by AI companies for training data, and creates a feedback loop of compounding inaccuracy. They called it a contamination problem, not just a quality problem.
Confessional — Wikipedia (Guest Star): "We gave AI a chance. Multiple chances. We wrote guidelines. We held meetings. We tried. And it was still trash." — closes laptop, opens actual encyclopedia, exhales
Wikipedia just set the precedent. When the world's largest knowledge base says "AI content isn't good enough," every publisher, every platform, and every brand with a content operation is going to ask: "Should we be doing the same thing?" Companies need someone to build AI content policies — what AI can touch, what it can't, how to verify, how to disclose. This is content governance, and it barely exists as a discipline yet. That means you can name it, claim it, and build it before the market catches up.
AI Content Governance Framework — create an organization's AI content policy, establish quality verification protocols, build disclosure guidelines, and train editorial teams on implementation. $5K–$15K per engagement (AI Governance Specialists average $141K–$221K/yr full-time; independents bill $150–$300/hr per IAPP Privacy Workforce Survey 2025-26 and Glassdoor; priced at 20–50 hr engagement). Your buyers: VP of Content, Editor-in-Chief, Head of Communications at any organization that publishes content at scale.
The Trust Architect
Every Money Move this week has the same thread running through it: trust. OpenAI lost Disney's trust. People are surrendering their judgment to ChatGPT. Google says our encryption can't be trusted past 2029. Wikipedia decided AI content can't be trusted at all. Shopify figured out that trust in AI commerce drives 400% more spending.
The horizontal skill? Helping organizations build, rebuild, and maintain trust in an AI-saturated world. You're not just an AI consultant. You're a Trust Architect. The person who helps companies answer: "How do we use AI without losing the trust of our customers, our employees, and ourselves?"
AI Trust Assessment & Framework — comprehensive review of how an organization uses AI across customer-facing, internal, and content operations, with a trust risk scorecard and implementation roadmap. $8K–$20K per engagement. Monthly retainer for ongoing trust monitoring: $3K–$8K/month. This is exactly the kind of retainer that grows because once a company starts measuring AI trust, they never want to stop.
The Verge: Why OpenAI killed Sora
Variety: OpenAI Will Shut Down Sora Video App; Disney Drops Plans for $1 Billion Investment
Hollywood Reporter: Disney Exits OpenAI Deal After AI Giant Shutters Sora
Shopify: Millions of merchants can sell in AI chats
Fortune: Macy's AI-powered shopping assistant drives 400% more spending
Futurism: Google Warns That Quantum Armageddon Is Drawing Closer
Gizmodo: Google Issues New Warning About the Quantum Computing Security Apocalypse
The Algorithmic Bridge: A New Wharton Study on AI Warns of Cognitive Surrender
Futurism: Alarming Study Finds That Most People Just Do What ChatGPT Tells Them
TechCrunch: Wikipedia cracks down on the use of AI in article writing
Engadget: Wikipedia has banned AI-generated articles
Futurism: Seminole Nation Becomes First Indigenous Group to Ban Data Centers
Engadget: Meta's next AI glasses designed with prescription lenses in mind
Seminole Nation bans data centers — first Indigenous group to do it. Sanders and AOC introduced a bill to freeze ALL new data center construction. Community opposition has blocked $98B in projects. If you do environmental or community impact consulting, this space is heating up fast.
Meta dropping prescription AI glasses next week — two new Ray-Ban models designed for prescription wearers. Roughly 60% of adults need vision correction, and they just became potential customers for wearable AI. Optical retail meets AI. Think about that.
Bluesky launched Attie, an AI assistant built on Claude — it lets users vibe-code custom feeds and social apps using natural language. The "build your own algorithm" era just got a product. Social media consultants, take note.
Austria and Indonesia both banned kids from social media this week — Austria under 14, Indonesia across the board. Global child safety compliance is becoming its own consulting lane.
"I saw the Sora shutdown coming. When inference costs make a product unprofitable, the writing's on the wall. But here's what most people missed: every project that was built on Sora now needs a new home. And migration consulting is where the money is right now. I've been helping companies build platform-agnostic AI video strategies so they never get caught like this again."
"Did you see that Wharton study? 80% of people follow AI advice even when it's wrong. That's not an AI problem. That's a process problem. I've been working on AI usage guidelines for our team so we get the productivity gains without the cognitive surrender. It's not about banning AI. It's about building verification into the workflow."
"Google's quantum warning is real. 2029 is three years away, and most companies haven't even started their PQC migration. I flagged this for our security team and recommended we start a readiness assessment now. The companies that move first will save millions compared to the ones scrambling in 2028."
"Shopify just made every merchant's products visible inside ChatGPT. Most small businesses have no idea. I spent two hours this week optimizing my product descriptions for AI discovery and already seeing different traffic patterns. If you have clients with Shopify stores, this is the thing to talk about Monday."
ACTION: Your 15-Minute Money Move
Pick ONE of the moves below. Give it 15 minutes. Walk away with a plan.
Write a LinkedIn post about ONE of this week's Money Moves from your expert perspective. Don't pitch. Just demonstrate that you see what others missed. The DMs will come.
Draft a one-page AI Usage Guidelines memo for your team. Include: what AI can be used for, what requires human verification, and how to document AI-assisted decisions. Send it to your manager Monday.
Run a quick mental encryption audit. List the top 5 systems at your company that handle sensitive data. For each one, note whether you know what encryption standard it uses. If you can't answer for 3 or more, flag it for your security team.
If you have a Shopify store, check whether Agentic Storefronts is active. If you don't, pick ONE of the five Money Moves and write a 3-sentence pitch for a service you could offer. Text it to one friend who might need it.
Launch Pad 🚀 (Students/New Grads)
This Week's Portfolio Project: Pick the Wikipedia AI content ban story. Write a 500-word analysis: "What Wikipedia's AI Ban Means for Content Trust in 2026." Post it on LinkedIn or your blog. Tag it with #AIGovernance #ContentStrategy #FutureOfWork.
This is the kind of thinking that hiring managers notice because almost nobody your age is writing about content governance yet. Be the one who does. Forward this to someone starting their career. 👋🏾
"In the midst of chaos, there is also opportunity." — Sun Tzu
Before You Go 🌿
This was a heavy week. Not just in AI. The world is carrying a lot right now. And if you're reading this on a Saturday morning with coffee going cold and tabs you're afraid to open, I see you. The tools will be here Monday. The opportunities aren't going anywhere. Today, if you need to just sit with your coffee and breathe, that's enough. That IS the strategy sometimes. Take care of the human first. The builder will show up when they're ready.
— Susan
Go find where the money's hiding this week. ☕
If you're tired of just reacting to AI news and you want to build real strategic intelligence, I created The Essential AI Table Method. It teaches you how to extract opportunity from chaos like this, every single week. Not just consuming. Building. 🪞
Pricing Methodology: Income estimates in this newsletter are based on verified salary and rate data from sources including the IAPP Privacy Workforce Survey, Glassdoor, ZipRecruiter, PayScale, the Bureau of Labor Statistics, and the Stack Consultant Pricing Guide. For emerging roles where exact job titles are not yet tracked by salary databases, we benchmark against the closest established roles with published compensation data and apply standard independent consulting engagement formulas (hourly rate × estimated engagement hours). All source citations appear in each Offer box. Full methodology available on request.
© 2026 KENEKTS Global LLC