Good morning ☀️, leader of the next generation.
I believe this: Technology, when used with intention, doesn’t distract us — it awakens us.
It shows us who we are. What we can build. How far we can go.
It strips away the noise and gives us tools that let our natural genius come through.
And I believe this too: we stop waiting for the world to hand us permission — and we start using the tools already in our hands to create, to express, to become.
This is not about keeping up.
It’s about breaking through.
WHAT'S AT STAKE TODAY ⚡
- California's new AI safety law shows regulation and innovation don't have to clash 📋
- Instacrops will demo its water-saving, crop-boosting AI at TechCrunch Disrupt 2025 🌱
- Jeff Bezos hails AI boom as 'good' kind of bubble 💰
- Amazon's Ring cameras to form AI Search Party to find your missing dog 🐕
- AI's new dual workforce challenge: Balancing overcapacity and talent shortages ⚖️
- Tomra e-book explores AI's evolving role in recycling ♻️
- IBM Releases Open-Source Granite 4.0 Generative AI 🤖
California proves AI safety regulation can coexist with innovation progress
California's new AI safety law shows regulation and innovation don't have to clash

California Governor Gavin Newsom's signing of SB 53, the groundbreaking AI safety and transparency bill, demonstrates that effective regulation doesn't need to stifle technological advancement. This first-of-its-kind legislation requires major AI companies to maintain transparency about their safety protocols and demonstrate how they prevent catastrophic risks like cyberattacks on critical infrastructure or bioweapon development.
Adam Billen, vice president of public policy at youth advocacy group Encode AI, argues that policymakers understand the need for balanced legislation that protects innovation while ensuring product safety. "Companies are already doing the stuff that we ask them to do in this bill," Billen explained, noting that AI firms already conduct safety testing and release model cards.
The legislation addresses a concerning industry practice where companies relax safety standards under competitive pressure. OpenAI has publicly stated it may "adjust" safety requirements if rival labs release high-risk systems without similar safeguards. SB 53 aims to prevent this race-to-the-bottom mentality by enforcing existing safety promises through the Office of Emergency Services.
Although SB 53 drew less opposition than its predecessor SB 1047, which Newsom vetoed last year, the broader AI industry continues pushing back against state regulation. Major players like Meta, Andreessen Horowitz, and OpenAI president Greg Brockman are investing hundreds of millions in super PACs supporting pro-AI politicians and previously advocated for a 10-year AI regulation moratorium.
The regulatory battle isn't over. Senator Ted Cruz recently introduced the SANDBOX Act, allowing AI companies to bypass federal regulations for up to a decade. Billen anticipates forthcoming federal legislation that, while appearing as a compromise, would effectively override state laws. He warns this could "delete federalism for the most important technology of our time."
Critics argue that state AI regulations will hinder America's competition with China, but Billen disputes this narrative. "Are bills like SB 53 the thing that will stop us from beating China? No," he stated, calling such arguments "intellectually dishonest." Instead, he advocates for export controls and ensuring American companies have access to advanced chips.
Legislative proposals like the Chip Security Act aim to prevent advanced AI chips from reaching China through export controls and tracking devices. The CHIPS and Science Act seeks to boost domestic chip production. However, major tech companies including OpenAI and Nvidia have shown reluctance toward these measures, citing competitiveness and security concerns.
Nvidia's opposition stems from financial incentives—China represents a significant portion of its global revenue. Billen speculates that OpenAI may avoid advocating for chip export restrictions to maintain good relationships with suppliers like Nvidia. The Trump administration's inconsistent messaging has further complicated matters, initially expanding export bans before reversing course and allowing limited chip sales to China in exchange for 15% revenue sharing.
State AI bills typically focus on deepfakes, transparency, algorithmic discrimination, children's safety, and governmental AI use—areas that don't significantly impact the China competition. Billen emphasizes that SB 53 addresses only a specific subset of AI risks and shouldn't replace comprehensive state legislation covering broader AI concerns.
The successful passage of SB 53 represents democracy in action, showing industry and policymakers can collaborate effectively despite the "very ugly and messy" process. Billen views this legislation as proof that the foundational democratic and federalism processes that underpin America's economic system still function.
"I think SB 53 is one of the best proof points that that can still work," he concluded, emphasizing that balanced regulation supporting both innovation and safety remains achievable through collaborative governance.
Honestly, would you actually use something like this?
Protect your checkout from coupon plug-ins. Boost your margin today.
KeepCart: Coupon Protection partners with DTC brands like Quince, Blueland, Vessi, and more to protect your checkout from plug-ins like Honey, CapitalOne, and RetailMeNot, boosting your DTC margins.
Overpaid commissions to affiliates and influencers add up fast – Get rid of the headache and revenue losses with KeepCart.
After months of using KeepCart, Mando says “It has paid for itself multiple times over.”
Now it’s your turn to see how much more profit you can keep.
⚡ More AI Bites
- 🌱💧 Instacrops will demo its water-saving, crop-boosting AI at TechCrunch Disrupt 2025
- 🚀💰 Jeff Bezos hails AI boom as 'good' kind of bubble
- 🐕📹 Amazon's Ring cameras to form AI Search Party to find your missing dog
- ⚖️👥 AI's new dual workforce challenge: Balancing overcapacity and talent shortages
- ♻️📚 Tomra e-book explores AI's evolving role in recycling
- 🔓🤖 IBM Releases Open-Source Granite 4.0 Generative AI
🎙️ NEW EPISODE DROP:
The Future of Human + AI Collaboration | What Are the Smarter Ways to Use AI?

I sat down with Dimitriy Wolf — social engineer, psycholinguist, and advisor to presidents, sheikhs, and TEDx speakers — to break down why most people use AI wrong.
We covered how to think *with* AI (not just prompt it), how to build authentic brand voices, and what yachts have to do with artificial intelligence.
If you're still using ChatGPT like a search bar — you're getting left behind.
⚡ Trends for the Future
The Reinforcement Gap — or why some AI skills improve faster than others

AI skills with clear metrics improve faster than subjective ones.
AI coding tools are advancing rapidly, with models like GPT-5, Gemini 2.5, and Sonnet 4.5 enabling developers to automate increasingly sophisticated tasks. However, other AI capabilities like email writing show minimal improvement compared to a year ago, creating an uneven landscape of AI progress.
This disparity stems from reinforcement learning (RL), which has become the primary driver of AI development over the past six months. Coding applications benefit enormously from RL because they can be tested against billions of measurable outcomes. Code either works or it doesn't, providing clear pass-fail metrics that enable automated training at massive scale without human intervention.
Software development represents an ideal testing ground for reinforcement learning. The industry already employs systematic validation through unit testing, integration testing, and security testing. These established frameworks translate directly into validating AI-generated code and powering reinforcement learning systems.
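To make the pass/fail idea concrete, here is a minimal sketch, assuming Python with pytest installed, of how a model-generated snippet could be scored simply by running a test suite. The `reward_for_candidate` helper, file names, and toy test are illustrative inventions, not a description of any lab's actual training pipeline.

```python
# Hypothetical sketch: turning a unit-test run into a binary reward signal
# for reinforcement learning on generated code.
import subprocess
import sys
import tempfile
from pathlib import Path

def reward_for_candidate(candidate_code: str, test_code: str) -> float:
    """Return 1.0 if the candidate passes its tests, else 0.0."""
    with tempfile.TemporaryDirectory() as workdir:
        # Write the generated solution and its tests into a scratch directory.
        Path(workdir, "solution.py").write_text(candidate_code)
        Path(workdir, "test_solution.py").write_text(test_code)
        # pytest exits with code 0 only when every test passes,
        # giving exactly the pass/fail outcome described above.
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-q", "test_solution.py"],
            cwd=workdir,
            capture_output=True,
            timeout=30,
        )
    return 1.0 if result.returncode == 0 else 0.0

# Example: a model-generated snippet and the tests that grade it.
candidate = "def add(a, b):\n    return a + b\n"
tests = "from solution import add\n\ndef test_add():\n    assert add(2, 3) == 5\n"
print(reward_for_candidate(candidate, tests))  # 1.0 if the tests pass
```

Because the score needs no human in the loop, this kind of check can be run millions of times, which is what lets coding skills improve so quickly under RL.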
In contrast, subjective skills like writing emails or generating chatbot responses lack clear evaluation metrics, making them difficult to improve through RL. This creates what experts are calling the "reinforcement gap" between easily testable and subjectively assessed capabilities.
Not every task falls neatly into these categories. While accounting reports don't have built-in testing frameworks, well-funded startups could develop custom validation systems. The key factor determining whether a process becomes a functional product versus just a demo lies in its underlying testability.
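As a toy illustration of what such a custom check might look like, here is a hypothetical validator for an AI-generated ledger, assuming a simple invented data shape; nothing here comes from a real accounting product.

```python
# Hypothetical domain-specific validator: the kind of custom check a startup
# could build for a task with no off-the-shelf test framework.
def ledger_is_balanced(entries: list[dict]) -> bool:
    """Check the basic double-entry invariant: total debits equal total credits."""
    debits = sum(e["amount"] for e in entries if e["side"] == "debit")
    credits = sum(e["amount"] for e in entries if e["side"] == "credit")
    return round(debits, 2) == round(credits, 2)

# A generated report either satisfies the invariant or it doesn't,
# which yields the same pass/fail signal reinforcement learning needs.
generated_entries = [
    {"side": "debit", "amount": 120.00},
    {"side": "credit", "amount": 120.00},
]
print(ledger_is_balanced(generated_entries))  # True
```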
Surprisingly, some seemingly subjective tasks prove more testable than expected. OpenAI's Sora 2 demonstrates remarkable progress in AI video generation, with consistent object persistence, stable facial features, and physics-compliant motion. This suggests sophisticated RL systems operating behind the scenes for each quality metric.
As reinforcement learning continues dominating AI development, the reinforcement gap will likely widen, with profound implications for startups and the broader economy. Processes on the testable side of this gap face potential automation, while those requiring subjective judgment may remain human-dependent longer.

⚡ You’re Smart. Strategic. Intentional. So Let’s Be Real:
Where Are You Holding Back — On Purpose?
Every smart operator has something they’re intentionally avoiding.
Not because they’re lazy — but because it’s risky, unknown, or just... a bit *uncomfortable*.
So what’s that thing for you?
That one decision, move, or experiment you know would push you forward — but you’ve been choosing not to do it.
Tell us — and we’ll share a tactical POV or tool to help you rethink it.
No fluff. Just momentum.
AI will be humanity's greatest collaborative tool, amplifying our collective wisdom to solve problems that have persisted for generations and create unprecedented breakthroughs.
Jane Fraser is the CEO of Citigroup, making her the first woman to lead a major Wall Street bank in its more than 200-year history. Her leadership in global banking and financial services has focused on digital transformation and sustainable finance, and she continues to advocate for using AI and advanced technologies to create more inclusive financial systems while managing risk responsibly in an increasingly complex global economy.
🎬 How to Use AI Agents to Build High-Converting Video Ads
Build once. Scale everywhere. In just minutes.

What if: Instead of spending weeks scripting, editing, and testing ads… you could build 10 variations of a high-converting video in 15 minutes — with agents handling the voiceover, visuals, call-to-action, and even the hook testing?
That’s exactly what we’ll show you this week — live.
Why This Changes Everything for Founders 🧠
- 🎥 Fast-Track Ad Creation: From idea to launch-ready video in under an hour — no team needed.
- 🔁 Test Like a Pro: Run A/B tests on 10 hooks, CTAs, or visuals — while you sleep.
- 💰 Better ROAS, Less Guessing: Let agents learn what your audience *actually* responds to.
- 📦 Works for Any Product: Digital or physical — agents adapt to your niche & brand voice.
📅 Join the Live Session w/ code Supercharged
We’ll walk you step-by-step through building your own ad-generating agent — no code needed.
👉 Click here to save your seat for the webinar
You’ll leave with a working prototype, free templates, and 5x more speed in your next ad campaign.
🎁 Bonus for Attendees:
- ✅ AI Agent Template for Ad Generation
- ✅ "Top 10 Hooks That Convert" Prompt Pack
- ✅ Entry to win a Free AI Strategy Audit
Episode 1: AI Education Without Limits
I sat down with Dr. José Fernández from the Miami Dade College AI Center to explore how AI is transforming education.
We discussed how students today can access world-class AI training with almost zero student loan debt — making innovation more accessible than ever.
Episode 2: What if your story were your sales strategy?
I sat down with Maury Rogow, a Hollywood producer and branding mastermind behind $2.5B in brand growth.
We talked about why the brands that feel effortless aren’t the loudest — they’re just the most aligned.
🌡️ Use the Satisfaction Thermometer to show us how much you enjoyed The Supercharged today ;)

The Supercharged is aiming to be the world's #1 AI business magazine and is on a mission to empower 1,000,000 entrepreneurs worldwide by 2025, guiding them through the transition into the AI-driven creative age. We're dedicated to breaking down complex technologies, sharing actionable insights, and fostering a community that thrives on innovation, all in service of becoming the ultimate resource for businesses navigating the AI revolution.
The Supercharged is the #1 AI Newsletter for Entrepreneurs, with 25,000+ readers working at the world’s leading startups and enterprises. The Supercharged is free for the readers. Main ads are typically sold out 2 weeks in advance. You can book future ad spots here.
I'm sending this email because you registered for one of our workshops or one of our affiliates referred you. You can unsubscribe at the bottom of each email at any time.