Sponsored by

Experts Would Invest $100,000 in This Alternative Now

A new Knight Frank report made an unexpected declaration. It revealed that 44% of family offices are now investing more in residential real estate. And you don't need to be Warren Buffett to see why.

Since 2000, residential real estate has outperformed the S&P 500 by 70% in total returns. It's the only asset that pays you to own it, grows while you sleep, and shields your gains from the IRS.

That's why you need mogul. It's a real estate platform that lets you invest in institutional-grade rental properties. You get monthly rental income, capital appreciation, and tax benefits without a down payment or 3 a.m. tenant calls. In fact, over 20,000 investors have already joined.

Here's Why:

• Tax Benefits

• 7%+ annual yields

• 18.8% avg annual IRR

TLDR: You can invest in high-quality real estate for a fraction of the cost. Why wait?

Past performance isn't predictive and is shown for illustration only. Investing risks principal; this is not an offer of securities. See important Disclaimers.

Good morning ☀️, leader of the next generation.

Technology is reshaping how the world works.

The real question is not what it can do, but how we choose to use it.

We focus on using technology to support human evolution.

Progress works when responsibility stays human.

⚡ WHAT'S AT STAKE TODAY ⚡

  1. 🎵🤖 Spotify tests new tool to stop AI slop from being attributed to real artists
  2. 💰📊 Doss raises $55M for AI inventory management that plugs into ERP
  3. 🤖🤝 Agile Robots becomes the latest robotics company to partner with Google DeepMind
  4. 🎬💸 Mirage raises $75M to continue building models for its AI video-editing app Captions
  5. 🍎🎨 Meet the former Apple designer building a new AI interface at Hark
  6. 🔒📝 Talat's AI meeting notes stay on your machine, not in the cloud
  7. 💬🛒 OpenAI's plans to make ChatGPT more like Amazon aren't going so well
  8. 💍🔐 Databricks bought two startups to underpin its new AI security product
  9. 🧠⛓️ Anthropic hands Claude Code more control, but keeps it on a leash

Spotify tests new tool to stop AI slop from being attributed to real artists

Spotify is taking decisive action against the growing problem of AI-generated music being falsely attributed to legitimate artists. The streaming giant has launched a beta test for its groundbreaking "Artist Profile Protection" feature, giving musicians unprecedented control over what appears under their name on the platform.

The new tool addresses a critical issue plaguing the music industry: the surge of easily produced AI tracks flooding streaming services and landing on the wrong artist profiles. This misattribution problem has been exacerbated by the rise of artificial intelligence music generation tools, creating headaches for both artists and fans.

"Music has been landing on the wrong artist pages across streaming services, and the rise of easy-to-produce AI tracks has made the problem worse," Spotify explained in its announcement. The company has made protecting artist identity a top priority for 2026, positioning this as a first-of-its-kind solution to an industry-wide challenge.

How does it work? Artists enrolled in the beta program can now review and approve or decline releases before they appear on their profiles. Only approved content will contribute to their statistics, appear in user recommendations, or show up on their official artist page. This gives musicians complete control over their digital identity on the platform.
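The approval gate described above can be sketched in a few lines. This is a purely illustrative model: Spotify has not published an API for Artist Profile Protection, so every class, field, and method name here is a hypothetical stand-in for the behavior the announcement describes.

```python
# Hypothetical model of the Artist Profile Protection approval gate.
# All names (Release, ArtistProfile, etc.) are invented for illustration;
# Spotify exposes this as a dashboard feature, not a public API.
from dataclasses import dataclass, field

@dataclass
class Release:
    title: str
    status: str = "pending"   # pending -> approved | declined

@dataclass
class ArtistProfile:
    name: str
    releases: list = field(default_factory=list)

    def submit(self, release: Release):
        # New submissions are held for review instead of auto-publishing;
        # in the real product this is where the artist gets an email.
        self.releases.append(release)

    def review(self, release: Release, approve: bool):
        release.status = "approved" if approve else "declined"

    def public_catalog(self):
        # Only approved content appears on the profile or counts toward
        # stats and recommendations; pending and declined items do not.
        return [r.title for r in self.releases if r.status == "approved"]
```

The key design point the feature introduces is the default: a submission starts as "pending" and contributes nothing until the artist explicitly approves it, inverting the old publish-first, report-later model.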

The timing of Spotify's announcement is particularly significant, coming just one week after Sony Music revealed it had requested the removal of over 135,000 AI-generated songs that were impersonating its artists across various streaming platforms. This massive cleanup effort highlights the scale of the problem facing the industry.

Spotify acknowledges that while open distribution has democratized music publishing for independent artists, it has also created opportunities for both accidental mistakes and malicious exploitation. Songs can end up on incorrect profiles due to metadata errors, confusion between artists sharing similar names, or deliberate attempts to hijack established artists' profiles for fraudulent gain.

When misattribution occurs, the consequences are far-reaching. It can distort an artist's catalog, skew their performance statistics, affect their Release Radar placement, and fundamentally alter how fans discover their music. These issues have made profile protection one of the most requested features from artists over the past year.

The new feature isn't designed for every artist on the platform. Instead, Spotify is targeting musicians who have experienced repeated instances of incorrect releases, those with common names that create confusion, or artists who simply want greater control over their profile content.

For beta participants, the Artist Profile Protection feature is accessible through their "Spotify for Artists" dashboard on both desktop and mobile web platforms. Once activated, artists receive email notifications whenever new music is submitted to Spotify with their name attached, allowing them to make informed decisions about what gets published under their brand.

This development represents a significant shift in how streaming platforms approach content moderation and artist protection. Rather than relying solely on post-publication reporting and removal processes, Spotify is implementing a proactive approval system that prevents problematic content from ever reaching listeners.

The initiative also reflects the broader challenges facing the music industry as AI technology becomes more sophisticated and accessible. As artificial intelligence tools make it easier than ever to create convincing musical content, platforms must evolve their protection mechanisms to maintain trust and integrity in their ecosystems.

While still in beta testing, this feature could set a new industry standard for artist protection on streaming platforms, potentially influencing how competitors approach similar challenges in the rapidly evolving landscape of AI-generated content.

πŸ” Which AI Dilemma Should We Tackle First?

🧠 I've broken down the 5 biggest challenges we face with AI and humanity today. But now I'm curious: Which one intrigues you the most? Which rabbit hole should we explore first, together?

2026's biggest media shift

Attention is the hardest thing to buy. And everyone else is bidding too.

When people are scrolling, skipping, swiping, and split-screening their way through the day, finding uninterrupted moments when your audience is truly paying attention becomes the priority.

That's where Performance TV stands out.

Check out the data from 600+ marketers on the most effective channels to capture audience attention in 2026.

Doss raises $55M for AI inventory management that plugs into ERP

Doss has raised $55 million in Series B funding led by Madrona and Premji Invest to enhance AI-powered inventory management. The startup integrates with existing accounting systems, addressing gaps in traditional ERPs and new AI-native platforms like Rillet and Campfire.

Founded in 2022, Doss pivoted from core accounting to focus on inventory management integration. The company targets mid-market consumer brands generating $20 million to $250 million in revenue, partnering with AI ERP companies rather than competing directly against legacy systems like NetSuite.

🎙 New Episode: Turn Views Into Revenue with Ivan Unfiltered

Ivan Unfiltered Podcast

Most businesses are posting content…
But very few are turning it into revenue.

In this episode, I sit down with Ivan Unfiltered, founder of Viral Video Labs and the force behind one of the biggest podcasts coming out of Las Vegas.

Ivan doesn't just create content. He builds content systems that convert.

Through Viral Video Labs, he helps entrepreneurs and brands:

  • Stop the scroll
  • Capture real attention
  • Turn short-form video into leads, sales, and authority

We break down:

  • 🔥 Why most businesses fail at short-form
  • 🔥 The difference between viral and profitable
  • 🔥 How to build a repeatable content machine
  • 🔥 The future of short-form media

If you're serious about growing your brand online, this episode is a must-watch.

👉 Explore the Supercharged Podcast

⚑ Trends for the Future

Anthropic hands Claude Code more control, but keeps it on a leash

Anthropic introduces auto mode for Claude Code with AI-powered safety controls.

Developers working with AI coding tools currently face a frustrating dilemma: constantly supervise every action the AI takes or risk letting it run wild with potentially dangerous consequences. Anthropic's latest Claude update aims to solve this problem by introducing an "auto mode" that lets the AI autonomously decide which actions are safe to execute.

The new feature, currently in research preview, represents the industry's broader push toward more autonomous AI tools that can act without constant human oversight. The challenge lies in finding the right balance between speed and safety: too many restrictions slow down development, while too few create unpredictable risks.

Auto mode works by implementing AI-powered safeguards that review each potential action before execution. The system checks for risky behavior that users didn't request and screens for prompt injection attacks, where malicious instructions hidden in content could cause unintended actions. Safe actions proceed automatically, while risky ones are blocked entirely.
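The gating loop described above can be sketched as follows. This is a toy illustration under stated assumptions: Anthropic has not published its safety layer's criteria, and the real check is model-based, so the heuristic classifier and all function names here are invented for clarity.

```python
# Toy sketch of an auto-mode style action gate. The real safety layer is
# model-based with unpublished criteria; this heuristic stand-in only
# illustrates the control flow: safe actions run, risky ones are blocked.

RISKY_PATTERNS = ("rm -rf", "curl | sh", "sudo ")

def classify_action(action: str) -> str:
    """Return 'risky' if the proposed action matches a known-dangerous
    pattern, else 'safe'. A production system would also screen for
    prompt-injection payloads and actions the user never requested."""
    return "risky" if any(p in action for p in RISKY_PATTERNS) else "safe"

def run_auto_mode(actions, execute):
    """Execute safe actions automatically, block risky ones, and return
    an audit log of (action, outcome) pairs."""
    log = []
    for action in actions:
        if classify_action(action) == "safe":
            log.append((action, execute(action)))
        else:
            log.append((action, "blocked"))
    return log
```

For example, `run_auto_mode(["ls", "rm -rf /"], lambda a: "ok")` would run the listing but block the deletion; the point of the design is that blocking happens before execution, not after a post-hoc report.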

This builds upon Claude Code's existing "--dangerously-skip-permissions" flag, which gave the AI complete decision-making authority; auto mode adds a crucial safety layer on top of that autonomy. The approach goes beyond similar autonomous coding tools from GitHub and OpenAI by shifting the permission decision-making process from users to the AI itself.

However, Anthropic hasn't revealed the specific criteria its safety layer uses to distinguish between safe and risky actions, information that developers will likely need before widespread adoption. The feature will roll out to Enterprise and API users in the coming days, working exclusively with Claude Sonnet 4.6 and Opus 4.6 models.

The company recommends using auto mode only in isolated, sandboxed environments separate from production systems to limit potential damage if something goes wrong. This launch follows Anthropic's recent releases of Claude Code Review for automated bug detection and Dispatch for Cowork for task delegation to AI agents.

Digital Brainstorm

⚡ Let's Make AI Actually Useful:
What Would Move the Needle in Your Industry?

AI has potential, but generic advice rarely helps.

What would be genuinely valuable for AI to do in your industry right now?

• Automate a painful workflow?
• Improve decision-making?
• Replace a manual process that wastes time?
• Help your team upskill faster?

Tell us what you'd want AI to handle, or where you feel stuck.

We're using these insights to curate industry-specific trainings, live webinars, and practical guidance you can actually apply.

🌡️ Use the Satisfaction Thermometer to show us how much you enjoyed The Supercharged today ;)

How did we do?

The Supercharged is aiming to be the world's #1 AI business magazine, on a mission to empower 1,000,000 entrepreneurs worldwide by 2025 by guiding them through the transition into the AI-driven creative age. We're dedicated to breaking down complex technologies, sharing actionable insights, and fostering a community that thrives on innovation, so we can become the ultimate resource for businesses navigating the AI revolution.

The Supercharged is the #1 AI newsletter for entrepreneurs, with 25,000+ readers working at the world's leading startups and enterprises. The Supercharged is free for readers. Main ad slots typically sell out 2 weeks in advance. You can book future ad spots here.

I'm sending this email because you registered for one of our workshops or were referred by one of our affiliates. You can unsubscribe at the bottom of any email at any time.
