AI-washing — the practice of faking AI capabilities to deceive investors — has just put one founder’s freedom on the line.

A startup raised $42 million from investors by promising a cutting-edge AI that could automatically complete online purchases. Neural networks, they said. Proprietary technology, they said. What they didn’t say: workers in the Philippines were manually clicking buttons for nearly every single transaction. The startup was Nate, Inc. The founder is now facing federal fraud charges. And Nate isn’t some obscure anomaly — it’s the most dramatic example of a problem that’s quietly spreading across Silicon Valley, Wall Street, and every boardroom in between. It’s called AI-washing. And it might be the biggest corporate lie of our generation.

WHAT IS AI-WASHING, EXACTLY?

AI-washing is simple in concept: a company claims to use artificial intelligence when it either doesn’t use it at all, or uses it in a far more limited way than advertised. The goal is always the same — higher valuation, more investor capital, more customers. The tactics vary. Some companies slap “AI-powered” on software that’s nothing more than basic automation with a shiny interface. Others claim “proprietary AI” when they’re just reselling OpenAI’s API with a markup.

Some go further, fabricating performance metrics, staging fake demos, and even lying about their executives’ academic credentials in AI fields.

Here’s the thing that makes AI-washing particularly insidious: the line between honest marketing enthusiasm and outright fraud is genuinely blurry. Saying “AI will transform our industry” is probably just puffery. Saying “our AI eliminates 90% of human labor” when it doesn’t? That’s potentially criminal. Securities lawyers are now actively litigating exactly where that line sits.

THE SEC HAS HAD ENOUGH

For years, regulators watched the AI hype cycle with mild concern. That era is over. In March 2024, the SEC filed its first explicit AI-washing enforcement cases — and they chose their targets carefully to send a message. Delphia, a registered investment adviser, had been telling clients that it used their social media and banking data with sophisticated AI and machine learning to make better investment decisions. The SEC investigated and found that while Delphia intermittently collected such data between 2019 and 2023, it never actually used any of it in AI models or investment algorithms. The “AI” was essentially marketing fiction. Penalty: $225,000 and a cease-and-desist order.

Global Predictions, another investment adviser, marketed itself as offering “expert AI-driven forecasts” and even claimed to be the “first regulated AI financial advisor.” The SEC found they couldn’t substantiate these claims with anything remotely credible. Penalty: $175,000 and a cease-and-desist order. Neither company admitted wrongdoing.

Neither penalty is particularly large. But the signal was unmistakable: the SEC was now treating AI claims like any other material disclosure — and they would check.

Then came Presto Automation. Presto made AI voice assistants for drive-thru restaurants. They marketed the product as eliminating the need for human order takers. What they didn’t disclose: from late 2021 through 2022, their deployed units relied entirely on a third-party AI provider. And after they switched to their own models, they still relied heavily on off-site human workers to process most orders.

The SEC issued a cease-and-desist order in January 2025. No financial penalty this time — Presto cooperated and was in rough financial shape — but the formal order made clear that even partial AI-washing in SEC filings is treated as a disclosure violation. My take: the leniency toward Presto won’t last. As the SEC builds more AI-specific expertise through its new Cyber and Emerging Technologies Unit (CETU), expect penalties to get significantly larger.

THE $42 MILLION FRAUD: THE NATE STORY IN FULL

The Nate case is worth understanding in detail because it shows exactly how far AI-washing can go when there’s real money involved. Alberto Saniger founded Nate with a genuinely appealing pitch: an AI that could shop online for you. Point it at any e-commerce site, and its neural networks would navigate the checkout process autonomously. No more entering card details on every site. No more friction. Investors loved it. Nate raised over $42 million in seed and Series A rounds.

What Saniger allegedly didn’t tell them: the AI barely worked. Nearly all transactions were completed manually by contractors working offshore. The “automation success rates” he provided to investors — allegedly above 90% — were fabricated. The demos he ran for potential investors were staged to make it appear the system was working autonomously.

In April 2025, the SEC filed civil charges, and the DOJ separately filed criminal wire fraud charges. Saniger faces potential prison time.

What makes this case significant beyond the dollar amounts is the personal exposure. The SEC is seeking to bar Saniger from serving as an officer or director of any public company, along with disgorgement of the roughly $3 million he personally made from share sales while the fraud was ongoing. AI-washing isn’t just a company-level risk anymore. Founders and executives are personally on the hook.

THE DOCGO CASE: WHEN CREDENTIALS BECOME FRAUD

The DocGo case introduced a new wrinkle to AI-washing litigation that nobody had fully anticipated: fake credentials. DocGo is a mobile medical and telehealth company that heavily promoted its AI-driven care coordination capabilities. Their then-CEO Anthony Capone reportedly claimed a graduate degree in “computational learning theory” — a credential that directly supported the narrative that DocGo had serious AI expertise at the top.

The degree, investigators found, didn’t exist. A securities class action in the Southern District of New York alleged that DocGo’s AI claims and Capone’s fabricated credentials together misled investors about the company’s technological sophistication. The court denied DocGo’s motion to dismiss — meaning the case was credible enough to proceed — and in November 2025, Judge Katherine Polk Failla granted preliminary approval to a $12.5 million settlement.

Final approval is scheduled for March 2026. The DocGo case established something important: if you’re going to use AI as a central part of your investor story, the credentials of the people supposedly building that AI are material information. Lie about them, and you’ve compounded your legal exposure significantly.

THE CLASS ACTION EXPLOSION

SEC enforcement is one thing. Private litigation is another, and right now it’s moving faster. AI-related securities class actions more than doubled from 2023 to 2024. By late 2025, there were approximately 29 federal filings. And unlike many securities cases, these are hard to dismiss: plaintiffs in AI-washing cases survive motions to dismiss and reach discovery at significantly higher rates than in other securities litigation.

The recurring theories from plaintiffs’ firms follow clear patterns. The most common is straightforward exaggeration of AI capabilities: Innodata allegedly claimed advanced AI automation while relying on manual overseas labor and minimal actual AI headcount, and Oddity Tech reportedly marketed “proprietary AI product-matching” that turned out to be a simple rules-based questionnaire.

Then there are companies concealing AI’s limitations while making rosy projections, and companies presenting false validation data: Evolv Technologies allegedly claimed its AI weapons-detection product was independently tested and validated, when plaintiffs allege the testing was manipulated. Tempus AI represents a subtler version: a company marketed as an AI healthcare company where most actual revenue came from non-AI services and acquisitions.

The “AI company” valuation premium, plaintiffs argue, was built on a misleading picture of what the business actually was. What do all these cases have in common? Companies that let their marketing departments write checks that their technology departments couldn’t cash.

HOW TO SPOT AI-WASHING BEFORE IT COSTS YOU

Whether you’re an investor, a customer, or a journalist covering AI companies, these are the red flags that should make you stop and ask harder questions.

The first is buzzwords without substance. Overuse of “AI-driven,” “machine learning-enabled,” or “proprietary AI” with no explanation of what the model actually does, how it’s trained, or what data it uses. Legitimate AI companies can answer these questions. AI-washing companies deflect them.

The second is inability to quantify. Ask: what percentage of the workflow is actually automated? What’s the error rate? How does performance compare to your baseline without AI? A company with real AI can give you real numbers. A company doing AI-washing gives you adjectives.

The third is no independent validation: third-party testing, academic partnerships, recognized benchmarks. If all the performance claims come from the company’s own marketing team, that’s a problem.

The fourth is too-good-to-be-true promises. “Flawless accuracy.” “Zero human intervention.” “Fully autonomous.” Real AI researchers are deeply familiar with the limitations of current systems. Companies making these claims either don’t understand their own technology or are deliberately misrepresenting it.

The fifth is the team itself. Are there actually credible AI researchers and engineers in leadership or technical roles? Can you verify their credentials independently? The DocGo case showed that fake credentials tied to AI claims are material information.

For investors specifically, the practical due diligence questions are:

  • What type of model are you using and why?
  • What metrics demonstrate AI’s incremental value over a baseline?
  • What percentage of the workflow is genuinely automated?
  • What data sources feed the model and how are they validated?
  • Who independently audits performance?

If a company can’t answer these clearly and specifically, treat its AI claims with significant skepticism.
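The “adjectives versus numbers” test behind these questions can even be roughed out in code. The sketch below is a toy illustration, not a vetted screening tool: the phrase lists and the suspicion threshold are assumptions made up for demonstration.

```python
# Toy red-flag scanner for AI marketing copy. The phrase lists and the
# threshold below are illustrative assumptions, not a validated rubric.

BUZZWORDS = ["ai-powered", "ai-driven", "proprietary ai", "machine learning-enabled"]
OVERPROMISES = ["flawless accuracy", "zero human intervention", "fully autonomous"]
SUBSTANCE = ["error rate", "benchmark", "baseline", "training data", "third-party", "audit"]

def red_flag_score(pitch: str) -> dict:
    """Count hype phrases versus substantive, measurable claims in a pitch."""
    text = pitch.lower()
    buzz = sum(text.count(p) for p in BUZZWORDS)
    hype = sum(text.count(p) for p in OVERPROMISES)
    substance = sum(text.count(p) for p in SUBSTANCE)
    # Heuristic: plenty of buzzwords and promises, zero measurable claims.
    return {
        "buzzwords": buzz,
        "overpromises": hype,
        "substance": substance,
        "suspicious": (buzz + hype) >= 2 and substance == 0,
    }

pitch = ("Our proprietary AI is fully autonomous, delivering flawless accuracy "
         "with zero human intervention.")
print(red_flag_score(pitch))
```

A real analysis would weigh context, filings, and technical disclosures, of course. The point is simply that substantive claims are countable and checkable, while hype is not.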

WHAT HAPPENS WHEN YOU GET CAUGHT

The consequences of AI-washing have moved well beyond reputational damage. On the civil enforcement side: financial penalties, cease-and-desist orders, disgorgement of personal gains, and bars on serving as an officer or director of a public company. On the criminal side: the DOJ has made clear that existing wire fraud and securities fraud statutes are sufficient to prosecute AI-washing. The Joonko Diversity case in 2024 was the first criminal AI-washing prosecution.

Nate followed. Each fraud charge carries a potential prison sentence of up to 20 years. On the litigation side: class actions that survive to discovery, where all the internal communications about what the AI actually did become discoverable. This is where reputations — and companies — truly get destroyed.

And underlying all of it: when the AI-washing is exposed, the stock falls, the fundraising dries up, the enterprise customers leave, and whatever real business existed underneath the AI narrative has to rebuild from scratch with none of the credibility it started with. The math doesn’t work. It never did.

THE BOTTOM LINE

AI-washing is happening at scale. The pressure to appear AI-native — from investors, from customers, from competitors — is intense enough that companies are making claims they can’t substantiate, and some are crossing into outright fraud. The regulatory response is accelerating. The SEC’s new Cyber and Emerging Technologies Unit exists specifically to pursue these cases.

The DOJ is treating AI fraud as ordinary fraud, which means the tools and the appetite for prosecution are already in place. For investors: demand specifics, verify credentials, and treat “AI-powered” claims the same way you’d treat any other material financial claim — with evidence, not marketing.

For companies: the era of getting away with vague AI claims is ending. Substantiate everything, document everything, and ensure your actual product matches what your sales team is promising. The companies building real AI have nothing to fear from this scrutiny. The ones who don’t are running out of time.

AINetizens signing off!