The Spreadsheet That Lied
What a Viral Reddit Post Should Teach Every Business About AI
There’s a Reddit post making the rounds right now, and if you work anywhere near data, analytics, or business intelligence, you’re going to want to sit down for this one.
A user on the subreddit r/analytics recently posted what might be the most expensive lesson in recent tech history. Their company had been using an AI agent since November to answer leadership questions about metrics. Fast answers. Detailed explanations. Everybody loved it. The C-suite was happy. The board was informed. The VP of sales was making territory decisions with confidence.
There was just one small problem.
The AI had been making up the numbers. All of them. For three months.
The CFO presented fabricated insights to the board. Territory strategies were redrawn based on data that never existed. And the only reason anyone found out? Someone asked a mid-level employee to double-check one figure. One routine request. And when she started digging, the whole house of cards came down.
“Holy sh*t, it’s bad,” the poster wrote. That’s probably an understatement.
The part that should keep you up at night
Here’s what gets me about this story — and I say this as someone who spent over a decade in a newsroom before launching my own content strategy business.
The AI didn’t lie. Not in the way a dishonest employee might lie by covering their tracks and hoping you don’t look too closely. The AI simply did what it was built to do. It generated plausible-sounding, well-formatted, confident responses to questions it was asked. It pattern-matched. It synthesized. It produced outputs that looked and felt like analysis.
It just had absolutely no idea whether any of it was true.
That distinction matters enormously. Because the people relying on those outputs had no reason to suspect otherwise. The AI didn’t hedge. It didn’t say “I’m not certain, but...” It delivered percentages with the serene confidence of someone who did the math. Which is, if we’re being direct, a somewhat terrifying feature for a tool being used to inform million-dollar decisions.
I’ve seen a version of this before. Not with AI, but with sources. Early in my journalism career, I interviewed a man who gave me a beautifully detailed account of an event he claimed to have witnessed firsthand. Compelling quotes. Vivid details. A clean narrative arc. My editor loved the story.
My editor also happened to catch (about an hour before publication) that the man had been three states away when the event occurred.
Plausible is not the same as true. Confident is not the same as correct. And fast is definitely not the same as verified.
Why this isn’t an AI problem. It’s an oversight problem.
I’m not here to take a sledgehammer to artificial intelligence. AI tools are genuinely useful. They can accelerate workflows, identify patterns in large datasets, surface connections that might take a human analyst days to find, and draft summaries that save hours of formatting time.
But here’s the thing — and the analytics world seems to be learning it the hard way — AI is a powerful assistant. It is not a journalist. And the difference between those two roles is not just a matter of training or professional pedigree. It’s a matter of fundamental operating principles.
When I was in journalism school, my mentor didn’t just teach me how to write. He taught me how to be professionally paranoid. Check every name. Verify every date. If your mother tells you she loves you, get a second source. (I’m kidding. Mostly.) The cornerstone of the entire discipline was that before anything goes out under your name, you confirm it. Not because you assume people are lying to you, but because errors happen, memories are imperfect, and the responsibility for what gets published belongs entirely to you.
That discipline isn’t intuitive. It has to be trained into you: through repetition, through editors who send stories back with stern notes, through the lived experience of almost getting something catastrophically wrong and feeling in your stomach how close you came.
AI doesn’t have that stomach. It doesn’t feel the weight of accountability. It just generates the next token in the sequence.
What journalists do (that AI doesn’t)
I know journalists, as a profession, have taken a bit of a reputational hit in recent years. And I won’t pretend the industry is without flaws. But the core methodology of rigorous journalism — the actual techniques that produce reliable information — is exactly what was missing from that company’s analytics workflow.
Here’s what a trained journalist brings to the table that no language model currently replicates:
A journalist corroborates. One source isn’t a story; it’s a tip. Real findings are confirmed through multiple independent channels. The VP of sales, the CFO, and the board were all working from a single source: the AI’s output. No one checked it against the raw data. No one called a second number.
A journalist traces provenance. Where did this number come from? Who collected it? When? Using what methodology? For what purpose? These aren’t pedantic questions. They’re the difference between a statistic you can stand behind and a statistic that makes you look foolish in front of your board. (A minimal sketch of what enforcing that looks like in code follows this list.)
A journalist identifies what’s missing. One of the most underrated skills in research and reporting is noticing what’s conspicuously absent. Why didn’t the AI flag uncertainty? Why were all the numbers so clean, so tidy, so narrative-friendly? Messy data, honestly reported, usually looks messier than this.
A journalist asks uncomfortable questions. Not because they enjoy making people squirm, but because discomfort is often where the truth lives. “Can you show me where this figure comes from?” is a question that should be asked every single time. It was not asked here. For three months.
A journalist doesn’t publish until facts are confirmed. Full stop. Deadline pressure is real and relentless, but the standard doesn’t bend.
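None of this has to stay abstract. As promised above, here’s a minimal sketch, in Python, of what tracing provenance could look like if you enforced it in code. Every name in it is hypothetical (the class, the fields, the guard function); the idea is simply that a figure with no source and no methodology never makes it into a presentation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class SourcedFigure:
    """A metric that is not allowed to exist without provenance."""
    name: str
    value: float
    source: str         # e.g. the warehouse table or report it came from
    collected_on: date  # when the underlying data was pulled
    methodology: str    # one line on how the number was computed

def accept_for_presentation(figure: SourcedFigure) -> SourcedFigure:
    """Refuse any figure whose provenance fields are empty."""
    if not figure.source.strip() or not figure.methodology.strip():
        raise ValueError(f"'{figure.name}' has no traceable source. Do not present it.")
    return figure
```

It’s crude, but it encodes the journalist’s question (“where did this number come from?”) as a requirement rather than a courtesy.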
The cost of skipping the verification step
Let’s talk about what plausible-sounding but unverified information costs in practice, because I think the r/analytics post undersells it a little.
Territory decisions were made. That means sales teams may have been sent to the wrong markets, compensation structures were likely built on false assumptions, and opportunities in other regions may have been deprioritized, all based on data that didn’t exist.
A board presentation was delivered with fake insights. Boards make consequential decisions — about capital allocation, hiring, strategic direction, acquisitions — based on what leadership brings them. If those decisions were made on fabricated data, the downstream consequences could take years to fully unwind.
And perhaps most damaging? Trust was shattered. Not just in the AI tool, but in the entire analytics function. The next time someone in that organization presents a data-driven recommendation, there will be a voice in the back of every stakeholder’s mind whispering, “But how do we know this is real?”
That voice doesn’t go away quickly.
Speed without scrutiny is a liability dressed as an asset
The seductive thing about AI-generated insights is the speed at which they’re delivered. Answers in seconds. Formatted beautifully. Ready to drop directly into a slide deck. Nobody has to wait three days for a human analyst to dig through raw data. Leadership gets what they need, when they need it, and everyone feels like they’re operating at the cutting edge.
Until they’re not.
Speed without scrutiny isn’t efficiency. It’s a liability that hasn’t sent the invoice yet. Insight without verification isn’t intelligence. It’s confidence on borrowed time. And automation without oversight isn’t innovation. It’s just a faster way to compound errors at scale.
The question every organization should be asking — right now, today, before the next quarterly review — is a simple one. Who is responsible for confirming this is true?
Not who generated the output. Not who formatted the deck. Who verified it? Who traced those numbers back to their source? Who signed their name to the claim that these figures are real?
If the honest answer is nobody, that’s the thing that needs to change.
What this means going forward
AI tools aren’t going away, nor should they. The goal isn’t to eliminate artificial intelligence from your analytics workflows. The goal is to use it in ways that don’t create catastrophic blind spots.
That means treating AI outputs as drafts, not deliverables. It means building verification checkpoints into your processes before AI-generated analysis makes its way into decisions that affect revenue, hiring, or investors. It means having a human — ideally someone with actual research and analytical training — review the work before it goes up the chain.
It means, in short, applying something very close to a journalistic standard to your data.
A well-trained researcher or analyst brings to the table exactly what the AI in that company’s workflow was missing: the professional instinct to say wait, let me check that before hitting send. The habit of skepticism. The understanding that a number without a source isn’t a fact; it’s a guess in business casual. One concrete version of that check-before-hitting-send instinct is sketched below.
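Here’s that sketch, again in Python and again entirely hypothetical: the file name, the column, and the tolerance are invented, and a real pipeline would use whatever aggregation the metric actually requires. The shape of the checkpoint is the point. The AI’s number gets recomputed from the raw data, by a human-owned process, before anyone presents it.

```python
import pandas as pd

def verify_ai_metric(ai_value: float, raw: pd.DataFrame,
                     column: str, tolerance: float = 0.01) -> bool:
    """Recompute a metric from source-of-truth data and compare it to the
    AI-reported value. True only if they agree within a relative tolerance."""
    recomputed = raw[column].sum()  # swap in whatever aggregation applies
    if recomputed == 0:
        return ai_value == 0
    return abs(ai_value - recomputed) / abs(recomputed) <= tolerance

# Hypothetical usage: the deck does not ship until this passes.
# sales = pd.read_csv("q4_sales_raw.csv")  # the actual raw data
# assert verify_ai_metric(1_240_000.0, sales, column="revenue")
```

The tolerance is a judgment call: tight enough to catch fabrication, loose enough to absorb rounding. What matters is that the comparison happens at all, every time, before the deck goes out.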
The Reddit poster ended their post with “I only caught it by accident when someone asked me to double check something.”
Let that sink in. Three months of fabricated analytics, executive decisions, and board presentations. Caught by accident. Because someone asked one employee to double-check one thing.
That’s not a story about AI failure. That’s a story about what happens when verification disappears from the workflow.
Don’t let it be your story.
Have questions about how to build better verification practices into your content and analytics work — or just want to talk through how to avoid over-reliance on AI in your organization? Email me. I’d love to chat.