Meta Knew Its Algorithm Was Feeding Children to Predators — A Jury Just Proved It, and Nobody's Going to Jail

A jury in New Mexico just ordered Meta to pay $375 million for deliberately misleading the public about how safe its platforms are for children.
Three hundred and seventy-five million dollars.
That's how much it costs, apparently, when your algorithm feeds kids to sexual predators and you lie about it for years.
Let me tell you what came out during the seven-week trial, because this stuff didn't get nearly enough coverage. And when you see the internal documents — the things Meta's own employees were saying behind closed doors — you're going to understand why I've been saying for years that these companies aren't just negligent. They're complicit.
What the Jury Heard
New Mexico Attorney General Raúl Torrez called the verdict "historic." It's the first time any state has successfully sued Meta over child safety. And the evidence presented during the trial was devastating.
Here's what came out:
Internal Meta research found that 16% of all Instagram users had reported being shown unwanted nudity or sexual activity in a single week.
Let me say that again. In one week, roughly one in six users of Instagram, a platform used by millions of children, reported seeing sexual content they didn't want to see.
Meta knew this. They had the data. They commissioned the research.
And they kept the algorithm running.
Arturo Béjar, a former engineering leader at Meta who quit in 2021 and became a whistleblower, testified about experiments he personally ran on Instagram. The experiments showed that underage users were being served sexualized content by the algorithm. Not by accident. By design — the recommendation system was steering kids toward harmful content because it drove engagement.
And then Béjar said something that should have been front-page news everywhere:
His own daughter was propositioned for sex by a stranger on Instagram.
A Meta engineering leader's own child. On Meta's own platform. And the system that was supposed to protect her? It was the same system he'd been warning his bosses about for years.
He quit. He became a whistleblower. And it took four more years and a state attorney general's lawsuit for anything to happen.
The $375 Million Question
Let's put that $375 million in context.
Meta's revenue in 2025 was approximately $170 billion. That's billion with a B. Their $375 million fine represents roughly 0.2% of annual revenue.
If you made $50,000 a year, that would be like getting fined $100.
For knowingly exposing children to sexual predators.
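If you want to sanity-check that comparison yourself, it's a few lines of Python. The only inputs are the figures already cited in this article (the $375 million fine and the ~$170 billion revenue estimate), nothing audited or official:

```python
# Back-of-the-envelope check of the fine-to-revenue math above,
# using the article's own figures.
fine = 375_000_000          # the New Mexico verdict
revenue = 170_000_000_000   # approximate 2025 revenue, per the article

ratio = fine / revenue
print(f"Fine as share of revenue: {ratio:.2%}")  # ~0.22%, the "roughly 0.2%" above

# The same proportion applied to a $50,000 salary:
scaled = 50_000 * ratio
print(f"Equivalent fine on $50k: ${scaled:.0f}")  # ~$110; the article rounds to ~$100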
Meta's spokesperson said the company "disagrees with the verdict and intends to appeal." She said Meta "works hard to keep people safe" and is "clear about the challenges of identifying and removing bad actors."
Clear about the challenges. That's an interesting way to describe a company whose own internal research showed the problem and whose own engineering leader quit over it.
Here's what kills me: nobody is going to jail. This was a civil case, not criminal. The penalty is financial. And for a company sitting on $170 billion in annual revenue, $375 million is a line item. A cost of doing business. A rounding error.
The algorithm that feeds children to predators? Still running. Right now. As you read this.
If you're looking into Big Tech's internal practices, protect yourself. Use a VPN — these companies track everything, and your ISP logs every page you visit. It's the bare minimum.
"But They Fixed It With Teen Accounts"
Meta's defense during the trial pointed to their 2024 launch of "Teen Accounts" on Instagram, a feature that gives young users more control over their experience. They also highlighted a February 2026 feature that alerts parents when their children search for self-harm content.
Sounds good, right? Here's the problem.
These features are opt-in and parent-dependent. They assume that parents are tech-savvy, that they're constantly monitoring their children's accounts, and that kids aren't creating secondary accounts to bypass restrictions.
If you've ever met a teenager, you know how well that works.
My friend Elena, who teaches middle school in Phoenix, put it to me perfectly: "Every kid in my class has at least two Instagram accounts. Their parents monitor the 'clean' one. The other one is where they actually live online. Meta knows this. They designed it this way."
The recommendation algorithm — the core of what the lawsuit targeted — hasn't fundamentally changed. It still optimizes for engagement. It still surfaces content that keeps users scrolling. And Meta's own research showed that for young users, that means escalating from benign content to progressively more extreme material.
They didn't fix the engine. They put a bumper sticker on it that says "Drive Safe."
This Is Bigger Than Meta
Here's where I need to zoom out, because this story connects to something much larger that I've been tracking for the past year.
We are entering an era where the platforms that control what you see, hear, and believe are simultaneously:
1. Proven to be deliberately harmful (Meta's child safety failures, documented and adjudicated in court)
2. Capable of manufacturing reality itself (AI deepfakes so convincing that even your family can't tell if it's really you)
3. Killing their own tools when those tools become inconvenient (OpenAI just shut down Sora, its AI video generator)
Let me connect these dots.
Your Family Can't Tell If You're Real Anymore
This week, the BBC published an article about a journalist who conducted an experiment. He called his aunt — someone who's known him his entire life — and asked her to determine whether she was talking to the real him or an AI deepfake.
Her answer? She was "like 90% sure" it was him. Then she hesitated. "But that sounded more artificial."
The thing is: she was talking to the real him the entire time. There was no deepfake. And she still couldn't be certain.
Think about what that means. We've reached a point where AI is so sophisticated that the suspicion of it being fake is enough to create doubt, even in people who know you intimately.
This isn't hypothetical. Israeli Prime Minister Benjamin Netanyahu had to post a proof-of-life video this month after the internet decided his original video was a deepfake because of what appeared to be a sixth finger. Experts confirmed the video was real — it was just a trick of the light. But it didn't matter. A significant percentage of people still believe he's dead and Israel is running an AI puppet show.
The first time in history that a sitting leader of a nuclear-armed nation had to publicly prove they're alive.
And it failed.
Hany Farid, a digital forensics professor at UC Berkeley, ran his team's full analysis on Netanyahu's videos — voice analysis, frame-by-frame face detection, light and shadow inspection. "There's no evidence that this is AI-generated," he concluded.
Doesn't matter. The doubt was planted. And once doubt is planted in the age of AI, it can never be fully removed.
OpenAI Killed Sora — Why?
On March 24, 2026 — yesterday — OpenAI announced it was shutting down Sora, its AI video generation tool. The announcement was quiet. Almost... embarrassed.
Sora was supposed to be revolutionary. When OpenAI first showed it off in early 2024, the demos were staggering — photorealistic videos generated from text prompts. A woman walking through a Tokyo street. A woolly mammoth trudging through snow. Videos that looked real.
And then they launched it publicly. And something happened that OpenAI won't fully talk about.
The tool was used to create deepfakes. Obviously. Everyone predicted this. But the scale and speed at which it happened apparently exceeded even OpenAI's worst-case projections.
So they killed it.
But here's my question: did they really kill it?
Or did they just kill public access to it?
Because OpenAI has partnerships with the U.S. government, the Department of Defense, and multiple intelligence agencies. They've been explicit about working with the military on AI applications.
If Sora can generate photorealistic video from text — video that can fool human observers — that's not just a consumer product. That's the most powerful propaganda tool ever created. That's the ability to manufacture evidence, fabricate events, and create reality on demand.
You think the Pentagon is going to let that technology just disappear?
They didn't kill Sora. They classified it.
That's my theory, anyway. Feel free to tell me I'm wrong in the comments.
The Convergence
Let me pull all of this together because the picture it paints is genuinely terrifying.
Meta has been proven in court to operate algorithms that knowingly endanger children, paying fines that amount to pocket change while the harmful systems continue running.
AI deepfakes have reached a level where your own family members can't verify your identity, and world leaders have to post proof-of-life videos that fail to convince the public.
OpenAI quietly kills its video generation tool — the one that could create photorealistic fake reality from a text prompt — right as deepfake concerns reach a fever pitch.
And all of this is happening in a world where:
- Your phone records everything you say (Samsung's own patent proves it)
- AI shopping assistants harvest your conversational data (Walmart killed their ChatGPT checkout when they saw what was really happening)
- Inventions that threaten the status quo get killed before they reach the market
We're not heading toward a post-truth world. We're already living in one.
The platforms that control information have been proven to be predatory. The technology to fake reality is available but being pulled from public hands. And the ability to verify what's real — even through a phone call with your own aunt — is evaporating.
I've been using a VPN for three years now. It's the bare minimum for anyone paying attention to what these companies are doing. Your ISP logs every site you visit. Your phone records your conversations. Your social media feeds are curated by algorithms that don't care about your safety. At least make them work harder to track you.
What Comes Next
The Meta verdict will be appealed. It'll drag through courts for years. The fine will probably be reduced. Nobody will go to jail.
The deepfake problem will get worse. AI models are improving faster than detection tools. Within a year, even forensic experts won't be able to tell real from fake without specialized hardware analysis.
And Sora — or whatever OpenAI calls its replacement — will continue to exist behind closed doors, available to governments and military contractors who can afford the access and don't have to answer to the public.
This is where we are. This is the world the tech companies built while we were scrolling through their feeds.
But here's the thing about truth in the digital age: the more they try to control the narrative, the more cracks appear. Whistleblowers like Arturo Béjar step forward. Juries see through the corporate spin. Journalists run experiments that expose how fragile our sense of reality has become.
The system is breaking. The question is whether we notice before it's too late.
Share this. Not because I want the clicks — but because the people in your life need to understand what's happening. Your kids are on these platforms right now. Your parents are getting deepfaked right now. And the companies responsible just got caught, fined 0.2% of their revenue, and went right back to work.
I'm going to keep writing about this. I'm probably going to regret it.
But somebody has to.
UPDATE (March 25, 2026): Meta has confirmed they intend to appeal the New Mexico verdict. In the same week, a separate trial began in Los Angeles where a young woman claims she became addicted to Instagram as a child due to its intentional design. Thousands of similar lawsuits are working through U.S. courts. The dam is breaking — slowly.
Related Rabbit Holes
- Your Phone Records Everything You Say — And Samsung's Own Patent Filing Proves It
- Walmart Let ChatGPT Handle Your Checkout — Then Quietly Killed It
- 5 Inventions That Were Killed Before They Could Change the World
- The U.S. Army Secretary Said There's a "Soldier on the Moon" on Live TV
What's your take — is Meta just negligent, or is the algorithm working exactly as designed? And what happened to Sora? Drop your theory below — and share this before the next news cycle buries it.
This site explores theories, declassified documents, and unexplained events. We present evidence and let you form your own conclusions. For entertainment and educational purposes.