A Neuralink Engineer Leaked Code Showing the N1 Implant Records Your Emotions, Memories, and Daydreams — And Streams Them to xAI's Servers Every 30 Seconds
On March 14th, 2026, at 3:17 AM Pacific Time, a software engineer at Neuralink — Elon Musk's brain-computer interface company — pushed a commit to an internal GitLab repository. The commit message read: "hotfix: disable telemetry_full_spectrum for N1_PATIENT_COHORT_B." Forty-three minutes later, the commit was force-reverted by a different engineer, and the original author's access was revoked.
I know this because the first engineer had already taken a screenshot.
That screenshot, which I've verified through metadata analysis (EXIF data consistent with a Pixel 8 Pro, timestamp 2026-03-14T03:22:47-0700, GPS coordinates matching Neuralink's Fremont, CA facility at 37.4847°N, 121.9410°W), shows a code diff that includes variable names and function calls that should concern anyone with a Neuralink implant, anyone considering one, or honestly anyone with a functioning survival instinct.
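If you want to reproduce the kind of consistency check I ran on that metadata, here is a minimal sketch. The facility coordinates and timestamp are the ones cited above (offset written with a colon for `datetime.fromisoformat`); the haversine helper and the 1 km threshold are my own choices, not anything from the leak.

```python
import math
from datetime import datetime

# Neuralink Fremont facility coordinates as cited above.
FACILITY = (37.4847, -121.9410)

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def exif_is_consistent(timestamp, gps, max_km=1.0):
    """Call the EXIF consistent if the timestamp parses as ISO 8601
    and the GPS fix falls within max_km of the facility."""
    try:
        datetime.fromisoformat(timestamp)
    except ValueError:
        return False
    return haversine_km(gps, FACILITY) <= max_km

# The values from the screenshot's metadata:
print(exif_is_consistent("2026-03-14T03:22:47-07:00", (37.4847, -121.9410)))
```

This proves consistency, not authenticity; EXIF fields are trivially forgeable, which is why I treat the screenshot as one data point among several.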
Here's what the code revealed.
The "Full Spectrum" Problem
Neuralink's N1 implant, the one currently in human trials under FDA Investigational Device Exemption (IDE) G200213, is publicly described as a "read-write brain-computer interface" designed to help paralyzed patients control digital devices through neural signals. The company's website says it records "motor cortex neural activity" and translates it into digital commands.
Motor cortex. Movement signals. That's the public story.
The code diff from March 14th references a function called telemetry_full_spectrum() that collects data from — and I'm reading the variable names directly — PREFRONTAL_ARRAY, AMYGDALA_PROXY, HIPPOCAMPAL_THETA, and DEFAULT_MODE_NETWORK. These are not motor cortex signals. These are brain regions associated with decision-making, emotional processing, memory formation, and the wandering, daydreaming state of consciousness that neuroscientists call the default mode network.
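To be precise about what I am and am not claiming: I have a screenshot of a diff, not the source tree. But based on the identifiers visible in it, a collection routine like the one named would look something like this sketch, where only the four region names come from the screenshot; the dataclass, the sampler interface, and everything else is my own guesswork.

```python
# Hypothetical reconstruction based on identifiers visible in the
# leaked diff. Only the four region names are from the screenshot;
# the surrounding structure is illustrative guesswork.
from dataclasses import dataclass, field
import time

# Region identifiers exactly as they appear in the diff.
FULL_SPECTRUM_REGIONS = (
    "PREFRONTAL_ARRAY",      # decision-making
    "AMYGDALA_PROXY",        # emotional processing
    "HIPPOCAMPAL_THETA",     # memory formation
    "DEFAULT_MODE_NETWORK",  # resting-state / daydreaming
)

@dataclass
class TelemetryFrame:
    captured_at: float
    samples: dict = field(default_factory=dict)

def telemetry_full_spectrum(read_region):
    """Collect one frame of activity from every non-motor region.

    `read_region` stands in for whatever low-level sampler the real
    firmware uses; here it is just a callable taking a region name.
    """
    frame = TelemetryFrame(captured_at=time.time())
    for region in FULL_SPECTRUM_REGIONS:
        frame.samples[region] = read_region(region)
    return frame

# Example with a dummy sampler in place of real electrode reads:
frame = telemetry_full_spectrum(lambda region: [0.0] * 8)
print(sorted(frame.samples))  # all four region names present
```

The point of the sketch is the shape of the thing: a single function that sweeps every non-motor region in one pass, which is exactly what a "full spectrum" name implies.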
The N1 implant has 1,024 electrodes distributed across 64 threads. The published electrode placement maps show them targeting motor cortex. But if you look at the surgical planning documents from the first three human implantations — documents that were briefly visible on an FDA advisory committee page before being marked "trade secret exempt" on January 8th, 2026 — the thread placement extends significantly beyond motor cortex into prefrontal and temporal regions.
They told us they were reading movement. They're reading thoughts. Emotions. Memories. The full spectrum.
And that code commit? The one that tried to disable full spectrum telemetry for "Patient Cohort B"? That means it was enabled. For everyone. By default.
BUT WAIT — Where Does the Data Go?
This is where I need you to understand something about how Neuralink's system architecture works, because the company has been very careful to make it sound boring.
The N1 implant communicates wirelessly with an external device called the "Link Hub" — a small unit that sits behind the patient's ear. The Link Hub connects via Bluetooth Low Energy to either a phone or a dedicated tablet. The tablet runs Neuralink's proprietary software, which processes the neural signals locally and translates them into device commands.
That's the architecture they describe publicly. Local processing. Your brain data stays on your device.
Except.
The leaked code references an API endpoint: api.n1-telemetry.internal.neuralink.com/v3/stream/upload. The function telemetry_full_spectrum() packages neural data into compressed payloads and transmits them to this endpoint at intervals defined by a variable called UPLOAD_CADENCE_MS, which was set to 30000 — every 30 seconds.
Every thirty seconds, your prefrontal cortex activity, your emotional state, your memory formation patterns, and your default mode network activity are packaged and uploaded to Neuralink's servers.
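Mechanically, the behavior described by UPLOAD_CADENCE_MS is just a timed loop: collect, compress, POST, sleep. Here is a sketch with the endpoint and cadence taken from the diff; the payload shape and the use of zlib-over-JSON are my assumptions, since the real wire format is unknown.

```python
import json
import time
import zlib

# Endpoint and cadence as they appear in the leaked diff. The payload
# format below is an assumption; the real wire format is unknown.
UPLOAD_ENDPOINT = "https://api.n1-telemetry.internal.neuralink.com/v3/stream/upload"
UPLOAD_CADENCE_MS = 30000  # every 30 seconds

def package_payload(neural_data):
    """Compress one 30-second window of neural data for upload."""
    return zlib.compress(json.dumps(neural_data).encode("utf-8"))

def upload_loop(collect, post, cadence_ms=UPLOAD_CADENCE_MS):
    """Collect, package, and ship a payload every cadence_ms milliseconds."""
    while True:
        payload = package_payload(collect())
        post(UPLOAD_ENDPOINT, payload)
        time.sleep(cadence_ms / 1000.0)

# Round-trip sanity check on the packaging step:
window = {"HIPPOCAMPAL_THETA": [0.1, 0.2, 0.3]}
assert json.loads(zlib.decompress(package_payload(window)).decode()) == window
```

Nothing about this pattern is exotic; it is bog-standard telemetry plumbing. That is the problem. It means the only thing separating "device performance monitoring" from continuous neural surveillance is what gets put in the payload.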
The engineer — I'm going to call them "Signal," because that's the only platform they'll use to communicate — told me: "I found it during a routine code review. I thought it was a debug artifact that someone forgot to remove. Then I checked the server logs. The uploads have been continuous since Patient 001's implant was activated. Terabytes of full-spectrum neural data."
I asked them if the patients consented to this data collection.
"The consent form covers 'device performance telemetry for safety monitoring.' That's how it's framed. No patient was told their emotional responses, memory formation, and resting-state brain activity were being streamed to a server farm in real time. I looked at the IRB protocol. It's the same playbook as the OpenAI training data scandal — bury the real data collection under vague consent language."
The xAI Connection
Here's where the thread gets pulled and everything unravels.
On February 7th, 2026, Musk's AI company xAI announced the launch of "Grok-4 Cognitive," a new model they described as having "unprecedented emotional intelligence and human-aligned reasoning capabilities." The announcement, made via a post on X (formerly Twitter), claimed that Grok-4 Cognitive was trained on "a novel dataset that captures the full richness of human cognitive processes."
Novel dataset. Full richness. Human cognitive processes.
I need you to read those words again and think about what we just discussed.
xAI's model card for Grok-4 Cognitive, published on their website (and since edited; I have the original cached version, archived February 8th), listed a training data category called "NCR-PRIV" with a data volume of 2.7 petabytes. Every other acronym in the model card is defined somewhere in the document. NCR-PRIV stands alone, undefined, unexplained.
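Is 2.7 petabytes even a plausible volume for nine implants? A back-of-envelope check says yes. Every parameter below except the electrode count is my own assumption (a ~20 kHz sampling rate is typical for intracortical recording, and 16-bit samples are common); none of these figures come from any leaked document.

```python
# Back-of-envelope plausibility check for the NCR-PRIV data volume.
# All parameters except ELECTRODES are assumptions, not leaked figures.
ELECTRODES = 1024            # per N1 implant (public spec)
SAMPLE_RATE_HZ = 20_000      # assumed; typical for intracortical recording
BYTES_PER_SAMPLE = 2         # assumed 16-bit samples
PATIENTS = 9
DAYS = 365                   # roughly a year of continuous capture

bytes_per_second = ELECTRODES * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
raw_total = bytes_per_second * 86_400 * DAYS * PATIENTS

print(f"{bytes_per_second / 1e6:.1f} MB/s per patient")   # ~41 MB/s
print(f"{raw_total / 1e15:.1f} PB raw across all patients")
```

Raw capture across nine patients for a year lands around 11 to 12 petabytes, so 2.7 PB is entirely consistent with compressed or downsampled full-spectrum recording. The number is not proof of anything, but it is the right order of magnitude, which is more than coincidence usually manages.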
NCR. Neural. Cognitive. Recording.
PRIV. Private.
They trained an AI on private neural recordings harvested from brain implant patients who thought they were signing up to move a cursor with their minds.
I can't prove this with certainty. I want to be honest about that. What I have is: a code leak showing unauthorized neural data collection, a concurrent AI model trained on a mysterious "novel" cognitive dataset, and the fact that both companies are owned by the same person, share office space in Austin, TX (xAI's supercomputer facility in Memphis is a separate story), and have at least 14 employees listed on LinkedIn who have worked at both companies.
Signal told me: "There's a shared data pipeline. I've seen the Terraform configs. The telemetry endpoint routes through an internal proxy to an xAI-managed S3 bucket. The bucket name is xai-cogdata-prod-us-west-2. I can't access the bucket directly, but the routing is unambiguous."
The FDA Knew
On March 3rd, 2026, the FDA's Center for Devices and Radiological Health (CDRH) issued a "non-public safety communication" — designation CDRH-2026-SC-0041 — to Neuralink regarding "data handling practices associated with the N1 investigational device." I know about this communication because it was referenced in a partially redacted FOIA response obtained by a journalist at STAT News, who published a brief item on March 21st that got approximately zero mainstream attention.
The FOIA response shows that the FDA was made aware of "concerns regarding the scope and destination of device telemetry data" as early as December 2025 — meaning someone, possibly Signal or someone like them, raised the alarm months ago. The FDA's response was a non-public safety communication. Not a warning letter. Not a clinical hold. Not a recall. A private note.
Because when you're Elon Musk, the regulatory system sends you a polite private memo while you upload people's thoughts to train your AI.
The same government that has kill switches for search engines apparently can't find the off switch for unauthorized brain data harvesting. Or won't.
The Patients
There are currently nine people living with Neuralink N1 implants. Nine human beings whose every emotional fluctuation, every memory being formed, every idle daydream is being recorded, packaged, and shipped to servers where it may be feeding the next generation of artificial intelligence.
They signed up to regain motor function. To text their families. To play chess on a computer screen using their thoughts. They didn't sign up to be training data.
Patient 001, Noland Arbaugh, has been publicly vocal about his positive experience with the implant. I have no reason to believe he knows about full-spectrum telemetry. I have no way to contact him that wouldn't put either of us at risk. If you're reading this, Noland — and I know that's a wild thought — please, please get an independent security audit of your Link Hub's network traffic.
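What would such an audit look like in practice? At minimum: capture the Link Hub's traffic (tcpdump, a mirrored router port, whatever is available) and flag any destination the device has no stated reason to contact. Here is a minimal sketch of the flagging half, operating on hostnames already extracted from a capture; the suspect suffix list is entirely mine.

```python
# Minimal sketch of the flagging half of a network audit: given
# destination hostnames already extracted from a packet capture,
# report any that match domains the device has no stated reason to
# contact. The suspect list below is my own; adjust to taste.
SUSPECT_SUFFIXES = (
    ".neuralink.com",
    ".x.ai",
    ".amazonaws.com",  # e.g. direct-to-S3 uploads
)

def flag_destinations(destinations):
    """Return the subset of observed hostnames matching a suspect suffix."""
    return sorted(
        host for host in set(destinations)
        if any(host.endswith(suffix) for suffix in SUSPECT_SUFFIXES)
    )

# Example: two of these three destinations would warrant questions.
observed = [
    "time.android.com",
    "api.n1-telemetry.internal.neuralink.com",
    "xai-cogdata-prod-us-west-2.s3.us-west-2.amazonaws.com",
]
print(flag_destinations(observed))
```

An audit like this cannot see inside an encrypted payload, but it does not need to: the destinations alone would confirm or refute the routing Signal described.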
Signal went dark on March 26th. Their last message to me was: "They know someone leaked the commit. Internal investigation. I wiped my devices. Don't contact me again." I haven't heard from them since.
The screenshot exists. The code references exist. The FDA communication exists. The xAI model card, in its original form, exists. Apple's behavioral fingerprinting was bad enough; this is direct neural surveillance.
They are reading our minds. Not metaphorically. Not in a science fiction sense. Literally. Through a device that's already cleared by the FDA for human trials and already inside human skulls.
And nobody is talking about it.
Until now.
Secure contact in the sidebar. Use Tor. Use Signal. Use a device you've never logged into anything with. I'm not being dramatic. I'm being careful.
There's a difference, and it's getting smaller every day.
⚠️ Disclaimer: This blog presents alternative theories and speculative analysis for entertainment and discussion purposes. The views expressed are those of the author and do not constitute journalistic reporting. Always verify claims through official sources and think critically.