The Phantom 13 TB on My Synology NAS
Opened Storage Manager on a Thursday night and saw this: Volume 1 — 80% full, 4.3 TB free. That was surprising because I could only account for maybe 4 TB of actual stuff.
Where the hell was the other 13 TB?
This is the story of chasing it down with Claude Code driving the SSH and btrfs commands, hitting four dead ends, and eventually watching DSM reclaim all of it overnight.
The Setup
DS920+, RAID 5 across four drives (2×7.3 TB + 2×10.9 TB). Mismatched sizes mean usable capacity = 3 × min(drives) = 21.8 TB. I run Plex, store ebooks, and had Synology Drive installed for a while — then uninstalled it months ago.
The shares I could enumerate via File Station API added up to around 4 TB. But df -h /volume1 insisted 17 TB was in use. The gap was real enough to matter.
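The gap itself is easy to quantify from a shell. Here's a sketch of the comparison (on the NAS the target was `/volume1`; `/tmp` below is just an illustrative stand-in, and the `NR==2` assumes `df` prints its usual single data row):

```shell
#!/bin/sh
# Sketch: measure the gap between block-level usage (df) and what du can
# actually account for at the file level.
target=${1:-/tmp}
df_used_kb=$(df -k "$target" | awk 'NR==2 {print $3}')
du_used_kb=$(du -sk "$target" 2>/dev/null | awk '{print $1}')
echo "df reports ${df_used_kb} KB used; du accounts for ${du_used_kb} KB"
echo "unaccounted: $((df_used_kb - du_used_kb)) KB"
```

On a healthy volume the two numbers land within a few percent of each other. Here they were 17 TB apart.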
Dead End #1: File Station API Lies
The Synology FileStation API is great for user shares. It is blind to anything prefixed with @ — which is where every system folder lives. /volume1/@SynoDrive, /volume1/@sharesnap, /volume1/@ActiveBackup — none of these show up. I wasted half an hour thinking “huh, must be in one of the visible shares.”
Lesson: the API is a user-space abstraction. For real answers, you need the filesystem.
Dead End #2: du --one-file-system
SSH’d in. Ran the obvious thing:
```shell
cd /volume1 && for d in */ @*/; do du -sh --one-file-system "$d"; done
```
Got back ~3.93 TB total. Which… matched what I already knew. Which meant du was missing ~13 TB.
Here’s the trap: on Synology btrfs, every top-level share is its own subvolume. Each subvolume has a different st_dev from the kernel’s perspective. So --one-file-system treats /volume1/Everything as a separate filesystem and refuses to descend into it.
Drop the `--one-file-system` flag (or its short form, `-x`) and it works. But there’s a bigger problem coming.
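You can watch the mechanism directly: `du` compares each entry’s `st_dev` against the starting directory’s and refuses to cross a mismatch. On the NAS, `stat -c '%d %n' /volume1 /volume1/*/` would show every subvolume with its own device number. A minimal local illustration using two genuinely different filesystems (assumes GNU `stat` and a mounted `/proc`):

```shell
# du --one-file-system skips anything whose device ID (st_dev) differs
# from the starting point's. stat exposes the same field:
stat -c 'dev=%d  %n' / /proc
# The two device numbers differ, so a du -x starting at / would never
# descend into /proc — the same thing that happens at every btrfs
# subvolume boundary under /volume1.
```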
Dead End #3: ACL Paranoia
Tried the scan again as root with sudo. Same number. ~3.93 TB.
I was sure this was it — some folder with tight ACLs blocking my user. The smoking gun: /volume1/ActiveBackupforBusiness showed d---------+ permissions (zero Unix bits, ACL extension present). My user couldn’t even ls it.
With sudo? 16K. Empty.
Ran the same comparison on /homes, /PlexMediaServer, /downloads — user and sudo gave identical numbers.
So ACLs weren’t hiding anything. The 13 TB was… somewhere else. Somewhere du literally could not see.
The Truth Serum: btrfs filesystem usage
This command is the ground truth on any btrfs volume:
```shell
sudo btrfs filesystem usage /volume1
```

```
Overall:
    Device size:         21.80TiB
    Device allocated:    20.93TiB
    Device unallocated: 890.92GiB
    Used:                16.54TiB
    Free (estimated):     5.22TiB
    Global reserve:       2.00GiB  (used: 16.00EiB)   ← wat
```
Two things jumped out:
- `Used: 16.54TiB` — that’s the real number. `du` was right that my files totaled ~4 TB, but something on the block layer was using 16.54 TiB.
- `Global reserve ... used: 16.00EiB` — that’s not real. That’s 2^64 bytes, the unsigned-integer representation of `-1`. A classic btrfs counter-corruption symptom. Harmless but diagnostic.
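The arithmetic checks out in any bash-like shell (bash assumed for `printf '%u'` on a negative argument):

```shell
# 1 EiB = 2^60 bytes; 16 EiB is 2^64, one past the top of a 64-bit counter.
echo $((1 << 60))    # bytes in 1 EiB: 1152921504606846976
# A 64-bit counter holding -1, reinterpreted as unsigned, is 2^64 - 1 —
# which btrfs's size formatting displays as "16.00EiB":
printf '%u\n' -1     # 18446744073709551615
```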
The 12+ TB delta between “files I can see” and “data on disk” is the signature of btrfs subvolumes/snapshots that are no longer linked from any visible path — usually uninstalled-package residue.
The Trigger
I didn’t actually delete anything. I just ran two commands:
```shell
sudo btrfs balance start -dusage=0 /volume1
sudo btrfs scrub start /volume1
```
balance -dusage=0 only reclaims fully empty chunks, and it reported “0 out of about 0 chunks balanced” — technically a no-op. Scrub is a read-only integrity check. Neither should have freed any space.
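Both operations will report on themselves if you ask. These are the standard read-only status subcommands (run on the NAS itself; they touch nothing):

```shell
# Did the balance actually relocate any chunks, and how far has the
# scrub read? Both commands only query state.
sudo btrfs balance status /volume1
sudo btrfs scrub status /volume1
```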
But something about kicking those two processes off woke DSM’s own garbage collector up. Or it fixed the stuck counter. Or both. Within a few hours:
| Time | Used | Free | % Full |
|---|---|---|---|
| 11 PM | 17 TB | 4.3 TB | 80% |
| 2 AM | 13 TB | 8.6 TB | 60% |
| 4 AM | 12.4 TB | 9.3 TB | 58% |
| Next evening | 3.9 TB | 17.8 TB | 19% |
13 TB reclaimed. Zero user files touched. Every number now reconciles — du total ≈ btrfs filesystem usage ≈ df. The ghost is gone.
What Was Actually There
Uninstalling a Synology package from Package Center removes the binaries. It does not remove the app’s data. In my case that was Synology Drive’s file-versioning history — every tracked file retained as point-in-time chunks in @SynoDrive/, invisible to File Station, invisible to normal du, but very real on disk.
DSM does have a background GC that cleans this up. Mine was stuck — probably waiting on the broken global-reserve counter. btrfs balance appears to have nudged it.
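If I’d known to look, the residue should have shown up directly in the subvolume list — `btrfs subvolume list` enumerates every subvolume on the volume, including ones no share or File Station view points at. (Real subcommand; the `synodrive` filter is just the pattern I’d expect Synology Drive’s leftovers to match, run on the NAS as root.)

```shell
# List all subvolumes on the volume, then filter for Synology Drive
# leftovers that no visible path links to anymore:
sudo btrfs subvolume list /volume1 | grep -i synodrive
```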
Takeaways
For anyone debugging “where did my Synology space go”:
- `btrfs filesystem usage /volume1` is the first command to run. Not `df`. Not `du`. The `Used:` line is truth.
- If `Used` is way higher than `du -sh /volume1/*/` accounts for, suspect package residue.
- Don’t trust File Station’s share list for a full picture — it hides `@` folders.
- Don’t trust `du --one-file-system` on btrfs — it won’t cross subvolume boundaries.
- `btrfs balance -dusage=0` is safe to kick off even when you don’t expect it to do anything. Sometimes the side effect is unsticking DSM.
- Install the Storage Analyzer package from Package Center. It’s a first-party Synology tool that generates proper usage reports without any of this nonsense. I didn’t remember it existed until the investigation was over.
For Claude Code specifically: it drove every SSH session, wrote the expect scripts to handle password auth (Synology DSM requires it by default — no key auth), and kept tabs on the overnight cleanup. The say command piped through SSH made for nice audio alerts when long scans finished. The fact that Claude’s scans kept timing out — that was actually useful signal: the array was IO-saturated from cleanup happening underneath me.
Sometimes the best debugging move is to kick a process and go to bed.