Version Control for Large Files
Git wasn't built for video. Dits was. Content-defined chunking, smart deduplication, and video-native features for modern media workflows.
Built for Performance
Every component optimized for speed and efficiency with large files
- Chunk reuse on file edits*
- Time to chunk a 1GB video*
- BLAKE3 hash throughput per core
- Full-featured CLI commands
- Direct sharing, no cloud required
- Fast, encrypted transport
- Open source, MIT licensed
- Video-aware, splits at keyframes
*Based on typical use cases with content-defined chunking. Actual results vary by content type and edit patterns.
Install in Seconds
Choose your preferred package manager
npm install -g @byronwade/dits

Then run dits init in any directory to get started.
If You Know Git, You Know Dits
Same commands, same workflow. Just optimized for files that Git can't handle. No new mental models to learn.
- init, add, commit, push, pull - all the commands you expect
- Branch and merge workflows work exactly like Git
- Status shows file changes and deduplication stats
- Log shows full history with storage savings
raw_interview.mov (800 MB)
edit_v1.mov (800 MB) — shares 70% with raw
edit_v2.mov (800 MB) — shares 85% with v1
Chunking: 2,400 chunks created (~10s)
Dedup: 600 duplicates found → 1,800 unique chunks
3 files, 2.4 GB logical → 840 MB stored (65% saved)
↳ Savings from shared content between edits
Done in 4s (200+ MB/s)
Join code: ABC-123
Store Less, Keep Everything
Dits splits files into pieces, stores each piece once, and rebuilds them when needed
Your Video: 2.4 GB file → 2,400 chunks (~1 MB each)
Each chunk gets a unique fingerprint. Identical chunks are stored only once.
- All chunks (with duplicates): 12 chunks total
- Unique chunks stored: 5 (58% saved)
When you need the file, chunks reassemble instantly
Byte-perfect reconstruction, verified by BLAKE3 hash
- Original file: 2.4 GB
- Actually stored: 840 MB
*Savings vary by content. Based on typical video with shared segments.
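The store-once, rebuild-on-demand flow above can be sketched in a few lines. This is an illustration only: it uses fixed-size chunks and SHA-256 as a stand-in fingerprint, where Dits uses content-defined FastCDC boundaries and BLAKE3, and the `put`/`get` names are hypothetical.

```python
import hashlib

CHUNK_SIZE = 4  # tiny chunks so the demo is easy to follow; real chunks are ~1 MB

def chunk(data: bytes):
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

store = {}  # fingerprint -> chunk bytes; each unique chunk is stored once

def put(data: bytes):
    """Store a file as a list of chunk fingerprints (its 'recipe')."""
    recipe = []
    for piece in chunk(data):
        fp = hashlib.sha256(piece).hexdigest()  # stand-in for BLAKE3
        store.setdefault(fp, piece)             # duplicates cost nothing
        recipe.append(fp)
    return recipe

def get(recipe):
    """Rebuild the original bytes from the recipe."""
    return b"".join(store[fp] for fp in recipe)

raw = b"AAAABBBBCCCCDDDD"
edit = b"AAAABBBBXXXXDDDD"          # one chunk changed
r1, r2 = put(raw), put(edit)
assert get(r1) == raw and get(r2) == edit   # byte-perfect reconstruction
print(len(store))  # 5 unique chunks back eight logical ones
```

Two 16-byte files share three of their four chunks, so the store holds five pieces instead of eight; the same recipe-plus-store split is what lets version history grow without duplicating content.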
Edit once, save once
Change 10 seconds of a video? Only those chunks are new. The rest stays deduplicated.
Same footage = same chunks
Using the same B-roll across projects? It's stored once, referenced everywhere.
Fetch only what you need
Mount a repo and scrub through footage. Chunks stream on-demand, no full download.
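Why edits stay cheap: chunk boundaries are derived from the bytes themselves, so an insertion only disturbs the chunks around it and the cut points resynchronize downstream. A toy content-defined chunker makes this visible; Dits uses FastCDC, and the rolling hash and mask here are purely illustrative.

```python
import random

def cdc_chunks(data: bytes, mask: int = 0x0F) -> list:
    """Cut whenever a rolling hash of the bytes since the last cut
    matches the mask (~16-byte average here; real chunks are ~1 MB)."""
    chunks, start, h = [], 0, 0
    for i in range(len(data)):
        h = ((h << 1) + data[i]) & 0xFFFFFFFF  # toy rolling hash
        if (h & mask) == mask:                 # content-defined boundary
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

random.seed(0)
original = bytes(random.randrange(256) for _ in range(2000))
edited = original[:1000] + b"EDIT" + original[1000:]  # small mid-file insert

a, b = cdc_chunks(original), cdc_chunks(edited)
shared = set(a) & set(b)
print(f"{len(shared)}/{len(a)} chunks unchanged after the edit")
```

With fixed-size chunks, the same 4-byte insert would shift every boundary after it and invalidate half the file; content-defined cuts keep the prefix identical and realign shortly after the edit.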
Stop Re-uploading the Same Files
Traditional file sharing means uploading entire files every time. Dits only transfers what's actually changed.
Traditional Cloud Storage
Editing a 10GB project over a week:
Every save = full re-upload. Same file uploaded to 5 team members = 5× the bandwidth.
With Dits
Same project, same week:
Only changed chunks transfer. Team members fetch only what they don't already have.
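What actually crosses the wire is decided by a set difference over chunk fingerprints; a minimal sketch, with SHA-256 standing in for BLAKE3 and illustrative 4-byte chunks:

```python
import hashlib

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()  # stand-in for BLAKE3

# Chunks the receiver already has from the previous version
have = {fingerprint(c) for c in [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]}

# Chunks making up the sender's new version (one chunk edited)
new_version = [b"AAAA", b"BBBB", b"EDIT", b"DDDD"]

# Only chunks whose fingerprints the receiver lacks need to transfer
to_send = [c for c in new_version if fingerprint(c) not in have]
print(to_send)  # prints [b'EDIT']: one chunk, not the whole file
```

The receiver never re-downloads content it already holds, which is why a small edit to a 10 GB project moves megabytes, not gigabytes.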
Team Collaboration: Everyone Has the Same Chunks
- Alice (Editor)
- Bob (Colorist)
- Carol (Sound)
- Shared Storage: 1,892 unique chunks
When Bob pulls Alice's changes:
Not 10 GB. Just 45 MB.
- 99%+ delta sync ratio (typical for small edits)
- 1× storage per chunk (shared across all users)
- <10 ms first-byte latency (local/cached data)
- ∞ version history (no duplicate storage)
Share Directly, No Cloud Required
Stop uploading your 50GB projects to the cloud just to share with a colleague. Dits shares repositories directly between computers using peer-to-peer connections.
- Share any repository with a simple join code
- End-to-end encrypted P2P transfers
- Works through firewalls and NATs
- No file size limits or bandwidth caps
- Direct computer-to-computer sharing
Join code: 7KJM-XBCD
Listening on 0.0.0.0:4433
Repository mounted at: ./shared-project
Connected successfully!
Built for Large Files
Every feature designed with video production and large binary files in mind
How Dits Stacks Up
Real numbers from real benchmarks. See how Dits compares to the tools you might be using today.
Performance Benchmarks
| Metric | Dits | Git LFS | Perforce | Dropbox |
|---|---|---|---|---|
| Hash speed (1GB file) | ~330 ms (BLAKE3, 3 GB/s) | ~600 ms (SHA-256) | ~500 ms (MD5) | N/A |
| Chunking throughput | 2 GB/s (FastCDC) | N/A (full file) | ~200 MB/s (delta) | ~300 MB/s (block sync) |
| Incremental sync (small edit) | ~45 MB (changed chunks) | 10 GB (full file) | ~100 MB (delta) | ~50 MB (block sync) |
| Upload speed (reported) | Wire speed (QUIC) | 1-25 MB/s (HTTP) | 50-100 MB/s (TCP) | Variable (throttled) |
| Clone 10GB repo | <2 min (sparse + VFS) | 30+ min (full download) | ~5 min (proxy cache) | ~10 min (Smart Sync) |
* Dits benchmarks from internal testing (see docs). BLAKE3 benchmarks from official testing. Git LFS speeds from GitHub issues #2328, #4144.
Competitor Landscape
Version Control
- Git + LFS: Full file re-uploads. 1-25 MB/s speeds. No dedup.
- Perforce Helix: Delta compression. Game industry standard. $740/user/yr.
- Plastic SCM: 1TB+ repos. Now Unity Version Control. $45/user/mo.
- SVN: Better than Git for binaries. Centralized. Declining support.
- Mercurial: Scales well (Facebook). Less tooling. Niche usage.
- DVC: ML-focused. File-level only. Struggles >200K files.
- LakeFS: Git for data lakes. S3-native. File-level dedup.
- XetHub: Block-level dedup. 5-8x faster than DVC. Now part of Hugging Face.
Cloud Storage & Sync
- Dropbox: Block-level sync. 8-16x faster than cloud. 2TB/day limit.
- Google Drive: No block sync. Full re-upload on changes. 5TB limit.
- OneDrive: Block sync for Microsoft files only. 250GB file limit. Unreliable.
- Resilio Sync: P2P, 16x faster than cloud. 10Gbps capable. No versioning.
- Synology Drive: NAS-native. 500K file limit. Slow with many files.
- rclone: Mount any cloud. VFS caching. No dedup or versioning.
- Wasabi: S3-compatible. 80% cheaper. No egress fees. Storage only.
Media & Video Tools
- Frame.io: 5x faster uploads. Review-focused. No local VCS. Adobe-owned.
- LucidLink: Streaming file access. Great latency. No version control. $$$
- Iconik: MAM with AI tagging. Multi-cloud. Review tools. No dedup.
- MediaSilo: Video collaboration. Frame-accurate review. Enterprise. $$$
- Bynder: DAM leader. Version control. No chunk-level dedup. $$$
- Canto: User-friendly DAM. AI tagging. Limited versioning.
- Anchorpoint: Git LFS GUI for games. Sparse checkout. Still LFS limits.
- Dits: Content-defined chunking (FastCDC at 2 GB/s). BLAKE3 hashing at 3+ GB/s per core. Video-aware splitting at keyframes. Cross-file deduplication. VFS streaming. Git-compatible workflow. QUIC transport. Self-hostable. Free & open source.
Detailed Feature Matrix
| Feature | Dits | Git LFS | Perforce | XetHub | Resilio | Dropbox | LucidLink |
|---|---|---|---|---|---|---|---|
| Content-defined chunking | ✓ | — | — | ✓ | — | partial | — |
| Cross-file deduplication | ✓ | — | — | ✓ | — | — | — |
| Video-aware chunking | ✓ | — | — | — | — | — | — |
| Delta/incremental sync | ✓ | — | ✓ | ✓ | ✓ | ✓ | ✓ |
| Virtual filesystem (VFS) | ✓ | — | — | — | — | partial | ✓ |
| Streaming playback | ✓ | — | — | — | — | — | ✓ |
| Git-compatible workflow | ✓ | ✓ | — | ✓ | — | — | — |
| Branching & merging | ✓ | ✓ | ✓ | ✓ | — | — | — |
| File locking | planned | ✓ | ✓ | — | — | — | — |
| P2P transfer | ✓ | — | — | — | ✓ | — | — |
| Works offline | ✓ | partial | — | partial | ✓ | partial | partial |
| Self-hostable | ✓ | ✓ | ✓ | — | ✓ | — | — |
| Free & open source | ✓ | ✓ | — | — | — | — | — |
Why these numbers matter
Git LFS users consistently report upload speeds of 1-25 MB/s even on fast connections due to HTTP overhead. Perforce excels at scale but costs $740/user/year. XetHub (now Hugging Face) pioneered block-level dedup for ML. Cloud storage lacks version control semantics. LucidLink streams well but has no versioning. Dits combines the best: Git workflow + content-defined chunking + cross-file dedup + VFS streaming + video-aware splitting.
Built for Creators
From solo editors to large studios
Building the Future of Large File Version Control
Track our progress as we build the most advanced content-addressable storage system for creative professionals.
- Engine: chunking & dedup
- Atom Exploder: MP4 parsing
- VFS: FUSE mount
- Git Parity: branch & merge
- Introspection: stats & inspect
- P2P Sharing: direct file sharing
- Network: QUIC sync
- Locking: file locks
- Hologram: proxy edit
- Freeze: cold storage
Want to contribute or follow along?
Join our open-source community on GitHub
Ready to Take Control?
Start versioning your large files today. Free, open source, and built for your workflow.