# Storage Tiers
Dits supports multiple storage tiers, automatically moving data between fast local storage, cloud storage, and cold archive based on access patterns and policies.
## Storage Hierarchy
Dits organizes storage into three tiers:
- **Hot (Local)**: Fast local storage for actively used data. Instant access, highest cost per GB.
- **Warm (Cloud)**: Cloud object storage for recent data. Seconds to access, moderate cost.
- **Cold (Archive)**: Deep archive for rarely accessed data. Hours to retrieve, lowest cost.
## How It Works
Data flow:

```
Add file → HOT (local .dits/objects/)
    ↓
Push → WARM (cloud storage)
    ↓
Age out → COLD (archive)
```

Access triggers promotion:

```
Request archived file → Restore from COLD → WARM → HOT
```
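The same flow as a command-line walkthrough (a sketch: `dits add` and `dits push` are assumed to be the commands behind the "Add file" and "Push" steps above; `dits storage restore` is covered under Retrieval from Cold Storage):

```
# New data is chunked into HOT local storage (assumed command name)
$ dits add footage/scene1.mov

# Push replicates the chunks to WARM cloud storage (assumed command name)
$ dits push

# Untouched chunks later age out to COLD; accessing them again
# triggers a restore back through WARM into HOT
$ dits storage restore footage/scene1.mov
```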
## Tier Configuration

### Basic Setup
```
# .dits/config
[storage]
# Local hot storage
hotPath = .dits/objects
hotLimit = 100GB
[storage.warm]
# AWS S3 for warm storage
type = s3
bucket = my-project-dits
region = us-west-2
[storage.cold]
# Glacier for archive
type = s3-glacier
bucket = my-project-archive
region = us-west-2
```

## Storage Backends
| Backend | Type | Tier | Notes |
|---|---|---|---|
| Local filesystem | local | Hot | Default for .dits/objects |
| AWS S3 | s3 | Warm | Standard, IA, One Zone |
| AWS Glacier | s3-glacier | Cold | Instant, Flexible, Deep |
| Google Cloud Storage | gcs | Warm/Cold | Standard, Nearline, Archive |
| Azure Blob | azure | Warm/Cold | Hot, Cool, Archive |
| Backblaze B2 | b2 | Warm | Cost-effective option |
## Lifecycle Policies
Define rules for automatic data movement:
```
# .dits/config
[lifecycle]
# Move to warm after not accessed for 7 days
warmAfter = 7d
# Move to cold after not accessed for 90 days
coldAfter = 90d
# Evict the local copy 30 days after it has been synced to warm
evictHotAfter = 30d
[lifecycle.rules.project-files]
# Project files stay hot longer
pattern = *.prproj
warmAfter = 30d
coldAfter = 365d
[lifecycle.rules.raw-footage]
# Raw footage moves to cold faster
pattern = raw/**
warmAfter = 3d
coldAfter = 30d
```

## Manual Tier Management
### Check Storage Status
```
$ dits storage status
Storage Tiers:
HOT (local):
Path: .dits/objects/
Used: 45.2 GB / 100 GB (45%)
Objects: 12,456 chunks
WARM (s3://my-project-dits):
Used: 234.5 GB
Objects: 45,892 chunks
COLD (s3-glacier://my-project-archive):
Used: 1.2 TB
Objects: 156,234 chunks
Recent Activity:
Promoted to HOT: 234 chunks (2.1 GB) today
Demoted to WARM: 0 chunks
Archived to COLD: 1,234 chunks (15 GB) this week
```

### Move Data Between Tiers
```
# Promote specific file to hot storage
$ dits storage promote footage/scene1.mov
Promoting footage/scene1.mov...
Restoring from WARM... done
10,234 chunks (10.2 GB) now in HOT storage
# Demote to warm (keep locally accessible but push to cloud)
$ dits storage demote footage/old-takes/
Demoting footage/old-takes/...
Uploading to WARM... done
5,678 chunks (5.5 GB) demoted
# Archive to cold storage
$ dits storage archive footage/2023-archive/
Archiving footage/2023-archive/...
Moving to COLD... done
Note: Retrieval will take 3-5 hours
```

### Pin Data to Tier
```
# Keep file always in hot storage
$ dits storage pin hot footage/hero-shot.mov
Pinned footage/hero-shot.mov to HOT tier
# Pin entire directory
$ dits storage pin hot project-files/
# Unpin
$ dits storage unpin footage/hero-shot.mov
# List pinned items
$ dits storage pinned
HOT:
footage/hero-shot.mov (15 GB)
project-files/ (45 MB)
```

## Retrieval from Cold Storage
### Archive Retrieval Times
Cold storage (Glacier, Archive tiers) has retrieval delays:
- Glacier Instant Retrieval: milliseconds
- Glacier Flexible Retrieval: 3-5 hours (standard); 1-5 minutes (expedited)
- Glacier Deep Archive: 12-48 hours
```
# Request restoration (async)
$ dits storage restore footage/2023-archive/
Initiating restore from COLD storage...
Restore request submitted.
Estimated completion: 3-5 hours
You will be notified when ready.
# Check restore status
$ dits storage restore-status
In Progress:
footage/2023-archive/ (156 GB)
Status: RESTORING
ETA: 2 hours remaining
# Fast restore (higher cost)
$ dits storage restore --expedited footage/urgent-file.mov
Expedited restore initiated.
Estimated completion: 1-5 minutes
```

## Cost Optimization
### Analyze Storage Costs
```
$ dits storage cost-report
Monthly Cost Estimate:
HOT (local): $0 (local storage)
WARM (S3 Standard):
Storage: 234.5 GB × $0.023/GB = $5.39
Requests: 45,000 × $0.004 per 1,000 = $0.18
Transfer: 50 GB × $0.09/GB = $4.50
Subtotal: $10.07
COLD (Glacier Flexible):
Storage: 1.2 TB × $0.004/GB = $4.80
Retrieval: 100 GB (2 restores) × $0.03/GB = $3.00
Subtotal: $7.80
Total Estimated: $17.87/month
Optimization Suggestions:
- Move 45 GB of inactive warm data to cold: Save $0.87/mo
- Use Glacier Deep for 500 GB archive: Save $1.50/mo
```

### Optimize Storage
```
# Run optimization analysis
$ dits storage optimize --dry-run
Optimization Plan:
1. Archive 45 GB to COLD (not accessed in 90+ days)
Savings: $0.87/month
2. Deduplicate 12 GB across projects
Savings: $0.28/month
3. Remove 5 GB orphaned chunks
Savings: $0.12/month
Total potential savings: $1.27/month
Apply optimizations? [y/N]
```

## Multi-Region Configuration
```
# .dits/config
[storage.warm.primary]
type = s3
bucket = project-us-west
region = us-west-2
[storage.warm.replica]
type = s3
bucket = project-eu-west
region = eu-west-1
[storage.replication]
enabled = true
targets = primary, replica
consistency = eventual
```

## Storage Backends Configuration
### AWS S3
```
[storage.warm]
type = s3
bucket = my-dits-bucket
region = us-west-2
accessKey = ${DITS_AWS_ACCESS_KEY}
secretKey = ${DITS_AWS_SECRET_KEY}
storageClass = STANDARD_IA  # or STANDARD, ONEZONE_IA
```
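The `${DITS_AWS_ACCESS_KEY}` and `${DITS_AWS_SECRET_KEY}` references are presumably expanded from environment variables rather than written into the config file; a minimal sketch of supplying them from a shell (key values are placeholders):

```
$ export DITS_AWS_ACCESS_KEY=AKIA...   # placeholder access key ID
$ export DITS_AWS_SECRET_KEY=...       # placeholder secret access key
$ dits storage status                  # dits commands now see the credentials
```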
### Google Cloud Storage

```
[storage.warm]
type = gcs
bucket = my-dits-bucket
project = my-project
credentialsFile = ~/.config/gcloud/credentials.json
storageClass = NEARLINE  # or STANDARD, COLDLINE, ARCHIVE
```

### Azure Blob Storage
```
[storage.warm]
type = azure
container = my-dits-container
accountName = myaccount
accountKey = ${DITS_AZURE_KEY}
tier = Cool  # or Hot, Archive
```
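Backblaze B2 appears in the backends table above but has no example in this section. A sketch of what a B2 block might look like, assuming it mirrors the s3 backend; the `keyId` and `applicationKey` option names are assumptions, not confirmed settings:

```
[storage.warm]
type = b2
bucket = my-dits-bucket
keyId = ${DITS_B2_KEY_ID}             # assumed option name
applicationKey = ${DITS_B2_APP_KEY}   # assumed option name
```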