Storage Tiers

Dits supports multiple storage tiers, automatically moving data between fast local storage, cloud storage, and cold archive based on access patterns and policies.

Storage Hierarchy

Dits organizes storage into three tiers:

Hot (Local)
Fast local storage for actively used data. Instant access, highest cost per GB.
Warm (Cloud)
Cloud object storage for recently used but inactive data. Seconds to access, moderate cost.
Cold (Archive)
Deep archive for rarely accessed data. Hours to retrieve, lowest cost.
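
Each tier corresponds to a section of .dits/config, as detailed under Tier Configuration below:

[storage]          # hot: local path and size limit (hotPath, hotLimit)
[storage.warm]     # warm: cloud object storage backend
[storage.cold]     # cold: archive backend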

How It Works

Data Flow:

  Add file → HOT (local .dits/objects/)
      ↓
  Push → WARM (cloud storage)
      ↓
  Age out → COLD (archive)

Access triggers promotion:
  Request archived file → Restore from COLD → WARM → HOT
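
In command terms, the downward flow corresponds to pushing and then archiving. A minimal sketch, assuming the ordinary dits push is what lands committed data in warm storage (the dits storage subcommands are described later on this page):

# HOT → WARM: push committed data to the configured warm backend
$ dits push

# WARM → COLD: archive explicitly, or let lifecycle policies age data out
$ dits storage archive footage/2023-archive/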

Tier Configuration

Basic Setup

# .dits/config
[storage]
    # Local hot storage
    hotPath = .dits/objects
    hotLimit = 100GB

[storage.warm]
    # AWS S3 for warm storage
    type = s3
    bucket = my-project-dits
    region = us-west-2

[storage.cold]
    # Glacier for archive
    type = s3-glacier
    bucket = my-project-archive
    region = us-west-2
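
With both cloud backends configured, dits storage status (shown in full under Manual Tier Management below) is a quick way to confirm that all three tiers are recognized:

$ dits storage status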

Storage Backends

Backend                Type         Tier        Notes
Local filesystem       local        Hot         Default for .dits/objects
AWS S3                 s3           Warm        Standard, IA, One Zone
AWS Glacier            s3-glacier   Cold        Instant, Flexible, Deep
Google Cloud Storage   gcs          Warm/Cold   Standard, Nearline, Archive
Azure Blob             azure        Warm/Cold   Hot, Cool, Archive
Backblaze B2           b2           Warm        Cost-effective option
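
As the table indicates, the gcs and azure backends can serve the cold tier as well as the warm tier. A sketch of a GCS-backed cold tier, reusing the option names from the backend configuration examples at the end of this page (bucket and project names are placeholders):

[storage.cold]
    type = gcs
    bucket = my-project-archive
    project = my-project
    credentialsFile = ~/.config/gcloud/credentials.json
    storageClass = ARCHIVE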

Lifecycle Policies

Define rules for automatic data movement:

# .dits/config
[lifecycle]
    # Move to warm after not accessed for 7 days
    warmAfter = 7d

    # Move to cold after not accessed for 90 days
    coldAfter = 90d

    # Evict from local hot storage 30 days after it has been synced to warm
    evictHotAfter = 30d

[lifecycle.rules.project-files]
    # Project files stay hot longer
    pattern = *.prproj
    warmAfter = 30d
    coldAfter = 365d

[lifecycle.rules.raw-footage]
    # Raw footage moves to cold faster
    pattern = raw/**
    warmAfter = 3d
    coldAfter = 30d
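
The effect of these rules shows up under Recent Activity in dits storage status (see Check Storage Status below), and the optimizer's dry-run mode, covered under Cost Optimization, previews age-based moves before they happen:

$ dits storage status
$ dits storage optimize --dry-run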

Manual Tier Management

Check Storage Status

$ dits storage status

Storage Tiers:
  HOT (local):
    Path: .dits/objects/
    Used: 45.2 GB / 100 GB (45%)
    Objects: 12,456 chunks

  WARM (s3://my-project-dits):
    Used: 234.5 GB
    Objects: 45,892 chunks

  COLD (s3-glacier://my-project-archive):
    Used: 1.2 TB
    Objects: 156,234 chunks

Recent Activity:
  Promoted to HOT: 234 chunks (2.1 GB) today
  Demoted to WARM: 0 chunks
  Archived to COLD: 1,234 chunks (15 GB) this week

Move Data Between Tiers

# Promote specific file to hot storage
$ dits storage promote footage/scene1.mov
Promoting footage/scene1.mov...
  Restoring from WARM... done
  10,234 chunks (10.2 GB) now in HOT storage

# Demote to warm (push to cloud while keeping the data locally accessible)
$ dits storage demote footage/old-takes/
Demoting footage/old-takes/...
  Uploading to WARM... done
  5,678 chunks (5.5 GB) demoted

# Archive to cold storage
$ dits storage archive footage/2023-archive/
Archiving footage/2023-archive/...
  Moving to COLD... done
  Note: Retrieval will take 3-5 hours

Pin Data to Tier

# Keep file always in hot storage
$ dits storage pin hot footage/hero-shot.mov
Pinned footage/hero-shot.mov to HOT tier

# Pin entire directory
$ dits storage pin hot project-files/

# Unpin
$ dits storage unpin footage/hero-shot.mov

# List pinned items
$ dits storage pinned
HOT:
  footage/hero-shot.mov (15 GB)
  project-files/ (45 MB)

Retrieval from Cold Storage

# Request restoration (async)
$ dits storage restore footage/2023-archive/
Initiating restore from COLD storage...
Restore request submitted.
Estimated completion: 3-5 hours
You will be notified when ready.

# Check restore status
$ dits storage restore-status
In Progress:
  footage/2023-archive/ (156 GB)
    Status: RESTORING
    ETA: 2 hours remaining

# Fast restore (higher cost)
$ dits storage restore --expedited footage/urgent-file.mov
Expedited restore initiated.
Estimated completion: 1-5 minutes
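
Per the promotion flow at the top of this page, restored data lands in WARM; if you need it in local hot storage as well, follow the restore with a promote (see Move Data Between Tiers above):

$ dits storage promote footage/2023-archive/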

Cost Optimization

Analyze Storage Costs

$ dits storage cost-report

Monthly Cost Estimate:

  HOT (local): $0 (local storage)

  WARM (S3 Standard):
    Storage: 234.5 GB × $0.023/GB = $5.39
    Requests: 45,000 × $0.004 per 1,000 = $0.18
    Transfer: 50 GB × $0.09/GB = $4.50
    Subtotal: $10.07

  COLD (Glacier Flexible):
    Storage: 1.2 TB × $0.004/GB = $4.80
    Retrieval: 2 restores (100 GB total) × $0.03/GB = $3.00
    Subtotal: $7.80

  Total Estimated: $17.87/month

Optimization Suggestions:
  - Move 45 GB of inactive warm data to cold: Save $0.87/mo
  - Use Glacier Deep for 500 GB archive: Save $1.50/mo

Optimize Storage

# Run optimization analysis
$ dits storage optimize --dry-run

Optimization Plan:
  1. Archive 45 GB to COLD (not accessed in 90+ days)
     Savings: $0.87/month
  2. Deduplicate 12 GB across projects
     Savings: $0.28/month
  3. Remove 5 GB orphaned chunks
     Savings: $0.12/month

Total potential savings: $1.27/month

Apply optimizations? [y/N]

Multi-Region Configuration

Warm storage can be replicated across regions by defining multiple warm backends and listing them as replication targets:

# .dits/config
[storage.warm.primary]
    type = s3
    bucket = project-us-west
    region = us-west-2

[storage.warm.replica]
    type = s3
    bucket = project-eu-west
    region = eu-west-1

[storage.replication]
    enabled = true
    targets = primary, replica
    consistency = eventual

Storage Backends Configuration

AWS S3

[storage.warm]
    type = s3
    bucket = my-dits-bucket
    region = us-west-2
    accessKey = ${DITS_AWS_ACCESS_KEY}
    secretKey = ${DITS_AWS_SECRET_KEY}
    storageClass = STANDARD_IA  # or STANDARD, ONEZONE_IA

Google Cloud Storage

[storage.warm]
    type = gcs
    bucket = my-dits-bucket
    project = my-project
    credentialsFile = ~/.config/gcloud/credentials.json
    storageClass = NEARLINE  # or STANDARD, COLDLINE, ARCHIVE

Azure Blob Storage

[storage.warm]
    type = azure
    container = my-dits-container
    accountName = myaccount
    accountKey = ${DITS_AZURE_KEY}
    tier = Cool  # or Hot, Archive
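
Backblaze B2

The b2 backend appears in the table above but is not spelled out on this page. A rough sketch only, with option names that follow the pattern of the other backends and are illustrative rather than confirmed:

[storage.warm]
    type = b2
    bucket = my-dits-bucket
    # illustrative key names, not confirmed by this page
    keyId = ${DITS_B2_KEY_ID}
    applicationKey = ${DITS_B2_APPLICATION_KEY}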
