
The Ultimate Guide to Archiving with Rclone



As your recording library expands, local storage eventually hits a wall. Whether you’re running a 4-bay RAID or a single NVMe drive, you need a strategy for cold storage.

This guide covers how to integrate rclone, the Swiss Army knife of cloud storage, with your recording workflow to automatically move finished captures to the cloud (Cloudflare R2, Backblaze B2, or any S3-compatible service).

Why Rclone?

Rclone is a command-line program to manage files on cloud storage. It supports over 40 cloud storage providers and is built for high-performance, multi-threaded transfers. For our purposes, it allows us to treat a cloud bucket like a local folder that only grows.

Prerequisites

  1. Rclone Installed: sudo apt install rclone (Linux) or brew install rclone (macOS).
  2. Cloud Account: A Cloudflare account with R2 enabled or a Backblaze B2 account.
  3. API Keys: Access Key and Secret Key for your storage bucket.

Step 1: Configuring the Remote

Run rclone config and follow the interactive prompt to create a new remote named r2_archive or b2_archive.

Sample rclone.conf (Cloudflare R2)

[r2_archive]
type = s3
provider = Cloudflare
access_key_id = your_access_key
secret_access_key = your_secret_key
endpoint = https://your_account_id.r2.cloudflarestorage.com
acl = private
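
Once the remote is saved, a quick listing confirms the credentials and endpoint actually work (the bucket name below is a placeholder; use your own):

```shell
# List the buckets this remote can see — fails fast on bad credentials
rclone lsd r2_archive:

# List objects inside a specific bucket
rclone ls r2_archive:my-cam-archives/
```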

Step 2: Automation with CaptureGem Hooks

The goal is to trigger a cloud upload as soon as a recording is finalized. We’ll use the on_recording_finished hook to execute a small script.

The Bash Hook Script (move_to_cloud.sh)

#!/bin/bash
# Usage: ./move_to_cloud.sh "/path/to/recording.mp4"

FILE_PATH="$1"
REMOTE_PATH="r2_archive:my-cam-archives/"
LOG_FILE="/home/user/logs/rclone.log"

if [ -z "$FILE_PATH" ] || [ ! -f "$FILE_PATH" ]; then
    echo "Usage: $0 /path/to/recording.mp4" >&2
    exit 1
fi

echo "Starting archival of $FILE_PATH..."

# Note: --bwlimit is in bytes/s, so 20M = 20 MiB/s (~168 Mbps).
# Lower it so uploads never starve the live recorder's bandwidth.
if rclone move "$FILE_PATH" "$REMOTE_PATH" \
    --bwlimit 20M \
    --transfers 4 \
    --log-file "$LOG_FILE" \
    --delete-empty-src-dirs; then
    echo "Archival successful: $FILE_PATH is now in the cloud."
else
    echo "Archival failed. Check logs at $LOG_FILE" >&2
    exit 1
fi
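
Before wiring the script into CaptureGem, it is worth testing the transfer manually with rclone’s --dry-run flag, which reports what would move without touching any files (the paths below are illustrative):

```shell
# Preview the transfer without moving anything
rclone move --dry-run "/recordings/test_capture.mp4" "r2_archive:my-cam-archives/"

# Then run the hook script by hand on a real finished file
./move_to_cloud.sh "/recordings/test_capture.mp4"
```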

Step 3: Integrating into hooks.yaml

Add the script to your CaptureGem configuration so it runs automatically.

# CaptureGem hooks.yaml
on_recording_finished:
  - command: "/home/user/scripts/move_to_cloud.sh"
    args: ["{file_path}"]
    wait: false  # Run in background so next recording isn't delayed

Bandwidth Management

Crucially, you must use --bwlimit, and note that rclone reads it in bytes per second: 20M means 20 MiB/s, roughly 168 Mbps. If you are recording 10 concurrent streams at 8 Mbps each, the recorder is already pulling 80 Mbps down your line. If a cloud upload then saturates your uplink, the acknowledgment packets for those incoming streams get delayed and your live recordings will drop frames.

Pro Tip: Set --bwlimit to 25% of your total upload bandwidth to ensure a safe buffer.
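
Because --bwlimit takes bytes per second while ISPs quote uplinks in megabits, the conversion is easy to get wrong. A sketch of the arithmetic, assuming a hypothetical 100 Mbps uplink:

```shell
# Hypothetical uplink; substitute your own figure
UPLINK_MBPS=100

# 25% safety margin, per the pro tip above
BUDGET_MBPS=$(( UPLINK_MBPS / 4 ))      # 25 Mbps

# rclone --bwlimit wants bytes/s: divide megabits by 8 to get MB/s
BWLIMIT="$(( BUDGET_MBPS / 8 ))M"       # "3M" = 3 MB/s = 24 Mbps

echo "rclone move ... --bwlimit $BWLIMIT"
```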

Verification: Byte-Perfect Archives

To confirm that files landed intact, rclone check compares sizes and checksums between a local directory and the remote. rclone move already verifies each transfer, but a periodic check of any local staging area is cheap insurance against corruption:

rclone check /path/to/local/staging r2_archive:my-cam-archives/
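
If you keep a local staging directory, a cron entry can run the check nightly (the schedule and log path here are just examples):

```shell
# crontab -e: verify the staging directory against the remote at 03:00
0 3 * * * rclone check /path/to/local/staging r2_archive:my-cam-archives/ --log-file /home/user/logs/rclone_check.log
```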

Cost Analysis (10TB Example)

  • Cloudflare R2: $0.015/GB/mo -> $150/mo for 10TB. (Zero egress fees!)
  • Backblaze B2: $0.006/GB/mo -> $60/mo for 10TB. ($0.01/GB egress fee).

If you rarely download your archives, B2 is the cost leader. If you frequently pull clips for VOD editing, R2’s zero-egress policy makes it the winner.
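
The arithmetic behind those figures is straightforward; a quick sanity check of the per-GB rates above (treating 10 TB as 10,000 GB):

```shell
# 10 TB at the quoted per-GB monthly rates
awk 'BEGIN {
    gb = 10000
    printf "R2: $%.0f/mo\n", gb * 0.015   # zero egress on top
    printf "B2: $%.0f/mo\n", gb * 0.006   # plus $0.01/GB when you download
}'
```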

Summary Checklist

  1. Config Remote: Test with rclone ls r2_archive:.
  2. Set Bandwidth Limits: Calculate your uplink safety margin.
  3. Automate: Link the script to on_recording_finished.
  4. Monitor: Check rclone.log for timeout or retry errors.
