Backup and Restore Workflow Example

This example demonstrates a complete backup and restore workflow for a Hiero network using Solo’s config ops backup and config ops restore commands. It shows how to:

  1. Deploy a complete network infrastructure (consensus + block + mirror + relay + explorer)
  2. Generate transactions to create network state
  3. Freeze the network and create a comprehensive backup
  4. Destroy the entire cluster
  5. Redeploy a fresh network
  6. Restore from backup (ConfigMaps, Secrets, Logs, and State)
  7. Verify the restored network is fully operational

Prerequisites

  • Task installed (brew install go-task/tap/go-task on macOS)
  • Kind installed (brew install kind on macOS)
  • kubectl installed
  • Node.js 22+ and npm installed
  • Docker Desktop running with sufficient resources (8GB+ RAM recommended)
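To confirm the tools are available, a quick version check (exact output varies by install):

task --version
kind version
kubectl version --client
node --version    # should print v22 or newer
docker info       # confirms Docker Desktop is running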

Quick Start

Run Complete Workflow

Execute the entire backup/restore workflow with a single command:

task

This will:

  • ✅ Create a Kind cluster and deploy the complete network
  • ✅ Generate test transactions
  • ✅ Back up the entire network (ConfigMaps, Secrets, Logs, State)
  • ✅ Destroy the cluster completely
  • ✅ Redeploy a fresh network from scratch
  • ✅ Restore all components from backup
  • ✅ Verify the network is operational with new transactions

Clean Up

Remove the cluster and all backup files:

task destroy

Available Tasks

Main Workflow Tasks

Task             Description
task (default)   Run complete backup/restore workflow
task setup       Deploy complete network infrastructure
task backup      Freeze network and create backup
task restore     Restore from backup
task verify      Verify restored network functionality
task destroy     Remove cluster and backup files

Component Tasks

Task                        Description
task create-cluster         Create Kind cluster
task init-solo              Initialize Solo configuration
task deploy-network         Deploy all network components
task generate-transactions  Create test transactions
task destroy-cluster        Delete entire cluster
task redeploy               Redeploy network after cluster deletion

Step-by-Step Workflow

1. Deploy Initial Network

task setup

This deploys:

  • 2 consensus nodes (node1, node2)
  • 1 block node
  • 1 mirror node
  • Relay nodes for each consensus node
  • 1 explorer node
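Once the deployment finishes, all pods should reach the Running state. The consensus pods follow the network-nodeN-0 naming used elsewhere in this guide; the exact mirror, relay, and explorer pod names depend on their Helm charts:

# Watch the rollout until all pods are Running
kubectl get pods -n backup-restore-namespace -w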

2. Generate Network State

task generate-transactions

Creates 3 test accounts with 100 HBAR each to generate network state.
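This is roughly equivalent to invoking Solo's account-creation command a few times. A sketch, assuming the --hbar-amount flag from Solo's documented account create usage; the exact invocation lives in Taskfile.yml:

# Sketch of what the task does (flags illustrative)
for i in 1 2 3; do
  npm run solo-test -- account create --deployment backup-restore-deployment --hbar-amount 100
done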

3. Create Backup

task backup

This will:

  • Destroy the mirror node (required before freeze)
  • Freeze the network
  • Back up ConfigMaps, Secrets, Logs, and State files using solo config ops backup

Backup Location:

  • All backup files: ./solo-backup/
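In terms of the Solo commands listed at the end of this guide, the backup task runs a sequence along these lines. This is a sketch: the mirror-node subcommand name and the --deployment flag are assumptions here, and the exact calls live in Taskfile.yml:

# Sketch of the backup sequence (flags illustrative)
npm run solo-test -- mirror node destroy --deployment backup-restore-deployment
npm run solo-test -- consensus network freeze --deployment backup-restore-deployment
npm run solo-test -- config ops backup --deployment backup-restore-deployment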

4. Destroy and Redeploy

# Destroy cluster
task destroy-cluster

# Redeploy fresh network
task redeploy

This simulates a complete disaster recovery scenario.
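Before redeploying, you can confirm the old cluster is really gone:

# Should no longer list backup-restore-cluster
kind get clusters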

5. Restore from Backup

task restore

This will:

  • Stop consensus nodes
  • Restore ConfigMaps, Secrets, Logs, and State files using solo config ops restore
  • Restart consensus nodes
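Expressed with the Solo commands listed at the end of this guide, the restore sequence is roughly the following sketch (the --deployment flag is an assumption; see Taskfile.yml for the exact calls):

# Sketch of the restore sequence (flags illustrative)
npm run solo-test -- consensus node stop --deployment backup-restore-deployment
npm run solo-test -- config ops restore --deployment backup-restore-deployment
npm run solo-test -- consensus node start --deployment backup-restore-deployment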

6. Verify Restored Network

task verify

Verifies:

  • All pods are running
  • Previously created accounts exist (e.g., account 0.0.3)
  • Network can process new transactions
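The same checks can be run by hand. The account query below assumes Solo's account get subcommand with an --account-id flag, as in the Solo docs:

# Manual spot checks (flags illustrative)
kubectl get pods -n backup-restore-namespace
npm run solo-test -- account get --account-id 0.0.3 --deployment backup-restore-deployment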

Configuration

Edit variables in Taskfile.yml to customize:

vars:
  NETWORK_SIZE: "2"              # Number of consensus nodes
  NODE_ALIASES: "node1,node2"    # Node identifiers
  DEPLOYMENT: "backup-restore-deployment"
  NAMESPACE: "backup-restore-namespace"
  CLUSTER_NAME: "backup-restore-cluster"
  BACKUP_DIR: "./solo-backup"    # All backup files location
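For example, a minimal sketch of a three-node variant; NETWORK_SIZE and NODE_ALIASES must stay consistent with each other:

vars:
  NETWORK_SIZE: "3"
  NODE_ALIASES: "node1,node2,node3"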

What Gets Backed Up?

The solo config ops backup command backs up:

ConfigMaps

  • Network configuration (network-node-data-config-cm)
  • Bootstrap properties
  • Application properties
  • Genesis network configuration
  • Address book

Secrets

  • Node keys (TLS, signing, agreement)
  • Consensus keys
  • All Opaque secrets in the namespace

Logs (from each pod)

  • Account balances
  • Record streams
  • Statistics
  • Application logs
  • Network logs

State Files (from each consensus node)

  • Consensus state
  • Merkle tree state
  • Platform state
  • Swirlds state
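To see these resources before and after a backup, you can list them directly:

# Inspect the resources that the backup captures
kubectl get configmaps,secrets -n backup-restore-namespace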

Backup Directory Structure

solo-backup/
└── kind-backup-restore-cluster/
    ├── configmaps/
    │   ├── network-node-data-config-cm.yaml
    │   └── ...
    ├── secrets/
    │   ├── node1-keys.yaml
    │   └── ...
    └── logs/
        ├── network-node1-0.zip  (includes state files)
        └── network-node2-0.zip  (includes state files)
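You can browse the backup on disk; the archive path below matches the tree above, and unzip -l lists its contents without extracting:

find ./solo-backup -maxdepth 3
unzip -l ./solo-backup/kind-backup-restore-cluster/logs/network-node1-0.zip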

Troubleshooting

View Cluster Status

kubectl cluster-info --context kind-backup-restore-cluster
kubectl get pods -n backup-restore-namespace -o wide

View Pod Logs

kubectl logs -n backup-restore-namespace network-node1-0 -c root-container --tail=100

Open Shell in Pod

kubectl exec -it -n backup-restore-namespace network-node1-0 -c root-container -- /bin/bash

Manual Cleanup

# Delete cluster
kind delete cluster -n backup-restore-cluster

# Remove backup files
rm -rf ./solo-backup

# Clean Solo cache
rm -rf ~/.solo/*

Advanced Usage

Run Individual Steps

# Deploy network only
task setup

# Generate test data
task generate-transactions

# Create backup
task backup

# Manually inspect backup
ls -lh ./solo-backup/

# Restore whenever ready (the network must be deployed and running first)
task restore

# Verify
task verify

Use Released Version of Solo

By default, the Taskfile uses the development version (npm run solo-test --). To use the released version:

USE_RELEASED_VERSION=true task
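This assumes a released solo binary is on your PATH. If it is not, the CLI is published on npm; the package name below is the one the Solo project has used, but check the current docs:

npm install -g @hashgraph/solo
solo --version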

Key Commands Used

This example demonstrates the following Solo commands:

  • solo config ops backup - Backs up ConfigMaps, Secrets, Logs, and State files
  • solo config ops restore - Restores ConfigMaps, Secrets, Logs, and State files
  • solo consensus network freeze - Freezes the network before backup
  • solo consensus node stop/start - Controls node lifecycle during restore

Important Notes

  • The network must be frozen before backup to ensure a consistent state
  • The mirror node must be destroyed before freezing the network
  • The backup process can take several minutes depending on state size
  • Restore requires the consensus nodes to be stopped to prevent conflicts
  • Backup files can be large; ensure sufficient disk space (1GB+ per node)
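A quick way to check backup size against the available disk space:

du -sh ./solo-backup   # total size of the backup
df -h .                # free space on the current volume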

Support

For issues or questions, refer to the Solo project documentation and issue tracker.