State Save and Restore Example
This example demonstrates how to save network state from a running Solo network, recreate a new network, and load the saved state with a mirror node using an external PostgreSQL database.
What it does
- Creates an initial Solo network with consensus nodes and mirror node
- Uses an external PostgreSQL database for the mirror node
- Runs transactions to generate state
- Downloads and saves the network state and database dump
- Destroys the initial network
- Creates a new network with the same configuration
- Restores the saved state and database to the new network
Prerequisites
- Kind - Kubernetes in Docker
- kubectl - Kubernetes CLI
- Node.js - JavaScript runtime
- Task - Task runner
- Helm - Kubernetes package manager (for external database option)
Quick Start
Run Complete Workflow (One Command)
task # Run entire workflow: setup → save-state → restore
task destroy # Cleanup when done
Step-by-Step Workflow
task setup # 1. Deploy network with external database (5-10 min)
task save-state # 2. Save state and database (2-5 min)
task restore # 3. Recreate and restore (3-5 min)
task destroy # 4. Cleanup
Usage
1. Deploy Initial Network
task setup
This will:
- Create a Kind cluster
- Deploy PostgreSQL database
- Initialize Solo
- Deploy consensus network (2 nodes by default)
- Deploy mirror node connected to external database
- Run sample transactions to generate state
2. Save Network State and Database
task save-state
This will:
- Download state from all consensus nodes
- Export PostgreSQL database dump
- Save both to the ./saved-states/ directory
- Display saved state information
3. Restore Network and Database
task restore
This will:
- Stop and destroy existing network
- Recreate PostgreSQL database
- Import database dump
- Create new consensus network with same configuration
- Upload saved state to new nodes
- Start nodes with restored state
- Reconnect mirror node to database
- Verify the restored state
4. Cleanup
task destroy
This will delete the Kind cluster and clean up all resources.
Available Tasks
- default (or just task) - Run complete workflow: setup → save-state → restore
- setup - Deploy initial network with external PostgreSQL database
- save-state - Download consensus node state and export database
- restore - Recreate network and restore state with database
- verify-state - Verify restored state matches original
- destroy - Delete cluster and clean up all resources
- clean-state - Remove saved state files
Customization
You can adjust settings by editing the vars: section in Taskfile.yml:
- NETWORK_SIZE - Number of consensus nodes (default: 2)
- NODE_ALIASES - Node identifiers (default: node1,node2)
- STATE_SAVE_DIR - Directory to save state files (default: ./saved-states)
- POSTGRES_PASSWORD - PostgreSQL password for external database
State Files
Saved state files are stored in ./saved-states/ with the following structure:
saved-states/
├── network-node1-0-state.zip # Used for all nodes during restore
├── network-node2-0-state.zip # Downloaded but not used during restore
└── database-dump.sql # PostgreSQL database export
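Before running a restore it can be worth confirming the snapshot is complete. A minimal sketch, assuming the default node aliases (node1,node2) and the file layout shown above; the demo runs against a scratch directory, not the real ./saved-states/:

```shell
# Sketch: check that a snapshot directory holds one state archive per node
# alias plus the database dump before attempting a restore.
check_snapshot() {
  dir="$1"; aliases="$2"; missing=0
  for a in $aliases; do
    [ -f "$dir/network-${a}-0-state.zip" ] || { echo "missing: network-${a}-0-state.zip"; missing=1; }
  done
  [ -f "$dir/database-dump.sql" ] || { echo "missing: database-dump.sql"; missing=1; }
  [ "$missing" -eq 0 ] && echo "snapshot complete"
  return 0
}

# Demo against a scratch directory shaped like ./saved-states/
demo=$(mktemp -d)
touch "$demo/network-node1-0-state.zip" "$demo/network-node2-0-state.zip" "$demo/database-dump.sql"
check_snapshot "$demo" "node1 node2"   # prints: snapshot complete
```

To check a real snapshot, pass ./saved-states and your NODE_ALIASES instead of the scratch paths.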
Notes:
- State files are named using the pod naming convention: network-<node-alias>-0-state.zip
- During save: all node state files are downloaded
- During restore: only the first node's state file is used for all nodes (node IDs are automatically renamed)
The example also includes:
scripts/
└── init.sh # Database initialization script
The init.sh script sets up the PostgreSQL database with:
- mirror_node database
- Required schemas (public, temporary)
- Roles and users (postgres, readonlyuser)
- PostgreSQL extensions (btree_gist, pg_stat_statements, pg_trgm)
- Proper permissions and grants
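The bullets above can be pictured as SQL. This is an illustrative sketch only, not the real contents of init.sh: the object names (mirror_node, readonlyuser, the three extensions) come from the list, but the exact statements are an assumption. The SQL is written to a file rather than executed, since no PostgreSQL server is assumed:

```shell
# Sketch: the kind of SQL a database init script like init.sh might issue.
sql=$(mktemp)
cat > "$sql" <<'EOF'
CREATE DATABASE mirror_node;
\connect mirror_node
CREATE SCHEMA IF NOT EXISTS public;
CREATE SCHEMA IF NOT EXISTS temporary;
CREATE ROLE readonlyuser WITH LOGIN;
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
GRANT USAGE ON SCHEMA public TO readonlyuser;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonlyuser;
EOF
grep -c '^CREATE EXTENSION' "$sql"   # prints: 3
```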
How It Works
State Saving Process
- Download State: uses solo consensus state download to download a signed state from each consensus node to ~/.solo/logs/<namespace>/
- Copy State Files: copies the state files from ~/.solo/logs/<namespace>/ to the ./saved-states/ directory
- Export Database: uses pg_dump with the --clean --if-exists flags to export the complete database, including schema and data
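The three steps can be sketched as a script. The solo and pg_dump invocations need a live cluster and database, so they appear only as comments; the copy step runs against scratch directories standing in for the real paths. The kubectl-exec form of the pg_dump call is an assumption, not the Taskfile's exact command:

```shell
# Sketch of the save steps with scratch stand-in directories.
LOGS_DIR=$(mktemp -d)    # stands in for ~/.solo/logs/<namespace>/
STATE_DIR=$(mktemp -d)   # stands in for ./saved-states/

# 1. Download a signed state from each node (requires a running network):
#    npm run solo --silent -- consensus state download \
#      --deployment state-restore-deployment --node-aliases node1,node2

# Pretend the download produced the usual archives:
touch "$LOGS_DIR/network-node1-0-state.zip" "$LOGS_DIR/network-node2-0-state.zip"

# 2. Copy the archives into the snapshot directory:
cp "$LOGS_DIR"/network-*-state.zip "$STATE_DIR"/

# 3. Export the database (requires the PostgreSQL pod):
#    kubectl exec state-restore-postgresql-0 -n database -- \
#      pg_dump -U postgres --clean --if-exists mirror_node \
#      > "$STATE_DIR/database-dump.sql"

ls "$STATE_DIR"
```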
State Restoration Process
- Database Recreation: deploys a fresh PostgreSQL instance and runs init.sh to create the database structure (database, schemas, roles, users, extensions)
- Database Restore: imports the database dump, which drops and recreates the tables with all their data
- Network Recreation: creates a new network with an identical configuration
- State Upload: uploads the first node's state file to all nodes using solo consensus node start --state-file
  - State files are extracted to data/saved/
  - Cleanup: only the latest (highest-numbered) round is kept; older rounds are automatically deleted to save disk space
  - Node ID Renaming: directory paths containing node IDs are automatically renamed to match each target node
- Mirror Node: deploys the mirror node connected to the restored database and seeds initial data
- Verification: checks that the restored state matches the original
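The per-node processing in the State Upload step can be sketched in a few lines of shell. The directory layout used here (a node-ID directory containing numbered round directories) is an illustrative assumption; the real archive layout may differ:

```shell
# Sketch: drop all but the highest round, then rename the node-ID
# directory so the first node's state can serve a different target node.
saved=$(mktemp -d)                                    # stands in for data/saved/
mkdir -p "$saved/0/97" "$saved/0/118" "$saved/0/204"  # node ID 0, three rounds

# Cleanup: keep only the latest (highest-numbered) round
latest=$(ls "$saved/0" | sort -n | tail -1)
for round in "$saved/0"/*; do
  [ "$(basename "$round")" = "$latest" ] || rm -r "$round"
done

# Node ID renaming: retarget the state at node ID 1 (e.g. for node2)
mv "$saved/0" "$saved/1"

echo "$latest"    # prints: 204
ls "$saved/1"     # prints: 204
```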
Notes
- State files can be large (several GB per node) depending on network activity
- Ensure sufficient disk space in the ./saved-states/ directory
- External PostgreSQL database provides data persistence and queryability
- State restoration maintains transaction history and account balances
- Mirror node will resume from the restored state point
- Simplified State Restore: uses the first node's state file for all nodes, with automatic processing:
  - Old rounds are cleaned up first; only the latest round number is kept to optimize disk usage
  - Node ID directories are then automatically renamed to match each target node
- Database dump includes all mirror node data (transactions, accounts, etc.)
View Logs
# Consensus node logs
kubectl logs -n state-restore-namespace network-node1-0 -f
# Mirror node logs
kubectl logs -n state-restore-namespace mirror-node-<pod-name> -f
# Database logs
kubectl logs -n database state-restore-postgresql-0 -f
Manual State Operations
# Download state manually
npm run solo --silent -- consensus state download --deployment state-restore-deployment --node-aliases node1
# Check downloaded state files (in Solo logs directory)
ls -lh ~/.solo/logs/state-restore-namespace/
# Check saved state files (in saved-states directory)
ls -lh ./saved-states/
Expected Timeline
- Initial setup: 5-10 minutes
- State download: 2-5 minutes (depends on state size)
- Network restoration: 3-5 minutes
- Total workflow: ~15-20 minutes
File Sizes
Typical state file sizes:
- Small network (few transactions): 100-500 MB per node
- Medium activity: 1-3 GB per node
- Heavy activity: 5-10+ GB per node
Ensure you have sufficient disk space in the ./saved-states/ directory.
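A simple pre-flight check can catch a full disk before a multi-gigabyte save starts. The 10 GiB threshold below is an arbitrary example based on the sizes above, not a Solo requirement:

```shell
# Sketch: check free disk space in the current directory before saving.
NEEDED_KB=$((10 * 1024 * 1024))               # 10 GiB expressed in KiB
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge "$NEEDED_KB" ]; then
  echo "ok: ${avail_kb} KiB available"
else
  echo "warning: only ${avail_kb} KiB available"
fi
```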
Advanced Usage
Save State at Specific Time
Run task save-state at any point after running transactions. The state captures the network at that moment.
Restore to Different Cluster
- Save state on cluster A
- Copy the ./saved-states/ directory to cluster B
- Run task restore on cluster B
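The copy step might look like the sketch below. The rsync target host and path are placeholders; a local copy stands in for the transfer so the steps can be exercised without a second machine:

```shell
# Sketch: handing a snapshot to the machine that drives cluster B.
SRC=$(mktemp -d)/saved-states    # stands in for ./saved-states/ on cluster A's machine
DEST=$(mktemp -d)/saved-states   # stands in for the copy on cluster B's machine
mkdir -p "$SRC"
touch "$SRC/network-node1-0-state.zip" "$SRC/database-dump.sql"

# A real transfer would look something like:
#   rsync -av ./saved-states/ user@cluster-b-host:/path/to/example/saved-states/
mkdir -p "$DEST"
cp -R "$SRC"/. "$DEST"/

ls "$DEST"
```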
Multiple State Snapshots
# Save multiple snapshots
task save-state
mv saved-states saved-states-backup1
# Later...
task save-state
mv saved-states saved-states-backup2
# Restore specific snapshot
mv saved-states-backup1 saved-states
task restore
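Instead of manual backup1/backup2 renames, snapshots can be stamped with the save time. A sketch that runs entirely in a scratch directory; the names are illustrative:

```shell
# Sketch: timestamped snapshot directories.
work=$(mktemp -d)
cd "$work"
mkdir saved-states
touch saved-states/database-dump.sql

# Archive the current snapshot under a timestamp
stamp=$(date +%Y%m%d-%H%M%S)
mv saved-states "saved-states-$stamp"

# Later: put the newest archive back into place before running task restore
latest=$(ls -d saved-states-* | sort | tail -1)
cp -R "$latest" saved-states
ls saved-states   # prints: database-dump.sql
```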
Troubleshooting
State download fails:
- Ensure nodes are running and healthy
- Check pod logs: kubectl logs -n <namespace> <pod-name>
- Increase the timeout or download node states sequentially
Restore fails:
- Verify state files exist in ./saved-states/
- Check file permissions
- Ensure network configuration matches original
- Check state file integrity
Database connection fails:
- Verify PostgreSQL pod is ready
- Check credentials in Taskfile.yml
- Review PostgreSQL logs
Out of disk space:
- Clean old state files with task clean-state
- Check available disk space before saving state
Debugging Commands
# Check pod status
kubectl get pods -n state-restore-namespace
# Describe problematic pod
kubectl describe pod <pod-name> -n state-restore-namespace
# Get pod logs
kubectl logs <pod-name> -n state-restore-namespace
# Access database shell
kubectl exec -it state-restore-postgresql-0 -n database -- psql -U postgres -d mirror_node
Example Output
$ task setup
✓ Create Kind cluster
✓ Initialize Solo
✓ Deploy consensus network (2 nodes)
✓ Deploy mirror node
✓ Generate sample transactions
Network ready at: http://localhost:5551
$ task save-state
✓ Downloading state from node1... (2.3 GB)
✓ Downloading state from node2... (2.3 GB)
✓ Saving metadata
State saved to: ./saved-states/
$ task restore
✓ Stopping existing network
✓ Creating new network
✓ Uploading state to node1...
✓ Uploading state to node2...
✓ Starting nodes with restored state
✓ Verifying restoration
State restored successfully!
This example is self-contained and does not require files from outside this directory.