Examples
This section describes the examples that ship with Solo and shows how Solo can be used and leveraged in common scenarios.
Table of Contents
| Example Directory | Description |
|---|---|
| address-book | Example of using Yahcli to pull the ledger and mirror node address book |
| multicluster-backup-restore | Multi-cluster backup/restore workflow with external PostgreSQL database and distributed consensus nodes |
| custom-network-config | Deploy a Solo network with custom configuration settings (log4j2, properties, etc.) |
| external-database-test | Deploy a Solo network with an external PostgreSQL database |
| hardhat-with-solo | Example of using Hardhat to test a smart contract with a local Solo deployment |
| local-build-with-custom-config | Example of how to create and manage a custom Hiero Hashgraph Solo deployment using locally built consensus nodes |
| network-with-block-node | Deploy a Solo network that includes a block node |
| network-with-domain-names | Setup a network using custom domain names for all components |
| node-create-transaction | Manually write a NodeCreateTransaction and use the add-prepare/prepare-upgrade/freeze-upgrade/add-execute commands. |
| node-delete-transaction | Manually write a NodeDeleteTransaction and use the add-prepare/prepare-upgrade/freeze-upgrade/add-execute commands. |
| node-update-transaction | Manually write a NodeUpdateTransaction and use the add-prepare/prepare-upgrade/freeze-upgrade/add-execute commands. |
| one-shot-falcon | Example of how to use the Solo one-shot falcon commands |
| rapid-fire | Example of how to use the Solo rapid-fire commands |
| state-save-and-restore | Save network state, recreate network, and restore state with mirror node (with optional external database) |
| version-upgrade-test | Example of how to upgrade all components of a Hiero network to current versions |
Accessing Examples
From GitHub Repository
All examples are available in the examples directory of the Solo repository. You can browse the source code, documentation, and configuration files directly on GitHub.
Downloading Example Archives
Pre-packaged example archives are available for download from the Solo releases page. Each example is packaged as a standalone zip file that includes all necessary configuration files and documentation.
To download a specific example:
- Visit the Solo releases page
- Navigate to the desired release version
- Download the example archive (e.g., example-backup-restore-workflow.zip)
Example download URL format:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-<example-name>.zip
For example, to download the backup-restore-workflow example from release v0.49.0:
https://github.com/hiero-ledger/solo/releases/download/v0.49.0/example-backup-restore-workflow.zip
After downloading, extract the archive and follow the README instructions inside.
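For example, using curl to fetch and unpack the archive shown above (adjust the release tag and example name as needed):
# download the packaged example from the releases page
curl -LO https://github.com/hiero-ledger/solo/releases/download/v0.49.0/example-backup-restore-workflow.zip
# extract it and follow the bundled README
unzip example-backup-restore-workflow.zip -d backup-restore-workflow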
Prerequisites
- Install Taskfile:
npm install -g @go-task/cli
Running the examples with Taskfile
- cd into the directory under examples that contains the Taskfile.yml, e.g. (from the solo repo root directory): cd examples/network-with-block-node/
- Make sure that your current kubeconfig context is pointing to the cluster that you want to deploy to
- Run task, which will deploy the network and take care of many of the prerequisites (see the example below)
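For instance, a typical run of the network-with-block-node example looks like this:
# from the solo repository root
cd examples/network-with-block-node/
# confirm the kubeconfig context points at the target cluster
kubectl config current-context
# run the default task to deploy the network
task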
NOTES:
- Some of these examples are for running against large clusters with a lot of resources available.
- Edit the values of the variables as needed.
Customizing the examples
- Take a look at the Taskfile.yml in the subdirectory for the deployment you want to run
- Make sure your cluster can handle the number of nodes in SOLO_NETWORK_SIZE; if not, update it and keep it in sync with the number of entries in hedera.nodes[] in init-containers-values.yaml (a quick consistency check is sketched below)
- Review the init-containers-values.yaml file and make sure the values are correct for your deployment, paying special attention to:
  - resources
  - nodeSelector
  - tolerations
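For example, a quick consistency check from inside the chosen example directory (the exact YAML structure may vary between examples):
# network size used by the Taskfile
grep -n SOLO_NETWORK_SIZE Taskfile.yml
# node entries defined under hedera.nodes[]
grep -n -A 30 'nodes:' init-containers-values.yaml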
1 - Address Book Example
Example of how to use Yahcli to read/update ledger and mirror node address book
Yahcli Address Book Example
This is an example of how to use Yahcli to pull the ledger and mirror node address book, and to update the ledger address book. It updates File 101 (the ledger address book file) and File 102 (the ledger node details file).
NOTE: Mirror Node refers to File 102 as its address book.
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-address-book.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Usage
Getting the address book from the ledger requires a port forward on port 50211 to the consensus node with node ID 0.
[!NOTE]
Due to its file size, the Yahcli.jar file is stored with Git LFS (Large File Storage). You will need to install Git LFS prior to cloning this repository so that the Yahcli.jar file is downloaded automatically. For installation instructions, see: https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage
After cloning the repository, navigate to this directory and run the following commands to pull the Yahcli.jar file:
git lfs install
git lfs pull
# try and detect if the port forward is already setup
netstat -na | grep 50211
ps -ef | grep 50211 | grep -v grep
# setup a port forward if you need to
kubectl port-forward -n "${SOLO_NAMESPACE}" pod/network-node1-0 50211:50211
Navigate to the examples/address-book directory in the Solo repository:
cd <solo-root>/examples/address-book
If you don’t already have a running Solo network, start one first (see Running the examples with Taskfile above).
To get the address book from the ledger, run the following command:
task get:ledger:addressbook
It will output the address book in JSON format to:
examples/address-book/localhost/sysfiles/addressBook.json
examples/address-book/localhost/sysfiles/nodeDetails.json
You can update the address book files with your favorite text editor.
Once the files are ready, you can upload them to the ledger by running the following command:
cd <solo-root>/examples/address-book
task update:ledger:addressbook
To get the address book from the mirror node, run the following command:
cd <solo-root>/examples/address-book
task get:mirror:addressbook
NOTE: The Mirror Node may not pick up the changes automatically; it might require running some transactions through it first, for example:
cd <solo-root>
npm run solo -- ledger account create
npm run solo -- ledger account create
npm run solo -- ledger account create
npm run solo -- ledger account create
npm run solo -- ledger account create
npm run solo -- ledger account update -n solo-e2e --account-id 0.0.1004 --hbar-amount 78910
Stop the Solo network when you are done.
2 - Custom Network Config Example
Example of how to create and manage a custom Solo deployment and configure it with custom settings
Custom Network Config Example
This example demonstrates how to create and manage a custom Hiero Hashgraph Solo deployment and configure it with custom settings.
What It Does
- Defines a custom network topology (number of nodes, namespaces, deployments, etc.)
- Provides a Taskfile for automating cluster creation, deployment, key management, and network operations
- Supports local development and testing of Hedera network features
- Can be extended to include mirror nodes, explorers, and relays
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-custom-network-config.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
How to Use
- Install dependencies:
- Customize your network:
- Edit Taskfile.yml to set the desired network size, namespaces, and other parameters.
- Run the default workflow:
- From this directory, run:
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add the cluster to the deployment
- Generate node keys
- Deploy the network with custom configuration files
- Set up and start nodes
- Deploy mirror node, relay, and explorer
- Destroy the network:
- Run:
- This will:
- Stop all nodes
- Destroy mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Files
- Taskfile.yml — All automation tasks and configuration
- init-containers-values.yaml, settings.txt, log4j2.xml, application.properties — Example config files for customizing your deployment
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
3 - Local Build with Custom Config Example
Example of how to create and manage a custom Hiero Hashgraph Solo deployment using locally built consensus nodes with custom configuration settings
Local Build with Custom Config Example
This example demonstrates how to create and manage a custom Hiero Hashgraph Solo deployment using locally built consensus nodes with custom configuration settings.
What It Does
- Uses local consensus node builds from a specified build path for development and testing
- Provides configurable Helm chart versions for Block Node, Mirror Node, Explorer, and Relay components
- Supports custom values files for each component (Block Node, Mirror Node, Explorer, Relay)
- Includes custom application.properties and other configuration files
- Automates the complete deployment workflow with decision tree logic based on consensus node release tags
- Defines a custom network topology (number of nodes, namespaces, deployments, etc.)
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-local-build-with-custom-config.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Configuration Options
Consensus Node Configuration
- Local Build Path: CN_LOCAL_BUILD_PATH - Path to locally built consensus node artifacts
- Release Tag: CN_VERSION - Consensus node version for decision tree logic
- Local Build Flag: Automatically applied to use local builds instead of released versions
Component Version Control
- Block Node: BLOCK_NODE_RELEASE_TAG - Helm chart version (e.g., "--chart-version v0.18.0")
- Mirror Node: MIRROR_NODE_VERSION_FLAG - Version flag (e.g., "--mirror-node-version v0.136.1")
- Relay: RELAY_RELEASE_FLAG - Release flag (e.g., "--relay-release 0.70.1")
- Explorer: EXPLORER_VERSION_FLAG - Version flag (e.g., "--explorer-version 25.0.0")
Custom Values Files
Each component can use custom Helm values files:
- Block Node: block-node-values.yaml
- Mirror Node: mirror-node-values.yaml
- Relay: relay-node-values.yaml
- Explorer: hiero-explorer-node-values.yaml
How to Use
Install dependencies:
Prepare local consensus node build:
- Build the consensus node locally or ensure the build path (CN_LOCAL_BUILD_PATH) points to valid artifacts (an example build is sketched below)
- Default path: ../hiero-consensus-node/hedera-node/data
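For example, a local build at the default location might be produced like this (assuming the consensus node repository is cloned at the location referenced by CN_LOCAL_BUILD_PATH):
# clone the consensus node repository (skip if already present)
git clone https://github.com/hiero-ledger/hiero-consensus-node.git
cd hiero-consensus-node
# build; the artifacts Solo needs end up under hedera-node/data
./gradlew assemble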
Customize your configuration:
- Edit Taskfile.yml to adjust network size, component versions, and paths
- Modify values files (*-values.yaml) for component-specific customizations
- Update application.properties for consensus node configuration
Run the default workflow:
- From this directory, run:
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo and configure cluster reference
- Add block node with specified release tag
- Generate consensus node keys
- Deploy the network with local build and custom configuration
- Set up and start consensus nodes using local builds
- Deploy mirror node, relay, and explorer with custom versions and values
Destroy the network:
- Run:
- This will clean up all deployed components and delete the Kind cluster
Files
- Taskfile.yml — All automation tasks and configuration
- init-containers-values.yaml, settings.txt, log4j2.xml, application.properties — Example config files for customizing your deployment
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
4 - Multi-Cluster Backup and Restore Example
Example demonstrating multi-cluster backup and restore workflow with external PostgreSQL database
Multi-Cluster Backup and Restore Example
This example demonstrates a complete multi-cluster backup and restore workflow for a Hiero network using Solo’s config ops commands. It showcases an advanced deployment pattern with:
- Dual-cluster deployment - Consensus nodes distributed across two Kubernetes clusters
- External PostgreSQL database - Mirror node using external database for production-like setup
- Complete component stack - Consensus, block, mirror, relay, and explorer nodes
- Full backup/restore cycle - ConfigMaps, Secrets, Logs, State files, and database dumps
- Disaster recovery - Complete cluster recreation and restoration from backup
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/multicluster-backup-restore.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Architecture
Cluster Distribution
- Cluster 1 (solo-e2e-c1): node1 (first consensus node)
- Cluster 2 (solo-e2e-c2): node2 (second consensus node), block node, mirror node, explorer, relay, PostgreSQL database
This demonstrates a realistic multi-cluster deployment where components are distributed across different Kubernetes clusters for high availability and fault tolerance.
Self-Contained Example: All configuration files and scripts are included in this directory - no external dependencies required.
Prerequisites
- Task installed (brew install go-task/tap/go-task on macOS)
- Kind installed (brew install kind on macOS)
- kubectl installed
- Helm installed (brew install helm on macOS)
- Node.js 22+ and npm installed
- Docker Desktop running with sufficient resources (Memory: 12GB+ recommended for dual clusters)
Quick Start
Run Complete Workflow
Execute the entire backup/restore workflow with a single command:
task
This will:
- ✅ Create two Kind clusters with Docker networking
- ✅ Deploy consensus nodes across both clusters (node1 on cluster 1, node2 on cluster 2)
- ✅ Deploy PostgreSQL database on cluster 2
- ✅ Deploy block node, mirror node (with external DB), explorer, and relay on cluster 2
- ✅ Generate test transactions to create network state
- ✅ Backup the entire network (ConfigMaps, Secrets, Logs, State, Database)
- ✅ Destroy both clusters completely
- ✅ Recreate clusters and restore all components from backup
- ✅ Verify the network is operational with new transactions
Clean Up
Remove the cluster and all backup files:
task destroy
Available Tasks
Main Workflow Tasks
| Task | Description |
|---|---|
| task (default) | Run complete multi-cluster backup/restore workflow |
| task initial-deploy | Create dual clusters and deploy complete network |
| task generate-transactions | Create test transactions |
| task backup | Freeze network and create backup (including database) |
| task restore-clusters | Recreate Kind clusters from backup |
| task restore-network | Restore network components from backup |
| task restore-config | Restore ConfigMaps, Secrets, Logs, State, and database |
| task verify | Verify restored network functionality |
| task destroy | Remove clusters and backup files |
Component Tasks
| Task | Description |
|---|---|
| task deploy-external-database | Deploy PostgreSQL database with Helm |
| task deploy-mirror-external | Seed database for mirror node |
| task destroy-cluster | Delete all Kind clusters |
Step-by-Step Workflow
1. Deploy Initial Multi-Cluster Network
task initial-deploy
This creates and configures:
Infrastructure:
- 2 Kind clusters (solo-e2e-c1, solo-e2e-c2)
- Docker network for inter-cluster communication
- MetalLB load balancer on both clusters
- PostgreSQL database on cluster 2
Network Components:
- node1 on cluster 1
- node2 on cluster 2
- 1 block node on cluster 2
- 1 mirror node (with external PostgreSQL) on cluster 2
- 1 explorer node on cluster 2
- Relay nodes on cluster 2
2. Generate Network State
task generate-transactions
Creates 3 test accounts with 100 HBAR each to generate network state.
3. Create Backup
task backup
This will:
- Freeze the network
- Back up ConfigMaps, Secrets, Logs, and State files using solo config ops backup
- Export the PostgreSQL database to an SQL dump (see the sketch below)
Backup Location:
- All backup files: ./solo-backup/
- Database dump: ./solo-backup/database-dump.sql
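For reference, the database export performed by task backup amounts to roughly the following (pod, namespace, and credentials are the ones used elsewhere in this example; the Taskfile's exact invocation may differ):
# dump the mirror node database from the PostgreSQL pod on cluster 2
kubectl exec -n database my-postgresql-0 --context kind-solo-e2e-c2 -- \
  env PGPASSWORD=XXXXXXXX pg_dump --clean --if-exists -U postgres mirror_node \
  > ./solo-backup/database-dump.sql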
4. Destroy Clusters
task destroy-cluster
This deletes both Kind clusters completely, simulating a complete disaster recovery scenario.
5. Restore Clusters
task restore-clusters
This will:
- Clean Solo cache and temporary files
- Recreate both Kind clusters from backup metadata
- Setup Docker networking and MetalLB
- Initialize cluster configurations
6. Restore Network
task restore-network
This will:
- Deploy PostgreSQL database on cluster 2
- Initialize cluster configurations
- Deploy all network components (consensus, block, mirror, explorer, relay)
7. Restore Configuration and State
task restore-config
This will:
- Freeze the network
- Restore ConfigMaps, Secrets, Logs, and State files using solo config ops restore-config
- Restore the PostgreSQL database from the SQL dump
- Start consensus nodes
8. Verify Restored Network
task verify
Verifies:
- All pods are running across both clusters
- Previously created accounts exist (e.g., account 3.2.3)
- Network can process new transactions
- Database has been restored correctly
Configuration
Edit variables in Taskfile.yml to customize:
vars:
NETWORK_SIZE: "2" # Number of consensus nodes
NODE_ALIASES: "node1,node2" # Node identifiers
DEPLOYMENT: "external-database-test-deployment"
NAMESPACE: "external-database-test"
BACKUP_DIR: "./solo-backup" # All backup files location
# PostgreSQL Configuration
POSTGRES_USERNAME: "postgres"
POSTGRES_PASSWORD: "XXXXXXXX"
POSTGRES_READONLY_USERNAME: "readonlyuser"
POSTGRES_READONLY_PASSWORD: "XXXXXXXX"
POSTGRES_NAME: "my-postgresql"
POSTGRES_DATABASE_NAMESPACE: "database"
POSTGRES_HOST_FQDN: "my-postgresql.database.svc.cluster.local"
Cluster Configuration
The Kind cluster configurations in kind-cluster-1.yaml and kind-cluster-2.yaml can be customized:
- Node count - Add more worker nodes per cluster
- Port mappings - Expose additional ports for services
- Resource limits - Adjust CPU and memory constraints
- Volume mounts - Add persistent storage options
The MetalLB configurations in metallb-cluster-1.yaml and metallb-cluster-2.yaml define:
- IP address ranges for load balancer services
- Load balancer type (Layer 2 mode)
- Address allocation per cluster
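One way to confirm that MetalLB is handing out addresses from the configured ranges (an illustrative check, not part of the Taskfile):
# LoadBalancer services and their external IPs on each cluster
kubectl get svc -A --context kind-solo-e2e-c1 | grep LoadBalancer
kubectl get svc -A --context kind-solo-e2e-c2 | grep LoadBalancer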
What Gets Backed Up?
The backup process captures:
ConfigMaps (via solo config ops backup)
- Network configuration (network-node-data-config-cm)
- Bootstrap properties
- Application properties
- Genesis network configuration
- Address book
Secrets (via solo config ops backup)
- Node keys (TLS, signing, agreement)
- Consensus keys
- All Opaque secrets in the namespace
Logs (from each pod via solo config ops backup)
- Account balances
- Record streams
- Statistics
- Application logs
- Network logs
State Files (from each consensus node via solo config ops backup)
- Consensus state
- Merkle tree state
- Platform state
- Swirlds state
PostgreSQL Database
- Complete database dump (via pg_dump)
- Mirror node schema and data
- Account balances and transaction history
Backup Directory Structure
solo-backup/
├── solo-e2e-c1/ # Cluster 1 backup
│ ├── configmaps/
│ │ ├── network-node-data-config-cm.yaml
│ │ └── ...
│ ├── secrets/
│ │ ├── node1-keys.yaml
│ │ └── ...
│ ├── logs/
│ │ └── network-node1-0.zip (includes state files)
│ └── solo-remote-config.yaml
├── solo-e2e-c2/ # Cluster 2 backup
│ ├── configmaps/
│ ├── secrets/
│ ├── logs/
│ │ └── network-node2-0.zip (includes state files)
│ └── solo-remote-config.yaml
└── database-dump.sql # PostgreSQL database dump
Troubleshooting
View Cluster Status
# Cluster 1
kubectl cluster-info --context kind-solo-e2e-c1
kubectl get pods -n external-database-test -o wide --context kind-solo-e2e-c1
# Cluster 2
kubectl cluster-info --context kind-solo-e2e-c2
kubectl get pods -n external-database-test -o wide --context kind-solo-e2e-c2
kubectl get pods -n database -o wide --context kind-solo-e2e-c2
View Pod Logs
# Node 1 (Cluster 1)
kubectl logs -n external-database-test network-node1-0 -c root-container --tail=100 --context kind-solo-e2e-c1
# Node 2 (Cluster 2)
kubectl logs -n external-database-test network-node2-0 -c root-container --tail=100 --context kind-solo-e2e-c2
# PostgreSQL
kubectl logs -n database my-postgresql-0 --tail=100 --context kind-solo-e2e-c2
Open Shell in Pod
# Consensus node
kubectl exec -it -n external-database-test network-node1-0 -c root-container --context kind-solo-e2e-c1 -- /bin/bash
# PostgreSQL
kubectl exec -it -n database my-postgresql-0 --context kind-solo-e2e-c2 -- /bin/bash
Check Database
# Connect to PostgreSQL
kubectl exec -it -n database my-postgresql-0 --context kind-solo-e2e-c2 -- \
env PGPASSWORD=XXXXXXXX psql -U postgres -d mirror_node
# List tables
\dt
# Check account balances
SELECT * FROM account_balance LIMIT 10;
Manual Cleanup
# Delete clusters
kind delete cluster -n solo-e2e-c1
kind delete cluster -n solo-e2e-c2
# Remove Docker network
docker network rm kind
# Remove backup files
rm -rf ./solo-backup
# Clean Solo cache
rm -rf ~/.solo/*
rm -rf test/data/tmp/*
Advanced Usage
Run Individual Steps
# Deploy dual-cluster network
task initial-deploy
# Generate test data
task generate-transactions
# Create backup (includes database)
task backup
# Manually inspect backup
ls -lh ./solo-backup/
ls -lh ./solo-backup/solo-e2e-c1/
ls -lh ./solo-backup/solo-e2e-c2/
# Destroy clusters
task destroy-cluster
# Restore clusters only
task restore-clusters
# Restore network components
task restore-network
# Restore configuration and state
task restore-config
# Verify
task verify
Use Released Version of Solo
By default, the Taskfile uses the development version (npm run solo-test --). To use the released version:
USE_RELEASED_VERSION=true task
Customize Component Options
Edit command.yaml to customize mirror node deployment options:
mirror:
- --deployment
- external-database-test-deployment
- --cluster-ref
- solo-e2e-c2
- --enable-ingress
- --pinger
- --dev
- --quiet-mode
- --use-external-database
- --external-database-host
- my-postgresql.database.svc.cluster.local
# Add more options as needed
Modify Cluster Configuration
Since all configuration files are local, you can easily customize the clusters:
# Edit cluster configurations
vim kind-cluster-1.yaml # Modify cluster 1 setup
vim kind-cluster-2.yaml # Modify cluster 2 setup
# Edit MetalLB configurations
vim metallb-cluster-1.yaml # Adjust IP ranges for cluster 1
vim metallb-cluster-2.yaml # Adjust IP ranges for cluster 2
# Then run the deployment
task initial-deploy
You can specify custom MetalLB configuration files during restore operations:
# Use custom metallb configuration files
$SOLO_COMMAND config ops restore-clusters \
--input-dir ./solo-backup \
--metallb-config custom-metallb-{index}.yaml
# The {index} placeholder gets replaced with the cluster number (1, 2, etc.)
# Result: custom-metallb-1.yaml, custom-metallb-2.yaml, etc.
The metallb configuration files use the {index} placeholder to support multiple clusters:
- metallb-cluster-{index}.yaml → metallb-cluster-1.yaml, metallb-cluster-2.yaml
- Custom patterns like custom/loadbalancer-{index}.yaml also work
Key Commands Used
This example demonstrates the following Solo commands:
Backup/Restore Commands
- solo config ops backup - Backs up ConfigMaps, Secrets, Logs, and State files
- solo config ops restore-clusters - Recreates clusters from backup metadata (supports --metallb-config flag)
- solo config ops restore-network - Restores network components from backup
- solo config ops restore-config - Restores ConfigMaps, Secrets, Logs, and State files
- solo consensus network freeze - Freezes the network before backup
Multi-Cluster Commands
- solo cluster-ref config setup - Setup cluster reference configuration
- solo cluster-ref config connect - Connect cluster reference to kubectl context
- solo deployment config create - Create deployment with realm and shard
- solo deployment cluster attach - Attach cluster to deployment with node count
Component Deployment Commands
- solo consensus network deploy - Deploy consensus network with load balancer
- solo consensus node setup/start - Setup and start consensus nodes
- solo block node add - Add block node to specific cluster
- solo mirror node add - Add mirror node with external database
- solo explorer node add - Add explorer node with TLS
- solo relay node add - Add relay node
Important Notes
- Multi-cluster networking - Docker network enables communication between Kind clusters
- External database - PostgreSQL must be backed up and restored separately
- Network must be frozen before backup to ensure consistent state
- Backup includes database - PostgreSQL dump is part of the backup process
- Restore is multi-step - Clusters → Network → Configuration (in order)
- Backup files can be large - Ensure sufficient disk space (2GB+ for dual clusters)
- Realm and shard - Configured as realm 2, shard 3 for testing non-zero values
Files
- Taskfile.yml - Main automation tasks and configuration
- command.yaml - Component deployment options for restore
- scripts/init.sh - PostgreSQL database initialization script
- kind-cluster-1.yaml - Kind cluster 1 configuration
- kind-cluster-2.yaml - Kind cluster 2 configuration
- metallb-cluster-1.yaml - MetalLB configuration for cluster 1
- metallb-cluster-2.yaml - MetalLB configuration for cluster 2
Support
For issues or questions:
5 - Network with an External PostgreSQL Database Example
Example of how to deploy a Solo network with an external PostgreSQL database
External Database Test Example
This example demonstrates how to deploy a Hiero Hashgraph Solo network with an external PostgreSQL database using Kubernetes, Helm, and Taskfile automation.
What It Does
- Creates a Kind Kubernetes cluster for local testing
- Installs the Solo CLI and initializes a Solo network
- Deploys a PostgreSQL database using Helm
- Seeds the database and configures Solo to use it as an external database for the mirror node
- Deploys mirror node, explorer, relay, and runs a smoke test
- All steps are named for clear logging and troubleshooting
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-external-database-test.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Usage
Install dependencies:
Customize your deployment:
- Edit Taskfile.yml to set database credentials, network size, and other parameters as needed.
Start the network:
This will:
- Create the Kind cluster
- Install and initialize Solo
- Deploy and configure PostgreSQL
- Seed the database
- Deploy all Solo components (mirror node, explorer, relay)
- Run a smoke test
Destroy the network:
This will delete the Kind cluster and all resources.
Files
- Taskfile.yml — Automation tasks and configuration
- scripts/init.sh — Script to initialize the database
- Other config files as needed for your deployment
Notes
- All commands in the Taskfile are named for clarity in logs and troubleshooting.
- This example is self-contained and does not require files from outside this directory except for the Solo CLI npm package.
- You can extend the Taskfile to add more custom resources or steps as needed.
6 - Network with Block Node Example
Example of how to deploy a Solo network that includes a block node
Network with Block Node Example
This example demonstrates how to deploy a Hiero Hashgraph Solo network with a block node using Kubernetes and Taskfile.
What it does
- Creates a local Kubernetes cluster using Kind
- Deploys a Solo network with a single consensus node, mirror node, relay, explorer, and block node
- Provides tasks to install (start) and destroy the network
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-network-with-block-node.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Usage
Install dependencies
Deploy the network
This will:
- Install the Solo CLI
- Create a Kind cluster
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add a block node
- Generate node keys
- Deploy the network, node, mirror node, relay, and explorer
Destroy the network
This will:
- Stop the node
- Destroy the mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Tasks
- install: Installs and starts the Solo network with a block node, mirror node, relay, and explorer.
- destroy: Stops and removes all network components and deletes the Kind cluster.
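For example, from this example's directory:
# deploy the network with a block node, mirror node, relay, and explorer
task install
# tear everything down when finished
task destroy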
Customization
You can adjust the number of nodes and other settings by editing the vars: section in the Taskfile.yml.
Advanced: Block Node Routing Configuration
The --block-node-cfg flag allows you to configure how each consensus node sends blocks to specific block nodes.
Usage
The flag accepts either:
JSON string directly:
solo consensus network deploy --block-node-cfg '{"node1":[1,3],"node2":[2]}'
Path to a JSON file:
# Create block-config.json
echo '{"node1":[1,3],"node2":[2]}' > block-config.json
# Use the file
solo consensus network deploy --block-node-cfg block-config.json
The JSON configuration maps consensus node names to arrays of block node IDs:
{
"node1": [1, 3],
"node2": [2]
}
This example means:
- Consensus node node1 sends blocks to block nodes 1 and 3
- Consensus node node2 sends blocks to block node 2
Example: Multi-Node Setup with Custom Routing
# Deploy network with 3 consensus nodes and 3 block nodes
solo consensus network deploy \
--deployment my-network \
--number-of-consensus-nodes 3 \
--block-node-cfg '{"node1":[1],"node2":[2],"node3":[3]}'
# This creates isolated routing: each consensus node talks to one block node
This example is self-contained and does not require any files from outside this directory.
7 - Network With Domain Names Example
Example of how to deploy a Solo network with custom domain names
Network with Domain Names Example
This example demonstrates how to deploy a Hiero Hashgraph Solo network with custom domain names for nodes, mirror node, relay, and explorer using Kubernetes and Taskfile.
What it does
- Creates a local Kubernetes cluster using Kind
- Deploys a Solo network with a single consensus node, mirror node, relay, explorer, and custom domain names for all services
- Provides tasks to install (start) and destroy the network
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-network-with-domain-names.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Usage
Install dependencies
Deploy the network
This will:
- Install the Solo CLI
- Create a Kind cluster
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Generate node keys
- Deploy the network, node, mirror node, relay, and explorer with custom domain names
- Set up port forwarding for key services
- Run a sample SDK connection script
Destroy the network
This will:
- Stop the node
- Destroy the mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Tasks
- install: Installs and starts the Solo network with custom domain names for all components, sets up port forwarding, and runs a sample SDK connection.
- destroy: Stops and removes all network components and deletes the Kind cluster.
Customization
You can adjust the domain names and other settings by editing the vars: section in the Taskfile.yaml.
8 - Node Create Transaction Example
Using Solo with a custom NodeCreateTransaction from an SDK call
Node Create Transaction Example
This example demonstrates how to use the node add-prepare/prepare-upgrade/freeze-upgrade/add-execute commands against a network in order to manually write a NodeCreateTransaction.
What It Does
- Stands up a network with two existing nodes
- Runs solo node add-prepare to get the artifacts needed for the SDK NodeCreateTransaction
- Runs a JavaScript program using the Hiero SDK JS code to run a NodeCreateTransaction
- Runs solo consensus dev-freeze prepare-upgrade and solo consensus dev-freeze freeze-upgrade to put the network into a freeze state
- Runs solo consensus dev-node-add execute to add network resources for a third consensus node, configure it, and then restart the network to come out of the freeze and leverage the new node
- Contains the destroy commands to bring down the network if desired
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-node-create-transaction.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
How to Use
- Install dependencies:
- Make sure you have Task, Node.js, npm, kubectl, and kind installed.
- Run npm install while in this directory so that the solo-node-create-transaction.js script will work correctly when run
- Choose your Solo command:
- Edit Taskfile.yml and comment out/uncomment the SOLO_COMMAND variable depending on whether you want to run Solo from a checkout of the repository or from an npm install:
  - SOLO_COMMAND: "npm run solo --": use this if running from the Solo source repository
  - SOLO_COMMAND: "solo": use this if running an installed version of Solo
- Provide your custom application.properties if desired
- CN_VERSION:
  - This value is only used for certain decision logic. It is best to keep it as close as possible to the local build of the consensus node you are using: CN_VERSION: "v0.66.0"
  - The script is configured to leverage a local build of the Consensus Node, for example the main branch. You will need to clone the Hiero Consensus Node repository yourself and then run ./gradlew assemble from its root directory; this assumes you have all of its prerequisites configured, see: https://github.com/hiero-ledger/hiero-consensus-node/blob/main/docs/README.md
- Updating Directory Locations
- The script was designed to run from this directory, so if you copy down the example without the repository or change other locations you might need to make changes
- The dir: ../.. setting runs the script two directories above this one; CN_LOCAL_BUILD_PATH can be updated to be relative to that, or changed to the full path to the consensus node directory
- CN_LOCAL_BUILD_PATH actually points to <hiero-consensus-node>/hedera-node/data, because that is the location of the artifacts that Solo needs to upload to the network node
- Run the default workflow:
- From this directory, run:
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add the cluster to the deployment
- Generate node keys
- Deploy the network with custom configuration files
- Set up and start nodes
- Deploy mirror node, relay, and explorer
- Perform the consensus node add as described in the ‘What It Does’ section above
- Destroy the network:
- Run:
- This will:
- Stop all nodes
- Destroy mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Files
- Taskfile.yml — All automation tasks and configuration
- package.json - Contains the libraries needed for solo-node-create-transaction.js to function
- package-lock.json - A snapshot of what was last used when npm install was run; run npm ci to install these exact versions
- solo-node-create-transaction.js - The script that runs the Hiero SDK JS calls
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
9 - Node Delete Transaction Example
Using Solo with a custom NodeDeleteTransaction from an SDK call
Node Delete Transaction Example
This example demonstrates how to use the node add-prepare/prepare-upgrade/freeze-upgrade/add-execute commands against a network in order to manually write a NodeDeleteTransaction.
What It Does
- Stands up a network with two existing nodes
- Runs solo consensus dev-node-delete prepare to get the artifacts needed for the SDK NodeDeleteTransaction
- Runs a JavaScript program using the Hiero SDK JS code to run a NodeDeleteTransaction
- Runs solo consensus dev-freeze prepare-upgrade and solo consensus dev-freeze freeze-upgrade to put the network into a freeze state
- Runs solo node delete-execute to configure the network to stop using the deleted node, and then restart the network to come out of the freeze and run with the new configuration
- Contains the destroy commands to bring down the network if desired
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-node-delete-transaction.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
How to Use
- Install dependencies:
- Make sure you have Task, Node.js, npm, kubectl, and kind installed.
- Run npm install while in this directory so that the solo-node-delete-transaction.js script will work correctly when run
- Choose your Solo command:
- Edit Taskfile.yml and comment out/uncomment the SOLO_COMMAND variable depending on whether you want to run Solo from a checkout of the repository or from an npm install:
  - SOLO_COMMAND: "npm run solo --": use this if running from the Solo source repository
  - SOLO_COMMAND: "solo": use this if running an installed version of Solo
- Provide your custom application.properties if desired
- CN_VERSION:
  - This value is only used for certain decision logic. It is best to keep it as close as possible to the local build of the consensus node you are using: CN_VERSION: "v0.66.0"
  - The script is configured to leverage a local build of the Consensus Node, for example the main branch. You will need to clone the Hiero Consensus Node repository yourself and then run ./gradlew assemble from its root directory; this assumes you have all of its prerequisites configured, see: https://github.com/hiero-ledger/hiero-consensus-node/blob/main/docs/README.md
- Updating Directory Locations
- The script was designed to run from this directory, so if you copy down the example without the repository or change other locations you might need to make changes
- The dir: ../.. setting runs the script two directories above this one; CN_LOCAL_BUILD_PATH can be updated to be relative to that, or changed to the full path to the consensus node directory
- CN_LOCAL_BUILD_PATH actually points to <hiero-consensus-node>/hedera-node/data, because that is the location of the artifacts that Solo needs to upload to the network node
- Run the default workflow:
- From this directory, run:
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add the cluster to the deployment
- Generate node keys
- Deploy the network with custom configuration files
- Set up and start nodes
- Deploy mirror node, relay, and explorer
- Perform the node delete as described in the ‘What It Does’ section above
- Destroy the network:
- Run:
- This will:
- Stop all nodes
- Destroy mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Files
- Taskfile.yml — All automation tasks and configuration
- package.json - Contains the libraries needed for solo-node-delete-transaction.js to function
- package-lock.json - A snapshot of what was last used when npm install was run; run npm ci to install these exact versions
- solo-node-delete-transaction.js - The script that runs the Hiero SDK JS calls
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
10 - Node Update Transaction Example
Using Solo with a custom NodeUpdateTransaction from an SDK call
Node Update Transaction Example
This example demonstrates how to use the node add-prepare/prepare-upgrade/freeze-upgrade/add-execute commands against a network in order to manually write a NodeUpdateTransaction.
What It Does
- Stands up a network with two existing nodes
- Runs solo consensus dev-node-update prepare to get the artifacts needed for the SDK NodeUpdateTransaction
- Runs a JavaScript program using the Hiero SDK JS code to run a NodeUpdateTransaction
- Runs solo consensus dev-freeze prepare-upgrade and solo consensus dev-freeze freeze-upgrade to put the network into a freeze state
- Runs solo consensus dev-node-update execute to update network resources for the changes to the updated node, and then restart the network to come out of the freeze and leverage the changes
- Contains the destroy commands to bring down the network if desired
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-node-update-transaction.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
How to Use
- Install dependencies:
- Make sure you have Task, Node.js, npm, kubectl, and kind installed.
- Run npm install while in this directory so that the solo-node-update-transaction.js script will work correctly when run
- Choose your Solo command:
- Edit Taskfile.yml and comment out/uncomment the SOLO_COMMAND variable depending on whether you want to run Solo from a checkout of the repository or from an npm install:
  - SOLO_COMMAND: "npm run solo --": use this if running from the Solo source repository
  - SOLO_COMMAND: "solo": use this if running an installed version of Solo
- Provide your custom application.properties if desired
- CN_VERSION:
  - This value is only used for certain decision logic. It is best to keep it as close as possible to the local build of the consensus node you are using: CN_VERSION: "v0.66.0"
  - The script is configured to leverage a local build of the Consensus Node, for example the main branch. You will need to clone the Hiero Consensus Node repository yourself and then run ./gradlew assemble from its root directory; this assumes you have all of its prerequisites configured, see: https://github.com/hiero-ledger/hiero-consensus-node/blob/main/docs/README.md
- Updating Directory Locations
- The script was designed to run from this directory, so if you copy down the example without the repository or change other locations you might need to make changes
- The dir: ../.. setting runs the script two directories above this one; CN_LOCAL_BUILD_PATH can be updated to be relative to that, or changed to the full path to the consensus node directory
- CN_LOCAL_BUILD_PATH actually points to <hiero-consensus-node>/hedera-node/data, because that is the location of the artifacts that Solo needs to upload to the network node
- Run the default workflow:
- From this directory, run:
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add the cluster to the deployment
- Generate node keys
- Deploy the network with custom configuration files
- Set up and start nodes
- Deploy mirror node, relay, and explorer
- Perform the consensus node update as described in the ‘What It Does’ section above
- Destroy the network:
- Run:
- This will:
- Stop all nodes
- Destroy mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Files
- Taskfile.yml — All automation tasks and configuration
- package.json - Contains the libraries needed for solo-node-update-transaction.js to function
- package-lock.json - A snapshot of what was last used when npm install was run; run npm ci to install these exact versions
- solo-node-update-transaction.js - The script that runs the Hiero SDK JS calls
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
11 - One-Shot Falcon Deployment Example
Example of how to use the Solo one-shot falcon commands.
One-Shot Falcon Deployment Example
This example demonstrates how to use the Solo one-shot falcon commands to quickly deploy and destroy a complete Hiero Hashgraph network with all components in a single command.
What It Does
- Deploys a complete network stack with consensus nodes, mirror node, explorer, and relay in one command
- Uses a values file to configure all network components with custom settings
- Simplifies deployment by avoiding multiple manual steps
- Provides quick teardown with the destroy command
- Ideal for testing and development workflows
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-one-shot-falcon.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
How to Use
Install dependencies:
Customize your network:
- Edit falcon-values.yaml to configure network settings, node parameters, and component options.
Deploy the network:
- From this directory, run:
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Deploy the complete network using solo one-shot falcon deploy
Destroy the network:
- Run:
- This will:
- Destroy the Solo network using solo one-shot falcon destroy
- Delete the Kind cluster
Files
- Taskfile.yml — Automation tasks for deploy and destroy operations
- falcon-values.yaml — Configuration file with network and component settings
Notes
- The one-shot falcon commands are designed to streamline deployment workflows
- All network components are configured through a single values file
- This is perfect for CI/CD pipelines and automated testing
- For more advanced customization, see the main Solo documentation
Configuration Sections
The falcon-values.yaml file contains the following configuration sections:
- network - Network-wide settings (release tag, application properties, etc.)
- setup - Node setup configuration (keys, admin settings, etc.)
- consensusNode - Consensus node start parameters
- mirrorNode - Mirror node deployment settings
- explorerNode - Explorer deployment settings
- relayNode - Relay deployment settings
- blockNode - Block node deployment settings (optional)
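Under the hood, the Taskfile drives the two one-shot commands mentioned above; a minimal manual invocation looks roughly like this (flags omitted — the Taskfile supplies falcon-values.yaml and the deployment details):
# deploy the full stack described by the values file
solo one-shot falcon deploy
# tear it down again
solo one-shot falcon destroy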
12 - Rapid-Fire Example
Example of how to use the Solo rapid-fire commands.
Rapid-Fire Example
This example demonstrates how to deploy a minimal Hiero Hashgraph Solo network and run a suite of rapid-fire load tests against it using the Solo CLI.
What It Does
- Automates deployment of a single-node Solo network using Kubernetes Kind
- Runs rapid-fire load tests for:
- Crypto transfers
- Token transfers
- NFT transfers
- Smart contract calls
- HeliSwap operations
- Longevity (endurance) testing
- Cleans up all resources after testing
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-rapid-fire.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Prerequisites
How to Use
- Install dependencies (if not already installed):
- See the prerequisites above.
- Run the default workflow:
- From this directory, run:
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Deploy a single-node Solo network
- Run all rapid-fire load tests
- Destroy the network:
- Run:
- This will:
- Stop all nodes
- Destroy the Solo network
- Delete the Kind cluster
Files
- Taskfile.yml — Automation for deployment, testing, and cleanup
- nlg-values.yaml — Example values file for load tests (if present)
Notes
- This example is self-contained and does not require files from outside this directory.
- You can customize the load test parameters in Taskfile.yml.
- For more advanced usage, see the main Solo documentation.
13 - Solo deployment with Hardhat Example
Example of how to deploy a Solo network and run Hardhat tests against it
Hardhat with Solo Example
This example demonstrates how to deploy a Hiero Hashgraph Solo deployment via the one-shot command, configure a hardhat project to connect to it, and run tests against the local Solo deployment.
What It Does
- Installs the Solo CLI and initializes a Solo deployment
- Installs hardhat and configures it to connect to the local Solo deployment
- Runs sample tests against the Solo deployment
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-hardhat-with-solo.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Usage
Install dependencies:
Customize your deployment:
- Edit Taskfile.yml to set database credentials, network size, and other parameters as needed.
Start the deployment:
This will:
- Create the Kind cluster
- Install and initialize Solo
- Create a Solo deployment via one-shot, install all dependencies (kubectl, helm, kind), create a cluster and install all Solo components (mirror node, explorer, relay)
- Configure hardhat to connect to the local Solo deployment
- Run a smoke test
Destroy the deployment:
This will delete the Solo deployment and all resources.
Files
- Taskfile.yml — Automation tasks and configuration
- hardhat-example/hardhat.config.ts — Configuration file for hardhat to connect to the local Solo deployment
- hardhat-example/contracts/SimpleStorage.sol — Sample Solidity contract to deploy to the Solo deployment
- hardhat-example/test/SimpleStorage.ts — Sample test file to run against the Solo deployment
Hardhat Configuration
When creating a deployment with solo one-shot single deploy, three groups of accounts with predefined private keys are generated. The accounts from the ECDSA Alias Accounts (EVM compatible) group can be used by hardhat.
The account data can be found in the output of the command and in $SOLO_HOME/one-shot-$DEPLOYMENT_NAME/accounts.json.
Examine the contents of the hardhat-example/hardhat.config.ts file to see how to configure the network and accounts.
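To run the sample tests by hand against an already deployed network (the Taskfile automates this; paths assume this example's layout):
# inspect the generated accounts referenced by hardhat.config.ts
cat $SOLO_HOME/one-shot-$DEPLOYMENT_NAME/accounts.json
# install dependencies and run the Hardhat test suite
cd hardhat-example
npm install
npx hardhat test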
Notes
- All commands in the Taskfile are named for clarity in logs and troubleshooting.
- This example is self-contained and does not require files from outside this directory except for the Solo CLI npm package.
- You can extend the Taskfile to add more custom resources or steps as needed.
14 - State Save and Restore Example
Example of how to save network state and restore it later
State Save and Restore Example
This example demonstrates how to save network state from a running Solo network, recreate a new network, and load the saved state with a mirror node using an external PostgreSQL database.
What it does
- Creates an initial Solo network with consensus nodes and mirror node
- Uses an external PostgreSQL database for the mirror node
- Runs transactions to generate state
- Downloads and saves the network state and database dump
- Destroys the initial network
- Creates a new network with the same configuration
- Restores the saved state and database to the new network
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-state-save-and-restore.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Prerequisites
- Kind - Kubernetes in Docker
- kubectl - Kubernetes CLI
- Node.js - JavaScript runtime
- Task - Task runner
- Helm - Kubernetes package manager (for external database option)
Quick Start
Run Complete Workflow (One Command)
task # Run entire workflow: setup → save → restore
task destroy # Cleanup when done
Step-by-Step Workflow
task setup # 1. Deploy network with external database (5-10 min)
task save-state # 2. Save state and database (2-5 min)
task restore # 3. Recreate and restore (3-5 min)
task destroy # 4. Cleanup
Usage
1. Deploy Initial Network
This will:
- Create a Kind cluster
- Deploy PostgreSQL database
- Initialize Solo
- Deploy consensus network with 3 nodes
- Deploy mirror node connected to external database
- Run sample transactions to generate state
2. Save Network State and Database
This will:
- Download state from all consensus nodes
- Export PostgreSQL database dump
- Save both to the ./saved-states/ directory
- Display saved state information
3. Restore Network and Database
This will:
- Stop and destroy existing network
- Recreate PostgreSQL database
- Import database dump
- Create new consensus network with same configuration
- Upload saved state to new nodes
- Start nodes with restored state
- Reconnect mirror node to database
- Verify the restored state
4. Cleanup
This will delete the Kind cluster and clean up all resources.
Available Tasks
- default (or just task) - Run complete workflow: setup → save-state → restore
- setup - Deploy initial network with external PostgreSQL database
- save-state - Download consensus node state and export database
- restore - Recreate network and restore state with database
- verify-state - Verify restored state matches original
- destroy - Delete cluster and clean up all resources
- clean-state - Remove saved state files
Customization
You can adjust settings by editing the vars: section in Taskfile.yml:
- NETWORK_SIZE - Number of consensus nodes (default: 2)
- NODE_ALIASES - Node identifiers (default: node1,node2)
- STATE_SAVE_DIR - Directory to save state files (default: ./saved-states)
- POSTGRES_PASSWORD - PostgreSQL password for external database
State Files
Saved state files are stored in ./saved-states/ with the following structure:
saved-states/
├── network-node1-0-state.zip # Used for all nodes during restore
├── network-node2-0-state.zip # Downloaded but not used during restore
└── database-dump.sql # PostgreSQL database export
Notes:
- State files are named using the pod naming convention: network-<node-alias>-0-state.zip
- During save: All node state files are downloaded
- During restore: Only the first node’s state file is used for all nodes (node IDs are automatically renamed)
The example also includes:
scripts/
└── init.sh # Database initialization script
The init.sh script sets up the PostgreSQL database with:
- mirror_node database
- Required schemas (public, temporary)
- Roles and users (postgres, readonlyuser)
- PostgreSQL extensions (btree_gist, pg_stat_statements, pg_trgm)
- Proper permissions and grants
How It Works
State Saving Process
- Download State: Uses solo consensus state download to download the signed state from each consensus node to ~/.solo/logs/<namespace>/
- Copy State Files: Copies the state files from ~/.solo/logs/<namespace>/ to the ./saved-states/ directory
- Export Database: Uses pg_dump with the --clean --if-exists flags to export the complete database including schema and data
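Done by hand, the same save steps look roughly like this (names are the defaults used elsewhere in this example; the Taskfile may differ in detail):
# download the signed state from node1 (repeat per node alias)
npm run solo --silent -- consensus state download --deployment state-restore-deployment --node-aliases node1
# copy the downloaded state archives into the save directory
mkdir -p ./saved-states
cp ~/.solo/logs/state-restore-namespace/network-node*-0-state.zip ./saved-states/
# export the mirror node database
kubectl exec -n database state-restore-postgresql-0 -- \
  env PGPASSWORD=<password> pg_dump --clean --if-exists -U postgres mirror_node \
  > ./saved-states/database-dump.sql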
State Restoration Process
- Database Recreation: Deploys fresh PostgreSQL and runs init.sh to create the database structure (database, schemas, roles, users, extensions)
- Database Restore: Imports the database dump, which drops and recreates tables with all data
- Network Recreation: Creates a new network with identical configuration
- State Upload: Uploads the first node's state file to all nodes using solo consensus node start --state-file
  - State files are extracted to data/saved/
  - Cleanup: Only the latest/biggest round is kept; older rounds are automatically deleted to save disk space
- Node ID Renaming: Directory paths containing node IDs are automatically renamed to match each target node
- Mirror Node: Deploys mirror node connected to restored database and seeds initial data
- Verification: Checks that restored state matches original
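A hedged sketch of the key restore commands; task restore automates all of this, and the exact database import step may differ:

# Re-import the database dump into the freshly initialized database
kubectl exec -i -n database state-restore-postgresql-0 -- \
  psql -U postgres -d mirror_node < ./saved-states/database-dump.sql
# Start the consensus nodes from the first node's saved state
npm run solo --silent -- consensus node start --deployment state-restore-deployment \
  --state-file ./saved-states/network-node1-0-state.zip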
Notes
- State files can be large (several GB per node) depending on network activity
- Ensure sufficient disk space in the ./saved-states/ directory
- The external PostgreSQL database provides data persistence and queryability
- State restoration maintains transaction history and account balances
- Mirror node will resume from the restored state point
- Simplified State Restore: Uses the first node’s state file for all nodes with automatic processing:
  - Old rounds are cleaned up first; only the latest round number is kept to optimize disk usage
  - Node ID directories are then automatically renamed to match each target node
- Database dump includes all mirror node data (transactions, accounts, etc.)
View Logs
# Consensus node logs
kubectl logs -n state-restore-namespace network-node1-0 -f
# Mirror node logs
kubectl logs -n state-restore-namespace mirror-node-<pod-name> -f
# Database logs
kubectl logs -n database state-restore-postgresql-0 -f
Manual State Operations
# Download state manually
npm run solo --silent -- consensus state download --deployment state-restore-deployment --node-aliases node1
# Check downloaded state files (in Solo logs directory)
ls -lh ~/.solo/logs/state-restore-namespace/
# Check saved state files (in saved-states directory)
ls -lh ./saved-states/
Expected Timeline
- Initial setup: 5-10 minutes
- State download: 2-5 minutes (depends on state size)
- Network restoration: 3-5 minutes
- Total workflow: ~15-20 minutes
File Sizes
Typical state file sizes:
- Small network (few transactions): 100-500 MB per node
- Medium activity: 1-3 GB per node
- Heavy activity: 5-10+ GB per node
Ensure you have sufficient disk space in the ./saved-states/ directory.
Advanced Usage
Save State at Specific Time
Run task save-state at any point after running transactions. The state captures the network at that moment.
Restore to Different Cluster
- Save state on cluster A
- Copy the ./saved-states/ directory to cluster B
- Run task restore on cluster B
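For example, assuming both clusters are reachable as kubectl contexts (the context name and copy destination below are placeholders):

# On cluster A
task save-state
# Copy the saved files to the machine/checkout used for cluster B (placeholder destination)
rsync -av ./saved-states/ user@cluster-b-host:solo/examples/state-save-and-restore/saved-states/
# On cluster B
kubectl config use-context kind-cluster-b   # placeholder context name
task restore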
Multiple State Snapshots
# Save multiple snapshots
task save-state
mv saved-states saved-states-backup1
# Later...
task save-state
mv saved-states saved-states-backup2
# Restore specific snapshot
mv saved-states-backup1 saved-states
task restore
Troubleshooting
State download fails:
- Ensure nodes are running and healthy
- Check pod logs: kubectl logs -n <namespace> <pod-name>
- Increase the timeout or download node states sequentially (see the sketch below)
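A sketch of downloading node states one at a time, using the deployment name from this example:

# Download each node's state individually instead of all at once
for node in node1 node2; do
  npm run solo --silent -- consensus state download \
    --deployment state-restore-deployment --node-aliases "$node"
done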
Restore fails:
- Verify state files exist in ./saved-states/
- Check file permissions
- Ensure network configuration matches original
- Check state file integrity
Database connection fails:
- Verify PostgreSQL pod is ready
- Check credentials in Taskfile.yml
- Review PostgreSQL logs
Out of disk space:
- Clean old state files with task clean-state
- Check available disk space before saving state (see below)
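For example:

# How much space the saved states already use, and how much is free
du -sh ./saved-states/
df -h .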
Debugging Commands
# Check pod status
kubectl get pods -n state-restore-namespace
# Describe problematic pod
kubectl describe pod <pod-name> -n state-restore-namespace
# Get pod logs
kubectl logs <pod-name> -n state-restore-namespace
# Access database shell
kubectl exec -it state-restore-postgresql-0 -n database -- psql -U postgres -d mirror_node
Example Output
$ task setup
✓ Create Kind cluster
✓ Initialize Solo
✓ Deploy consensus network (3 nodes)
✓ Deploy mirror node
✓ Generate sample transactions
Network ready at: http://localhost:5551
$ task save-state
✓ Downloading state from node1... (2.3 GB)
✓ Downloading state from node2... (2.3 GB)
✓ Downloading state from node3... (2.3 GB)
✓ Saving metadata
State saved to: ./saved-states/
$ task restore
✓ Stopping existing network
✓ Creating new network
✓ Uploading state to node1...
✓ Uploading state to node2...
✓ Uploading state to node3...
✓ Starting nodes with restored state
✓ Verifying restoration
State restored successfully!
This example is self-contained and does not require files from outside this directory.
15 - Version Upgrade Test Example
Example of how to upgrade all components of a Hedera network to current versions
Version Upgrade Test Example
This example demonstrates how to deploy a complete Hedera network with previous versions of all components and then upgrade them to current versions, including testing functionality after upgrades.
Overview
This test scenario performs the following operations:
- Deploy with Previous Versions: Deploys a network with consensus nodes, block node, mirror node, relay, and explorer using previous versions
- Upgrade Components: Upgrades each component individually to the current version
- Network Upgrade with Local Build: Upgrades the consensus network using the --local-build-path flag
- Functionality Verification: Creates accounts, verifies Explorer API responses, and tests Relay functionality
Getting This Example
Download Archive
You can download this example as a standalone archive from the Solo releases page:
https://github.com/hiero-ledger/solo/releases/download/<release_version>/example-version-upgrade-test.zip
View on GitHub
Browse the source code and configuration files for this example in the GitHub repository.
Prerequisites
- Kind cluster support
- Docker or compatible container runtime
- Node.js and npm
- Task runner (go-task/task)
- Local Hedera consensus node build (for network upgrade with local build path)
Usage
Navigate to the example directory:
cd examples/version-upgrade-test
Run Complete Test Scenario
To run the full version upgrade test:
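The default task is assumed to drive the whole scenario, as in the other examples:

task # run the complete upgrade test (default task)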
This will execute all steps in sequence:
- Setup cluster and Solo environment
- Deploy all components with previous versions
- Upgrade each component to current version
- Verify functionality of all components
Individual Tasks
You can also run individual tasks:
Setup Cluster
Deploy with Old Versions
Upgrade Components
Verify Functionality
task verify-functionality
Port Forwarding
The example includes setup of port forwarding for easy access to services:
- Explorer: http://localhost:8080
- Relay: http://localhost:7546
- Mirror Node: http://localhost:8081
Verification Steps
The verification process includes:
- Account Creation: Creates two accounts and captures the first account ID
- Explorer API Test: Queries the Explorer REST API to verify the created account appears
- Relay API Test: Makes a JSON-RPC call to the relay to ensure it’s responding correctly
Local Build Path
The network upgrade step uses the --local-build-path flag to upgrade the consensus network with a locally built version. Ensure you have the Hedera consensus node repository cloned and built at:
../hiero-consensus-node/hedera-node/data
You can modify the CN_LOCAL_BUILD_PATH variable in the Taskfile.yml if your local build is in a different location.
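A hedged sketch of producing such a local build; the Gradle target is an assumption, so check the consensus node repository's build instructions:

# Clone the consensus node repository next to the solo checkout
git clone https://github.com/hiero-ledger/hiero-consensus-node.git ../hiero-consensus-node
cd ../hiero-consensus-node
# Build the node artifacts so that hedera-node/data is populated (assumed Gradle task)
./gradlew assemble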
Cleanup
To destroy the network and clean up all resources:
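If the Taskfile follows the same convention as the other examples, this is:

task destroy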
This will:
- Stop all consensus nodes
- Destroy all deployed components
- Delete the Kind cluster
- Clean up temporary files
Troubleshooting
Port Forward Issues
If port forwarding fails, check if the services are running:
kubectl get services -n namespace-version-upgrade-test
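If a forward has died, it can be restarted manually; the service names and target ports below are placeholders, so use what kubectl get services reports:

kubectl port-forward -n namespace-version-upgrade-test svc/<explorer-service> 8080:<port> &
kubectl port-forward -n namespace-version-upgrade-test svc/<relay-service> 7546:<port> &
kubectl port-forward -n namespace-version-upgrade-test svc/<mirror-rest-service> 8081:<port> &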
Component Status
Check the status of all pods:
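kubectl get pods -n namespace-version-upgrade-test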
Service Logs
View logs for specific components:
kubectl logs -n namespace-version-upgrade-test -l app=network-node1
kubectl logs -n namespace-version-upgrade-test -l app=mirror-node
kubectl logs -n namespace-version-upgrade-test -l app=hedera-json-rpc-relay
kubectl logs -n namespace-version-upgrade-test -l app=explorer
API Verification
If API verification fails, ensure port forwarding is active and services are ready:
# Check if port forwards are running
ps aux | grep port-forward
# Test connectivity manually
curl http://localhost:8080/api/v1/accounts
curl -X POST http://localhost:7546 -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
Configuration
The Taskfile.yml contains several configurable variables:
- NODE_IDENTIFIERS: Consensus node aliases (default: “node1,node2”)
- SOLO_NETWORK_SIZE: Number of consensus nodes (default: “2”)
- DEPLOYMENT: Deployment name
- NAMESPACE: Kubernetes namespace
- CLUSTER_NAME: Kind cluster name
- Version variables for current and previous versions
Notes
- This example assumes you have the necessary permissions to create Kind clusters
- The local build path feature requires a local Hedera consensus node build
- API verification steps may need adjustment based on actual service endpoints and ingress configuration