Examples
This section collects examples that show how Solo can be used and extended.
Prerequisites
- install Task (the Taskfile runner):
npm install -g @go-task/cli
Running the examples with Taskfile
- cd into the directory under examples that contains the Taskfile.yml, e.g. (from the Solo repo root directory): cd examples/network-with-block-node/
- make sure that your current kubeconfig context is pointing to the cluster that you want to deploy to
- run task, which does the rest: it deploys the network and takes care of many of the prerequisites (see the sketch below)
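Putting those steps together, the end-to-end flow for a typical example looks roughly like this (a sketch using the block-node example directory from the step above; task destroy assumes the example defines a destroy task, as most of the examples below do):
# from the Solo repository root
cd examples/network-with-block-node/
# confirm kubectl is pointing at the cluster you intend to deploy to
kubectl config current-context
# run the default task: deploys the network and handles most prerequisites
task
# tear everything down again when you are finished
task destroy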
NOTES:
- Some of these examples are intended to run against large clusters with plenty of resources available.
- Edit the values of the variables as needed.
Customizing the examples
- take a look at the Taskfile.yml in the subdirectory for the deployment you want to run
- make sure your cluster can handle the number of nodes in SOLO_NETWORK_SIZE (the kubectl sketch below helps check this); if not, update it to match the number of nodes listed under hedera.nodes[] in init-containers-values.yaml
- take a look at the init-containers-values.yaml file and make sure the values are correct for your deployment, with special attention to:
  - resources
  - nodeSelector
  - tolerations
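To see what your cluster can actually handle before adjusting SOLO_NETWORK_SIZE, a couple of standard kubectl commands are usually enough (kubectl top nodes requires metrics-server to be installed in the cluster):
# list the nodes available to schedule onto
kubectl get nodes
# show current CPU and memory usage per node (requires metrics-server)
kubectl top nodes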
1 - Address Book Example
Example of how to use Yahcli to read/update ledger and mirror node address book
Yahcli Address Book Example
This is an example of how to use Yahcli to pull the ledger and mirror node address books and to update the ledger address book. It updates File 101 (the ledger address book file) and File 102 (the ledger node details file).
NOTE: Mirror Node refers to File 102 as its address book.
Usage
Getting the address book from the ledger requires a port forward on port 50211 to the consensus node with node ID = 0.
[!NOTE]
Due to file size, the Yahcli.jar file is stored with Git LFS (Large File Storage). You will need to install Git LFS prior to cloning this repository so that the Yahcli.jar file is downloaded automatically. For installation instructions, see: https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage
After cloning the repository, navigate to this directory and run the following command to pull the Yahcli.jar file:
git lfs install
git lfs pull
# try and detect if the port forward is already setup
netstat -na | grep 50211
ps -ef | grep 50211 | grep -v grep
# setup a port forward if you need to
kubectl port-forward -n "${SOLO_NAMESPACE}" pod/network-node1-0 50211:50211
Navigate to the examples/address-book directory in the Solo repository:
cd <solo-root>/examples/address-book
If you don’t already have a running Solo network, start one first (for example, using one of the other examples in this section).
To get the address book from the ledger, run the following command:
task get:ledger:addressbook
It will output the address book files in JSON format to:
examples/address-book/localhost/sysfiles/addressBook.json
examples/address-book/localhost/sysfiles/nodeDetails.json
You can update the address book files with your favorite text editor.
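If you have jq installed, it is also a quick way to pretty-print and sanity-check the pulled files before editing them:
# run from examples/address-book
jq . localhost/sysfiles/addressBook.json
jq . localhost/sysfiles/nodeDetails.json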
Once the files are ready, you can upload them to the ledger by running the following command:
cd <solo-root>/examples/address-book
task update:ledger:addressbook
To get the address book from the mirror node, run the following command:
cd <solo-root>/examples/address-book
task get:mirror:addressbook
NOTE: The Mirror Node may not pick up the changes automatically; it might require running some transactions through first, for example:
cd <solo-root>
npm run solo -- ledger account create
npm run solo -- ledger account create
npm run solo -- ledger account create
npm run solo -- ledger account create
npm run solo -- ledger account create
npm run solo -- ledger account update -n solo-e2e --account-id 0.0.1004 --hbar-amount 78910
Stop the Solo network when you are done.
2 - Custom Network Config Example
Example of how to create and manage a custom Solo deployment and configure it with custom settings
Custom Network Config Example
This example demonstrates how to create and manage a custom Hiero Hashgraph Solo deployment and configure it with custom settings.
What It Does
- Defines a custom network topology (number of nodes, namespaces, deployments, etc.)
- Provides a Taskfile for automating cluster creation, deployment, key management, and network operations
- Supports local development and testing of Hedera network features
- Can be extended to include mirror nodes, explorers, and relays
How to Use
- Install dependencies (see the Prerequisites section at the top of this page).
- Customize your network:
  - Edit Taskfile.yml to set the desired network size, namespaces, and other parameters.
- Run the default workflow:
  - From this directory, run: task
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add the cluster to the deployment
- Generate node keys
- Deploy the network with custom configuration files
- Set up and start nodes
- Deploy mirror node, relay, and explorer
- Destroy the network:
  - Run: task destroy
- This will:
- Stop all nodes
- Destroy mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Files
Taskfile.yml — All automation tasks and configuration
init-containers-values.yaml, settings.txt, log4j2.xml, application.properties — Example config files for customizing your deployment
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
3 - Local Build with Custom Config Example
Example of how to create and manage a custom Hiero Hashgraph Solo deployment using locally built consensus nodes with custom configuration settings
Local Build with Custom Config Example
This example demonstrates how to create and manage a custom Hiero Hashgraph Solo deployment using locally built consensus nodes with custom configuration settings.
What It Does
- Uses local consensus node builds from a specified build path for development and testing
- Provides configurable Helm chart versions for Block Node, Mirror Node, Explorer, and Relay components
- Supports custom values files for each component (Block Node, Mirror Node, Explorer, Relay)
- Includes custom application.properties and other configuration files
- Automates the complete deployment workflow with decision tree logic based on consensus node release tags
- Defines a custom network topology (number of nodes, namespaces, deployments, etc.)
Configuration Options
Consensus Node Configuration
- Local Build Path: CN_LOCAL_BUILD_PATH - Path to locally built consensus node artifacts
- Release Tag: CN_VERSION - Consensus node version for decision tree logic
- Local Build Flag: Automatically applied to use local builds instead of released versions
Component Version Control
- Block Node: BLOCK_NODE_RELEASE_TAG - Helm chart version (e.g., "--chart-version v0.18.0")
- Mirror Node: MIRROR_NODE_VERSION_FLAG - Version flag (e.g., "--mirror-node-version v0.136.1")
- Relay: RELAY_RELEASE_FLAG - Release flag (e.g., "--relay-release 0.70.1")
- Explorer: EXPLORER_VERSION_FLAG - Version flag (e.g., "--explorer-version 25.0.0")
Custom Values Files
Each component can use custom Helm values files:
- Block Node: block-node-values.yaml
- Mirror Node: mirror-node-values.yaml
- Relay: relay-node-values.yaml
- Explorer: hiero-explorer-node-values.yaml
How to Use
Install dependencies (see the Prerequisites section at the top of this page).
Prepare local consensus node build:
- Build the consensus node locally, or ensure the build path (CN_LOCAL_BUILD_PATH) points to valid artifacts
- Default path: ../hiero-consensus-node/hedera-node/data
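If you have not built the consensus node yet, a minimal sketch of this preparation step (assuming you clone to a location matching the default path above; see the consensus node docs for its own build prerequisites):
# clone the consensus node repository and build the artifacts Solo will upload
git clone https://github.com/hiero-ledger/hiero-consensus-node.git
cd hiero-consensus-node
./gradlew assemble
# after this, <clone>/hedera-node/data is what CN_LOCAL_BUILD_PATH should point to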
Customize your configuration:
- Edit Taskfile.yml to adjust network size, component versions, and paths
- Modify values files (*-values.yaml) for component-specific customizations
- Update application.properties for consensus node configuration
Run the default workflow:
- From this directory, run: task
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo and configure cluster reference
- Add block node with specified release tag
- Generate consensus node keys
- Deploy the network with local build and custom configuration
- Set up and start consensus nodes using local builds
- Deploy mirror node, relay, and explorer with custom versions and values
Destroy the network:
- Run: task destroy
- This will clean up all deployed components and delete the Kind cluster
Files
Taskfile.yml — All automation tasks and configuration
init-containers-values.yaml, settings.txt, log4j2.xml, application.properties — Example config files for customizing your deployment
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
4 - Network with an External PostgreSQL Database Example
example of how to deploy a Solo network with an external PostgreSQL database
External Database Test Example
This example demonstrates how to deploy a Hiero Hashgraph Solo network with an external PostgreSQL database using Kubernetes, Helm, and Taskfile automation.
What It Does
- Creates a Kind Kubernetes cluster for local testing
- Installs the Solo CLI and initializes a Solo network
- Deploys a PostgreSQL database using Helm
- Seeds the database and configures Solo to use it as an external database for the mirror node
- Deploys mirror node, explorer, relay, and runs a smoke test
- All steps are named for clear logging and troubleshooting
Usage
Install dependencies (see the Prerequisites section at the top of this page).
Customize your deployment:
- Edit Taskfile.yml to set database credentials, network size, and other parameters as needed.
Start the network:
This will:
- Create the Kind cluster
- Install and initialize Solo
- Deploy and configure PostgreSQL
- Seed the database
- Deploy all Solo components (mirror node, explorer, relay)
- Run a smoke test
Destroy the network:
This will delete the Kind cluster and all resources.
Files
Taskfile.yml — Automation tasks and configuration
scripts/init.sh — Script to initialize the database
Other config files as needed for your deployment
Notes
- All commands in the Taskfile are named for clarity in logs and troubleshooting.
- This example is self-contained and does not require files from outside this directory except for the Solo CLI npm package.
- You can extend the Taskfile to add more custom resources or steps as needed.
5 - Network with Block Node Example
Example of how to deploy a Solo network with a block node
Network with Block Node Example
This example demonstrates how to deploy a Hiero Hashgraph Solo network with a block node using Kubernetes and Taskfile.
What it does
- Creates a local Kubernetes cluster using Kind
- Deploys a Solo network with a single consensus node, mirror node, relay, explorer, and block node
- Provides tasks to install (start) and destroy the network
Usage
Install dependencies
Deploy the network (task install)
This will:
- Install the Solo CLI
- Create a Kind cluster
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add a block node
- Generate node keys
- Deploy the network, node, mirror node, relay, and explorer
Destroy the network (task destroy)
This will:
- Stop the node
- Destroy the mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Tasks
install: Installs and starts the Solo network with a block node, mirror node, relay, and explorer.
destroy: Stops and removes all network components and deletes the Kind cluster.
Customization
You can adjust the number of nodes and other settings by editing the vars: section in the Taskfile.yml.
Advanced: Block Node Routing Configuration
The --block-node-cfg flag allows you to configure how each consensus node sends blocks to specific block nodes.
Usage
The flag accepts either:
JSON string directly:
solo consensus network deploy --block-node-cfg '{"node1":[1,3],"node2":[2]}'
Path to a JSON file:
# Create block-config.json
echo '{"node1":[1,3],"node2":[2]}' > block-config.json
# Use the file
solo consensus network deploy --block-node-cfg block-config.json
The JSON configuration maps consensus node names to arrays of block node IDs:
{
"node1": [1, 3],
"node2": [2]
}
This example means:
- Consensus node node1 sends blocks to block nodes 1 and 3
- Consensus node node2 sends blocks to block node 2
Example: Multi-Node Setup with Custom Routing
# Deploy network with 3 consensus nodes and 3 block nodes
solo consensus network deploy \
--deployment my-network \
--number-of-consensus-nodes 3 \
--block-node-cfg '{"node1":[1],"node2":[2],"node3":[3]}'
# This creates isolated routing: each consensus node talks to one block node
This example is self-contained and does not require any files from outside this directory.
6 - Network With Domain Names Example
Example of how to deploy a Solo network with custom domain names
Network with Domain Names Example
This example demonstrates how to deploy a Hiero Hashgraph Solo network with custom domain names for nodes, mirror node, relay, and explorer using Kubernetes and Taskfile.
What it does
- Creates a local Kubernetes cluster using Kind
- Deploys a Solo network with a single consensus node, mirror node, relay, explorer, and custom domain names for all services
- Provides tasks to install (start) and destroy the network
Usage
Install dependencies
Deploy the network (task install)
This will:
- Install the Solo CLI
- Create a Kind cluster
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Generate node keys
- Deploy the network, node, mirror node, relay, and explorer with custom domain names
- Set up port forwarding for key services
- Run a sample SDK connection script
Destroy the network (task destroy)
This will:
- Stop the node
- Destroy the mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Tasks
install: Installs and starts the Solo network with custom domain names for all components, sets up port forwarding, and runs a sample SDK connection.
destroy: Stops and removes all network components and deletes the Kind cluster.
Customization
You can adjust the domain names and other settings by editing the vars: section in the Taskfile.yaml.
7 - Node Create Transaction Example
Using Solo with a custom NodeCreateTransaction from an SDK call
Node Create Transaction Example
This example demonstrates how to use the node add-prepare/prepare-upgrade/freeze-upgrade/add-execute commands against a network in order to manually write a NodeCreateTransaction.
What It Does
- Stands up a network with two existing nodes
- Runs solo node add-prepare to get the artifacts needed for the SDK NodeCreateTransaction
- Runs a JavaScript program using the Hiero SDK JS code to run a NodeCreateTransaction
- Runs solo consensus dev-freeze prepare-upgrade and solo consensus dev-freeze freeze-upgrade to put the network into a freeze state
- Runs solo consensus dev-node-add execute to add network resources for a third consensus node, configure it, and then restart the network to come out of the freeze and use the new node
- Contains the destroy commands to bring down the network if desired
How to Use
- Install dependencies:
- Make sure you have Task, Node.js, npm, kubectl, and kind installed.
- Run npm install while in this directory so that the solo-node-create-transaction.js script will work correctly when run
- Choose your Solo command:
- Edit Taskfile.yml and comment out/uncomment lines depending on whether you want to run Solo from a checkout of the repository or from an npm install:
  - SOLO_COMMAND: "npm run solo --": use this if running from the Solo source repository
  - SOLO_COMMAND: "solo": use this if running an installed version of Solo
- Provide your custom application.properties if desired.
- CN_VERSION:
  - This is only used for certain decision logic. It is best to keep it as close as possible to the local build of the consensus node you are using, e.g. CN_VERSION: "v0.66.0"
  - The script is configured to leverage a local build of the Consensus Node, for example the main branch. You will need to clone the Hiero Consensus Node yourself and then, from its root directory, run ./gradlew assemble; this assumes you have all of its prerequisites configured, see: https://github.com/hiero-ledger/hiero-consensus-node/blob/main/docs/README.md
- Updating Directory Locations
- The script was designed to run from this directory, so if you copy the example out of the repository or change other locations you might need to make changes
- The dir: ../.. setting runs the script two directories above this one; CN_LOCAL_BUILD_PATH can be updated to be relative to that, or changed to the full path of the consensus node directory
- CN_LOCAL_BUILD_PATH actually points to <hiero-consensus-node>/hedera-node/data, because that is the location of the artifacts that Solo needs to upload to the network node
- Run the default workflow:
- From this directory, run: task
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add the cluster to the deployment
- Generate node keys
- Deploy the network with custom configuration files
- Set up and start nodes
- Deploy mirror node, relay, and explorer
- Perform the consensus node add as described in the ‘What It Does’ section above
- Destroy the network:
- Run: task destroy
- This will:
- Stop all nodes
- Destroy mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Files
Taskfile.yml — All automation tasks and configuration
package.json - Contains the libraries needed for solo-node-create-transaction.js to function
package-lock.json - A snapshot of what was last used when npm install was run; run npm ci to install these exact versions
solo-node-create-transaction.js - The script that runs the Hiero SDK JS calls
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
8 - Node Delete Transaction Example
Using Solo with a custom NodeDeleteTransaction from an SDK call
Node Delete Transaction Example
This example demonstrates how to use the node delete prepare/prepare-upgrade/freeze-upgrade/execute commands against a network in order to manually write a NodeDeleteTransaction.
What It Does
- Stands up a network with two existing nodes
- Runs solo consensus dev-node-delete prepare to get the artifacts needed for the SDK NodeDeleteTransaction
- Runs a JavaScript program using the Hiero SDK JS code to run a NodeDeleteTransaction
- Runs solo consensus dev-freeze prepare-upgrade and solo consensus dev-freeze freeze-upgrade to put the network into a freeze state
- Runs solo node delete-execute to configure the network to stop using the deleted node, then restarts the network to come out of the freeze and run with the new configuration
- Contains the destroy commands to bring down the network if desired
How to Use
- Install dependencies:
- Make sure you have Task, Node.js, npm, kubectl, and kind installed.
- Run npm install while in this directory so that the solo-node-delete-transaction.js script will work correctly when run
- Choose your Solo command:
- Edit Taskfile.yml and comment out/uncomment lines depending on whether you want to run Solo from a checkout of the repository or from an npm install:
  - SOLO_COMMAND: "npm run solo --": use this if running from the Solo source repository
  - SOLO_COMMAND: "solo": use this if running an installed version of Solo
- Provide your custom application.properties if desired.
- CN_VERSION:
  - This is only used for certain decision logic. It is best to keep it as close as possible to the local build of the consensus node you are using, e.g. CN_VERSION: "v0.66.0"
  - The script is configured to leverage a local build of the Consensus Node, for example the main branch. You will need to clone the Hiero Consensus Node yourself and then, from its root directory, run ./gradlew assemble; this assumes you have all of its prerequisites configured, see: https://github.com/hiero-ledger/hiero-consensus-node/blob/main/docs/README.md
- Updating Directory Locations
- The script was designed to run from this directory, so if you copy the example out of the repository or change other locations you might need to make changes
- The dir: ../.. setting runs the script two directories above this one; CN_LOCAL_BUILD_PATH can be updated to be relative to that, or changed to the full path of the consensus node directory
- CN_LOCAL_BUILD_PATH actually points to <hiero-consensus-node>/hedera-node/data, because that is the location of the artifacts that Solo needs to upload to the network node
- Run the default workflow:
- From this directory, run: task
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add the cluster to the deployment
- Generate node keys
- Deploy the network with custom configuration files
- Set up and start nodes
- Deploy mirror node, relay, and explorer
- Perform the node delete as described in the ‘What It Does’ section above
- Destroy the network:
- Run: task destroy
- This will:
- Stop all nodes
- Destroy mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Files
Taskfile.yml — All automation tasks and configuration
package.json - Contains the libraries needed for solo-node-delete-transaction.js to function
package-lock.json - A snapshot of what was last used when npm install was run; run npm ci to install these exact versions
solo-node-delete-transaction.js - The script that runs the Hiero SDK JS calls
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
9 - Node Update Transaction Example
Using Solo with a custom NodeUpdateTransaction from an SDK call
Node Update Transaction Example
This example demonstrates how to use the node update prepare/prepare-upgrade/freeze-upgrade/execute commands against a network in order to manually write a NodeUpdateTransaction.
What It Does
- Stands up a network with two existing nodes
- Runs solo consensus dev-node-update prepare to get the artifacts needed for the SDK NodeUpdateTransaction
- Runs a JavaScript program using the Hiero SDK JS code to run a NodeUpdateTransaction
- Runs solo consensus dev-freeze prepare-upgrade and solo consensus dev-freeze freeze-upgrade to put the network into a freeze state
- Runs solo consensus dev-node-update execute to update network resources for the changes to the updated node, then restarts the network to come out of the freeze and use the changes
- Contains the destroy commands to bring down the network if desired
How to Use
- Install dependencies:
- Make sure you have Task, Node.js, npm, kubectl, and kind installed.
- Run npm install while in this directory so that the solo-node-update-transaction.js script will work correctly when run
- Choose your Solo command:
- Edit Taskfile.yml and comment out/uncomment lines depending on whether you want to run Solo from a checkout of the repository or from an npm install:
  - SOLO_COMMAND: "npm run solo --": use this if running from the Solo source repository
  - SOLO_COMMAND: "solo": use this if running an installed version of Solo
- Provide your custom application.properties if desired.
- CN_VERSION:
  - This is only used for certain decision logic. It is best to keep it as close as possible to the local build of the consensus node you are using, e.g. CN_VERSION: "v0.66.0"
  - The script is configured to leverage a local build of the Consensus Node, for example the main branch. You will need to clone the Hiero Consensus Node yourself and then, from its root directory, run ./gradlew assemble; this assumes you have all of its prerequisites configured, see: https://github.com/hiero-ledger/hiero-consensus-node/blob/main/docs/README.md
- Updating Directory Locations
- The script was designed to run from this directory, so if you copy the example out of the repository or change other locations you might need to make changes
- The dir: ../.. setting runs the script two directories above this one; CN_LOCAL_BUILD_PATH can be updated to be relative to that, or changed to the full path of the consensus node directory
- CN_LOCAL_BUILD_PATH actually points to <hiero-consensus-node>/hedera-node/data, because that is the location of the artifacts that Solo needs to upload to the network node
- Run the default workflow:
- From this directory, run: task
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Initialize Solo
- Connect and set up the cluster reference
- Create and configure the deployment
- Add the cluster to the deployment
- Generate node keys
- Deploy the network with custom configuration files
- Set up and start nodes
- Deploy mirror node, relay, and explorer
- Perform the consensus node update as described in the ‘What It Does’ section above
- Destroy the network:
- Run: task destroy
- This will:
- Stop all nodes
- Destroy mirror node, relay, and explorer
- Destroy the Solo network
- Delete the Kind cluster
Files
Taskfile.yml — All automation tasks and configuration
package.json - Contains the libraries needed for solo-node-update-transaction.js to function
package-lock.json - A snapshot of what was last used when npm install was run; run npm ci to install these exact versions
solo-node-update-transaction.js - The script that runs the Hiero SDK JS calls
Notes
- This example is self-contained and does not require files from outside this directory.
- All steps in the workflow are named for clear logging and troubleshooting.
- You can extend the Taskfile to add more custom resources or steps as needed.
- For more advanced usage, see the main Solo documentation.
10 - One-Shot Falcon Deployment Example
Example of how to use the Solo one-shot falcon commands.
One-Shot Falcon Deployment Example
This example demonstrates how to use the Solo one-shot falcon commands to quickly deploy and destroy a complete Hiero Hashgraph network with all components in a single command.
What It Does
- Deploys a complete network stack with consensus nodes, mirror node, explorer, and relay in one command
- Uses a values file to configure all network components with custom settings
- Simplifies deployment by avoiding multiple manual steps
- Provides quick teardown with the destroy command
- Ideal for testing and development workflows
How to Use
Install dependencies (see the Prerequisites section at the top of this page).
Customize your network:
- Edit falcon-values.yaml to configure network settings, node parameters, and component options.
Deploy the network:
- From this directory, run: task
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Set the kubectl context
- Deploy the complete network using solo one-shot falcon deploy
Destroy the network:
- Run: task destroy
- This will:
- Destroy the Solo network using solo one-shot falcon destroy
- Delete the Kind cluster
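For orientation, a rough sketch of what the Taskfile automates (the cluster name is illustrative, and how falcon-values.yaml is passed to the deploy command is an assumption; check Taskfile.yml for the exact commands and flags):
# create a local cluster and point kubectl at it (cluster name is illustrative)
kind create cluster --name solo-falcon-example
kubectl config use-context kind-solo-falcon-example
# deploy consensus nodes, mirror node, explorer, and relay in one step
solo one-shot falcon deploy
# tear the network down again, then remove the cluster
solo one-shot falcon destroy
kind delete cluster --name solo-falcon-example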
Files
Taskfile.yml — Automation tasks for deploy and destroy operations
falcon-values.yaml — Configuration file with network and component settings
Notes
- The one-shot falcon commands are designed to streamline deployment workflows
- All network components are configured through a single values file
- This is perfect for CI/CD pipelines and automated testing
- For more advanced customization, see the main Solo documentation
Configuration Sections
The falcon-values.yaml file contains the following configuration sections:
network - Network-wide settings (release tag, application properties, etc.)
setup - Node setup configuration (keys, admin settings, etc.)
consensusNode - Consensus node start parameters
mirrorNode - Mirror node deployment settings
explorerNode - Explorer deployment settings
relayNode - Relay deployment settings
blockNode - Block node deployment settings (optional)
11 - Rapid-Fire Example
Example of how to use the Solo rapid-fire commands.
Rapid-Fire Example
This example demonstrates how to deploy a minimal Hiero Hashgraph Solo network and run a suite of rapid-fire load tests against it using the Solo CLI.
What It Does
- Automates deployment of a single-node Solo network using Kubernetes Kind
- Runs rapid-fire load tests for:
- Crypto transfers
- Token transfers
- NFT transfers
- Smart contract calls
- HeliSwap operations
- Longevity (endurance) testing
- Cleans up all resources after testing
Prerequisites
How to Use
- Install dependencies (if not already installed):
- See the prerequisites above.
- Run the default workflow:
- From this directory, run: task
- This will:
- Install the Solo CLI
- Create a Kind cluster
- Deploy a single-node Solo network
- Run all rapid-fire load tests
- Destroy the network:
- Run: task destroy
- This will:
- Stop all nodes
- Destroy the Solo network
- Delete the Kind cluster
Files
Taskfile.yml — Automation for deployment, testing, and cleanup
nlg-values.yaml — Example values file for load tests (if present)
Notes
- This example is self-contained and does not require files from outside this directory.
- You can customize the load test parameters in Taskfile.yml.
- For more advanced usage, see the main Solo documentation.
12 - Solo deployment with Hardhat Example
example of how to deploy a Solo network and run Hardhat tests against it
Solo Deployment with Hardhat Example
This example demonstrates how to deploy a Hiero Hashgraph Solo deployment via the one-shot command, configure a hardhat project to connect to it, and run tests against the local Solo deployment.
What It Does
- Installs the Solo CLI and initializes a Solo deployment
- Installs hardhat and configures it to connect to the local Solo deployment
- Runs sample tests against the Solo deployment
Usage
Install dependencies (see the Prerequisites section at the top of this page).
Customize your deployment:
- Edit Taskfile.yml to set database credentials, network size, and other parameters as needed.
Start the deployment:
This will:
- Create the Kind cluster
- Install and initialize Solo
- Create a Solo deployment via one-shot, install all dependencies (kubectl, helm, kind), create a cluster, and install all Solo components (mirror node, explorer, relay)
- Configure hardhat to connect to the local Solo deployment
- Run a smoke test
Destroy the deployment:
This will delete the Solo deployment and all resources.
Files
Taskfile.yml — Automation tasks and configuration
hardhat-example/hardhat.config.ts — Configuration file for hardhat to connect to the local Solo deployment
hardhat-example/contracts/SimpleStorage.sol — Sample Solidity contract to deploy to the Solo deployment
hardhat-example/test/SimpleStorage.ts — Sample test file to run against the Solo deployment
Hardhat Configuration
When creating a deployment with solo one-shot single deploy, three groups of accounts with predefined private keys are generated. The accounts from the group ECDSA Alias Accounts (EVM compatible) can be used by hardhat.
The account data can be found in the output of the command and in $SOLO_HOME/one-shot-$DEPLOYMENT_NAME/accounts.json.
Examine the contents of the hardhat-example/hardhat.config.ts file to see how to configure the network and accounts.
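To poke at this manually, a quick sketch (assuming the default Solo home of ~/.solo and the deployment name you chose; the test command is standard Hardhat):
# inspect the generated accounts for your deployment
cat ~/.solo/one-shot-<deployment-name>/accounts.json
# run the sample tests from the hardhat project
cd hardhat-example
npx hardhat test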
Notes
- All commands in the Taskfile are named for clarity in logs and troubleshooting.
- This example is self-contained and does not require files from outside this directory except for the Solo CLI npm package.
- You can extend the Taskfile to add more custom resources or steps as needed.
13 - Solo Inside a Cluster Example
Example of how to deploy a Solo network within a Kubernetes cluster
Running Solo Inside Cluster Example
This example demonstrates how to run the Solo network inside a privileged Ubuntu pod in a Kubernetes cluster for end-to-end testing. It automates the setup of all required dependencies and configures the environment for Solo to run inside the cluster.
What it does
- Renders Kubernetes manifests for a ServiceAccount and a privileged Ubuntu pod using templates.
- Applies these manifests to your cluster using kubectl.
- Waits for the pod to be ready, then copies and executes a setup script inside the pod.
- The setup script installs all required tools (kubectl, Docker, Helm, Node.js, etc.), installs the Solo CLI locally, and runs Solo commands to initialize and deploy a test network.
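The start task automates all of this, but the underlying pattern is roughly the following sketch (the manifest and pod names are illustrative; the real templates live under templates/ in this example):
# render and apply the ServiceAccount and pod manifests
kubectl apply -f serviceaccount.yaml
kubectl apply -f pod.yaml
# wait for the privileged pod to become ready, then copy in and run the setup script
kubectl wait --for=condition=Ready pod/solo-test-pod --timeout=300s
kubectl cp setup.sh solo-test-pod:/tmp/setup.sh
kubectl exec solo-test-pod -- bash /tmp/setup.sh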
Usage
Install dependencies
- Make sure you have kubectl and Task installed.
- You need access to a running Kubernetes cluster (e.g., Kind, Minikube, GKE).
Run the test (task start)
This will:
- Render and apply the ServiceAccount and Pod manifests
- Copy and execute the setup script inside the pod
- The pod will install all dependencies and use Solo to create a Hiero deployment
Clean up
- Run the cleanup task (task cleanup) to delete the pod and ServiceAccount.
Customization
- You can modify the templates in the templates/ directory to change the pod configuration or ServiceAccount permissions.
- Edit the setup script to adjust which Solo commands are run or which dependencies are installed.
Tasks
start: Sets up and runs the Solo network inside a privileged pod for end-to-end testing.
cleanup: Deletes the privileged pod and ServiceAccount used for the test.
14 - State Save and Restore Example
Example of how to save network state and restore it later
State Save and Restore Example
This example demonstrates how to save network state from a running Solo network, recreate a new network, and load the saved state with a mirror node using an external PostgreSQL database.
What it does
- Creates an initial Solo network with consensus nodes and mirror node
- Uses an external PostgreSQL database for the mirror node
- Runs transactions to generate state
- Downloads and saves the network state and database dump
- Destroys the initial network
- Creates a new network with the same configuration
- Restores the saved state and database to the new network
Prerequisites
- Kind - Kubernetes in Docker
- kubectl - Kubernetes CLI
- Node.js - JavaScript runtime
- Task - Task runner
- Helm - Kubernetes package manager (for external database option)
Quick Start
Run Complete Workflow (One Command)
task # Run entire workflow: setup → save → restore
task destroy # Cleanup when done
Step-by-Step Workflow
task setup # 1. Deploy network with external database (5-10 min)
task save-state # 2. Save state and database (2-5 min)
task restore # 3. Recreate and restore (3-5 min)
task destroy # 4. Cleanup
Usage
1. Deploy Initial Network
This will:
- Create a Kind cluster
- Deploy PostgreSQL database
- Initialize Solo
- Deploy consensus network with 3 nodes
- Deploy mirror node connected to external database
- Run sample transactions to generate state
2. Save Network State and Database
This will:
- Download state from all consensus nodes
- Export PostgreSQL database dump
- Save both to the ./saved-states/ directory
- Display saved state information
3. Restore Network and Database
This will:
- Stop and destroy existing network
- Recreate PostgreSQL database
- Import database dump
- Create new consensus network with same configuration
- Upload saved state to new nodes
- Start nodes with restored state
- Reconnect mirror node to database
- Verify the restored state
4. Cleanup
This will delete the Kind cluster and clean up all resources.
Available Tasks
default (or just task) - Run complete workflow: setup → save-state → restore
setup - Deploy initial network with external PostgreSQL database
save-state - Download consensus node state and export database
restore - Recreate network and restore state with database
verify-state - Verify restored state matches original
destroy - Delete cluster and clean up all resources
clean-state - Remove saved state files
Customization
You can adjust settings by editing the vars: section in Taskfile.yml:
NETWORK_SIZE - Number of consensus nodes (default: 2)
NODE_ALIASES - Node identifiers (default: node1,node2)
STATE_SAVE_DIR - Directory to save state files (default: ./saved-states)
POSTGRES_PASSWORD - PostgreSQL password for external database
State Files
Saved state files are stored in ./saved-states/ with the following structure:
saved-states/
├── network-node1-0-state.zip # Used for all nodes during restore
├── network-node2-0-state.zip # Downloaded but not used during restore
└── database-dump.sql # PostgreSQL database export
Notes:
- State files are named using the pod naming convention: network-<node-alias>-0-state.zip
- During save: all node state files are downloaded
- During restore: only the first node’s state file is used for all nodes (node IDs are automatically renamed)
The example also includes:
scripts/
└── init.sh # Database initialization script
The init.sh script sets up the PostgreSQL database with:
- mirror_node database
- Required schemas (public, temporary)
- Roles and users (postgres, readonlyuser)
- PostgreSQL extensions (btree_gist, pg_stat_statements, pg_trgm)
- Proper permissions and grants
How It Works
State Saving Process
- Download State: Uses solo consensus state download to download the signed state from each consensus node to ~/.solo/logs/<namespace>/
- Copy State Files: Copies state files from ~/.solo/logs/<namespace>/ to the ./saved-states/ directory
- Export Database: Uses pg_dump with the --clean --if-exists flags to export the complete database, including schema and data
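For reference, a sketch of the database export step (the pod name, user, and database follow the defaults shown elsewhere in this example; the Taskfile may invoke it slightly differently, e.g. supplying the password via PGPASSWORD):
kubectl exec -n database state-restore-postgresql-0 -- \
  pg_dump --clean --if-exists -U postgres mirror_node > ./saved-states/database-dump.sql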
State Restoration Process
- Database Recreation: Deploys fresh PostgreSQL and runs init.sh to create the database structure (database, schemas, roles, users, extensions)
- Database Restore: Imports the database dump, which drops and recreates tables with all data
- Network Recreation: Creates a new network with identical configuration
- State Upload: Uploads the first node’s state file to all nodes using solo consensus node start --state-file
  - State files are extracted to data/saved/
  - Cleanup: Only the latest/biggest round is kept; older rounds are automatically deleted to save disk space
  - Node ID Renaming: Directory paths containing node IDs are automatically renamed to match each target node
- Mirror Node: Deploys the mirror node connected to the restored database and seeds initial data
- Verification: Checks that the restored state matches the original
Notes
- State files can be large (several GB per node) depending on network activity
- Ensure sufficient disk space in the ./saved-states/ directory
- External PostgreSQL database provides data persistence and queryability
- State restoration maintains transaction history and account balances
- Mirror node will resume from the restored state point
- Simplified State Restore: Uses the first node’s state file for all nodes with automatic processing:
- Old rounds are cleaned up first - only the latest round number is kept to optimize disk usage
- Node ID directories are then automatically renamed to match each target node
- Database dump includes all mirror node data (transactions, accounts, etc.)
View Logs
# Consensus node logs
kubectl logs -n state-restore-namespace network-node1-0 -f
# Mirror node logs
kubectl logs -n state-restore-namespace mirror-node-<pod-name> -f
# Database logs
kubectl logs -n database state-restore-postgresql-0 -f
Manual State Operations
# Download state manually
npm run solo --silent -- consensus state download --deployment state-restore-deployment --node-aliases node1
# Check downloaded state files (in Solo logs directory)
ls -lh ~/.solo/logs/state-restore-namespace/
# Check saved state files (in saved-states directory)
ls -lh ./saved-states/
Expected Timeline
- Initial setup: 5-10 minutes
- State download: 2-5 minutes (depends on state size)
- Network restoration: 3-5 minutes
- Total workflow: ~15-20 minutes
File Sizes
Typical state file sizes:
- Small network (few transactions): 100-500 MB per node
- Medium activity: 1-3 GB per node
- Heavy activity: 5-10+ GB per node
Ensure you have sufficient disk space in ./saved-states/ directory.
Advanced Usage
Save State at Specific Time
Run task save-state at any point after running transactions. The state captures the network at that moment.
Restore to Different Cluster
- Save state on cluster A
- Copy the ./saved-states/ directory to cluster B
- Run task restore on cluster B
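A sketch of the copy step, assuming the second cluster is driven from a different machine (the host name and destination path are illustrative; any file transfer method works):
# on the machine that drives cluster A
task save-state
rsync -a ./saved-states/ user@cluster-b-machine:solo/examples/state-save-and-restore/saved-states/
# on the machine that drives cluster B, from this example directory
task restore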
Multiple State Snapshots
# Save multiple snapshots
task save-state
mv saved-states saved-states-backup1
# Later...
task save-state
mv saved-states saved-states-backup2
# Restore specific snapshot
mv saved-states-backup1 saved-states
task restore
Troubleshooting
State download fails:
- Ensure nodes are running and healthy
- Check pod logs: kubectl logs -n <namespace> <pod-name>
- Increase timeout or download nodes sequentially
Restore fails:
- Verify state files exist in ./saved-states/
- Check file permissions
- Ensure network configuration matches original
- Check state file integrity
Database connection fails:
- Verify PostgreSQL pod is ready
- Check credentials in Taskfile.yml
- Review PostgreSQL logs
Out of disk space:
- Clean old state files with task clean-state
- Check available disk space before saving state
Debugging Commands
# Check pod status
kubectl get pods -n state-restore-namespace
# Describe problematic pod
kubectl describe pod <pod-name> -n state-restore-namespace
# Get pod logs
kubectl logs <pod-name> -n state-restore-namespace
# Access database shell
kubectl exec -it state-restore-postgresql-0 -n database -- psql -U postgres -d mirror_node
Example Output
$ task setup
✓ Create Kind cluster
✓ Initialize Solo
✓ Deploy consensus network (3 nodes)
✓ Deploy mirror node
✓ Generate sample transactions
Network ready at: http://localhost:5551
$ task save-state
✓ Downloading state from node1... (2.3 GB)
✓ Downloading state from node2... (2.3 GB)
✓ Downloading state from node3... (2.3 GB)
✓ Saving metadata
State saved to: ./saved-states/
$ task restore
✓ Stopping existing network
✓ Creating new network
✓ Uploading state to node1...
✓ Uploading state to node2...
✓ Uploading state to node3...
✓ Starting nodes with restored state
✓ Verifying restoration
State restored successfully!
This example is self-contained and does not require files from outside this directory.
15 - Version Upgrade Test Example
Example of how to upgrade all components of a Hedera network to current versions
Version Upgrade Test Example
This example demonstrates how to deploy a complete Hedera network with previous versions of all components and then upgrade them to current versions, including testing functionality after upgrades.
Overview
This test scenario performs the following operations:
- Deploy with Previous Versions: Deploys a network with consensus nodes, block node, mirror node, relay, and explorer using previous versions
- Upgrade Components: Upgrades each component individually to the current version
- Network Upgrade with Local Build: Upgrades the consensus network using the --local-build-path flag
- Functionality Verification: Creates accounts, verifies Explorer API responses, and tests Relay functionality
Prerequisites
- Kind cluster support
- Docker or compatible container runtime
- Node.js and npm
- Task runner (go-task/task)
- Local Hedera consensus node build (for network upgrade with local build path)
Usage
Navigate to the example directory:
cd examples/version-upgrade-test
Run Complete Test Scenario
To run the full version upgrade test, run the default task (task) from this directory.
This will execute all steps in sequence:
- Setup cluster and Solo environment
- Deploy all components with previous versions
- Upgrade each component to current version
- Verify functionality of all components
Individual Tasks
You can also run individual tasks:
Setup Cluster
Deploy with Old Versions
Upgrade Components
Verify Functionality
task verify-functionality
Port Forwarding
The example includes setup of port forwarding for easy access to services:
- Explorer: http://localhost:8080
- Relay: http://localhost:7546
- Mirror Node: http://localhost:8081
Verification Steps
The verification process includes:
- Account Creation: Creates two accounts and captures the first account ID
- Explorer API Test: Queries the Explorer REST API to verify the created account appears
- Relay API Test: Makes a JSON-RPC call to the relay to ensure it’s responding correctly
Local Build Path
The network upgrade step uses the --local-build-path flag to upgrade the consensus network with a locally built version. Ensure you have the Hedera consensus node repository cloned and built at:
../hiero-consensus-node/hedera-node/data
You can modify the CN_LOCAL_BUILD_PATH variable in the Taskfile.yml if your local build is in a different location.
Cleanup
To destroy the network and clean up all resources, run task destroy.
This will:
- Stop all consensus nodes
- Destroy all deployed components
- Delete the Kind cluster
- Clean up temporary files
Troubleshooting
Port Forward Issues
If port forwarding fails, check if the services are running:
kubectl get services -n namespace-version-upgrade-test
Component Status
Check the status of all pods:
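For example (using the namespace from the other commands in this example):
kubectl get pods -n namespace-version-upgrade-test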
Service Logs
View logs for specific components:
kubectl logs -n namespace-version-upgrade-test -l app=network-node1
kubectl logs -n namespace-version-upgrade-test -l app=mirror-node
kubectl logs -n namespace-version-upgrade-test -l app=hedera-json-rpc-relay
kubectl logs -n namespace-version-upgrade-test -l app=explorer
API Verification
If API verification fails, ensure port forwarding is active and services are ready:
# Check if port forwards are running
ps aux | grep port-forward
# Test connectivity manually
curl http://localhost:8080/api/v1/accounts
curl -X POST http://localhost:7546 -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
Configuration
The Taskfile.yml contains several configurable variables:
NODE_IDENTIFIERS: Consensus node aliases (default: “node1,node2”)
SOLO_NETWORK_SIZE: Number of consensus nodes (default: “2”)
DEPLOYMENT: Deployment name
NAMESPACE: Kubernetes namespace
CLUSTER_NAME: Kind cluster name
Version variables for current and previous versions
Notes
- This example assumes you have the necessary permissions to create Kind clusters
- The local build path feature requires a local Hedera consensus node build
- API verification steps may need adjustment based on actual service endpoints and ingress configuration