The documentation section provides a comprehensive guide to using Solo to launch a Hiero Consensus Node network, including setup instructions, usage guides, and information for developers. It covers everything from installation to advanced features and troubleshooting.
Documentation
- 1: Getting Started
- 2: Solo User Guide
- 3: Solo CLI User Manual
- 4: Updated CLI Command Mappings
- 5: Solo CLI Commands
- 6: FAQ
- 7: Using Solo with Mirror Node
- 8: Using Solo with Hiero JavaScript SDK
- 9: Hiero Consensus Node Platform Developer
- 10: Hiero Consensus Node Execution Developer
- 11: Attach JVM Debugger and Retrieve Logs
- 12: Using Network Load Generator with Solo
- 13: Using Environment Variables
1 - Getting Started
Solo has a new one-shot command! Check it out: Solo User Guide, Solo CLI Commands
Solo
An opinionated CLI tool to deploy and manage standalone test networks.
Releases and Requirements
Solo releases are supported for one month after their release date, after which they are no longer maintained. It is recommended to upgrade to the latest version to benefit from new features and improvements. Every quarter a version will be designated as LTS (Long-Term Support) and will be supported for three months.
Current Releases
| Solo Version | Node.js | Kind | Solo Chart | Hedera | Kubernetes | Kubectl | Helm | k9s | Docker Resources | Release Date | End of Support |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.48.0 (LTS) | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.56.0 | v0.66.0+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-10-24 | 2026-01-24 |
| 0.47.0 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.56.0 | v0.66.0+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-10-16 | 2025-11-16 |
To see a list of legacy releases, please check the legacy versions documentation page.
Hardware Requirements
To run a one-node network, you will need to set up Docker Desktop with at least 12GB of memory and 4 CPUs.
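If you want to confirm what your Docker engine currently has available, a quick check (a minimal sketch assuming a standard Docker CLI; MemTotal is reported in bytes) looks like this:
# show the CPUs and memory visible to the Docker engine
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'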

Setup
# install specific nodejs version
# nvm install <version>
# install nodejs version 20.18.0
nvm install v20.18.0
# lists available node versions already installed
nvm ls
# switch to selected node version
# nvm use <version>
nvm use v20.18.0
Install Solo
- Run
npx @hashgraph/solo
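If you prefer a global install over running Solo through npx each time, a typical npm-based setup (assuming Node.js and npm are already installed) is:
# install the Solo CLI globally
npm install -g @hashgraph/solo
# confirm the binary is on your PATH
solo --version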
Documentation
Contributing
Contributions are welcome. Please see the contributing guide to see how you can get involved.
Code of Conduct
This project is governed by the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code of conduct.
License
2 - Solo User Guide
Note: if you have less than 16 GB of memory to dedicate to Docker, please skip the block node add and destroy steps.
Introduction
Welcome to the world of Hedera development! If you’re looking to build and test applications on the Hedera network but don’t want to spend HBAR on testnet or mainnet transactions, you’ve come to the right place. Solo is your gateway to running your own local Hedera test network, giving you complete control over your development environment.
Solo is an opinionated command-line interface (CLI) tool designed to deploy and manage standalone Hedera test networks. Think of it as your personal Hedera sandbox where you can experiment, test features, and develop applications without any external dependencies or costs. Whether you’re building smart contracts, testing consensus mechanisms, or developing DApps, Solo provides the infrastructure you need.
By the end of this tutorial, you’ll have your own Hedera test network running locally, complete with consensus nodes, mirror nodes, and all the infrastructure needed to submit transactions and test your applications. Let’s dive in!
Prerequisites
Before we begin, let’s ensure your system meets the requirements and has all the necessary software installed. Don’t worry if this seems like a lot; we’ll walk through each step together.
System Requirements (for a bare minimum install running 1 node)
First, check that your computer meets these minimum specifications:
- Memory: At least 12GB of RAM (16GB recommended for smoother performance)
- CPU: Minimum 4 cores (8 cores recommended)
- Storage: At least 20GB of free disk space
- Operating System: macOS, Linux, or Windows with WSL2
Required Software
You’ll need to install a few tools before we can set up Solo. Here’s what you need and how to get it:
1. Node.js (≥ 20.18.0)
Solo is built on Node.js, so you’ll need version 20.18.0 or higher. We recommend using Node Version Manager (nvm) for easy version management:
# Install nvm (macOS/Linux)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Install nvm (Windows - use nvm-windows)
# Download from: https://github.com/coreybutler/nvm-windows
# Install Node.js
nvm install 20.18.0
nvm use 20.18.0
# Verify installation
node --version
2. Docker Desktop
Docker is essential for running the containerized Hedera network components. After installation, ensure Docker is running:
docker --version
docker ps
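Solo also relies on kind, kubectl, and helm (see the release table in Getting Started). A quick way to confirm they are installed, assuming default installations, is:
# verify the Kubernetes tooling used by Solo
kind version
kubectl version --client
helm version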
Preparing Your Environment
Now that we have all prerequisites in place, let’s install Solo and set up our environment.
One thing to consider: old installs can really hamper your ability to get a new install up and running. If you have an old install of Solo, or if you are having issues with the install, please run the following commands to clean up your environment before proceeding.
1. Installing Solo
Open your terminal and install Solo using npx. You should see output showing the latest version, which should match the NPM package version: https://www.npmjs.com/package/@hashgraph/solo
npx @hashgraph/solo
# Verify the installation
solo --version
# Or use different output formats (Kubernetes-style)
solo --version -o json # JSON format: {"version": "0.46.1"}
solo --version -o yaml # YAML format: version: 0.46.1
solo --version -o wide # Plain text: 0.46.1
The --output (or -o) flag can be used with various Solo commands to produce machine-readable output in formats like json, yaml, or wide.
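In scripts, for example, you can combine the JSON output with jq (assuming jq is installed) to capture just the version string:
# extract the bare version number from the JSON output
SOLO_VERSION=$(solo --version -o json | jq -r .version)
echo "${SOLO_VERSION}"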
*Cleaning up an old install
The team is working on fixes and automation that will remove the need for this, but as currently deployed, Solo can be finicky about artifacts from prior installs. Running a quick cleanup to prepare your workstation for a new install is a good idea.
for cluster in $(kind get clusters);do kind delete cluster -n $cluster;done
rm -Rf ~/.solo
2. Setting up your environment variables
You need to declare some environment variables. Note that unless you intentionally add these to your shell configuration (e.g., your zsh config), they may be lost when you close your terminal. *Throughout the remainder of this walkthrough, for simplicity’s sake, the commands assume these are the values in your environment.
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
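To keep these values across terminal sessions, you can append them to your shell profile; this sketch assumes zsh (use ~/.bashrc for bash):
# persist the Solo variables so new shells pick them up
cat <<'EOF' >> ~/.zshrc
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
EOF
source ~/.zshrc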
3. Create a cluster
kind create cluster -n "${SOLO_CLUSTER_NAME}"
Example output:
Creating cluster "solo-e2e" ...
 Ensuring node image (kindest/node:v1.32.2) 🖼 ...
 ✓ Ensuring node image (kindest/node:v1.32.2) 🖼
 Preparing nodes 📦 ...
 ✓ Preparing nodes 📦
 Writing configuration 📜 ...
 ✓ Writing configuration 📜
 Starting control-plane 🕹️ ...
 ✓ Starting control-plane 🕹️
 Installing CNI 🔌 ...
 ✓ Installing CNI 🔌
 Installing StorageClass 💾 ...
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-solo-e2e"
You can now use your cluster with:
kubectl cluster-info --context kind-solo-e2e
Have a nice day! 👋
*Connecting to a remote cluster
kubectl config get-contexts
kubectl config use-context <context-name>
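Once the desired context is active, you can point Solo at it with cluster-ref config connect, as in this sketch (my-remote is an arbitrary alias you choose for the cluster reference):
# map a kubectl context to a Solo cluster reference
solo cluster-ref config connect --cluster-ref my-remote --context <context-name>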
One Shot Deployment
Solo provides three one-shot deployment options to quickly set up your Hedera test network:
Single Node Deployment (Recommended for Development)
For a simple setup with a single node with a mirror node, explorer, and JSON RPC relay, you can follow these quick steps. This is ideal for testing and development purposes.
solo one-shot single deploy
When you’re finished, you can tear down your Solo network just as easily:
solo one-shot single destroy
Multiple Node Deployment (For Consensus Testing)
For testing consensus scenarios or multi-node behavior, you can deploy a network with multiple consensus nodes. This setup includes all the same components as the single node deployment but with multiple consensus nodes for testing consensus mechanisms.
solo one-shot multiple deploy --num-consensus-nodes 2
This command will:
- Deploy multiple consensus nodes (configurable number)
- Set up mirror node, explorer, and JSON RPC relay
- Generate appropriate keys for all nodes
- Create predefined accounts for testing
When you’re finished with the multiple node network:
solo one-shot multiple destroy
Note: Multiple node deployments require more system resources. Ensure you have adequate memory and CPU allocated to Docker (recommended: 16GB+ RAM, 8+ CPU cores).
Falcon Deployment (Advanced Configuration)
For advanced users who need fine-grained control over all network components, the falcon deployment uses a YAML configuration file to customize every aspect of the network.
solo one-shot falcon deploy --values-file falcon-values.yaml
The falcon deployment allows you to:
- Configure all network components through a single YAML file
- Customize consensus nodes, mirror node, explorer, relay, and block node settings
- Set specific versions, resource allocations, and feature flags
- Perfect for CI/CD pipelines and automated testing scenarios
Example Configuration File (falcon-values.yaml):
network:
  --deployment: "my-network"
  --release-tag: "v0.65.0"
  --node-aliases: "node1"
setup:
  --release-tag: "v0.65.0"
  --node-aliases: "node1"
consensusNode:
  --deployment: "my-network"
  --node-aliases: "node1"
  --force-port-forward: true
mirrorNode:
  --enable-ingress: true
  --pinger: true
explorerNode:
  --enable-ingress: true
relayNode:
  --node-aliases: "node1"
See the falcon example for a complete configuration template.
When you’re finished with the falcon deployment:
solo one-shot falcon destroy
Note: The falcon deployment reads the deployment name and other shared settings from the values file, so you don’t need to specify --deployment on the command line.
Step-by-Step Solo Network Deployment
If you have a more complex setup in mind, such as multiple nodes or specific configurations, follow these detailed steps to deploy your Solo network.
1. Initialize solo:
Reset the .solo directory before initializing Solo. This step is crucial to ensure a clean setup without any leftover artifacts from previous installations. See: *Cleaning up an old install.
solo init
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : init
**********************************************************************************
***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /home/runner/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
**********************************************************************************
'solo init' is now deprecated, you don't need to run it anymore.
**********************************************************************************
Setup home directory and cache
β Setup home directory and cache
Create local configuration
Create local configuration [SKIPPED: Create local configuration]
Copy templates in '/home/runner/.solo/cache'
β Copy templates in '/home/runner/.solo/cache'
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
2. Connect the cluster and create a deployment
This command will create a deployment in the specified clusters, and generate the LocalConfig and RemoteConfig used by k8s.
Notice that the --cluster-ref value is kind-solo: when you created the Kind cluster, a context named kind-solo was added to your Kubernetes config. If you used a different name, replace kind-solo with your cluster name, prefixed with kind-. If you are working with a remote cluster, use the name of your cluster reference, which can be gathered with the command: kubectl config get-contexts.
Note: Solo stores various artifacts (config, logs, keys etc.) in its home directory: ~/.solo. If you need a full reset, delete this directory before running solo init again.
# connect to the cluster you created in a previous command
solo cluster-ref config connect --cluster-ref kind-${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
# create the deployment
solo deployment config create -n "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : cluster-ref config connect --cluster-ref kind-solo --context kind-solo
**********************************************************************************
Initialize
β Initialize
Validating cluster ref:
β kind-solo
Test connection to cluster:
β Test connection to cluster: kind-solo
Associate a context with a cluster reference:
β Associate a context with a cluster reference: kind-solo
solo-deployment_CREATE_OUTPUT
3. Add a cluster to the deployment you created
*This command is the first one that specifies how many consensus nodes to add to your deployment. To keep resource usage low, this example adds a single node.
# Add a cluster to the deployment you created
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --num-consensus-nodes 1
# If the command is unresponsive, there is also a handy configurator: run `solo deployment cluster attach` without any arguments to get a guided setup.
Example output:
solo-deployment_ADD_CLUSTER_OUTPUT
4. Generate keys
You need to generate keys for your nodes, or in this case a single node.
solo keys consensus generate --gossip-keys --tls-keys --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : keys consensus generate --gossip-keys --tls-keys --deployment solo-deployment
**********************************************************************************
Initialize
β Initialize
Generate gossip keys
Backup old files
β Backup old files
Gossip key for node: node1
β Gossip key for node: node1
β Generate gossip keys
Generate gRPC TLS Keys
Backup old files
TLS key for node: node1
β Backup old files
β TLS key for node: node1
β Generate gRPC TLS Keys
Finalize
β Finalize
PEM key files are generated in the ~/.solo/cache/keys directory:
hedera-node1.crt hedera-node3.crt s-private-node1.pem s-public-node1.pem unused-gossip-pem
hedera-node1.key hedera-node3.key s-private-node2.pem s-public-node2.pem unused-tls
hedera-node2.crt hedera-node4.crt s-private-node3.pem s-public-node3.pem
hedera-node2.key hedera-node4.key s-private-node4.pem s-public-node4.pem
5. Setup cluster with shared components
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : cluster-ref config setup --cluster-setup-namespace solo-cluster
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
β Initialize
Install cluster charts
Skipping Grafana Agent chart installation
Install pod-monitor-role ClusterRole
✔ ClusterRole pod-monitor-role installed successfully
✔ Install pod-monitor-role ClusterRole
Install MinIO Operator chart
✔ MinIO Operator chart installed successfully
✔ Install MinIO Operator chart
Install Prometheus Stack chart
✔ Prometheus Stack chart installed successfully
✔ Install Prometheus Stack chart
✔ Install cluster charts
Deploying Helm chart with network components
Now comes the exciting part: deploying your Hedera test network!
*Deploy a block node (experimental)
⚠️ Block Node is experimental in Solo. It requires a minimum of 16 GB of memory allocated to Docker; if you have less than 16 GB, skip this step. It also requires the Consensus Node to be at least v0.62.3. You will need to augment the solo consensus network deploy and solo consensus node setup commands with the --release-tag v0.62.6 option to ensure that the Consensus Node is at the correct version. *Note: v0.62.6 is the latest patch release of v0.62.
solo block node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-"${SOLO_CLUSTER_NAME}" --release-tag v0.62.6
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : block node add --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Prepare release name and block node name
β Prepare release name and block node name
Prepare chart values
β Prepare chart values
Deploy block node
- Installed block-node-1 chart, version: 0.21.1
β Deploy block node
Check block node pod is running
β Check block node pod is running
Check software
β Check software
Check block node pod is ready
β Check block node pod is ready
Check block node readiness
β Check block node readiness - [1/100] success
Add block node component in remote config
β Add block node component in remote config
1. Deploy the network
Deploying the network runs the risk of timeouts while images are downloaded and pods are starting. If you experience a failure, double-check the resources you’ve allocated to the Docker engine and try again.
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus network deploy --deployment solo-deployment
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Copy gRPC TLS Certificates
Copy gRPC TLS Certificates [SKIPPED: Copy gRPC TLS Certificates]
Prepare staging directory
Copy Gossip keys to staging
β Copy Gossip keys to staging
Copy gRPC TLS keys to staging
β Copy gRPC TLS keys to staging
β Prepare staging directory
Copy node keys to secrets
Copy TLS keys
Node: node1, cluster: kind-solo
Copy Gossip keys
β Copy Gossip keys
β Node: node1, cluster: kind-solo
β Copy TLS keys
β Copy node keys to secrets
Install chart 'solo-deployment'
- Installed solo-deployment chart, version: 0.57.0
β Install chart 'solo-deployment'
Check for load balancer
Check for load balancer [SKIPPED: Check for load balancer]
Redeploy chart with external IP address config
Redeploy chart with external IP address config [SKIPPED: Redeploy chart with external IP address config]
Check node pods are running
Check Node: node1, Cluster: kind-solo
β Check Node: node1, Cluster: kind-solo
β Check node pods are running
Check proxy pods are running
Check HAProxy for: node1, cluster: kind-solo
Check Envoy Proxy for: node1, cluster: kind-solo
β Check Envoy Proxy for: node1, cluster: kind-solo
β Check HAProxy for: node1, cluster: kind-solo
β Check proxy pods are running
Check auxiliary pods are ready
Check MinIO
β Check MinIO
β Check auxiliary pods are ready
Add node and proxies to remote config
β Add node and proxies to remote config
Copy block-nodes.json
β Copy block-nodes.json
2. Set up a node with Hedera platform software
This step downloads the Hedera platform code and sets up your node or nodes.
# consensus node setup
export CONSENSUS_NODE_VERSION=v0.66.0 # or whatever version you are trying to deploy, starting with a `v`
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" --release-tag "${CONSENSUS_NODE_VERSION}"
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus node setup --deployment solo-deployment
**********************************************************************************
Load configuration
β Load configuration
Initialize
β Initialize
Validate nodes states
Validating state for node node1
β Validating state for node node1 - valid state: requested
β Validate nodes states
Identify network pods
Check network pod: node1
β Check network pod: node1
β Identify network pods
Fetch platform software into network nodes
Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ]
β Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ]
β Fetch platform software into network nodes
Setup network nodes
Node: node1
Copy configuration files
β Copy configuration files
Set file permissions
β Set file permissions
β Node: node1
β Setup network nodes
setup network node folders
β setup network node folders
Change node state to configured in remote config
β Change node state to configured in remote config
3. Start the nodes up!
Now that everything is set up, you need to start the nodes.
# start your node/nodes
solo consensus node start --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus node start --deployment solo-deployment
**********************************************************************************
Check dependencies
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Load configuration
β Load configuration
Initialize
β Initialize
Validate nodes states
Validating state for node node1
β Validating state for node node1 - valid state: configured
β Validate nodes states
Identify existing network nodes
Check network pod: node1
β Check network pod: node1
β Identify existing network nodes
Upload state files network nodes
Upload state files network nodes [SKIPPED: Upload state files network nodes]
Starting nodes
Start node: node1
β Start node: node1
β Starting nodes
Enable port forwarding for debug port and/or GRPC port
Using requested port 50211
β Enable port forwarding for debug port and/or GRPC port
Check all nodes are ACTIVE
Check network pod: node1
β Check network pod: node1 - status ACTIVE, attempt: 17/300
β Check all nodes are ACTIVE
Check node proxies are ACTIVE
Check proxy for node: node1
β Check proxy for node: node1
β Check node proxies are ACTIVE
Change node state to started in remote config
β Change node state to started in remote config
Add node stakes
Adding stake for node: node1
Using requested port 30212
β Adding stake for node: node1
β Add node stakes
set gRPC Web endpoint
β set gRPC Web endpoint
Stopping port-forwarder for port [30212]
4. Deploy a mirror node
This is the most memory-intensive step. If you have issues here, check your local resource utilization and make sure there is memory available for Docker (close all non-essential applications). You can also consider lowering the swap in Docker settings to ease swap demand, then try again. The --pinger flag starts a pinging service that sends transactions to the network at regular intervals; this is needed because a record file is not imported into the mirror node until the next one is created.
# Deploy with explicit configuration
solo mirror node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --enable-ingress --pinger
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : mirror node add --deployment solo-deployment --cluster-ref kind-solo --enable-ingress --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Using requested port 30212
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Enable mirror-node
Prepare address book
β Prepare address book
Install mirror ingress controller
- Installed haproxy-ingress-1 chart, version: 0.14.5
β Install mirror ingress controller
Deploy mirror-node
- Installed mirror chart, version: v0.141.0
β Deploy mirror-node
β Enable mirror-node
Check pods are ready
Check Postgres DB
Check REST API
Check GRPC
Check Monitor
Check Web3
Check Importer
β Check Postgres DB
β Check GRPC
β Check Monitor
β Check REST API
β Check Web3
β Check Importer
β Check pods are ready
Seed DB data
Insert data in public.file_data
β Insert data in public.file_data
β Seed DB data
Add mirror node to remote config
β Add mirror node to remote config
Enable port forwarding for mirror ingress controller
Using requested port 8081
β Enable port forwarding for mirror ingress controller
Stopping port-forwarder for port [30212]
5. Deploy the explorer
Watch the deployment progress:
# deploy explorer
solo explorer node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : explorer node add --deployment solo-deployment --cluster-ref kind-solo --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Load remote config
β Load remote config
Install cert manager
Install cert manager [SKIPPED: Install cert manager]
Install explorer
- Installed hiero-explorer-1 chart, version: 25.1.1
β Install explorer
Install explorer ingress controller
Install explorer ingress controller [SKIPPED: Install explorer ingress controller]
Check explorer pod is ready
β Check explorer pod is ready
Check haproxy ingress controller pod is ready
Check haproxy ingress controller pod is ready [SKIPPED: Check haproxy ingress controller pod is ready]
Add explorer to remote config
β Add explorer to remote config
Enable port forwarding for explorer
Using requested port 8080
β Enable port forwarding for explorer
6. Deploy a JSON RPC relay
The JSON RPC relay allows you to interact with your Hedera network using standard JSON RPC calls. This is useful for integrating with existing tools and libraries.
# deploy a solo JSON RPC relay
solo relay node add -i node1 --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : relay node add --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Check chart is installed
β Check chart is installed
Prepare chart values
Using requested port 30212
β Prepare chart values
Deploy JSON RPC Relay
- Installed relay-1 chart, version: 0.70.0
β Deploy JSON RPC Relay
Check relay is running
β Check relay is running
Check relay is ready
β Check relay is ready
Add relay component in remote config
β Add relay component in remote config
Enable port forwarding for relay node
Using requested port 7546
β Enable port forwarding for relay node
Stopping port-forwarder for port [30212]
*Check Pod Status
Here is a command if you want to check the status of your Solo Kubernetes pods:
# Check pod status
kubectl get pods -n solo
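If you want to keep an eye on the rollout, a couple of standard kubectl variations (reusing the namespace variable from earlier) can help:
# show which node each pod landed on
kubectl get pods -n "${SOLO_NAMESPACE}" -o wide
# watch pods change state as they start
kubectl get pods -n "${SOLO_NAMESPACE}" -w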
Working with Your Network
Network Endpoints
Port forwarding is now automatic for many endpoints. However, you can set up your own using the kubectl port-forward command:
# Consensus Service for node1 (node ID = 0): localhost:50211
# should be automatic: kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 50211:50211 > /dev/null 2>&1 &
# Explorer UI: http://localhost:8080
# should be automatic: kubectl port-forward svc/hiero-explorer -n "${SOLO_NAMESPACE}" 8080:8080 > /dev/null 2>&1 &
# Mirror Node gRPC, REST, REST Java, Web3 will be automatic on `localhost:8081` if you passed `--enable-ingress` to the `solo mirror node add` command
# Mirror Node gRPC: localhost:5600
kubectl port-forward svc/mirror-1-grpc -n "${SOLO_NAMESPACE}" 5600:5600 > /dev/null 2>&1 &
# Mirror Node REST API: http://localhost:5551
kubectl port-forward svc/mirror-1-rest -n "${SOLO_NAMESPACE}" 5551:80 > /dev/null 2>&1 &
# Mirror Node REST Java API http://localhost:8084
kubectl port-forward svc/mirror-1-restjava -n "${SOLO_NAMESPACE}" 8084:80 > /dev/null 2>&1 &
# JSON RPC Relay: localhost:7546
# should be automatic: kubectl port-forward svc/relay-node1-hedera-json-rpc-relay -n "${SOLO_NAMESPACE}" 7546:7546 > /dev/null 2>&1 &
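As a quick sanity check that the forwarded endpoints respond, you can hit them with curl; this sketch assumes the default ports above and that the mirror node and relay were deployed:
# mirror node REST API
curl -s "http://localhost:5551/api/v1/network/nodes"
# explorer UI (expect an HTTP 200)
curl -sI http://localhost:8080 | head -n 1
# JSON RPC relay (standard eth_chainId request)
curl -s -X POST http://localhost:7546 -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'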
Managing Your Network
Stopping and Starting Nodes
You can control individual nodes or the entire network:
# Stop all nodes
solo consensus node stop --deployment solo-deployment
# Stop a specific node
solo consensus node stop --node-aliases node1 --deployment solo-deployment
# Restart nodes
solo consensus node restart --deployment solo-deployment
# Start nodes again
solo consensus node start --deployment solo-deployment
Viewing Logs
Access Solo and Consensus Node logs for troubleshooting:
# Download logs from all nodes
solo consensus diagnostics all --deployment solo-deployment
# Logs are saved to ~/.solo/logs/<namespace>/<pod-name>/
# You can also use kubectl directly:
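A minimal kubectl sketch (the pod name follows the network-node naming used elsewhere in this guide and may differ in your cluster):
# stream logs from a consensus node pod
kubectl logs -n "${SOLO_NAMESPACE}" network-node-0 -f
# list pods if you are unsure of the name
kubectl get pods -n "${SOLO_NAMESPACE}"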
Updating the Network
To update nodes to a new Hedera version, you must upgrade one minor version at a time:
solo consensus network upgrade --deployment solo-deployment --upgrade-version v0.62.6
Updating a single node
To update a single node to a new Hedera version, you must likewise move one minor version at a time. It is also possible to update a single node through a process with separate steps; this is only useful in very specific cases, such as when testing the update process.
solo consensus node update --deployment solo-deployment --node-alias node1 --release-tag v0.62.6
solo consensus dev-node-update prepare --deployment solo-deployment --node-alias node1 --release-tag v0.62.6 --output-dir context
solo consensus dev-node-update submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-update execute --deployment solo-deployment --input-dir context
Adding a new node to the network
Adding a new node to an existing Solo network: it is possible to add a new node through a process with separate steps. This is only useful in very specific cases, such as when testing the node-adding process.
TODO solo consensus node add
solo consensus dev-node-add prepare --gossip-keys true --tls-keys true --deployment solo-deployment --pvcs true --admin-key ***** --node-alias node1 --output-dir context
solo consensus dev-node-add submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-add execute --deployment solo-deployment --input-dir context
Deleting a node from the network
This command is used to delete a node from an existing Solo network: it is possible to delete a node through a process with separate steps. This is only useful in very specific cases, such as when testing the deletion process.
TODO solo consensus node destroy
solo consensus dev-node-delete prepare --deployment solo-deployment --node-alias node1 --output-dir context
solo consensus dev-node-delete submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-delete execute --deployment solo-deployment --input-dir context
Troubleshooting: Common Issues and Solutions
1. Pods Not Starting
If pods remain in Pending or CrashLoopBackOff state:
# Check pod events
kubectl describe pod -n solo network-node-0
# Common fixes:
# - Increase Docker resources (memory/CPU)
# - Check disk space
# - Restart Docker and the kind cluster
2. Connection Refused Errors
If you can’t connect to network endpoints:
# Check service endpoints
kubectl get svc -n solo
# Manually forward ports if needed
kubectl port-forward -n solo svc/network-node-0 50211:50211
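You can also verify that something is actually listening on the forwarded ports; a minimal sketch using common tools (nc may be installed as netcat on some systems):
# check that the consensus gRPC and relay ports are reachable
nc -vz localhost 50211
nc -vz localhost 7546
# list any kubectl port-forward processes you started manually
ps aux | grep '[k]ubectl port-forward'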
3. Node Synchronization Issues
If nodes aren’t forming consensus:
# Check node status
solo consensus state download --deployment solo-deployment --node-aliases node1
# Look for gossip connectivity issues
kubectl logs -n solo network-node-0 | grep -i gossip
# Restart problematic nodes
solo consensus node refresh --node-aliases node1 --deployment solo-deployment
Getting Help
When you need assistance, run solo consensus diagnostics all --deployment solo-deployment and examine the logs under ~/.solo/logs/.
Cleanup
When you’re done with your test network, you can tear it down step by step as shown below. To quickly clean up your Solo network and remove all resources (all Kind clusters!), use the following commands; be aware you will lose all your logs and data from prior runs.
*Fast clean up
for cluster in $(kind get clusters);do kind delete cluster -n $cluster;done
rm -Rf ~/.solo
1. Destroy relay node
solo relay node destroy -i node1 --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : relay node destroy --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Destroy JSON RPC Relay
*** Destroyed Relays ***
2. Destroy mirror node
solo mirror node destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : mirror node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Using requested port 30212
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Destroy mirror-node
β Destroy mirror-node
Delete PVCs
β Delete PVCs
Uninstall mirror ingress controller
β Uninstall mirror ingress controller
Remove mirror node from remote config
β Remove mirror node from remote config
Stopping port-forwarder for port [30212]
3. Destroy explorer node
solo explorer node destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : explorer node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Load remote config
β Load remote config
Destroy explorer
β Destroy explorer
Uninstall explorer ingress controller
β Uninstall explorer ingress controller
Remove explorer from remote config
β Remove explorer from remote config
*Destroy block node (Experimental)
Block node destroy should run prior to consensus network destroy, since consensus network destroy removes the remote config. To destroy the block node (if you deployed it), use the following command:
solo block node destroy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : block node destroy --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Destroy block node
β Destroy block node
Disable block node component in remote config
β Disable block node component in remote config
4. Destroy network
solo consensus network destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.48.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus network destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: helm [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependency: kubectl [OS: linux, Release: 5.15.0-160-generic, Arch: x64]
β Check dependencies
Setup chart manager
β Setup chart manager
Initialize
Acquire lock
β Acquire lock - lock acquired successfully, attempt: 1/10
β Initialize
Running sub-tasks to destroy network
β Deleting the RemoteConfig configmap in namespace solo
Next Steps
Congratulations! You now have a working Hedera test network. A good starting point for exploration is the mirror node REST API at http://localhost:5551. Remember, this is your personal Hedera playground: experiment freely, break things, learn, and have fun building on Hedera! Happy coding with Solo!
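For a first experiment, you could create a funded test account with Solo and then look it up through the mirror node REST API; this sketch assumes the deployment name and ports used throughout this guide:
# create a new account with 100 HBAR on your local network
solo ledger account create --deployment solo-deployment --hbar-amount 100
# list the most recently created accounts via the mirror node REST API
curl -s "http://localhost:5551/api/v1/accounts?limit=3&order=desc"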
3 - Solo CLI User Manual
Solo Command Line User Manual
Solo has a series of commands, and some commands have subcommands. Users can get help information using the following methods:
solo --help will return the help information for the solo command and show which commands are available.
Version Information
Check the Solo version using:
solo --version
For machine-readable output formats (Kubernetes ecosystem standard), use the --output or -o flag:
solo --version -o json # JSON format: {"version": "0.46.1"}
solo --version -o yaml # YAML format: version: 0.46.1
solo --version -o wide # Plain text: 0.46.1
The --output flag can also be used with other Solo commands to suppress banners and produce machine-readable output, making it ideal for scripts and CI/CD pipelines.
solo command --help will return the help information for a specific command and show which options are available.
solo ledger account --help
Manage Hedera accounts in solo network
Commands:
system init Initialize system accounts with new keys
account create Creates a new account with a new key and stores the key in th
e Kubernetes secrets, if you supply no key one will be genera
ted for you, otherwise you may supply either a ECDSA or ED255
19 private key
account update Updates an existing account with the provided info, if you wa
nt to update the private key, you can supply either ECDSA or
ED25519 but not both
account get Gets the account info including the current amount of HBAR
Options:
--dev Enable developer mode [boolean]
--force-port-forward Force port forward to access the network services
[boolean]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
solo command subcommand --help will return the help information for a specific subcommand and show which options are available.
solo ledger account create --help
Creates a new account with a new key and stores the key in the Kubernetes secret
s, if you supply no key one will be generated for you, otherwise you may supply
either a ECDSA or ED25519 private key
Options:
--dev Enable developer mode [boolean]
--force-port-forward Force port forward to access the network services
[boolean]
--hbar-amount Amount of HBAR to add [number]
--create-amount Amount of new account to create [number]
--ecdsa-private-key ECDSA private key for the Hedera account [string]
-d, --deployment The name the user will reference locally to link to
a deployment [string]
--ed25519-private-key ED25519 private key for the Hedera account [string]
--generate-ecdsa-key Generate ECDSA private key for the Hedera account
[boolean]
--set-alias Sets the alias for the Hedera account when it is cr
eated, requires --ecdsa-private-key [boolean]
-c, --cluster-ref The cluster reference that will be used for referen
cing the Kubernetes cluster and stored in the local
and remote configuration for the deployment. For
commands that take multiple clusters they can be se
parated by commas. [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
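For example, a typical invocation based on the options above (the deployment name solo-deployment is assumed from the user guide):
# create an account with a generated ECDSA key and an initial balance of 50 HBAR
solo ledger account create --deployment solo-deployment --generate-ecdsa-key --hbar-amount 50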
For more information see: Solo CLI Commands
4 - Updated CLI Command Mappings
Updated CLI Command Mappings
The following tables provide a complete mapping of previous (< v0.44.0) CLI commands to their updated three-level structure. Entries marked as No changes retain their original form.
Init
| Old Command | New Command |
|---|---|
| init | No changes |
Block node
| Old Command | New Command |
|---|---|
| block node add | No changes |
| block node destroy | No changes |
| block node upgrade | No changes |
Account
| Old Command | New Command |
|---|---|
| account init | ledger system init |
| account update | ledger account update |
| account create | ledger account create |
| account get | ledger account info |
One Shot
| Old Command | New Command |
|---|---|
| one-shot single deploy | one shot deploy |
| one-shot single destroy | one shot destroy |
Cluster Reference
| Old Command | New Command |
|---|---|
| cluster-ref connect | cluster-ref config connect |
| cluster-ref disconnect | cluster-ref config disconnect |
| cluster-ref list | cluster-ref config list |
| cluster-ref info | cluster-ref config info |
| cluster-ref setup | cluster-ref config setup |
| cluster-ref reset | cluster-ref config reset |
Deployment
| Old Command | New Command |
|---|---|
| deployment add-cluster | deployment cluster attach |
| deployment list | deployment config list |
| deployment create | deployment config create |
| deployment delete | deployment config destroy |
Explorer
| Old Command | New Command |
|---|---|
| explorer deploy | explorer node add |
| explorer destroy | explorer node destroy |
Mirror Node
| Old Command | New Command |
|---|---|
| mirror-node deploy | mirror node add |
| mirror-node destroy | mirror node destroy |
Relay
| Old Command | New Command |
|---|---|
| relay deploy | relay node add |
| relay destroy | relay node destroy |
Network
| Old Command | New Command |
|---|---|
| network deploy | consensus network deploy |
| network destroy | consensus network destroy |
Node
| Old Command | New Command |
|---|---|
| node keys | keys consensus generate |
| node freeze | consensus network freeze |
| node upgrade | consensus network upgrade |
| node setup | consensus node setup |
| node start | consensus node start |
| node stop | consensus node stop |
| node upgrade | consensus node upgrade |
| node restart | consensus node restart |
| node refresh | consensus node refresh |
| node add | consensus node add |
| node update | consensus node update |
| node delete | consensus node destroy |
| node add-prepare | consensus dev-node-add prepare |
| node add-submit-transaction | consensus dev-node-add submit-transaction |
| node add-execute | consensus dev-node-add execute |
| node update-prepare | consensus dev-node-update prepare |
| node update-submit-transaction | consensus dev-node-update submit-transaction |
| node update-execute | consensus dev-node-update execute |
| node upgrade-prepare | consensus dev-node-upgrade prepare |
| node upgrade-submit-transaction | consensus dev-node-upgrade submit-transaction |
| node upgrade-execute | consensus dev-node-upgrade execute |
| node delete-prepare | consensus dev-node-delete prepare |
| node delete-submit-transaction | consensus dev-node-delete submit-transaction |
| node delete-execute | consensus dev-node-delete execute |
| node prepare-upgrade | consensus dev-freeze prepare-upgrade |
| node freeze-upgrade | consensus dev-freeze freeze-upgrade |
| node download-generated-files | consensus diagnostic configs |
| node logs | consensus diagnostics all |
| node states | consensus state download |
5 - Solo CLI Commands
Solo Command Reference
Root Help Output
Select a command
Usage:
solo <command> [options]
Commands:
init Initialize local environment
block Block Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
cluster-ref Manages the relationship between Kubernetes context names and Solo cluster references which are an alias for a kubernetes context.
consensus Consensus Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
deployment Create, modify, and delete deployment configurations. Deployments are required for most of the other commands.
explorer Explorer Node operations for creating, modifying, and destroying resources.These commands require the presence of an existing deployment.
keys Consensus key generation operations
ledger System, Account, and Crypto ledger-based management operations. These commands require an operational set of consensus nodes and may require an operational mirror node.
mirror Mirror Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
relay RPC Relay Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
one-shot One Shot commands for new and returning users who need a preset environment type. These commands use reasonable defaults to provide a single command out of box experience.
rapid-fire Commands for performing load tests a Solo deployment
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
init
init
Initialize local environment
Options:
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-u, --user Optional user name used for [string]
local configuration. Only
accepts letters and numbers.
Defaults to the username
provided by the OS
-v, --version Show version number [boolean]
block
block
Block Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
Commands:
block node Create, manage, or destroy block node instances. Operates on a single block node instance at a time.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
block node
block node
Create, manage, or destroy block node instances. Operates on a single block node instance at a time.
Commands:
block node add Creates and configures a new block node instance for the specified deployment using the specified Kubernetes cluster. The cluster must be accessible and attached to the specified deployment.
block node destroy Destroys a single block node instance in the specified deployment. Requires access to all Kubernetes clusters attached to the deployment.
block node upgrade Upgrades a single block node instance in the specified deployment. Requires access to all Kubernetes clusters attached to the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
block node add
block node add
Creates and configures a new block node instance for the specified deployment using the specified Kubernetes cluster. The cluster must be accessible and attached to the specified deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--chart-version Block nodes chart version [string] [default: "v0.21.1"]
--block-node-chart-dir Block node local chart directory path (e.g. ~/hiero-block-node/charts) [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--domain-name Custom domain name [string]
--enable-ingress enable ingress on the component/pod [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--image-tag The Docker image tag to override what is in the Helm Chart [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
block node destroy
block node destroy
Destroys a single block node instance in the specified deployment. Requires access to all Kubernetes clusters attached to the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--id The numeric identifier for the component [number]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
block node upgrade
block node upgrade
Upgrades a single block node instance in the specified deployment. Requires access to all Kubernetes clusters attached to the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--block-node-chart-dir Block node local chart directory path (e.g. ~/hiero-block-node/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--upgrade-version Version to be used for the upgrade [string]
--id The numeric identifier for the component [number]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
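A typical invocation might look like the following sketch (placeholder deployment name; the version value simply reuses the default chart version listed above as an illustration):
# upgrade block node component 1 to the requested chart version
solo block node upgrade --deployment solo-deployment --id 1 --upgrade-version v0.21.1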
cluster-ref
cluster-ref
Manages the relationship between Kubernetes context names and Solo cluster references, which are aliases for Kubernetes contexts.
Commands:
cluster-ref config List, create, manage, and remove associations between Kubernetes contexts and Solo cluster references.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
cluster-ref config
cluster-ref config
List, create, manage, and remove associations between Kubernetes contexts and Solo cluster references.
Commands:
cluster-ref config connect Creates a new internal Solo cluster reference for a Kubernetes context, or maps a Kubernetes context to an existing internal Solo cluster reference
cluster-ref config disconnect Removes the Kubernetes context associated with an internal Solo cluster reference.
cluster-ref config list Lists the configured Kubernetes context to Solo cluster reference mappings.
cluster-ref config info Displays the status information and attached deployments for a given Solo cluster reference mapping.
cluster-ref config setup Setup cluster with shared components
cluster-ref config reset Uninstall shared components from cluster
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
cluster-ref config connect
cluster-ref config connect
Creates a new internal Solo cluster reference for a Kubernetes context, or maps a Kubernetes context to an existing internal Solo cluster reference
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--context The Kubernetes context name to be used [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
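For instance, a local kind cluster context could be mapped to a Solo cluster reference like so (a sketch; kind-solo is a placeholder for whatever context name kubectl config get-contexts reports on your machine):
# map the Kubernetes context kind-solo to a Solo cluster reference of the same name
solo cluster-ref config connect --cluster-ref kind-solo --context kind-solo --quiet-mode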
cluster-ref config disconnect
cluster-ref config disconnect
Removes the Kubernetes context associated with an internal Solo cluster reference.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
cluster-ref config list
cluster-ref config list
Lists the configured Kubernetes context to Solo cluster reference mappings.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
cluster-ref config info
cluster-ref config info
Displays the status information and attached deployments for a given Solo cluster reference mapping.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
cluster-ref config setup
cluster-ref config setup
Setup cluster with shared components
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-s, --cluster-setup-namespace Cluster Setup Namespace [string] [default: "solo-setup"]
--minio Deploy minio operator [boolean] [default: true]
--prometheus-stack Deploy prometheus stack [boolean] [default: true]
--grafana-agent Deploy grafana agent [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
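A minimal setup invocation might look like this (placeholder cluster reference; per the defaults above, the shared components land in the solo-setup namespace unless overridden with --cluster-setup-namespace):
# install shared components (minio operator and prometheus stack by default) into the cluster
solo cluster-ref config setup --cluster-ref kind-solo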
cluster-ref config reset
cluster-ref config reset
Uninstall shared components from cluster
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-s, --cluster-setup-namespace Cluster Setup Namespace [string] [default: "solo-setup"]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus
consensus
Consensus Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
Commands:
consensus network Ledger/network wide consensus operations such as freeze, upgrade, and deploy. Operates on the entire ledger and all consensus node instances.
consensus node List, create, manage, or destroy consensus node instances. Operates on a single consensus node instance at a time.
consensus state List, download, and upload consensus node state backups to/from individual consensus node instances.
consensus diagnostics Capture diagnostic information such as logs, signed states, and ledger/network/node configurations.
consensus dev-node-add Dev operations for adding consensus nodes.
consensus dev-node-update Dev operations for updating consensus nodes
consensus dev-node-upgrade Dev operations for upgrading consensus nodes
consensus dev-node-delete Dev operations for deleting consensus nodes
consensus dev-freeze Dev operations for freezing consensus nodes
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus network
consensus network
Ledger/network wide consensus operations such as freeze, upgrade, and deploy. Operates on the entire ledger and all consensus node instances.
Commands:
consensus network deploy Installs and configures all consensus nodes for the deployment.
consensus network destroy Removes all consensus network components from the deployment.
consensus network freeze Initiates a network freeze for scheduled maintenance or upgrades
consensus network upgrade Upgrades the software version running on all consensus nodes.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus network deploy
consensus network deploy
Installs and configures all consensus nodes for the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--api-permission-properties api-permission.properties file for node [string] [default: "templates/api-permission.properties"]
--app Testing app name [string] [default: "HederaNode.jar"]
--application-env The application.env file for the node; provides environment variables to the solo-container when the Hedera platform is started [string] [default: "templates/application.env"]
--application-properties application.properties file for node [string] [default: "templates/application.properties"]
--bootstrap-properties bootstrap.properties file for node [string] [default: "templates/bootstrap.properties"]
--genesis-throttles-file throttles.json file used during network genesis [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--load-balancer Enable load balancer for network node proxies [boolean] [default: false]
--log4j2-xml log4j2.xml file for node [string] [default: "templates/log4j2.xml"]
--pvcs Enable persistent volume claims to store data outside the pod, required for consensus node add [boolean] [default: false]
--profile-file Resource profile definition (e.g. custom-spec.yaml) [string] [default: "profiles/custom-spec.yaml"]
--profile Resource profile (local | tiny | small | medium | large) [string] [default: "local"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--settings-txt settings.txt file for node [string] [default: "templates/settings.txt"]
-f, --values-file Comma separated chart values file paths for each cluster (e.g. values.yaml,cluster-1=./a/b/values1.yaml,cluster-2=./a/b/values2.yaml) [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
--grpc-tls-cert TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated) [string]
--grpc-web-tls-cert TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated) [string]
--grpc-tls-key TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated) [string]
--grpc-web-tls-key TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated) [string]
--haproxy-ips IP mapping where key = value is node alias and static ip for haproxy, (e.g.: --haproxy-ips node1=127.0.0.1,node2=127.0.0.1) [string]
--envoy-ips IP mapping where key = value is node alias and static ip for envoy proxy, (e.g.: --envoy-ips node1=127.0.0.1,node2=127.0.0.1) [string]
--storage-type storage type for saving stream files, available options are minio_only, aws_only, gcs_only, aws_and_gcs [default: "minio_only"]
--gcs-write-access-key gcs storage access key for write access [string]
--gcs-write-secrets gcs storage secret key for write access [string]
--gcs-endpoint gcs storage endpoint URL [string]
--gcs-bucket name of gcs storage bucket [string]
--gcs-bucket-prefix path prefix of google storage bucket [string]
--aws-write-access-key aws storage access key for write access [string]
--aws-write-secrets aws storage secret key for write access [string]
--aws-endpoint aws storage endpoint URL [string]
--aws-bucket name of aws storage bucket [string]
--aws-bucket-region name of aws bucket region [string]
--aws-bucket-prefix path prefix of aws storage bucket [string]
--backup-bucket name of bucket for backing up state files [string]
--backup-write-access-key backup storage access key for write access [string]
--backup-write-secrets backup storage secret key for write access [string]
--backup-endpoint backup storage endpoint URL [string]
--backup-region backup storage region [string] [default: "us-central1"]
--backup-provider backup storage service provider, GCS or AWS [string] [default: "GCS"]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
--block-node-cfg Configure block node routing for each consensus node. Maps consensus node names to block node IDs. Accepts: (1) JSON string: '{"node1":[1,3],"node2":[2]}' or (2) path to JSON file: 'block.json'. Example: node1 sends blocks to block nodes 1 and 3, node2 sends blocks to block node 2 [string]
--service-monitor Install ServiceMonitor custom resource for monitoring Network Node metrics [boolean] [default: false]
--pod-log Install PodLog custom resource for monitoring Network Node pod logs [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
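Putting it together, a two-node network deployment might be started with something like the following sketch (placeholder names; it assumes the deployment and its cluster references were created beforehand):
# deploy and configure consensus nodes node1 and node2 for the deployment
solo consensus network deploy --deployment solo-deployment --node-aliases node1,node2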
consensus network destroy
consensus network destroy
Removes all consensus network components from the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--delete-pvcs Delete the persistent volume claims. If both --delete-pvcs and --delete-secrets are set to true, the namespace will be deleted. [boolean] [default: false]
--delete-secrets Delete the network secrets. If both --delete-pvcs and --delete-secrets are set to true, the namespace will be deleted. [boolean] [default: false]
--enable-timeout enable time out for running a command [boolean] [default: false]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
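For example, to tear the network down completely, including its persistent volume claims and secrets (which, per the flags above, also removes the namespace), one could run a command like this (placeholder deployment name):
# destroy the consensus network and its stored data
solo consensus network destroy --deployment solo-deployment --delete-pvcs --delete-secrets --force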
consensus network freeze
consensus network freeze
Initiates a network freeze for scheduled maintenance or upgrades
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus network upgrade
consensus network upgrade
Upgrades the software version running on all consensus nodes.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--upgrade-version Version to be used for the upgrade [string]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--upgrade-zip-file A zipped file used for network upgrade [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
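A hedged example of a ledger-wide upgrade (placeholder deployment name; the version value is illustrative only):
# upgrade all consensus nodes to the requested platform version
solo consensus network upgrade --deployment solo-deployment --upgrade-version v0.66.0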
consensus node
consensus node
List, create, manage, or destroy consensus node instances. Operates on a single consensus node instance at a time.
Commands:
consensus node setup Setup node with a specific version of Hedera platform
consensus node start Start a node
consensus node stop Stop a node
consensus node restart Restart all nodes of the network
consensus node refresh Reset and restart a node
consensus node add Adds a node with a specific version of Hedera platform
consensus node update Update a node with a specific version of Hedera platform
consensus node destroy Delete a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus node setup
consensus node setup
Setup node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--app Testing app name [string] [default: "HederaNode.jar"]
--app-config json config file of testing app [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--admin-public-keys Comma separated list of DER encoded ED25519 public keys and must match the order of the node aliases [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
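For instance (placeholder deployment name; the release tag shown simply repeats the documented default):
# stage the Hedera platform software on every node of the deployment
solo consensus node setup --deployment solo-deployment --release-tag v0.66.0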
consensus node start
consensus node start
Start a node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--app Testing app name [string] [default: "HederaNode.jar"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--state-file A zipped state file to be used for the network [string]
--stake-amounts The amount to be staked in the same order you list the node aliases with multiple node staked values comma separated [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
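A minimal start command might be (placeholder deployment name; omitting --node-aliases starts all nodes):
# start only node1 of the deployment
solo consensus node start --deployment solo-deployment --node-aliases node1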
consensus node stop
consensus node stop
Stop a node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus node restart
consensus node restart
Restart all nodes of the network
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus node refresh
consensus node refresh
Reset and restart a node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--app Testing app name [string] [default: "HederaNode.jar"]
--local-build-path path of hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus node add
consensus node add
Adds a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--app Testing app name [string] [default: "HederaNode.jar"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--pvcs Enable persistent volume claims to store data outside the pod, required for consensus node add [boolean] [default: false]
--grpc-tls-cert TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated) [string]
--grpc-web-tls-cert TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated) [string]
--grpc-tls-key TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated) [string]
--grpc-web-tls-key TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated) [string]
--gossip-endpoints Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external) [string]
--grpc-endpoints Comma separated gRPC endpoints of the node (at most 8) [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--tls-keys Generate gRPC TLS keys for nodes [boolean] [default: false]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--admin-key Admin key [string] [default: "302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137"]
--haproxy-ips IP mapping where key = value is node alias and static ip for haproxy, (e.g.: --haproxy-ips node1=127.0.0.1,node2=127.0.0.1) [string]
--envoy-ips IP mapping where key = value is node alias and static ip for envoy proxy, (e.g.: --envoy-ips node1=127.0.0.1,node2=127.0.0.1) [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
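As an illustrative sketch (placeholder deployment name; --gossip-keys and --tls-keys let Solo generate the new node's keys, and --pvcs is required for consensus node add as noted above):
# add a consensus node with freshly generated keys and persistent storage
solo consensus node add --deployment solo-deployment --gossip-keys --tls-keys --pvcs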
consensus node update
consensus node update
Update a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--node-alias Node alias (e.g. node99) [string]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--gossip-endpoints Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external) [string]
--grpc-endpoints Comma separated gRPC endpoints of the node (at most 8) [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--new-admin-key new admin key for the Hedera account [string]
--new-account-number new account number for node update transaction [string]
--tls-public-key path and file name of the public TLS key to be used [string]
--gossip-private-key path and file name of the private key for signing gossip in PEM key format to be used [string]
--gossip-public-key path and file name of the public key for signing gossip in PEM key format to be used [string]
--tls-private-key path and file name of the private TLS key to be used [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus node destroy
consensus node destroy
Delete a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--node-alias Node alias (e.g. node99) [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--app Testing app name [string] [default: "HederaNode.jar"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--force Force actions even if those can be skipped [boolean] [default: false]
--local-build-path path of hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus state
consensus state
List, download, and upload consensus node state backups to/from individual consensus node instances.
Commands:
consensus state download Downloads a signed state from consensus node/nodes.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus state download
consensus state download
Downloads a signed state from consensus node/nodes.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
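For example (placeholder deployment name; omitting --node-aliases downloads from all nodes):
# download the signed state from node1
solo consensus state download --deployment solo-deployment --node-aliases node1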
consensus diagnostics
consensus diagnostics
Capture diagnostic information such as logs, signed states, and ledger/network/node configurations.
Commands:
consensus diagnostics all Captures logs, configs, and diagnostic artifacts from all consensus nodes.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus diagnostics all
consensus diagnostics all
Captures logs, configs, and diagnostic artifacts from all consensus nodes.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
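A typical capture might look like this sketch (placeholder deployment name):
# gather logs, configs, and diagnostic artifacts from every consensus node
solo consensus diagnostics all --deployment solo-deployment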
consensus dev-node-add
consensus dev-node-add
Dev operations for adding consensus nodes.
Commands:
consensus dev-node-add prepare Prepares the addition of a node with a specific version of Hedera platform
consensus dev-node-add submit-transactions Submits NodeCreateTransaction and Upgrade transactions to the network nodes
consensus dev-node-add execute Executes the addition of a previously prepared node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-add prepare
consensus dev-node-add prepare
Prepares the addition of a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--output-dir Path to the directory where the command context will be saved to [string]
--app Testing app name [string] [default: "HederaNode.jar"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--pvcs Enable persistent volume claims to store data outside the pod, required for consensus node add [boolean] [default: false]
--grpc-tls-cert TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated) [string]
--grpc-web-tls-cert TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated) [string]
--grpc-tls-key TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated) [string]
--grpc-web-tls-key TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated) [string]
--gossip-endpoints Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external) [string]
--grpc-endpoints Comma separated gRPC endpoints of the node (at most 8) [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--tls-keys Generate gRPC TLS keys for nodes [boolean] [default: false]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--admin-key Admin key [string] [default: "302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-add submit-transactions
consensus dev-node-add submit-transactions
Submits NodeCreateTransaction and Upgrade transactions to the network nodes
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--input-dir Path to the directory where the command context will be loaded from [string]
--app Testing app name [string] [default: "HederaNode.jar"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--pvcs Enable persistent volume claims to store data outside the pod, required for consensus node add [boolean] [default: false]
--grpc-tls-cert TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated) [string]
--grpc-web-tls-cert TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated) [string]
--grpc-tls-key TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated) [string]
--grpc-web-tls-key TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated) [string]
--gossip-endpoints Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external) [string]
--grpc-endpoints Comma separated gRPC endpoints of the node (at most 8) [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--tls-keys Generate gRPC TLS keys for nodes [boolean] [default: false]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-add execute
consensus dev-node-add execute
Executes the addition of a previously prepared node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--input-dir Path to the directory where the command context will be loaded from [string]
--app Testing app name [string] [default: "HederaNode.jar"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--pvcs Enable persistent volume claims to store data outside the pod, required for consensus node add [boolean] [default: false]
--grpc-tls-cert TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated) [string]
--grpc-web-tls-cert TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated) [string]
--grpc-tls-key TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated) [string]
--grpc-web-tls-key TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated) [string]
--gossip-endpoints Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external) [string]
--grpc-endpoints Comma separated gRPC endpoints of the node (at most 8) [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--tls-keys Generate gRPC TLS keys for nodes [boolean] [default: false]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--admin-key Admin key [string] [default: "302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137"]
--haproxy-ips IP mapping where key = value is node alias and static ip for haproxy, (e.g.: --haproxy-ips node1=127.0.0.1,node2=127.0.0.1) [string]
--envoy-ips IP mapping where key = value is node alias and static ip for envoy proxy, (e.g.: --envoy-ips node1=127.0.0.1,node2=127.0.0.1) [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
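Taken together, the three subcommands form a prepare / submit-transactions / execute workflow. A hedged end-to-end sketch (placeholder deployment name and context directory; the same directory is passed from step to step):
# 1. prepare the new node and save the command context
solo consensus dev-node-add prepare --deployment solo-deployment --output-dir ./node-add-context --gossip-keys --tls-keys
# 2. submit the NodeCreateTransaction and upgrade transactions
solo consensus dev-node-add submit-transactions --deployment solo-deployment --input-dir ./node-add-context
# 3. execute the addition of the prepared node
solo consensus dev-node-add execute --deployment solo-deployment --input-dir ./node-add-context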
consensus dev-node-update
consensus dev-node-update
Dev operations for updating consensus nodes
Commands:
consensus dev-node-update prepare Prepare the deployment to update a node with a specific version of Hedera platform
consensus dev-node-update submit-transactions Submit transactions for updating a node with a specific version of Hedera platform
consensus dev-node-update execute Executes the updating of a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-update prepare
consensus dev-node-update prepare
Prepare the deployment to update a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--output-dir Path to the directory where the command context will be saved to [string]
--node-alias Node alias (e.g. node99) [string]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--gossip-endpoints Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external) [string]
--grpc-endpoints Comma separated gRPC endpoints of the node (at most 8) [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
--new-admin-key new admin key for the Hedera account [string]
--new-account-number new account number for node update transaction [string]
--tls-public-key path and file name of the public TLS key to be used [string]
--gossip-private-key path and file name of the private key for signing gossip in PEM key format to be used [string]
--gossip-public-key path and file name of the public key for signing gossip in PEM key format to be used [string]
--tls-private-key path and file name of the private TLS key to be used [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-update submit-transactions
consensus dev-node-update submit-transactions
Submit transactions for updating a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--input-dir Path to the directory where the command context will be loaded from [string]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--gossip-endpoints Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external) [string]
--grpc-endpoints Comma separated gRPC endpoints of the node (at most 8) [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-update execute
consensus dev-node-update execute
Executes the updating of a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--input-dir Path to the directory where the command context will be loaded from [string]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--gossip-endpoints Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external) [string]
--grpc-endpoints Comma separated gRPC endpoints of the node (at most 8) [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-upgrade
consensus dev-node-upgrade
Dev operations for upgrading consensus nodes
Commands:
consensus dev-node-upgrade prepare Prepare for upgrading the network
consensus dev-node-upgrade submit-transactions Submit transactions for upgrading the network
consensus dev-node-upgrade execute Executes the upgrade of the network
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-upgrade prepare
consensus dev-node-upgrade prepare
Prepare for upgrading the network
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-upgrade submit-transactions
consensus dev-node-upgrade submit-transactions
Submit transactions for upgrading the network
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--input-dir Path to the directory where the command context will be loaded from [string]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--upgrade-zip-file A zipped file used for network upgrade [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-upgrade execute
consensus dev-node-upgrade execute
Executes the upgrade of the network
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--input-dir Path to the directory where the command context will be loaded from [string]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--local-build-path path of hedera local repo [string]
--force Force actions even if those can be skipped [boolean] [default: false]
--upgrade-zip-file A zipped file used for network upgrade [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-delete
consensus dev-node-delete
Dev operations for deleting consensus nodes
Commands:
consensus dev-node-delete prepare Prepares the deletion of a node with a specific version of Hedera platform
consensus dev-node-delete submit-transactions Submits transactions to the network nodes for deleting a node
consensus dev-node-delete execute Executes the deletion of a previously prepared node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-delete prepare
consensus dev-node-delete prepare
Prepares the deletion of a node with a specific version of Hedera platform
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--node-alias Node alias (e.g. node99) [string]
--output-dir Path to the directory where the command context will be saved to [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--app Testing app name [string] [default: "HederaNode.jar"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--force Force actions even if those can be skipped [boolean] [default: false]
--local-build-path path of hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-delete submit-transactions
consensus dev-node-delete submit-transactions
Submits transactions to the network nodes for deleting a node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--node-alias Node alias (e.g. node99) [string]
--input-dir Path to the directory where the command context will be loaded from [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--app Testing app name [string] [default: "HederaNode.jar"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--force Force actions even if those can be skipped [boolean] [default: false]
--local-build-path path of hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-node-delete execute
consensus dev-node-delete execute
Executes the deletion of a previously prepared node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--node-alias Node alias (e.g. node99) [string]
--input-dir Path to the directory where the command context will be loaded from [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--app Testing app name [string] [default: "HederaNode.jar"]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--debug-node-alias Enable default jvm debug port (5005) for the given node id [string]
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--force Force actions even if those can be skipped [boolean] [default: false]
--local-build-path path of hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--domain-names Custom domain names for consensus nodes mapping (e.g. node0=domain.name, where key is node alias and value is domain name), with multiple nodes comma separated [string]
-t, --release-tag Release tag to be used (e.g. v0.66.0) [string] [default: "v0.66.0"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-freeze
consensus dev-freeze
Dev operations for freezing consensus nodes
Commands:
consensus dev-freeze prepare-upgrade Prepare the network for a Freeze Upgrade operation
consensus dev-freeze freeze-upgrade Performs a Freeze Upgrade operation on the network after it has been prepared with prepare-upgrade
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-freeze prepare-upgrade
consensus dev-freeze prepare-upgrade
Prepare the network for a Freeze Upgrade operation
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--skip-node-alias The node alias to skip, because of a NodeUpdateTransaction or it is down (e.g. node99) [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
consensus dev-freeze freeze-upgrade
consensus dev-freeze freeze-upgrade
Performs a Freeze Upgrade operation on the network after it has been prepared with prepare-upgrade
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--skip-node-alias The node alias to skip, because of a NodeUpdateTransaction or it is down (e.g. node99) [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
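For illustration, a minimal two-step freeze upgrade might be run as follows (solo-deployment is a placeholder deployment name, not a default):
# prepare the network for the freeze upgrade
solo consensus dev-freeze prepare-upgrade --deployment solo-deployment
# then perform the freeze upgrade on the prepared network
solo consensus dev-freeze freeze-upgrade --deployment solo-deployment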
deployment
deployment
Create, modify, and delete deployment configurations. Deployments are required for most of the other commands.
Commands:
deployment cluster View and manage Solo cluster references used by a deployment.
deployment config List, view, create, delete, and import deployments. These commands affect the local configuration only.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
deployment cluster
deployment cluster
View and manage Solo cluster references used by a deployment.
Commands:
deployment cluster attach Attaches a cluster reference to a deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
deployment cluster attach
deployment cluster attach
Attaches a cluster reference to a deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--enable-cert-manager Pass the flag to enable cert manager [boolean] [default: false]
--num-consensus-nodes Used to specify desired number of consensus nodes for pre-genesis deployments [number]
--dns-base-domain Base domain for the DNS is the suffix used to construct the fully qualified domain name (FQDN) [string] [default: "cluster.local"]
--dns-consensus-node-pattern Pattern to construct the prefix for the fully qualified domain name (FQDN) for the consensus node, the suffix is provided by the --dns-base-domain option (ex. network-{nodeAlias}-svc.{namespace}.svc) [string] [default: "network-{nodeAlias}-svc.{namespace}.svc"]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
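For illustration, attaching a cluster reference to a pre-genesis deployment might look like this (solo-deployment and kind-solo are placeholder names):
# attach the cluster reference and declare one consensus node
solo deployment cluster attach --deployment solo-deployment --cluster-ref kind-solo --num-consensus-nodes 1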
deployment config
deployment config
List, view, create, delete, and import deployments. These commands affect the local configuration only.
Commands:
deployment config list Lists all local deployment configurations.
deployment config create Creates a new local deployment configuration.
deployment config delete Removes a local deployment configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
deployment config list
deployment config list
Lists all local deployment configurations.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
deployment config create
deployment config create
Creates a new local deployment configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-n, --namespace Namespace [string]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--realm Realm number. Requires network-node > v61.0 for non-zero values [number] [default: 0]
--shard Shard number. Requires network-node > v61.0 for non-zero values [number] [default: 0]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
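For illustration, creating a local deployment configuration might look like this (solo-deployment and solo-ns are placeholder names):
# create a deployment configuration bound to a namespace
solo deployment config create --deployment solo-deployment --namespace solo-ns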
deployment config delete
deployment config delete
Removes a local deployment configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
explorer
explorer
Explorer Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
Commands:
explorer node List, create, manage, or destroy explorer node instances. Operates on a single explorer node instance at a time.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
explorer node
explorer node
List, create, manage, or destroy explorer node instances. Operates on a single explorer node instance at a time.
Commands:
explorer node add Adds and configures a new node instance.
explorer node destroy Deletes the specified node from the deployment.
explorer node upgrade Upgrades the specified node in the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
explorer node add
explorer node add
Adds and configures a new node instance.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--explorer-chart-dir Explorer local chart directory path (e.g. ~/hiero-mirror-node-explorer/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--enable-ingress enable ingress on the component/pod [boolean] [default: false]
--ingress-controller-value-file The value file to use for ingress controller, defaults to "" [string]
--enable-explorer-tls Enable Explorer TLS, defaults to false, requires certManager and certManagerCrds, which can be deployed through solo-cluster-setup chart or standalone [boolean] [default: false]
--explorer-tls-host-name The host name to use for the Explorer TLS, defaults to "explorer.solo.local" [string] [default: "explorer.solo.local"]
--explorer-static-ip The static IP address to use for the Explorer load balancer, defaults to "" [string]
--explorer-version Explorer chart version [string] [default: "25.1.1"]
-n, --namespace Namespace [string]
--profile-file Resource profile definition (e.g. custom-spec.yaml) [string] [default: "profiles/custom-spec.yaml"]
--profile Resource profile (local | tiny | small | medium | large) [string] [default: "local"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--tls-cluster-issuer-type The TLS cluster issuer type to use for hedera explorer, defaults to "self-signed", the available options are: "acme-staging", "acme-prod", or "self-signed" [string] [default: "self-signed"]
-f, --values-file Comma separated chart values file [string]
-s, --cluster-setup-namespace Cluster Setup Namespace [string] [default: "solo-setup"]
--domain-name Custom domain name [string]
--mirror-node-id The id of the mirror node which to connect [number]
--mirror-namespace Namespace to use for the Mirror Node deployment, a new one will be created if it does not exist [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
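For illustration, adding an explorer node to an existing deployment might look like this (solo-deployment and kind-solo are placeholder names):
# add an explorer node instance to the deployment
solo explorer node add --deployment solo-deployment --cluster-ref kind-solo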
explorer node destroy
explorer node destroy
Deletes the specified node from the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
explorer node upgrade
explorer node upgrade
Upgrades the specified node in the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--explorer-chart-dir Explorer local chart directory path (e.g. ~/hiero-mirror-node-explorer/charts) [string]
--enable-ingress enable ingress on the component/pod [boolean] [default: false]
--ingress-controller-value-file The value file to use for ingress controller, defaults to "" [string]
--enable-explorer-tls Enable Explorer TLS, defaults to false, requires certManager and certManagerCrds, which can be deployed through solo-cluster-setup chart or standalone [boolean] [default: false]
--explorer-tls-host-name The host name to use for the Explorer TLS, defaults to "explorer.solo.local" [string] [default: "explorer.solo.local"]
--explorer-static-ip The static IP address to use for the Explorer load balancer, defaults to "" [string]
--explorer-version Explorer chart version [string] [default: "25.1.1"]
-n, --namespace Namespace [string]
--profile-file Resource profile definition (e.g. custom-spec.yaml) [string] [default: "profiles/custom-spec.yaml"]
--profile Resource profile (local | tiny | small | medium | large) [string] [default: "local"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--solo-chart-version Solo testing chart version [string] [default: "0.57.0"]
--tls-cluster-issuer-type The TLS cluster issuer type to use for hedera explorer, defaults to "self-signed", the available options are: "acme-staging", "acme-prod", or "self-signed" [string] [default: "self-signed"]
-f, --values-file Comma separated chart values file [string]
-s, --cluster-setup-namespace Cluster Setup Namespace [string] [default: "solo-setup"]
--domain-name Custom domain name [string]
--id The numeric identifier for the component [number]
--mirror-node-id The id of the mirror node which to connect [number]
--mirror-namespace Namespace to use for the Mirror Node deployment, a new one will be created if it does not exist [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
keys
keys
Consensus key generation operations
Commands:
keys consensus Generate unique cryptographic keys (gossip or grpc TLS keys) for the Consensus Node instances.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
keys consensus
keys consensus
Generate unique cryptographic keys (gossip or grpc TLS keys) for the Consensus Node instances.
Commands:
keys consensus generate Generates TLS keys required for consensus node communication.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
keys consensus generate
keys consensus generate
Generates TLS keys required for consensus node communication.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--tls-keys Generate gRPC TLS keys for nodes [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
-n, --namespace Namespace [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
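For illustration, generating both gossip and gRPC TLS keys for every consensus node might look like this (solo-deployment is a placeholder deployment name):
# generate gossip and gRPC TLS keys for all nodes in the deployment
solo keys consensus generate --deployment solo-deployment --gossip-keys --tls-keys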
ledger
ledger
System, Account, and Crypto ledger-based management operations. These commands require an operational set of consensus nodes and may require an operational mirror node.
Commands:
ledger system Perform a full ledger initialization on a new deployment, rekey privileged/system accounts, or setup network staking parameters.
ledger account View, list, create, update, delete, and import ledger accounts.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
ledger system
ledger system
Perform a full ledger initialization on a new deployment, rekey privileged/system accounts, or setup network staking parameters.
Commands:
ledger system init Re-keys ledger system accounts and consensus node admin keys with uniquely generated ED25519 private keys and will stake consensus nodes.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
ledger system init
ledger system init
Re-keys ledger system accounts and consensus node admin keys with uniquely generated ED25519 private keys and will stake consensus nodes.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
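For illustration, initializing the ledger after the consensus nodes are running might look like this (solo-deployment is a placeholder deployment name):
# re-key the system accounts and stake the consensus nodes
solo ledger system init --deployment solo-deployment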
ledger account
ledger account
View, list, create, update, delete, and import ledger accounts.
Commands:
ledger account update Updates an existing ledger account.
ledger account create Creates a new ledger account.
ledger account info Gets the account info including the current amount of HBAR
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
ledger account update
ledger account update
Updates an existing ledger account.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
--account-id The Hedera account id, e.g.: 0.0.1001 [string]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--hbar-amount Amount of HBAR to add [number] [default: 100]
--ecdsa-private-key Specify a hex-encoded ECDSA private key for the Hedera account [string]
--ed25519-private-key Specify a hex-encoded ED25519 private key for the Hedera account [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
ledger account create
ledger account create
Creates a new ledger account.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--hbar-amount Amount of HBAR to add [number] [default: 100]
--create-amount Number of new accounts to create [number] [default: 1]
--ecdsa-private-key Specify a hex-encoded ECDSA private key for the Hedera account [string]
--private-key Show private key information [boolean] [default: false]
--ed25519-private-key Specify a hex-encoded ED25519 private key for the Hedera account [string]
--generate-ecdsa-key Generate ECDSA private key for the Hedera account [boolean] [default: false]
--set-alias Sets the alias for the Hedera account when it is created, requires --ecdsa-private-key [boolean] [default: false]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
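For illustration, creating a single funded account with a freshly generated ECDSA key might look like this (solo-deployment is a placeholder deployment name):
# create one account with 100 HBAR and a generated ECDSA private key
solo ledger account create --deployment solo-deployment --hbar-amount 100 --generate-ecdsa-key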
ledger account info
ledger account info
Gets the account info including the current amount of HBAR
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
--account-id The Hedera account id, e.g.: 0.0.1001 [string]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--private-key Show private key information [boolean] [default: false]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
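For illustration, querying an account might look like this (solo-deployment is a placeholder deployment name; 0.0.1001 is the example account id from the option help above):
# fetch account info without printing private key material
solo ledger account info --deployment solo-deployment --account-id 0.0.1001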
mirror
mirror
Mirror Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
Commands:
mirror node List, create, manage, or destroy mirror node instances. Operates on a single mirror node instance at a time.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
mirror node
mirror node
List, create, manage, or destroy mirror node instances. Operates on a single mirror node instance at a time.
Commands:
mirror node add Adds and configures a new node instance.
mirror node destroy Deletes the specified node from the deployment.
mirror node upgrade Upgrades the specified node in the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
mirror node add
mirror node add
Adds and configures a new node instance.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--mirror-node-chart-dir Mirror node local chart directory path (e.g. ~/hiero-mirror-node/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--enable-ingress enable ingress on the component/pod [boolean] [default: false]
--ingress-controller-value-file The value file to use for ingress controller, defaults to "" [string]
--mirror-static-ip static IP address for the mirror node [string]
--profile-file Resource profile definition (e.g. custom-spec.yaml) [string] [default: "profiles/custom-spec.yaml"]
--profile Resource profile (local | tiny | small | medium | large) [string] [default: "local"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--mirror-node-version Mirror node chart version [string] [default: "v0.141.0"]
--pinger Enable Pinger service in the Mirror node monitor [boolean] [default: false]
--use-external-database Set to true if you have an external database to use instead of the database that the Mirror Node Helm chart supplies [boolean] [default: false]
--operator-id Operator ID [string]
--operator-key Operator Key [string]
--storage-type storage type for saving stream files, available options are minio_only, aws_only, gcs_only, aws_and_gcs [default: "minio_only"]
--storage-read-access-key storage read access key for mirror node importer [string]
--storage-read-secrets storage read-secret key for mirror node importer [string]
--storage-endpoint storage endpoint URL for mirror node importer [string]
--storage-bucket name of storage bucket for mirror node importer [string]
--storage-bucket-prefix path prefix of storage bucket mirror node importer [string]
--storage-bucket-region region of storage bucket mirror node importer [string]
--external-database-host Use to provide the external database host if the '--use-external-database' is passed [string]
--external-database-owner-username Use to provide the external database owner's username if the '--use-external-database' is passed [string]
--external-database-owner-password Use to provide the external database owner's password if the '--use-external-database' is passed [string]
--external-database-read-username Use to provide the external database readonly user's username if the '--use-external-database' is passed [string]
--external-database-read-password Use to provide the external database readonly user's password if the '--use-external-database' is passed [string]
--domain-name Custom domain name [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
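For illustration, adding a mirror node with ingress enabled might look like this (solo-deployment and kind-solo are placeholder names):
# add a mirror node instance and expose it through an ingress
solo mirror node add --deployment solo-deployment --cluster-ref kind-solo --enable-ingress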
mirror node destroy
mirror node destroy
Deletes the specified node from the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--id The numeric identifier for the component [number]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
mirror node upgrade
mirror node upgrade
Upgrades the specified node in the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--mirror-node-chart-dir Mirror node local chart directory path (e.g. ~/hiero-mirror-node/charts) [string]
--enable-ingress enable ingress on the component/pod [boolean] [default: false]
--ingress-controller-value-file The value file to use for ingress controller, defaults to "" [string]
--mirror-static-ip static IP address for the mirror node [string]
--profile-file Resource profile definition (e.g. custom-spec.yaml) [string] [default: "profiles/custom-spec.yaml"]
--profile Resource profile (local | tiny | small | medium | large) [string] [default: "local"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--mirror-node-version Mirror node chart version [string] [default: "v0.141.0"]
--pinger Enable Pinger service in the Mirror node monitor [boolean] [default: false]
--use-external-database Set to true if you have an external database to use instead of the database that the Mirror Node Helm chart supplies [boolean] [default: false]
--operator-id Operator ID [string]
--operator-key Operator Key [string]
--storage-type storage type for saving stream files, available options are minio_only, aws_only, gcs_only, aws_and_gcs [default: "minio_only"]
--storage-read-access-key storage read access key for mirror node importer [string]
--storage-read-secrets storage read-secret key for mirror node importer [string]
--storage-endpoint storage endpoint URL for mirror node importer [string]
--storage-bucket name of storage bucket for mirror node importer [string]
--storage-bucket-prefix path prefix of storage bucket mirror node importer [string]
--storage-bucket-region region of storage bucket mirror node importer [string]
--external-database-host Use to provide the external database host if the '--use-external-database' is passed [string]
--external-database-owner-username Use to provide the external database owner's username if the '--use-external-database' is passed [string]
--external-database-owner-password Use to provide the external database owner's password if the '--use-external-database' is passed [string]
--external-database-read-username Use to provide the external database readonly user's username if the '--use-external-database' is passed [string]
--external-database-read-password Use to provide the external database readonly user's password if the '--use-external-database' is passed [string]
--domain-name Custom domain name [string]
--id The numeric identifier for the component [number]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
relay
relay
RPC Relay Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
Commands:
relay node List, create, manage, or destroy relay node instances. Operates on a single relay node instance at a time.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
relay node
relay node
List, create, manage, or destroy relay node instances. Operates on a single relay node instance at a time.
Commands:
relay node add Adds and configures a new node instance.
relay node destroy Deletes the specified node from the deployment.
relay node upgrade Upgrades the specified node in the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
relay node add
relay node add
Adds and configures a new node instance.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--relay-chart-dir Relay local chart directory path (e.g. ~/hiero-json-rpc-relay/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
--operator-id Operator ID [string]
--operator-key Operator Key [string]
--profile-file Resource profile definition (e.g. custom-spec.yaml) [string] [default: "profiles/custom-spec.yaml"]
--profile Resource profile (local | tiny | small | medium | large) [string] [default: "local"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--relay-release Relay release tag to be used (e.g. v0.48.0) [string] [default: "0.70.0"]
--replica-count Replica count [number] [default: 1]
-f, --values-file Comma separated chart values file [string]
--domain-name Custom domain name [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--mirror-node-id The id of the mirror node which to connect [number]
--mirror-namespace Namespace to use for the Mirror Node deployment, a new one will be created if it does not exist [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
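For illustration, adding an RPC relay that fronts a single consensus node might look like this (solo-deployment and node1 are placeholder names):
# add a relay node instance serving the listed consensus node aliases
solo relay node add --deployment solo-deployment --node-aliases node1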
relay node destroy
relay node destroy
Deletes the specified node from the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--id The numeric identifier for the component [number]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
relay node upgrade
relay node upgrade
Upgrades the specified node in the deployment.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--chart-dir Local chart directory path (e.g. ~/solo-charts/charts) [string]
--relay-chart-dir Relay local chart directory path (e.g. ~/hiero-json-rpc-relay/charts) [string]
-c, --cluster-ref The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment. For commands that take multiple clusters they can be separated by commas. [string]
-i, --node-aliases Comma separated node aliases (empty means all nodes) [string]
--operator-id Operator ID [string]
--operator-key Operator Key [string]
--profile-file Resource profile definition (e.g. custom-spec.yaml) [string] [default: "profiles/custom-spec.yaml"]
--profile Resource profile (local | tiny | small | medium | large) [string] [default: "local"]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--relay-release Relay release tag to be used (e.g. v0.48.0) [string] [default: "0.70.0"]
--replica-count Replica count [number] [default: 1]
-f, --values-file Comma separated chart values file [string]
--domain-name Custom domain name [string]
--cache-dir Local cache directory [string] [default: "/home/runner/.solo/cache"]
--id The numeric identifier for the component [number]
--mirror-node-id The id of the mirror node which to connect [number]
--mirror-namespace Namespace to use for the Mirror Node deployment, a new one will be created if it does not exist [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot
one-shot
One Shot commands for new and returning users who need a preset environment type. These commands use reasonable defaults to provide a single-command, out-of-the-box experience.
Commands:
one-shot single Creates a uniquely named deployment with a single consensus node, mirror node, block node, relay node, and explorer node.
one-shot multi Creates a uniquely named deployment with multiple consensus nodes, mirror node, block node, relay node, and explorer node.
one-shot falcon Creates a uniquely named deployment with optional chart values override using --values-file.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot single
one-shot single
Creates a uniquely named deployment with a single consensus node, mirror node, block node, relay node, and explorer node.
Commands:
one-shot single deploy Deploys all required components for the selected one shot configuration.
one-shot single destroy Removes the deployed resources for the selected one shot configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot single deploy
one-shot single deploy
Deploys all required components for the selected one shot configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--num-consensus-nodes Used to specify desired number of consensus nodes for pre-genesis deployments [number]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot single destroy
one-shot single destroy
Removes the deployed resources for the selected one shot configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot multi
one-shot multi
Creates a uniquely named deployment with multiple consensus nodes, mirror node, block node, relay node, and explorer node.
Commands:
one-shot multi deploy Deploys all required components for the selected multiple node one shot configuration.
one-shot multi destroy Removes the deployed resources for the selected multiple node one shot configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot multi deploy
one-shot multi deploy
Deploys all required components for the selected multiple node one shot configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
--num-consensus-nodes Used to specify desired number of consensus nodes for pre-genesis deployments [number]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot multi destroy
one-shot multi destroy
Removes the deployed resources for the selected multiple node one shot configuration.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot falcon
one-shot falcon
Creates a uniquely named deployment with optional chart values override using --values-file.
Commands:
one-shot falcon deploy Deploys all required components for the selected one shot configuration (with optional values file).
one-shot falcon destroy Removes the deployed resources for the selected one shot configuration (with optional values file).
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot falcon deploy
one-shot falcon deploy
Deploys all required components for the selected one shot configuration (with optional values file).
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--num-consensus-nodes Used to specify desired number of consensus nodes for pre-genesis deployments [number]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
one-shot falcon destroy
one-shot falcon destroy
Removes the deployed resources for the selected one shot configuration (with optional values file).
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire
rapid-fire
Commands for performing load tests against a Solo deployment
Commands:
rapid-fire hcs Run load tests using the network load generator with the HCSLoadTest class.
rapid-fire crypto-transfer Run load tests using the network load generator with the CryptoTransferLoadTest class
rapid-fire nft-transfer Run load tests using the network load generator with the NftTransferLoadTest class
rapid-fire token-transfer Run load tests using the network load generator with the TokenTransferLoadTest class
rapid-fire smart-contract Run load tests using the network load generator with the SmartContractLoadTest class
rapid-fire heli-swap Run load tests using the network load generator with the HeliSwapLoadTest class
rapid-fire longevity Run load tests using the network load generator with the LongevityLoadTest class
rapid-fire destroy Uninstall the Network Load Generator Helm chart and clean up resources.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire hcs
rapid-fire hcs
Run load tests using the network load generator with the HCSLoadTest class.
Commands:
rapid-fire hcs start Start a rapid-fire HCS load test using the HCSLoadTest class.
rapid-fire hcs stop Stop any running processes using the HCSLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire hcs start
rapid-fire hcs start
Start a rapid-fire HCS load test using the HCSLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--args All arguments to be passed to the NLG load test class. Value MUST be wrapped in 2 sets of different quotes. Example: '"-c 100 -a 40 -t 3600"' [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--javaHeap Max Java heap size in GB for the NLG load test class, defaults to 8 [number] [default: 8]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
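For illustration, starting an HCS load test with the NLG arguments shown in the option help above might look like this (solo-deployment is a placeholder deployment name):
# start the HCS load test; note the NLG arguments wrapped in two sets of quotes
solo rapid-fire hcs start --deployment solo-deployment --args '"-c 100 -a 40 -t 3600"'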
rapid-fire hcs stop
rapid-fire hcs stop
Stop any running processes using the HCSLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire crypto-transfer
rapid-fire crypto-transfer
Run load tests using the network load generator with the CryptoTransferLoadTest class
Commands:
rapid-fire crypto-transfer start Start a rapid-fire crypto transfer load test using the CryptoTransferLoadTest class.
rapid-fire crypto-transfer stop Stop any running processes using the CryptoTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire crypto-transfer start
rapid-fire crypto-transfer start
Start a rapid-fire crypto transfer load test using the CryptoTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--args All arguments to be passed to the NLG load test class. Value MUST be wrapped in 2 sets of different quotes. Example: '"-c 100 -a 40 -t 3600"' [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--javaHeap Max Java heap size in GB for the NLG load test class, defaults to 8 [number] [default: 8]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire crypto-transfer stop
rapid-fire crypto-transfer stop
Stop any running processes using the CryptoTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire nft-transfer
rapid-fire nft-transfer
Run load tests using the network load generator with the NftTransferLoadTest class
Commands:
rapid-fire nft-transfer start Start a rapid-fire NFT transfer load test using the NftTransferLoadTest class.
rapid-fire nft-transfer stop Stop any running processes using the NftTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire nft-transfer start
rapid-fire nft-transfer start
Start a rapid-fire NFT transfer load test using the NftTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--args All arguments to be passed to the NLG load test class. Value MUST be wrapped in 2 sets of different quotes. Example: '"-c 100 -a 40 -t 3600"' [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--javaHeap Max Java heap size in GB for the NLG load test class, defaults to 8 [number] [default: 8]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire nft-transfer stop
rapid-fire nft-transfer stop
Stop any running processes using the NftTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire token-transfer
rapid-fire token-transfer
Run load tests using the network load generator with the TokenTransferLoadTest class
Commands:
rapid-fire token-transfer start Start a rapid-fire token transfer load test using the TokenTransferLoadTest class.
rapid-fire token-transfer stop Stop any running processes using the TokenTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire token-transfer start
rapid-fire token-transfer start
Start a rapid-fire token transfer load test using the TokenTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--args All arguments to be passed to the NLG load test class. Value MUST be wrapped in 2 sets of different quotes. Example: '"-c 100 -a 40 -t 3600"' [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--javaHeap Max Java heap size in GB for the NLG load test class, defaults to 8 [number] [default: 8]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire token-transfer stop
rapid-fire token-transfer stop
Stop any running processes using the TokenTransferLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire smart-contract
rapid-fire smart-contract
Run load tests using the network load generator with the SmartContractLoadTest class
Commands:
rapid-fire smart-contract start Start a rapid-fire smart contract load test using the SmartContractLoadTest class.
rapid-fire smart-contract stop Stop any running processes using the SmartContractLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire smart-contract start
rapid-fire smart-contract start
Start a rapid-fire smart contract load test using the SmartContractLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--args All arguments to be passed to the NLG load test class. Value MUST be wrapped in 2 sets of different quotes. Example: '"-c 100 -a 40 -t 3600"' [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--javaHeap Max Java heap size in GB for the NLG load test class, defaults to 8 [number] [default: 8]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire smart-contract stop
rapid-fire smart-contract stop
Stop any running processes using the SmartContractLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire heli-swap
rapid-fire heli-swap
Run load tests using the network load generator with the HeliSwapLoadTest class
Commands:
rapid-fire heli-swap start Start a rapid-fire HeliSwap load test using the HeliSwapLoadTest class.
rapid-fire heli-swap stop Stop any running processes using the HeliSwapLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire heli-swap start
rapid-fire heli-swap start
Start a rapid-fire HeliSwap load test using the HeliSwapLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--args All arguments to be passed to the NLG load test class. Value MUST be wrapped in 2 sets of different quotes. Example: '"-c 100 -a 40 -t 3600"' [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--javaHeap Max Java heap size in GB for the NLG load test class, defaults to 8 [number] [default: 8]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire heli-swap stop
rapid-fire heli-swap stop
Stop any running processes using the HeliSwapLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire longevity
rapid-fire longevity
Run load tests using the network load generator with the LongevityLoadTest class
Commands:
rapid-fire longevity start Start a rapid-fire longevity load test using the LongevityLoadTest class.
rapid-fire longevity stop Stop any running processes using the LongevityLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire longevity start
rapid-fire longevity start
Start a rapid-fire longevity load test using the LongevityLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--args All arguments to be passed to the NLG load test class. Value MUST be wrapped in 2 sets of different quotes. Example: '"-c 100 -a 40 -t 3600"' [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-f, --values-file Comma separated chart values file [string]
--javaHeap Max Java heap size in GB for the NLG load test class, defaults to 8 [number] [default: 8]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire longevity stop
rapid-fire longevity stop
Stop any running processes using the LongevityLoadTest class.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire destroy
rapid-fire destroy
Uninstall the Network Load Generator Helm chart and clean up resources.
Commands:
rapid-fire destroy all Uninstall the Network Load Generator Helm chart and remove all related resources.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
rapid-fire destroy all
rapid-fire destroy all
Uninstall the Network Load Generator Helm chart and remove all related resources.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access the network services [boolean] [default: true]
-d, --deployment The name the user will reference locally to link to a deployment [string]
--force Force actions even if those can be skipped [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for confirmation [boolean] [default: false]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
6 - FAQ
How can I set up a Solo network in a single command?
You can run one of the following commands depending on your needs:
Single Node Deployment (recommended for development):
npx @hashgraph/solo@latest one-shot single deploy
Multiple Node Deployment (for testing consensus scenarios):
npx @hashgraph/solo@latest one-shot multiple deploy
Falcon Deployment (with custom configuration file):
npx @hashgraph/solo@latest one-shot falcon deploy --values-file falcon-values.yaml
The falcon deployment allows you to configure all network components (consensus nodes, mirror node, explorer, relay, and block node) through a single YAML configuration file.
More documentation can be found here:
How can I tear down a Solo network in a single command?
You can run one of the following commands depending on how you deployed:
Single Node Teardown:
npx @hashgraph/solo@latest one-shot single destroy
Multiple Node Teardown:
npx @hashgraph/solo@latest one-shot multiple destroy
Falcon Deployment Teardown:
npx @hashgraph/solo@latest one-shot falcon destroy
How can I avoid using genesis keys?
You can run solo ledger system init anytime after solo consensus node start.
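For example (a sketch, assuming the command accepts the usual --deployment flag and your deployment is named solo-deployment):
# assumed to replace the well-known genesis keys for the system accounts (see the answer above)
solo ledger system init --deployment solo-deployment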
Where can I find the default account keys?
By default, Solo leverages the well-known Hiero Consensus Node ED25519 private genesis key: 302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137; the corresponding genesis public key is: 302a300506032b65700321000aa8e21064c61eab86e2a9c164565b4e7a9a4146106e0a6cd03a8c395a110e92.
Unless changed, it is the private key for the default operator account 0.0.2 of the consensus network.
It is defined in the Hiero Consensus Node source code.
What is the difference between ECDSA keys and ED25519 keys?
See https://docs.hedera.com/hedera/core-concepts/keys-and-signatures for a detailed answer.
Where can I find the EVM compatible private key?
You will need to use ECDSA keys for EVM tooling compatibility. If you take the privateKeyRaw provided by Solo and prefix it with 0x, you will have a private key usable by Ethereum-compatible tools.
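For example, a minimal sketch, assuming you have saved the JSON account info output (like the one shown in the next answer) to a file named account.json, that jq is installed, and that the account in question uses an ECDSA key:
# derive a 0x-prefixed EVM-style private key from the privateKeyRaw field (account.json is a hypothetical file name)
EVM_PRIVATE_KEY="0x$(jq -r '.privateKeyRaw' account.json)"
echo "${EVM_PRIVATE_KEY}"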
How do I get the key for an account?
Use the following command to get the account balance and private key of account 0.0.1007:
# get account info of 0.0.1007 and also show the private key
solo ledger account info --account-id 0.0.1007 --deployment solo-deployment --private-key
The output would be similar to the following:
{
"accountId": "0.0.1007",
"privateKey": "302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7",
"privateKeyRaw": "411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"
"publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
"balance": 100
}
How to handle error “failed to setup chart repositories”
If during the installation of solo-charts you see an error similar to the following:
failed to setup chart repositories,
repository name (hedera-json-rpc-relay) already exists
You need to remove the old Helm repo manually. First run helm repo list to
see the list of Helm repos, and then run helm repo remove <repo-name> to remove the offending repo.
For example:
helm repo list
NAME URL
haproxy-ingress https://haproxy-ingress.github.io/charts
haproxytech https://haproxytech.github.io/helm-charts
metrics-server https://kubernetes-sigs.github.io/metrics-server/
metallb https://metallb.github.io/metallb
mirror https://hashgraph.github.io/hedera-mirror-node/charts
hedera-json-rpc-relay https://hashgraph.github.io/hedera-json-rpc-relay/charts
Next run the command to remove the repo:
helm repo remove hedera-json-rpc-relay
7 - Using Solo with Mirror Node
Using Solo with mirror node
You can deploy a Solo network with Mirror Node by running the following commands:
export SOLO_CLUSTER_NAME=solo-cluster
export SOLO_NAMESPACE=solo-e2e
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster-setup
export SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup --cluster-setup-namespace "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 2
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo mirror node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --enable-ingress --pinger
solo explorer node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME}
The --pinger flag in solo mirror node add starts a pinging service that sends transactions to the network at regular intervals. This is needed because the record file is not imported into the mirror node until the next one is created.
Then you can access the Explorer at http://localhost:8080
Or you can use the Task tool to deploy a Solo network with Mirror Node with a single command.
Next, you can try to create a few accounts with Solo and see the transactions in the Explorer.
solo ledger account create --deployment solo-deployment --hbar-amount 100
solo ledger account create --deployment solo-deployment --hbar-amount 100
Or you can use the Hedera JavaScript SDK examples to create a topic, submit a message, and subscribe to the topic.
If you need to access a mirror node service directly, use the following commands to enable port forwarding, or simply use localhost:8081, which should expose all of the mirror node services:
kubectl port-forward svc/mirror-1-grpc -n "${SOLO_NAMESPACE}" 5600:5600 &
grpcurl -plaintext "${GRPC_IP:-127.0.0.1}:5600" list
kubectl port-forward svc/mirror-1-rest -n "${SOLO_NAMESPACE}" 5551:80 &
curl -s "http://${REST_IP:-127.0.0.1}:5551/api/v1/transactions?limit=1"
kubectl port-forward service/mirror-1-restjava -n "${SOLO_NAMESPACE}" 8084:80 &
curl -s "http://${REST_IP:-127.0.0.1}:8084/api/v1/accounts/0.0.2/allowances/nfts"
8 - Using Solo with Hiero JavaScript SDK
Using Solo with the Hiero JavaScript SDK
First, please follow the Solo repository README to install Solo and Docker Desktop. You also need to install the Taskfile tool following the instructions here.
Then we start with launching a local Solo network with the following commands:
# launch a local Solo network with mirror node and hedera explorer
cd scripts
task default-with-mirror
Then create a new test account with the following command:
npm run solo-test -- ledger account create --deployment solo-deployment --hbar-amount 100
The output would be similar to the following:
*** new account created ***
-------------------------------------------------------------------------------
{
"accountId": "0.0.1007",
"publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
"balance": 100
}
Then use the following command to get the private key of account 0.0.1007:
npm run solo-test -- ledger account info --account-id 0.0.1007 --deployment solo-deployment --private-key
The output would be similar to the following:
{
"accountId": "0.0.1007",
"privateKey": "302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7",
"privateKeyRaw": "411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"
"publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
"balance": 100
}
Next, clone the Hiero JavaScript SDK repository https://github.com/hiero-ledger/hiero-sdk-js.
At the root of the project hiero-sdk-js, create a file .env and add the following content:
# Hiero Operator Account ID
export OPERATOR_ID="0.0.1007"
# Hiero Operator Private Key
export OPERATOR_KEY="302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"
# Hiero Network
export HEDERA_NETWORK="local-node"
Make sure to assign the value of accountId to OPERATOR_ID and the value of privateKey to OPERATOR_KEY.
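Because the .env file above uses export statements, you can also load it into your current shell before running the examples (optional; the SDK examples may read the file themselves):
# optional: load the operator credentials into the current shell
source .env
echo "Using operator ${OPERATOR_ID} on network ${HEDERA_NETWORK}"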
Then try the following command to run the test:
node examples/create-account.js
The output should be similar to the following:
private key = 302e020100300506032b6570042204208a3c1093c4df779c4aa980d20731899e0b509c7a55733beac41857a9dd3f1193
public key = 302a300506032b6570032100c55adafae7e85608ea893d0e2c77e2dae3df90ba8ee7af2f16a023ba2258c143
account id = 0.0.1009
Or try the topic creation example:
node scripts/create-topic.js
The output should be similar to the following:
topic id = 0.0.1008
topic sequence number = 1
You can use Hiero Explorer to check transactions and topics created in the Solo network: http://localhost:8080/localnet/dashboard
Finally, when you are done using Solo, use the following command to tear down the Solo network:
task clean
Retrieving Logs
You can find the logs for solo commands under the directory ~/.solo/logs/.
The file solo.log contains the Solo CLI logs, and the file hashgraph-sdk.log contains the logs from the Solo client when it sends transactions to network nodes.
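For example, to follow the main Solo log while a command is running:
tail -f ~/.solo/logs/solo.log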
9 - Hiero Consensus Node Platform Developer
Use Solo with a Local Built Hiero Consensus Node Testing Application
First, please clone the Hiero Consensus Node repo https://github.com/hiero-ledger/hiero-consensus-node/ and build the code
with ./gradlew assemble. If you need to run multiple nodes with different versions or releases, duplicate the repo or build
directories, check out the respective version in each, and build the code.
Then you can start the custom-built platform testing application with the following command:
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
# option 1) if all nodes are running the same version of Hiero app
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data/
# option 2) if each node is running different version of Hiero app, please provide different paths to the local repositories
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path node1=../hiero-consensus-node/hedera-node/data/,node2=<path2>,node3=<path3>
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
Different nodes may run different versions of the Hiero app, as long as each node (node1, node2, node3) in the setup command above is given its own path to a local build.
If you need to provide customized configuration files for the Hedera application, use the following flags with the consensus network deploy command:
- --settings-txt - to provide a custom settings.txt file
- --api-permission-properties - to provide a custom api-permission.properties file
- --bootstrap-properties - to provide a custom bootstrap.properties file
- --application-properties - to provide a custom application.properties file
- --block-node-cfg - to configure block node routing for each consensus node
For example:
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --settings-txt <path-to-settings-txt>
Block Node Routing Configuration
For network delay testing and simulating different network topologies, you can configure how each consensus node sends blocks to specific block nodes using the --block-node-cfg flag:
# Using JSON string directly
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" \
-i node1,node2,node3 \
--block-node-cfg '{"node1":[1,3],"node2":[2],"node3":[1,2]}'
# Or using a JSON file
echo '{"node1":[1,3],"node2":[2],"node3":[1,2]}' > block-config.json
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" \
-i node1,node2,node3 \
--block-node-cfg block-config.json
This configuration maps consensus node names to arrays of block node IDs. For example:
- node1 sends blocks to block nodes 1 and 3
- node2 sends blocks to block node 2
- node3 sends blocks to block nodes 1 and 2
10 - Hiero Consensus Node Execution Developer
Hiero Consensus Node Execution Developer
Once the nodes are up, you may expose various services (using k9s (shift-f) or kubectl port-forward) and access them. Below are the most commonly used services you may want to expose.
- where the 'node name' for Node ID = 0 is node1 (node${ nodeId + 1 })
- Node services: network-<node name>-svc
- HAProxy: haproxy-<node name>-svc
  # enable port forwarding for haproxy
  # node1 grpc port accessed by localhost:51211
  kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 51211:50211 &
  # node2 grpc port accessed by localhost:52211
  kubectl port-forward svc/haproxy-node2-svc -n "${SOLO_NAMESPACE}" 52211:50211 &
  # node3 grpc port accessed by localhost:53211
  kubectl port-forward svc/haproxy-node3-svc -n "${SOLO_NAMESPACE}" 53211:50211 &
- Envoy Proxy: envoy-proxy-<node name>-svc
  # enable port forwarding for envoy proxy
  kubectl port-forward svc/envoy-proxy-node1-svc -n "${SOLO_NAMESPACE}" 8181:8080 &
  kubectl port-forward svc/envoy-proxy-node2-svc -n "${SOLO_NAMESPACE}" 8281:8080 &
  kubectl port-forward svc/envoy-proxy-node3-svc -n "${SOLO_NAMESPACE}" 8381:8080 &
- Hiero Explorer: solo-deployment-hiero-explorer
  # enable port forwarding for the hiero explorer, which can be accessed at http://localhost:8080/
  # check to see if it is already enabled; port forwarding for the explorer should be handled by solo automatically
  # kubectl port-forward svc/hiero-explorer-1 -n "${SOLO_NAMESPACE}" 8080:8080 &
- JSON RPC Relays
You can deploy JSON RPC Relays for one or more nodes as below:
# deploy relay node first
solo relay node add -i node1 --deployment "${SOLO_DEPLOYMENT}"
# enable relay for node1
# check to see if it is already enabled, port forwarding for relay should be handled by solo automatically
# kubectl port-forward svc/relay-1 -n "${SOLO_NAMESPACE}" 7546:7546 &
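Once the relay is reachable on localhost:7546, you can sanity-check it with a standard JSON-RPC request (a sketch; eth_chainId is a standard method served by the relay):
curl -s -X POST http://localhost:7546 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'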
11 - Attach JVM Debugger and Retrieve Logs
How to Debug a Hiero Consensus Node
1. Using k9s to access running consensus node logs
Run the command k9s -A in a terminal, and select one of the network nodes:

Next, select the root-container and press the key s to enter the shell of the container.

Once inside the shell, change to the directory /opt/hgcapp/services-hedera/HapiApp2.0/
to view all Hedera-related logs and properties files.
[root@network-node1-0 hgcapp]# cd /opt/hgcapp/services-hedera/HapiApp2.0/
[root@network-node1-0 HapiApp2.0]# pwd
/opt/hgcapp/services-hedera/HapiApp2.0
[root@network-node1-0 HapiApp2.0]# ls -ltr data/config/
total 0
lrwxrwxrwx 1 root root 27 Dec 4 02:05 bootstrap.properties -> ..data/bootstrap.properties
lrwxrwxrwx 1 root root 29 Dec 4 02:05 application.properties -> ..data/application.properties
lrwxrwxrwx 1 root root 32 Dec 4 02:05 api-permission.properties -> ..data/api-permission.properties
[root@network-node1-0 HapiApp2.0]# ls -ltr output/
total 1148
-rw-r--r-- 1 hedera hedera 0 Dec 4 02:06 hgcaa.log
-rw-r--r-- 1 hedera hedera 0 Dec 4 02:06 queries.log
drwxr-xr-x 2 hedera hedera 4096 Dec 4 02:06 transaction-state
drwxr-xr-x 2 hedera hedera 4096 Dec 4 02:06 state
-rw-r--r-- 1 hedera hedera 190 Dec 4 02:06 swirlds-vmap.log
drwxr-xr-x 2 hedera hedera 4096 Dec 4 16:01 swirlds-hashstream
-rw-r--r-- 1 hedera hedera 1151446 Dec 4 16:07 swirlds.log
Alternatively, you can use the following command to download hgcaa.log and swirlds.log for further analysis.
# download logs as zip file from node1 and save in default ~/.solo/logs/<namespace>/<timestamp>/
solo consensus diagnostics all --deployment solo-deployment
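After the command completes, you can inspect the downloaded bundle (the path layout follows the comment above):
# list downloaded log bundles; drill into <namespace>/<timestamp>/ from here
ls -ltr ~/.solo/logs/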
2. Using IntelliJ remote debug with Solo
NOTE: the hiero-consensus-node path referenced ('../hiero-consensus-node/hedera-node/data') may need to be updated depending on the directory you are currently in. This also assumes that you have already run an assemble/build and that the directory contents are up to date.
Set up an Intellij run/debug configuration for remote JVM debug as shown in the below screenshot:

If you are working on a Hiero Consensus Node testing application, you should use the following configuration in Intellij:

Set up a breakpoint if necessary.
From the Solo repo directory, run the following commands in a terminal to launch a three-node network; assume we are trying to attach the debugger to node2.
Make sure the path following --local-build-path points to the correct directory.
Example 1: attach jvm debugger to a Hiero Consensus Node
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo # to avoid name collision issues if you ran previously with the same deployment name
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --debug-node-alias node2
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --debug-node-alias node2
Once you see the following message, you can launch the JVM debugger from IntelliJ:
❯ Check all nodes are ACTIVE
Check node: node1,
Check node: node2, Please attach JVM debugger now.
Check node: node3,
? JVM debugger setup for node2. Continue when debugging is complete? (y/N)
The Hiero Consensus Node application should stop at the breakpoint you set:
When you are done debugging, resume the application from IntelliJ, then select y to continue the Solo command line operation.

Example 2: attach a JVM debugger with the consensus node add operation
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --pvcs true
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node add --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys --debug-node-alias node4 --local-build-path ../hiero-consensus-node/hedera-node/data --pvcs true
Example 3: attach a JVM debugger with the consensus node update operation
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node update --deployment "${SOLO_DEPLOYMENT}" --node-alias node2 --debug-node-alias node2 --local-build-path ../hiero-consensus-node/hedera-node/data --new-account-number 0.0.7 --gossip-public-key ./s-public-node2.pem --gossip-private-key ./s-private-node2.pem --release-tag v0.59.5
Example 4: attach a JVM debugger with the node delete operation
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node destroy --deployment "${SOLO_DEPLOYMENT}" --node-alias node2 --debug-node-alias node3 --local-build-path ../hiero-consensus-node/hedera-node/data
3. Save and reuse network state files
With the following commands you can save the network state to files.
# must stop hedera node operation first
solo consensus node stop --deployment "${SOLO_DEPLOYMENT}"
# download state file to default location at ~/.solo/logs/<namespace>
solo consensus state download -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
By default, the state files are saved under the ~/.solo directory:
└── logs
    ├── solo-e2e
    │   ├── network-node1-0-state.zip
    │   └── network-node2-0-state.zip
    └── solo.log
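You can verify a downloaded archive before re-uploading it, for example:
# list the contents of a downloaded state archive (path taken from the tree above)
unzip -l ~/.solo/logs/solo-e2e/network-node1-0-state.zip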
Later, you can use the following commands to upload the state files to the network and restart the Hiero Consensus Nodes.
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node stop --deployment "${SOLO_DEPLOYMENT}"
solo consensus node states -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
# start network with pre-existing state files
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" --state-file network-node1-0-state.zip
12 - Using Network Load Generator with Solo
Using Network Load Generator with Solo
The Network Load Generator (NLG) is a benchmarking tool designed to stress test Hiero networks by generating configurable transaction loads. To use the Network Load Generator with Solo, follow these steps:
- Create a Solo network:
npx @hashgraph/solo@latest one-shot single deploy
- Use the rapid-fire commands to install the NLG chart and start a load test:
npx @hashgraph/solo@latest rapid-fire crypto-transfer start --deployment my-deployment --args '"-c 3 -a 10 -t 60"'
- In a separate terminal, you can start a different load test:
npx @hashgraph/solo@latest rapid-fire nft-transfer start --deployment my-deployment --args '"-c 3 -a 10 -t 60"'
- To stop a load test early, use the stop command:
npx @hashgraph/solo@latest rapid-fire nft-transfer stop --deployment my-deployment
- To stop all running load tests and uninstall the NLG chart, use the destroy command:
npx @hashgraph/solo@latest rapid-fire destroy all --deployment my-deployment
See this example for more details: examples/network-load-generator/README.md
A full list of all available rapid-fire commands can be found in the Solo CLI Commands page.
Argument list for every NLG class
| Class | Argument |
|---|---|
| CryptoTransferLoadTest | [-c |
| TokenTransferLoadTest | [-c |
| NftTransferLoadTest | [-c |
| SmartContractLoadTest | [-c |
| HeliSwapLoadTest | [-c |
| LongevityLoadTest | [-c |
13 - Using Environment Variables
Environment Variables Used in Solo
You can configure the following environment variables to customize the behavior of Solo; see the usage example after the table.
Table of environment variables
| Environment Variable | Description | Default Value |
|---|---|---|
SOLO_HOME | Path to the Solo cache and log files | ~/.solo |
SOLO_CACHE_DIR | Path to the Solo cache directory | ~/.solo/cache |
SOLO_LOG_LEVEL | Logging level for Solo operations (trace, debug, info, warn, error) | info |
SOLO_CHAIN_ID | Chain id of solo network | 298 |
DEFAULT_START_ID_NUMBER | First node account ID of solo test network | 0.0.3 |
SOLO_NODE_INTERNAL_GOSSIP_PORT | Internal gossip port number used by hedera network | 50111 |
SOLO_NODE_EXTERNAL_GOSSIP_PORT | External port number used by hedera network | 50111 |
SOLO_NODE_DEFAULT_STAKE_AMOUNT | Default stake amount for node | 500 |
SOLO_OPERATOR_ID | Operator account ID for solo network | 0.0.2 |
SOLO_OPERATOR_KEY | Operator private key for solo network | 302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137 |
SOLO_OPERATOR_PUBLIC_KEY | Operator public key for solo network | 302a300506032b65700321000aa8e21064c61eab86e2a9c164565b4e7a9a4146106e0a6cd03a8c395a110e92 |
FREEZE_ADMIN_ACCOUNT | Freeze admin account ID for solo network | 0.0.58 |
GENESIS_KEY | Genesis private key for solo network | 302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137 |
LOCAL_NODE_START_PORT | Local node start port for solo network | 30212 |
NODE_CLIENT_MIN_BACKOFF | The minimum amount of time to wait between retries. | 1000 |
NODE_CLIENT_MAX_BACKOFF | The maximum amount of time to wait between retries. | 1000 |
NODE_CLIENT_REQUEST_TIMEOUT | The period of time a transaction or query request will retry from a “busy” network response | 600000 |
NODE_COPY_CONCURRENT | The number of concurrent threads to use when copying files to the node. | 4 |
PODS_RUNNING_MAX_ATTEMPTS | The maximum number of attempts to check if pods are running. | 900 |
PODS_RUNNING_DELAY | The interval between attempts to check if pods are running, in the unit of milliseconds. | 1000 |
NETWORK_NODE_ACTIVE_MAX_ATTEMPTS | The maximum number of attempts to check if network nodes are active. | 300 |
NETWORK_NODE_ACTIVE_DELAY | The interval between attempts to check if network nodes are active, in the unit of milliseconds. | 1000 |
NETWORK_NODE_ACTIVE_TIMEOUT | The period of time to wait for network nodes to become active, in the unit of milliseconds. | 1000 |
NETWORK_PROXY_MAX_ATTEMPTS | The maximum number of attempts to check if network proxy is running. | 300 |
NETWORK_PROXY_DELAY | The interval between attempts to check if network proxy is running, in the unit of milliseconds. | 2000 |
BLOCK_NODE_ACTIVE_MAX_ATTEMPTS | The maximum number of attempts to check if block nodes are active. | 100 |
BLOCK_NODE_ACTIVE_DELAY | The interval between attempts to check if block nodes are active, in the unit of milliseconds. | 60 |
BLOCK_NODE_ACTIVE_TIMEOUT | The period of time to wait for block nodes to become active, in the unit of milliseconds. | 60 |
PODS_READY_MAX_ATTEMPTS | The maximum number of attempts to check if pods are ready. | 300 |
PODS_READY_DELAY | The interval between attempts to check if pods are ready, in the unit of milliseconds. | 2000 |
RELAY_PODS_RUNNING_MAX_ATTEMPTS | The maximum number of attempts to check if relay pods are running. | 900 |
RELAY_PODS_RUNNING_DELAY | The interval between attempts to check if relay pods are running, in the unit of milliseconds. | 1000 |
RELAY_PODS_READY_MAX_ATTEMPTS | The maximum number of attempts to check if relay pods are ready. | 100 |
RELAY_PODS_READY_DELAY | The interval between attempts to check if relay pods are ready, in the unit of milliseconds. | 1000 |
NETWORK_DESTROY_WAIT_TIMEOUT | The period of time to wait for network to be destroyed, in the unit of milliseconds. | 120 |
SOLO_LEASE_ACQUIRE_ATTEMPTS | The number of attempts to acquire a lock before failing. | 10 |
SOLO_LEASE_DURATION | The default duration in seconds for which a lock is held before expiration. | 20 |
ACCOUNT_UPDATE_BATCH_SIZE | The number of accounts to update in a single batch operation. | 10 |
NODE_CLIENT_PING_INTERVAL | The interval in milliseconds between node health pings. | 30000 |
NODE_CLIENT_SDK_PING_MAX_RETRIES | The maximum number of retries for node health pings. | 5 |
NODE_CLIENT_SDK_PING_RETRY_INTERVAL | The interval in milliseconds between node health ping retries. | 10000 |
GRPC_PORT | The gRPC port used for local node communication. | 50211 |
LOCAL_BUILD_COPY_RETRY | The number of times to retry local build copy operations. | 3 |
LOAD_BALANCER_CHECK_DELAY_SECS | The delay in seconds between load balancer status checks. | 5 |
LOAD_BALANCER_CHECK_MAX_ATTEMPTS | The maximum number of attempts to check load balancer status. | 60 |
JSON_RPC_RELAY_CHART_URL | The URL for the JSON-RPC relay Helm chart repository. | https://hiero-ledger.github.io/hiero-json-rpc-relay/charts |
MIRROR_NODE_CHART_URL | The URL for the Hedera mirror node Helm chart repository. | https://hashgraph.github.io/hedera-mirror-node/charts |
NODE_CLIENT_MAX_ATTEMPTS | The maximum number of attempts for node client operations. | 600 |
EXPLORER_CHART_URL | The URL for the Hedera Explorer Helm chart repository. | oci://ghcr.io/hiero-ledger/hiero-mirror-node-explorer/hiero-explorer-chart |
INGRESS_CONTROLLER_CHART_URL | The URL for the ingress controller Helm chart repository. | https://haproxy-ingress.github.io/charts |
BLOCK_NODE_VERSION | The release version of the block node to use. | v0.18.0 |
CONSENSUS_NODE_VERSION | The release version of the consensus node to use. | v0.65.1 |
SOLO_CHART_VERSION | The release version of the Solo charts to use. | v0.56.0 |
MIRROR_NODE_VERSION | The release version of the mirror node to use. | v0.138.0 |
EXPLORER_VERSION | The release version of the explorer to use. | v25.1.1 |
RELAY_VERSION | The release version of the JSON RPC Relay to use. | v0.70.0 |
INGRESS_CONTROLLER_VERSION | The release version of the ingress controller to use. | v0.14.5 |
MINIO_OPERATOR_VERSION | The release version of the MinIO Operator to use. | 7.1.1 |
PROMETHEUS_STACK_VERSION | The release version of the Prometheus Stack to use. | 52.0.1 |
GRAFANA_AGENT_VERSION | The release version of the Grafana Agent to use. | 0.27.1 |
ONE_SHOT_WITH_BLOCK_NODE | If one-shot should deploy with block node. | false |
MIRROR_NODE_PINGER_TPS | The transactions per second to set the Mirror Node monitor pinger to, 0 means disable. | 5 |
NETWORK_LOAD_GENERATOR_CHART_URL | The url for the NLG chart | oci://swirldslabs.jfrog.io/load-generator-helm-release-local |
NETWORK_LOAD_GENERATOR_PODS_RUNNING_MAX_ATTEMPTS | The maximum number of attempts to check NLG status. | 900 |
NETWORK_LOAD_GENERATOR_POD_RUNNING_DELAY | The interval between attempts to check if nlg pod is running, in the unit of milliseconds. | 1000 |
NETWORK_LOAD_GENERATOR_CHART_VERSION | The release version of the NLG chart to use. | v0.7.0 |
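For example, here is a sketch that overrides two of these defaults for a single invocation (the .solo-dev directory name is just illustrative):
# raise logging verbosity and use an alternate Solo home directory for this run
SOLO_LOG_LEVEL=debug SOLO_HOME="${HOME}/.solo-dev" solo init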
14 - Legacy Releases
| Solo Version | Node.js | Kind | Solo Chart | Hedera | Kubernetes | Kubectl | Helm | k9s | Docker Resources | Release Date | End of Support |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.46.0 (LTS) | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.56.0 | v0.65.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-10-02 | 2026-01-02 |
| 0.45.0 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.56.0 | v0.65.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-09-24 | 2025-10-24 |
| 0.44.0 (LTS) | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.56.0 | v0.64.2+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-09-16 | 2025-12-16 |
| 0.43.0 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.54.5 | v0.63.9+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-08-15 | 2025-09-15 |
| 0.42.0 (LTS) | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.54.5 | v0.63.9+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-08-11 | 2025-11-11 |
| 0.41.0 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.54.4 | v0.62.10+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-07-24 | 2025-08-24 |
| 0.40.1 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.54.4 | v0.61.7+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-07-17 | 2025-08-17 |
| 0.40.0 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.54.4 | v0.61.7+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-07-16 | 2025-08-16 |
| 0.39.0 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.54.3 | v0.61.7+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-07-03 | 2025-08-03 |
| 0.38.0 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.54.3 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-06-26 | 2025-07-26 |
| 0.37.1 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.53.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-06-03 | 2025-07-03 |
| 0.37.0 | >= 20.19.0 (lts/iron) | >= v0.26.0 | v0.53.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-06-02 | 2025-07-02 |
| 0.36.1 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.53.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-05-28 | 2025-06-28 |
| 0.36.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.52.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-05-23 | 2025-06-23 |
| 0.35.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.44.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-02-20 | 2025-03-20 |
| 0.34.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.42.10 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-01-24 | 2025-02-24 |
| 0.33.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.38.2 | v0.58.1 - <= v0.59.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-01-13 | 2025-02-13 |
| 0.32.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.38.2 | v0.58.1 - <= v0.59.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2024-12-31 | 2025-01-31 |
| 0.31.4 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.31.4 | v0.54.0 β <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2024-10-23 | 2024-11-23 |
| 0.30.0 | >= 20.14.0 (lts/hydrogen) | >= v0.26.0 | v0.30.0 | v0.54.0 β <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2024-09-17 | 2024-10-17 |
| 0.29.0 | >= 20.14.0 (lts/hydrogen) | >= v0.26.0 | v0.30.0 | v0.53.0 β <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2024-09-06 | 2024-10-06 |