Documentation

The documentation section provides a comprehensive guide to using Solo to launch a Hiero Consensus Node network, including setup instructions, usage guides, and information for developers. It covers everything from installation to advanced features and troubleshooting.

1 - Getting Started

Getting started with Solo

πŸ“ Solo has a new quick-start command! check it out: Solo User Guide, Solo CLI Commands

Solo

An opinionated CLI tool to deploy and manage standalone test networks.

Releases and Requirements

Solo releases are supported for one month after their release date, after which they are no longer maintained. It is recommended to upgrade to the latest version to benefit from new features and improvements. Every quarter a version will be designated as LTS (Long-Term Support) and will be supported for three months.

Current Releases

| Solo Version | Node.js | Kind | Solo Chart | Hedera | Kubernetes | Kubectl | Helm | k9s | Docker Resources | Release Date | End of Support |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.43.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.54.5 | v0.63.9+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-08-15 | 2025-09-15 |
| 0.42.0 (LTS) | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.54.5 | v0.63.9+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-08-11 | 2025-11-11 |

To see a list of legacy releases, please check the legacy versions documentation page.

Hardware Requirements

To run a one-node network, you will need to set up Docker Desktop with at least 12GB of memory and 4 CPUs.

Setup

  • Install Node.js. You may also use nvm to manage different Node.js versions locally; for example:
# install specific nodejs version
# nvm install <version>

# install nodejs version 20.18.0
nvm install v20.18.0

# lists available node versions already installed
nvm ls

# switch to selected node version
# nvm use <version>
nvm use v20.18.0

Install Solo

  • Run npm install -g @hashgraph/solo

Documentation

Getting Started

Contributing

Contributions are welcome. Please see the contributing guide to see how you can get involved.

Code of Conduct

This project is governed by the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code of conduct.

License

Apache License 2.0

2 - Solo User Guide

Learn how to set up your first Hedera test network using Solo. This step-by-step guide covers installation, deployment, and your first transaction.

πŸ“ For less than 16 GB of memory to dedicate to Docker please skip the block node add and destroy steps.

πŸ“ There should be a table of contents on the right side of your screen if your browser width is large enough

Introduction

Welcome to the world of Hedera development! If you’re looking to build and test applications on the Hedera network but don’t want to spend HBAR on testnet or mainnet transactions, you’ve come to the right place. Solo is your gateway to running your own local Hedera test network, giving you complete control over your development environment.

Solo is an opinionated command-line interface (CLI) tool designed to deploy and manage standalone Hedera test networks. Think of it as your personal Hedera sandbox where you can experiment, test features, and develop applications without any external dependencies or costs. Whether you’re building smart contracts, testing consensus mechanisms, or developing DApps, Solo provides the infrastructure you need.

By the end of this tutorial, you’ll have your own Hedera test network running locally, complete with consensus nodes, mirror nodes, and all the infrastructure needed to submit transactions and test your applications. Let’s dive in!

Prerequisites

Before we begin, let’s ensure your system meets the requirements and has all the necessary software installed. Don’t worry if this seems like a lot – we’ll walk through each step together.

System Requirements (for a bare minimum install running 1 node)

First, check that your computer meets these minimum specifications:

  • Memory: At least 12GB of RAM (16GB recommended for smoother performance)
  • CPU: Minimum 4 cores (8 cores recommended)
  • Storage: At least 20GB of free disk space
  • Operating System: macOS, Linux, or Windows with WSL2

Required Software

You’ll need to install a few tools before we can set up Solo. Here’s what you need and how to get it:

1. Node.js (β‰₯20.18.0)

Solo is built on Node.js, so you’ll need version 20.18.0 or higher. We recommend using Node Version Manager (nvm) for easy version management:

# Install nvm (macOS/Linux)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash

# Install nvm (Windows - use nvm-windows)
# Download from: https://github.com/coreybutler/nvm-windows

# Install Node.js
nvm install 20.18.0
nvm use 20.18.0

# Verify installation
node --version

2. Docker Desktop

Docker is essential for running the containerized Hedera network components:

  • macOS/Windows: Download Docker Desktop from docker.com
  • Linux: Follow the installation guide for your distribution at docs.docker.com

After installation, ensure Docker is running:

docker --version
docker ps
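
If you want to confirm that enough memory is allocated to Docker before continuing, you can ask the daemon directly. This is optional, and the exact figure depends on your Docker settings:

# report the memory available to the Docker daemon (value is in bytes)
docker info --format '{{.MemTotal}}'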

Preparing Your Environment

Now that we have all prerequisites in place, let’s install Solo and set up our environment.

One thing to consider: old installs can really hamper your ability to get a new install up and running. If you have an old install of Solo, or if you are having issues with the install, please run the cleanup commands shown under *Cleaning up an old install below before proceeding.

1. Installing Solo

Open your terminal and install Solo globally using npm:

npm install -g @hashgraph/solo

# Verify the installation
solo --version

You should see output showing the installed version, which should match our latest NPM package version: https://www.npmjs.com/package/@hashgraph/solo
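
If you prefer to compare against the registry from the terminal, you can also ask npm directly (this assumes you have network access to the npm registry):

# show the latest published version of the Solo package
npm view @hashgraph/solo version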


*Cleaning up an old install

The team is presently working on a number of fixes and automation that will remove the need for this, but as currently deployed Solo can be finicky with artifacts from prior installs. A quick command to prepare your workstation for a fresh install is a good idea.

for cluster in $(kind get clusters);do kind delete cluster -n $cluster;done
rm -Rf ~/.solo

2. Setting up your environment variables

You need to declare some environment variables. Note that unless you intentionally include these in your zsh (or other shell) config, you may lose them when you close your terminal.

*Throughout the remainder of this walkthrough, for simplicity's sake, the commands assume these are the values in your environment.

export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
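
If you would rather not re-export these every session, one option is to append them to your shell profile. The sketch below assumes zsh and ~/.zshrc; adjust the file for your own shell:

# persist the Solo variables for future terminal sessions (zsh assumed)
cat >> ~/.zshrc <<'EOF'
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
EOF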

3. Create a cluster

kind create cluster -n "${SOLO_CLUSTER_NAME}"

Example output:

Creating cluster "solo-e2e" ...
  Ensuring node image (kindest/node:v1.32.2) πŸ–Ό  ...
 βœ“ Ensuring node image (kindest/node:v1.32.2) πŸ–Ό
  Preparing nodes πŸ“¦   ...
 βœ“ Preparing nodes πŸ“¦
  Writing configuration πŸ“œ  ...
 βœ“ Writing configuration πŸ“œ
  Starting control-plane πŸ•ΉοΈ  ...
 βœ“ Starting control-plane πŸ•ΉοΈ
  Installing CNI πŸ”Œ  ...
 βœ“ Installing CNI πŸ”Œ
  Installing StorageClass πŸ’Ύ  ...
 βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-solo-e2e"
You can now use your cluster with:

kubectl cluster-info --context kind-solo-e2e

Have a nice day! πŸ‘‹

*Connecting to a remote cluster

  • You may use a remote Kubernetes cluster. In this case, ensure Kubernetes context is set up correctly.
kubectl config get-contexts
kubectl config use-context <context-name>
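
After switching contexts, it is worth sanity-checking that the cluster is reachable; kubectl cluster-info is one standard way to do that:

# verify the selected context points at a reachable cluster
kubectl cluster-info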

Quick Start Deployment

For a simple setup with a single node with a mirror node, explorer, and JSON RPC relay, you can follow these quick steps. This is ideal for testing and development purposes.

solo quick-start single deploy

When you’re finished, you can tear down your Solo network just as easily:

solo quick-start single destroy

Step-by-Step Solo Network Deployment

If you have a more complex setup in mind, such as multiple nodes or specific configurations, follow these detailed steps to deploy your Solo network.

1. Initialize solo:

Reset the .solo directory before initializing Solo. This step is crucial to ensure a clean setup without any leftover artifacts from previous installations. See: *Cleaning up an old install

solo init

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: init
**********************************************************************************
 Setup home directory and cache
βœ” Setup home directory and cache
 Check dependencies
 Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
 Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
βœ” Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
βœ” Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
βœ” Check dependencies
 Create local configuration
βœ” Create local configuration
 Setup chart manager
βœ” Setup chart manager
 Copy templates in '/home/runner/.solo/cache'

***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /home/runner/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
βœ” Copy templates in '/home/runner/.solo/cache'

2. Connect the cluster and create a deployment

This command will create a deployment in the specified clusters, and generate the LocalConfig and RemoteConfig used by k8s.

The deployment will:

  • Create a namespace (usually matching the deployment name)
  • Set up ConfigMaps and secrets
  • Deploy network infrastructure
  • Create persistent volumes if needed

πŸ“ notice that the --cluster-ref value is kind-solo, when you created the Kind cluster it created a cluster reference in the Kubernetes config with the name kind-solo. If you used a different name, replace kind-solo with your cluster name, but prefixing with kind-. If you are working with a remote cluster, you can use the name of your cluster reference which can be gathered with the command: kubectl config get-contexts. πŸ“ Note: Solo stores various artifacts (config, logs, keys etc.) in its home directory: ~/.solo. If you need a full reset, delete this directory before running solo init ag

# connect to the cluster you created in a previous command
solo cluster-ref config connect --cluster-ref kind-${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}

# create the deployment
solo deployment config create -n "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: cluster-ref config connect --cluster-ref kind-solo --context kind-solo
**********************************************************************************
 Initialize
βœ” Initialize
 Validating cluster ref: 
βœ” kind-solo
 Test connection to cluster: 
βœ” Test connection to cluster: kind-solo
 Associate a context with a cluster reference: 
βœ” Associate a context with a cluster reference: kind-solo
solo-deployment_CREATE_OUTPUT

3. Add a cluster to the deployment you created

*This command is the first command that specifies how many nodes you want to add to your deployment. For the sake of resource usage, this walkthrough sticks to a single consensus node.

# Add a cluster to the deployment you created
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --num-consensus-nodes 1
# If the command is unresponsive, there's also a handy configurator: run `solo deployment cluster attach` without any arguments to get a guided setup.

Example output:

solo-deployment_ADD_CLUSTER_OUTPUT

4. Generate keys

You need to generate keys for your nodes, or in this case single node.

solo keys consensus generate --gossip-keys --tls-keys --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: keys consensus generate --gossip-keys --tls-keys --deployment solo-deployment
**********************************************************************************
 Initialize
βœ” Initialize
 Generate gossip keys
 Backup old files
βœ” Backup old files
 Gossip key for node: node1
βœ” Gossip key for node: node1
βœ” Generate gossip keys
 Generate gRPC TLS Keys
 Backup old files
 TLS key for node: node1
βœ” Backup old files
βœ” TLS key for node: node1
βœ” Generate gRPC TLS Keys
 Finalize
βœ” Finalize

PEM key files are generated in ~/.solo/cache/keys directory.

hedera-node1.crt    hedera-node3.crt    s-private-node1.pem s-public-node1.pem  unused-gossip-pem
hedera-node1.key    hedera-node3.key    s-private-node2.pem s-public-node2.pem  unused-tls
hedera-node2.crt    hedera-node4.crt    s-private-node3.pem s-public-node3.pem
hedera-node2.key    hedera-node4.key    s-private-node4.pem s-public-node4.pem
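
If you want to inspect the generated key files yourself, listing the cache directory mentioned above is enough:

# list the generated gossip and TLS key files
ls ~/.solo/cache/keys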

5. Setup cluster with shared components

solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: cluster-ref config setup --cluster-setup-namespace solo-cluster
**********************************************************************************
 Initialize
βœ” Initialize
 Prepare chart values
βœ” Prepare chart values
 Install 'solo-cluster-setup' chart
 - Installed solo-cluster-setup chart, version: 0.56.0
βœ” Install 'solo-cluster-setup' chart

Deploying Helm chart with network components

Now comes the exciting part – deploying your Hedera test network!

*Deploy a block node (experimental)

⚠️ Block Node is experimental in Solo. It requires a minimum of 16 GB of memory allocated to Docker. If you have less than 16 GB of memory, skip this step.

As mentioned in the warning, Block Node uses a lot of memory. In addition, it requires Consensus Node version v0.62.3 or later. You will need to add the --release-tag v0.62.6 option to the solo consensus network deploy and solo consensus node setup commands to ensure that the Consensus Node is at a compatible version. *Note: v0.62.6 is the latest patch release for v0.62.

solo block node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-"${SOLO_CLUSTER_NAME}" --release-tag v0.62.6

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: block node add --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Prepare release name
βœ” Prepare release name
 Prepare chart values
βœ” Prepare chart values
 Deploy block node
 - Installed block-node-0 chart, version: v0.14.0
βœ” Deploy block node
 Check block node pod is running
βœ” Check block node pod is running
 Check software
βœ” Check software
 Check block node pod is ready
βœ” Check block node pod is ready
 Check block node readiness
βœ” Check block node readiness - [1/100] success
 Add block node component in remote config
βœ” Add block node component in remote config

1. Deploy the network

Deploying the network runs the risk of timeouts as images are downloaded and pods start up. If you experience a failure, double-check the resources you've allocated to the Docker engine and give it another try.

solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: consensus network deploy --deployment solo-deployment
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Copy gRPC TLS Certificates
 Copy gRPC TLS Certificates [SKIPPED: Copy gRPC TLS Certificates]
 Check if cluster setup chart is installed
βœ” Check if cluster setup chart is installed
 Prepare staging directory
 Copy Gossip keys to staging
βœ” Copy Gossip keys to staging
 Copy gRPC TLS keys to staging
βœ” Copy gRPC TLS keys to staging
βœ” Prepare staging directory
 Copy node keys to secrets
 Copy TLS keys
 Node: node1, cluster: kind-solo
 Copy Gossip keys
βœ” Copy Gossip keys
βœ” Node: node1, cluster: kind-solo
βœ” Copy TLS keys
βœ” Copy node keys to secrets
 Install chart 'solo-deployment'
 - Installed solo-deployment chart, version: 0.56.0
βœ” Install chart 'solo-deployment'
 Check for load balancer
 Check for load balancer [SKIPPED: Check for load balancer]
 Redeploy chart with external IP address config
 Redeploy chart with external IP address config [SKIPPED: Redeploy chart with external IP address config]
 Check node pods are running
 Check Node: node1, Cluster: kind-solo
βœ” Check Node: node1, Cluster: kind-solo
βœ” Check node pods are running
 Check proxy pods are running
 Check HAProxy for: node1, cluster: kind-solo
 Check Envoy Proxy for: node1, cluster: kind-solo
βœ” Check Envoy Proxy for: node1, cluster: kind-solo
βœ” Check HAProxy for: node1, cluster: kind-solo
βœ” Check proxy pods are running
 Check auxiliary pods are ready
 Check MinIO
βœ” Check MinIO
βœ” Check auxiliary pods are ready
 Add node and proxies to remote config
βœ” Add node and proxies to remote config
 Copy block-nodes.json
βœ” Copy block-nodes.json

2. Set up a node with Hedera platform software

This step downloads the Hedera platform code and sets up your node/nodes.

# consensus node setup
export CONSENSUS_NODE_VERSION=v0.63.9 # or whatever version you are trying to deploy starting with a `v`
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" --release-tag "${CONSENSUS_NODE_VERSION}"

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: consensus node setup --deployment solo-deployment
**********************************************************************************
 Load configuration
βœ” Load configuration
 Initialize
βœ” Initialize
 Validate nodes states
 Validating state for node node1
βœ” Validating state for node node1 - valid state: requested
βœ” Validate nodes states
 Identify network pods
 Check network pod: node1
βœ” Check network pod: node1
βœ” Identify network pods
 Fetch platform software into network nodes
 Update node: node1 [ platformVersion = v0.63.9, context = kind-solo ]
βœ” Update node: node1 [ platformVersion = v0.63.9, context = kind-solo ]
βœ” Fetch platform software into network nodes
 Setup network nodes
 Node: node1
 Copy configuration files
βœ” Copy configuration files
 Set file permissions
βœ” Set file permissions
βœ” Node: node1
βœ” Setup network nodes
 setup network node folders
βœ” setup network node folders
 Change node state to configured in remote config
βœ” Change node state to configured in remote config

3. Start the nodes up!

Now that everything is set up, you need to start your node/nodes.

# start your node/nodes
solo consensus node start --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: consensus node start --deployment solo-deployment
**********************************************************************************
 Load configuration
βœ” Load configuration
 Initialize
βœ” Initialize
 Validate nodes states
 Validating state for node node1
βœ” Validating state for node node1 - valid state: configured
βœ” Validate nodes states
 Identify existing network nodes
 Check network pod: node1
βœ” Check network pod: node1
βœ” Identify existing network nodes
 Upload state files network nodes
 Upload state files network nodes [SKIPPED: Upload state files network nodes]
 Starting nodes
 Start node: node1
βœ” Start node: node1
βœ” Starting nodes
 Enable port forwarding for debug port and/or GRPC port
Using requested port 50211
βœ” Enable port forwarding for debug port and/or GRPC port
 Check all nodes are ACTIVE
 Check network pod: node1 
βœ” Check network pod: node1  - status ACTIVE, attempt: 17/300
βœ” Check all nodes are ACTIVE
 Check node proxies are ACTIVE
 Check proxy for node: node1
βœ” Check proxy for node: node1
βœ” Check node proxies are ACTIVE
 Change node state to started in remote config
βœ” Change node state to started in remote config
 Add node stakes
 Adding stake for node: node1
Using requested port 30212
βœ” Adding stake for node: node1
βœ” Add node stakes
 set gRPC Web endpoint
βœ” set gRPC Web endpoint
Stopping port-forwarder for port [30212]

4. Deploy a mirror node

This is the most memory-intensive step from a resource perspective. If you have issues at this step, check your local resource utilization and make sure there's memory available for Docker (close all nonessential applications). Likewise, you can consider lowering your swap in Docker settings to ease the swap demand, and try again.

The --pinger flag starts a pinging service that sends transactions to the network at regular intervals. This is needed because the record file is not imported into the mirror node until the next one is created.

# Deploy with explicit configuration
solo mirror node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --enable-ingress --pinger

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: mirror node add --deployment solo-deployment --cluster-ref kind-solo --enable-ingress --quiet-mode
**********************************************************************************
 Initialize
Using requested port 30212
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Enable mirror-node
 Prepare address book
βœ” Prepare address book
 Install mirror ingress controller
 - Installed haproxy-ingress chart, version: 0.14.5
βœ” Install mirror ingress controller
 Deploy mirror-node
 - Installed mirror chart, version: v0.136.0
βœ” Deploy mirror-node
βœ” Enable mirror-node
 Check pods are ready
 Check Postgres DB
 Check REST API
 Check GRPC
 Check Monitor
 Check Web3
 Check Importer
βœ” Check Postgres DB
βœ” Check Web3
βœ” Check GRPC
βœ” Check REST API
βœ” Check Monitor
βœ” Check Importer
βœ” Check pods are ready
 Seed DB data
 Insert data in public.file_data
βœ” Insert data in public.file_data
βœ” Seed DB data
 Add mirror node to remote config
βœ” Add mirror node to remote config
 Enable port forwarding for mirror ingress controller
Using requested port 8081
βœ” Enable port forwarding for mirror ingress controller
Stopping port-forwarder for port [30212]

5. Deploy the explorer

Deploy the explorer and watch the deployment progress:

# deploy explorer
solo explorer node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: explorer node add --deployment solo-deployment --cluster-ref kind-solo --quiet-mode
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Load remote config
βœ” Load remote config
 Install cert manager
 Install cert manager [SKIPPED: Install cert manager]
 Install explorer
 - Installed hiero-explorer chart, version: 25.1.1
βœ” Install explorer
 Install explorer ingress controller
 Install explorer ingress controller [SKIPPED: Install explorer ingress controller]
 Check explorer pod is ready
βœ” Check explorer pod is ready
 Check haproxy ingress controller pod is ready
 Check haproxy ingress controller pod is ready [SKIPPED: Check haproxy ingress controller pod is ready]
 Add explorer to remote config
βœ” Add explorer to remote config
 Enable port forwarding for explorer
Using requested port 8080
βœ” Enable port forwarding for explorer
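
Once the port forward is in place (port 8080 in the output above), a plain HTTP request is a quick way to confirm the Explorer UI is serving; curl is used here only as an example:

# expect an HTTP 200 response from the Explorer UI
curl -sI http://localhost:8080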

6. Deploy a JSON RPC relay

The JSON RPC relay allows you to interact with your Hedera network using standard JSON RPC calls. This is useful for integrating with existing tools and libraries.

# deploy a solo JSON RPC relay
solo relay node add -i node1 --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: relay node add --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Check chart is installed
βœ” Check chart is installed
 Prepare chart values
Using requested port 30212
βœ” Prepare chart values
 Deploy JSON RPC Relay
 - Installed relay-node1 chart, version: 0.70.0
βœ” Deploy JSON RPC Relay
 Check relay is running
βœ” Check relay is running
 Check relay is ready
βœ” Check relay is ready
 Add relay component in remote config
βœ” Add relay component in remote config
 Enable port forwarding for relay node
Using requested port 7546
βœ” Enable port forwarding for relay node
Stopping port-forwarder for port [30212]
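
With the relay forwarded on port 7546 (see the output above), you can send a standard JSON RPC request to confirm it responds; eth_chainId is used here as a minimal example:

# minimal JSON RPC smoke test against the relay
curl -s -X POST http://localhost:7546 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'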

*Check Pod Status

Here is a command if you want to check the status of your Solo Kubernetes pods:

# Check pod status
kubectl get pods -n solo
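
If you prefer to block until everything is up rather than polling manually, kubectl can wait on pod readiness; the timeout below is arbitrary, so adjust it for your machine:

# wait until all pods in the solo namespace report Ready (up to 5 minutes)
kubectl wait --for=condition=Ready pods --all -n solo --timeout=300s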

Working with Your Network

Network Endpoints

Port forwarding is now automatic for many endpoints. However, you can set up your own using the kubectl port-forward command:

# Consensus Service for node1 (node ID = 0): localhost:50211
# should be automatic: kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 50211:50211 > /dev/null 2>&1 &
# Explorer UI: http://localhost:8080
# should be automatic: kubectl port-forward svc/hiero-explorer -n "${SOLO_NAMESPACE}" 8080:8080 > /dev/null 2>&1 &
# Mirror Node gRPC, REST, REST Java, Web3 will be automatic on `localhost:8081` if you passed `--enable-ingress` to the `solo mirror node add` command
# Mirror Node gRPC: localhost:5600
kubectl port-forward svc/mirror-grpc -n "${SOLO_NAMESPACE}" 5600:5600 > /dev/null 2>&1 &
# Mirror Node REST API: http://localhost:5551
kubectl port-forward svc/mirror-rest -n "${SOLO_NAMESPACE}" 5551:80 > /dev/null 2>&1 &
# Mirror Node REST Java API http://localhost:8084
kubectl port-forward svc/mirror-restjava -n "${SOLO_NAMESPACE}" 8084:80 > /dev/null 2>&1 &
# JSON RPC Relay: localhost:7546
# should be automatic: kubectl port-forward svc/relay-node1-hedera-json-rpc-relay -n "${SOLO_NAMESPACE}" 7546:7546 > /dev/null 2>&1 &
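
As a quick sanity check that the mirror node is reachable through the endpoints above, you can hit its REST API; the accounts listing is used here only as an example query:

# list one account through the mirror node REST API (port 5551 from the forward above)
curl -s "http://localhost:5551/api/v1/accounts?limit=1"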

Managing Your Network

Stopping and Starting Nodes

You can control individual nodes or the entire network:

# Stop all nodes
solo consensus node stop --deployment solo-deployment

# Stop a specific node
solo consensus node stop --node-aliases node1 --deployment solo-deployment

# Restart nodes
solo consensus node restart --deployment solo-deployment

# Start nodes again
solo consensus node start --deployment solo-deployment

Viewing Logs

Access Solo and Consensus Node logs for troubleshooting:

# Download logs from all nodes
solo consensus diagnostics all --deployment solo-deployment

# Logs are saved to ~/.solo/logs/<namespace>/<pod-name>/
# You can also inspect pod logs directly with kubectl, e.g. kubectl logs -n solo <pod-name>

Updating the Network

To update nodes to a new Hedera version, you need to upgrade one minor version at a time:

solo consensus network upgrade --deployment solo-deployment --upgrade-version v0.62.6
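
For example, moving a network from the v0.61 line up to v0.63 would take two upgrade invocations rather than one; the versions below are illustrative only:

# step one minor version at a time (illustrative versions)
solo consensus network upgrade --deployment solo-deployment --upgrade-version v0.62.6
solo consensus network upgrade --deployment solo-deployment --upgrade-version v0.63.9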

Updating a single node

To update a single node to a new Hedera version, you need to update one minor version at a time:

solo consensus node update --deployment solo-deployment --node-alias node1 --release-tag v0.62.6

It is possible to update a single node to a new Hedera version through a process with separated steps. This is only useful in very specific cases, such as when testing the updating process.

solo consensus dev-node-update prepare --deployment solo-deployment --node-alias node1 --release-tag v0.62.6 --output-dir context
solo consensus dev-node-update submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-update execute --deployment solo-deployment --input-dir context

Adding a new node to the network

Adding a new node to an existing Solo network:

TODO solo consensus node add

It is possible to add a new node through a process with separated steps. This is only useful in very specific cases, such as when testing the node adding process.

solo consensus dev-node-add prepare --gossip-keys true --tls-keys true --deployment solo-deployment --pvcs true --admin-key ***** --node-alias node1 --output-dir context
solo consensus dev-node-add submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-add execute --deployment solo-deployment --input-dir context

Deleting a node from the network

This command is used to delete a node from an existing Solo network:

TODO solo consensus node destroy

It is possible to delete a node through a process with separated steps. This is only useful in very specific cases, such as when testing the delete process.

solo consensus dev-node-delete prepare --deployment solo-deployment --node-alias node1 --output-dir context
solo consensus dev-node-delete submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-delete execute --deployment solo-deployment --input-dir context

Troubleshooting: Common Issues and Solutions

1. Pods Not Starting

If pods remain in Pending or CrashLoopBackOff state:

# Check pod events
kubectl describe pod -n solo network-node-0

# Common fixes:
# - Increase Docker resources (memory/CPU)
# - Check disk space
# - Restart Docker and kind cluster

2. Connection Refused Errors

If you can’t connect to network endpoints:

# Check service endpoints
kubectl get svc -n solo

# Manually forward ports if needed
kubectl port-forward -n solo svc/network-node-0 50211:50211

3. Node Synchronization Issues

If nodes aren’t forming consensus:

# Check node status
solo consensus state download --deployment solo-deployment --node-aliases node1

# Look for gossip connectivity issues
kubectl logs -n solo network-node-0 | grep -i gossip

# Restart problematic nodes
solo consensus node refresh --node-aliases node1 --deployment solo-deployment

Getting Help

When you need assistance:

  1. Check the logs: Use solo consensus diagnostics all --deployment solo-deployment and examine ~/.solo/logs/
  2. Documentation: Visit https://solo.hiero.org/main/docs/
  3. GitHub Issues: Report bugs at https://github.com/hiero-ledger/solo/issues
  4. Community Support: Join the Hedera Discord community: https://discord.gg/Ysruf53q

Cleanup

When you’re done with your test network:

*Fast clean up

To quickly clean up your Solo network and remove all resources (all Kind clusters!), you can use the following commands. Be aware that you will lose all your logs and data from prior runs:

for cluster in $(kind get clusters);do kind delete cluster -n $cluster;done
rm -Rf ~/.solo

1. Destroy relay node

solo relay node destroy -i node1 --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: relay node destroy --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Destroy JSON RPC Relay
βœ” Destroy JSON RPC Relay
 Remove relay component from remote config
βœ” Remove relay component from remote config

2. Destroy mirror node

solo mirror node destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: mirror node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Destroy mirror-node
βœ” Destroy mirror-node
 Delete PVCs
βœ” Delete PVCs
 Uninstall mirror ingress controller
βœ” Uninstall mirror ingress controller
 Remove mirror node from remote config
βœ” Remove mirror node from remote config

3. Destroy explorer node

solo explorer node destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: explorer node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Load remote config
βœ” Load remote config
 Destroy explorer
βœ” Destroy explorer
 Uninstall explorer ingress controller
βœ” Uninstall explorer ingress controller
 Remove explorer from remote config
βœ” Remove explorer from remote config

*Destroy block node (Experimental)

Block Node destroy should run prior to consensus network destroy, since consensus network destroy removes the remote config. To destroy the block node (if you deployed it), you can use the following command:

solo block node destroy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: block node destroy --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Look-up block node
βœ” Look-up block node
 Destroy block node
βœ” Destroy block node
 Disable block node component in remote config
βœ” Disable block node component in remote config

4. Destroy network

solo consensus network destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:

******************************* Solo *********************************************
Version			: 0.43.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: consensus network destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
 Initialize
 Acquire lock
βœ” Acquire lock - lock acquired successfully, attempt: 1/10
βœ” Initialize
 Remove deployment from local configuration
βœ” Remove deployment from local configuration
 Running sub-tasks to destroy network
βœ” Deleting the RemoteConfig configmap in namespace solo


Next Steps

Congratulations! You now have a working Hedera test network. Here are some suggestions for what to explore next:

  1. Deploy Smart Contracts: Test your Solidity contracts on the local network
  2. Mirror Node Queries: Explore the REST API at http://localhost:5551
  3. Multi-Node Testing: Add more nodes to test scalability
  4. Network Upgrades: Practice upgrading the Hedera platform version
  5. Integration Testing: Connect your applications to the local network

Remember, this is your personal Hedera playground. Experiment freely, break things, learn, and have fun building on Hedera!

Happy coding with Solo! πŸš€


3 - Solo CLI User Manual

Solo CLI is a command line interface for the Hiero Consensus Node network. It allows users to interact with the network, manage accounts, and perform various operations.

Solo Command Line User Manual

Solo has a series of commands, and some commands have subcommands. Users can get help information with the following methods:

solo --help will return the help information for the solo command to show which commands are available.

solo command --help will return the help information for the specific command to show which options are available.

solo ledger account --help

Manage Hedera accounts in solo network

Commands:
  system init     Initialize system accounts with new keys
  account create   Creates a new account with a new key and stores the key in th
                   e Kubernetes secrets, if you supply no key one will be genera
                   ted for you, otherwise you may supply either a ECDSA or ED255
                   19 private key
  account update   Updates an existing account with the provided info, if you wa
                   nt to update the private key, you can supply either ECDSA or
                   ED25519 but not both

  account get      Gets the account info including the current amount of HBAR

Options:
      --dev                 Enable developer mode                      [boolean]
      --force-port-forward  Force port forward to access the network services
                                                                       [boolean]
  -h, --help                Show help                                  [boolean]
  -v, --version             Show version number                        [boolean]

solo command subcommand --help will return the help information for the specific subcommand to show which options are available.

solo ledger account create --help

Creates a new account with a new key and stores the key in the Kubernetes secret
s, if you supply no key one will be generated for you, otherwise you may supply
either a ECDSA or ED25519 private key

Options:
      --dev                  Enable developer mode                     [boolean]
      --force-port-forward   Force port forward to access the network services
                                                                       [boolean]
      --hbar-amount          Amount of HBAR to add                      [number]
      --create-amount        Amount of new account to create            [number]
      --ecdsa-private-key    ECDSA private key for the Hedera account   [string]
  -d, --deployment           The name the user will reference locally to link to
                              a deployment                              [string]
      --ed25519-private-key  ED25519 private key for the Hedera account [string]
      --generate-ecdsa-key   Generate ECDSA private key for the Hedera account
                                                                       [boolean]
      --set-alias            Sets the alias for the Hedera account when it is cr
                             eated, requires --ecdsa-private-key       [boolean]
  -c, --cluster-ref          The cluster reference that will be used for referen
                             cing the Kubernetes cluster and stored in the local
                              and remote configuration for the deployment.  For
                             commands that take multiple clusters they can be se
                             parated by commas.                         [string]
  -h, --help                 Show help                                 [boolean]
  -v, --version              Show version number                       [boolean]

For more information see: Solo CLI Commands

4 - Solo CLI Commands

This document provides a comprehensive reference for the Solo CLI commands, including their options and usage.

Solo Command Reference

Root Help Output

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts --help

Select a command
Usage:
  solo <command> [options]

Commands:
  init         Initialize local environment
  block        Block Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
  cluster-ref  Manages the relationship between Kubernetes context names and Solo cluster references which are an alias for a kubernetes context.
  consensus    Consensus Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
  deployment   Create, modify, and delete deployment configurations. Deployments are required for most of the other commands.
  explorer     Explorer Node operations for creating, modifying, and destroying resources.These commands require the presence of an existing deployment.
  keys         Consensus key generation operations
  ledger       System, Account, and Crypto ledger-based management operations. These commands require an operational set of consensus nodes and may require an operational mirror node.
  mirror       Mirror Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
  relay        RPC Relay Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.
  quick-start  Quick start commands for new and returning users who need a preset environment type. These commands use reasonable defaults to provide a single command out of box experience.

Options:

     --dev                 Enable developer mode           [boolean] [default: false]
     --force-port-forward  Force port forward to access    [boolean] [default: true] 
                           the network services                                      
-v,  --version             Show version number             [boolean]

init

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts init --help

 init

Initialize local environment

Options:

     --cache-dir           Local cache directory           [string] [default: "/home/runner/.solo/cache"]
     --dev                 Enable developer mode           [boolean] [default: false]                    
     --force-port-forward  Force port forward to access    [boolean] [default: true]                     
                           the network services                                                          
-q,  --quiet-mode          Quiet mode, do not prompt for   [boolean] [default: false]                    
                           confirmation                                                                  
-u,  --user                Optional user name used for     [string]                                      
                           local configuration. Only                                                     
                           accepts letters and numbers.                                                  
                           Defaults to the username                                                      
                           provided by the OS                                                            
-v,  --version             Show version number             [boolean]

block

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts block --help

 block

Block Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.

Commands:
  block node   Create, manage, or destroy block node instances. Operates on a single block node instance at a time.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

block node

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts block node --help

 block node

Create, manage, or destroy block node instances. Operates on a single block node instance at a time.

Commands:
  block node add       Creates and configures a new block node instance for the specified deployment using the specified Kubernetes cluster. The cluster must be accessible and attached to the specified deployment.
  block node destroy   Destroys a single block node instance in the specified deployment. Requires access to all Kubernetes clusters attached to the deployment.
  block node upgrade   Upgrades a single block node instance in the specified deployment. Requires access to all Kubernetes clusters attached to the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

block node add

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts block node add --help

 block node add

Creates and configures a new block node instance for the specified deployment using the specified Kubernetes cluster. The cluster must be accessible and attached to the specified deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --chart-version       Block nodes chart version  [string] [default: "v0.14.0"]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
      --domain-name         Custom domain name  [string]
      --enable-ingress      enable ingress on the component/pod  [boolean] [default: false]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -f, --values-file         Comma separated chart values file  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --image-tag           The Docker image tag to override what is in the Helm Chart  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

block node destroy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts block node destroy --help

 block node destroy

Destroys a single block node instance in the specified deployment. Requires access to all Kubernetes clusters attached to the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

block node upgrade

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts block node upgrade --help

 block node upgrade

Upgrades a single block node instance in the specified deployment. Requires access to all Kubernetes clusters attached to the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -f, --values-file         Comma separated chart values file  [string]
      --upgrade-version     Version to be used for the upgrade  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

cluster-ref

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts cluster-ref --help

 cluster-ref

Manages the relationship between Kubernetes context names and Solo cluster references which are an alias for a kubernetes context.

Commands:
  cluster-ref config   List, create, manage, and remove associations between Kubernetes contexts and Solo cluster references.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

cluster-ref config

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts cluster-ref config --help

 cluster-ref config

List, create, manage, and remove associations between Kubernetes contexts and Solo cluster references.

Commands:
  cluster-ref config connect      Creates a new internal Solo cluster name to a Kubernetes context or maps a Kubernetes context to an existing internal Solo cluster reference
  cluster-ref config disconnect   Removes the Kubernetes context associated with an internal Solo cluster reference.
  cluster-ref config list         Lists the configured Kubernetes context to Solo cluster reference mappings.
  cluster-ref config info         Displays the status information and attached deployments for a given Solo cluster reference mapping.
  cluster-ref config setup        Setup cluster with shared components
  cluster-ref config reset        Uninstall shared components from cluster

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

cluster-ref config connect

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts cluster-ref config connect --help

 cluster-ref config connect

Creates a new internal Solo cluster name to a Kubernetes context or maps a Kubernetes context to an existing internal Solo cluster reference

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --context             The Kubernetes context name to be used  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

cluster-ref config disconnect

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts cluster-ref config disconnect --help

 cluster-ref config disconnect

Removes the Kubernetes context associated with an internal Solo cluster reference.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

cluster-ref config list

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts cluster-ref config list --help

 cluster-ref config list

Lists the configured Kubernetes context to Solo cluster reference mappings.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

cluster-ref config info

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts cluster-ref config info --help

 cluster-ref config info

Displays the status information and attached deployments for a given Solo cluster reference mapping.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
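
For a quick look at what is configured, a sketch that lists all mappings and then inspects one of them (solo-cluster is a placeholder):

# list all Kubernetes context to cluster reference mappings
solo cluster-ref config list

# show status and attached deployments for one mapping (placeholder name)
solo cluster-ref config info --cluster-ref solo-cluster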

cluster-ref config setup

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts cluster-ref config setup --help

 cluster-ref config setup

Sets up the cluster with shared components

Options:
      --dev                      Enable developer mode  [boolean] [default: false]
      --force-port-forward       Force port forward to access the network services  [boolean] [default: true]
      --chart-dir                Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -c, --cluster-ref              The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -s, --cluster-setup-namespace  Cluster Setup Namespace  [string] [default: "solo-setup"]
      --minio                    Deploy minio operator  [boolean] [default: true]
      --prometheus-stack         Deploy prometheus stack  [boolean] [default: false]
  -q, --quiet-mode               Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --solo-chart-version       Solo testing chart version  [string] [default: "0.56.0"]
  -h, --help                     Show help  [boolean]
  -v, --version                  Show version number  [boolean]
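
A hedged example of installing the shared components into the default solo-setup namespace; the cluster reference name is a placeholder, and the namespace simply mirrors the default shown above:

# install shared cluster components (e.g. the MinIO operator) into the solo-setup namespace
solo cluster-ref config setup --cluster-ref solo-cluster --cluster-setup-namespace solo-setup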

cluster-ref config reset

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts cluster-ref config reset --help

 cluster-ref config reset

Uninstalls shared components from the cluster

Options:
      --dev                      Enable developer mode  [boolean] [default: false]
      --force-port-forward       Force port forward to access the network services  [boolean] [default: true]
  -c, --cluster-ref              The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -s, --cluster-setup-namespace  Cluster Setup Namespace  [string] [default: "solo-setup"]
      --force                    Force actions even if those can be skipped  [boolean] [default: false]
  -q, --quiet-mode               Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                     Show help  [boolean]
  -v, --version                  Show version number  [boolean]
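
And the matching teardown, as a sketch with the same placeholder cluster reference:

# uninstall the shared components previously installed by setup
solo cluster-ref config reset --cluster-ref solo-cluster --force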

consensus

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus --help

 consensus

Consensus Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.

Commands:
  consensus network            Ledger/network wide consensus operations such as freeze, upgrade, and deploy. Operates on the entire ledger and all consensus node instances.
  consensus node               List, create, manage, or destroy consensus node instances. Operates on a single consensus node instance at a time.
  consensus state              List, download, and upload consensus node state backups to/from individual consensus node instances.
  consensus diagnostics        Capture diagnostic information such as logs, signed states, and ledger/network/node configurations.
  consensus dev-node-add       Dev operations for adding consensus nodes.
  consensus dev-node-update    Dev operations for updating consensus nodes
  consensus dev-node-upgrade   Dev operations for upgrading consensus nodes
  consensus dev-node-delete    Dev operations for deleting consensus nodes
  consensus dev-freeze         Dev operations for freezing consensus nodes

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus network

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus network --help

 consensus network

Ledger/network wide consensus operations such as freeze, upgrade, and deploy. Operates on the entire ledger and all consensus node instances.

Commands:
  consensus network deploy    Installs and configures all consensus nodes for the deployment.
  consensus network destroy   Removes all consensus network components from the deployment.
  consensus network freeze    Initiates a network freeze for scheduled maintenance or upgrades
  consensus network upgrade   Upgrades the software version running on all consensus nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus network deploy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus network deploy --help

 consensus network deploy

Installs and configures all consensus nodes for the deployment.

Options:
      --dev                        Enable developer mode  [boolean] [default: false]
      --force-port-forward         Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment                 The name the user will reference locally to link to a deployment  [string]
      --api-permission-properties  api-permission.properties file for node  [string] [default: "templates/api-permission.properties"]
      --app                        Testing app name  [string] [default: "HederaNode.jar"]
      --application-env            The application.env file for the node; provides environment variables to the solo-container when the Hedera platform is started  [string] [default: "templates/application.env"]
      --application-properties     application.properties file for node  [string] [default: "templates/application.properties"]
      --bootstrap-properties       bootstrap.properties file for node  [string] [default: "templates/bootstrap.properties"]
      --genesis-throttles-file     throttles.json file used during network genesis  [string]
      --cache-dir                  Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -l, --ledger-id                  Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --chart-dir                  Local chart directory path (e.g. ~/solo-charts/charts)  [string]
      --prometheus-svc-monitor     Enable prometheus service monitor for the network nodes  [boolean] [default: false]
      --solo-chart-version         Solo testing chart version  [string] [default: "0.56.0"]
      --debug-node-alias           Enable default jvm debug port (5005) for the given node id  [string]
      --load-balancer              Enable load balancer for network node proxies  [boolean] [default: false]
      --log4j2-xml                 log4j2.xml file for node  [string] [default: "templates/log4j2.xml"]
      --pvcs                       Enable persistent volume claims to store data outside the pod, required for consensus node add  [boolean] [default: false]
      --profile-file               Resource profile definition (e.g. custom-spec.yaml)  [string] [default: "profiles/custom-spec.yaml"]
      --profile                    Resource profile (local | tiny | small | medium | large)  [string] [default: "local"]
  -q, --quiet-mode                 Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -t, --release-tag                Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --settings-txt               settings.txt file for node  [string] [default: "templates/settings.txt"]
  -f, --values-file                Comma separated chart values file paths for each cluster (e.g. values.yaml,cluster-1=./a/b/values1.yaml,cluster-2=./a/b/values2.yaml)  [string]
  -i, --node-aliases               Comma separated node aliases (empty means all nodes)  [string]
      --grpc-tls-cert              TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)  [string]
      --grpc-web-tls-cert          TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)  [string]
      --grpc-tls-key               TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)  [string]
      --grpc-web-tls-key           TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)  [string]
      --haproxy-ips                IP mapping where key = value is node alias and static ip for haproxy, (e.g.: --haproxy-ips node1=127.0.0.1,node2=127.0.0.1)  [string]
      --envoy-ips                  IP mapping where key = value is node alias and static ip for envoy proxy, (e.g.: --envoy-ips node1=127.0.0.1,node2=127.0.0.1)  [string]
      --storage-type               storage type for saving stream files, available options are minio_only, aws_only, gcs_only, aws_and_gcs  [default: "minio_only"]
      --gcs-write-access-key       gcs storage access key for write access  [string]
      --gcs-write-secrets          gcs storage secret key for write access  [string]
      --gcs-endpoint               gcs storage endpoint URL  [string]
      --gcs-bucket                 name of gcs storage bucket  [string]
      --gcs-bucket-prefix          path prefix of google storage bucket  [string]
      --aws-write-access-key       aws storage access key for write access  [string]
      --aws-write-secrets          aws storage secret key for write access  [string]
      --aws-endpoint               aws storage endpoint URL  [string]
      --aws-bucket                 name of aws storage bucket  [string]
      --aws-bucket-region          name of aws bucket region  [string]
      --aws-bucket-prefix          path prefix of aws storage bucket  [string]
      --backup-bucket              name of bucket for backing up state files  [string]
      --backup-write-access-key    backup storage access key for write access  [string]
      --backup-write-secrets       backup storage secret key for write access  [string]
      --backup-endpoint            backup storage endpoint URL  [string]
      --backup-region              backup storage region  [string] [default: "us-central1"]
      --backup-provider            backup storage service provider, GCS or AWS  [string] [default: "GCS"]
      --domain-names               Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
  -h, --help                       Show help  [boolean]
  -v, --version                    Show version number  [boolean]
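
Most of these flags have sensible defaults, so a typical invocation can stay short. The sketch below assumes a deployment named solo-deployment has already been created and uses two placeholder node aliases:

# deploy consensus nodes node1 and node2 for an existing deployment (names are placeholders)
solo consensus network deploy --deployment solo-deployment --node-aliases node1,node2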

consensus network destroy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus network destroy --help

 consensus network destroy

Removes all consensus network components from the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --delete-pvcs         Delete the persistent volume claims. If both --delete-pvcs and --delete-secrets are set to true, the namespace will be deleted.  [boolean] [default: false]
      --delete-secrets      Delete the network secrets. If both --delete-pvcs and --delete-secrets are set to true, the namespace will be deleted.  [boolean] [default: false]
      --enable-timeout      enable time out for running a command  [boolean] [default: false]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
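
A sketch of tearing the network back down, deleting PVCs and secrets so the namespace is removed as well (the deployment name is a placeholder):

# destroy the consensus network and clean up persistent volume claims and secrets
solo consensus network destroy --deployment solo-deployment --delete-pvcs --delete-secrets --force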

consensus network freeze

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus network freeze --help

 consensus network freeze

Initiates a network freeze for scheduled maintenance or upgrades

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
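
For example, a minimal freeze ahead of maintenance, using the placeholder deployment name from the earlier sketches:

# freeze the network before a scheduled upgrade
solo consensus network freeze --deployment solo-deployment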

consensus network upgrade

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus network upgrade --help

 consensus network upgrade

Upgrades the software version running on all consensus nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --upgrade-version     Version to be used for the upgrade  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --upgrade-zip-file    A zipped file used for network upgrade  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
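
A hedged upgrade sketch; the deployment name and target version are placeholders, and --upgrade-version takes a release tag in the same form as the v0.63.9 default shown above:

# upgrade all consensus nodes to a newer platform version (values are placeholders)
solo consensus network upgrade --deployment solo-deployment --upgrade-version v0.64.0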

consensus node

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node --help

 consensus node

List, create, manage, or destroy consensus node instances. Operates on a single consensus node instance at a time.

Commands:
  consensus node setup     Set up a node with a specific version of Hedera platform
  consensus node start     Start a node
  consensus node stop      Stop a node
  consensus node restart   Restart all nodes of the network
  consensus node refresh   Reset and restart a node
  consensus node add       Adds a node with a specific version of Hedera platform
  consensus node update    Update a node with a specific version of Hedera platform
  consensus node destroy   Delete a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus node setup

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node setup --help

 consensus node setup

Set up a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --app-config          json config file of testing app  [string]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --admin-public-keys   Comma separated list of DER encoded ED25519 public keys and must match the order of the node aliases  [string]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
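
For example, staging the platform software on specific nodes before starting them; the deployment name and node aliases are placeholders, and the release tag is the default listed above:

# set up node1 and node2 with the default platform release
solo consensus node setup --deployment solo-deployment --node-aliases node1,node2 --release-tag v0.63.9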

consensus node start

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node start --help

 consensus node start

Start a node

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --state-file          A zipped state file to be used for the network  [string]
      --stake-amounts       The amount to be staked, in the same order as the node aliases, with multiple node staked values comma separated  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
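
A minimal start sketch using the same placeholder deployment name:

# start all nodes in the deployment (omit --node-aliases to target every node)
solo consensus node start --deployment solo-deployment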

consensus node stop

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node stop --help

 consensus node stop

Stop a node

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus node restart

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node restart --help

 consensus node restart

Restart all nodes of the network

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus node refresh

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node refresh --help

 consensus node refresh

Reset and restart a node

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --local-build-path    path of hedera local repo  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus node add

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node add --help

 consensus node add

Adds a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --pvcs                Enable persistent volume claims to store data outside the pod, required for consensus node add  [boolean] [default: false]
      --grpc-tls-cert       TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)  [string]
      --grpc-web-tls-cert   TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)  [string]
      --grpc-tls-key        TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)  [string]
      --grpc-web-tls-key    TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)  [string]
      --gossip-endpoints    Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external)  [string]
      --grpc-endpoints      Comma separated gRPC endpoints of the node (at most 8)  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --gossip-keys         Generate gossip keys for nodes  [boolean] [default: false]
      --tls-keys            Generate gRPC TLS keys for nodes  [boolean] [default: false]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --admin-key           Admin key  [string] [default: "302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137"]
      --haproxy-ips         IP mapping where key = value is node alias and static ip for haproxy, (e.g.: --haproxy-ips node1=127.0.0.1,node2=127.0.0.1)  [string]
      --envoy-ips           IP mapping where key = value is node alias and static ip for envoy proxy, (e.g.: --envoy-ips node1=127.0.0.1,node2=127.0.0.1)  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
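
Adding a node touches many optional flags; a hedged sketch of a common shape, generating gossip and gRPC TLS keys and using placeholder deployment and cluster reference names, might look like this:

# add a new consensus node, generating gossip and gRPC TLS keys for it
# (--pvcs is noted above as required for consensus node add)
solo consensus node add --deployment solo-deployment --cluster-ref solo-cluster \
  --gossip-keys --tls-keys --pvcs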

consensus node update

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node update --help

 consensus node update

Update a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --local-build-path    path of hedera local repo  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus node destroy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus node destroy --help

 consensus node destroy

Delete a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --node-alias          Node alias (e.g. node99)  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
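
And the reverse operation, as a sketch with a placeholder node alias:

# remove a single node from the deployment (alias is a placeholder)
solo consensus node destroy --deployment solo-deployment --node-alias node2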

consensus state

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus state --help

 consensus state

List, download, and upload consensus node state backups to/from individual consensus node instances.

Commands:
  consensus state download   Downloads a signed state from consensus node/nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus state download

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus state download --help

 consensus state download

Downloads a signed state from consensus node/nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
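
For example, pulling a signed state off a single node for inspection; all names are placeholders:

# download the signed state from node1 only
solo consensus state download --deployment solo-deployment --node-aliases node1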

consensus diagnostics

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus diagnostics --help

 consensus diagnostics

Capture diagnostic information such as logs, signed states, and ledger/network/node configurations.

Commands:
  consensus diagnostics config   Collects configuration files from consensus nodes.
  consensus diagnostics all      Captures logs, configs, and diagnostic artifacts from all consensus nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus diagnostics config

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus diagnostics config --help

 consensus diagnostics config

Collects configuration files from consensus nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus diagnostics all

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus diagnostics all --help

 consensus diagnostics all

Captures logs, configs, and diagnostic artifacts from all consensus nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
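
A sketch of capturing everything when troubleshooting, again using the placeholder deployment name:

# capture logs, configs, and diagnostic artifacts from every consensus node
solo consensus diagnostics all --deployment solo-deployment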

consensus dev-node-add

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-add --help

 consensus dev-node-add

Dev operations for adding consensus nodes.

Commands:
  consensus dev-node-add prepare               Prepares the addition of a node with a specific version of Hedera platform
  consensus dev-node-add submit-transactions   Submits NodeCreateTransaction and Upgrade transactions to the network nodes
  consensus dev-node-add execute               Executes the addition of a previously prepared node

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-add prepare

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-add prepare --help

 consensus dev-node-add prepare

Prepares the addition of a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --output-dir          Path to the directory where the command context will be saved to  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --pvcs                Enable persistent volume claims to store data outside the pod, required for consensus node add  [boolean] [default: false]
      --grpc-tls-cert       TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)  [string]
      --grpc-web-tls-cert   TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)  [string]
      --grpc-tls-key        TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)  [string]
      --grpc-web-tls-key    TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)  [string]
      --gossip-endpoints    Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external)  [string]
      --grpc-endpoints      Comma separated gRPC endpoints of the node (at most 8)  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --gossip-keys         Generate gossip keys for nodes  [boolean] [default: false]
      --tls-keys            Generate gRPC TLS keys for nodes  [boolean] [default: false]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --admin-key           Admin key  [string] [default: "302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-add submit-transactions

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-add submit-transactions --help

 consensus dev-node-add submit-transactions

Submits NodeCreateTransaction and Upgrade transactions to the network nodes

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --input-dir           Path to the directory where the command context will be loaded from  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --pvcs                Enable persistent volume claims to store data outside the pod, required for consensus node add  [boolean] [default: false]
      --grpc-tls-cert       TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)  [string]
      --grpc-web-tls-cert   TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)  [string]
      --grpc-tls-key        TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)  [string]
      --grpc-web-tls-key    TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)  [string]
      --gossip-endpoints    Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external)  [string]
      --grpc-endpoints      Comma separated gRPC endpoints of the node (at most 8)  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --gossip-keys         Generate gossip keys for nodes  [boolean] [default: false]
      --tls-keys            Generate gRPC TLS keys for nodes  [boolean] [default: false]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-add execute

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-add execute --help

 consensus dev-node-add execute

Executes the addition of a previously prepared node

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --input-dir           Path to the directory where the command context will be loaded from  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --pvcs                Enable persistent volume claims to store data outside the pod, required for consensus node add  [boolean] [default: false]
      --grpc-tls-cert       TLS Certificate path for the gRPC (e.g. "node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)  [string]
      --grpc-web-tls-cert   TLS Certificate path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)  [string]
      --grpc-tls-key        TLS Certificate key path for the gRPC (e.g. "node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)  [string]
      --grpc-web-tls-key    TLS Certificate key path for gRPC Web (e.g. "node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)  [string]
      --gossip-endpoints    Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external)  [string]
      --grpc-endpoints      Comma separated gRPC endpoints of the node (at most 8)  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --gossip-keys         Generate gossip keys for nodes  [boolean] [default: false]
      --tls-keys            Generate gRPC TLS keys for nodes  [boolean] [default: false]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --haproxy-ips         IP mapping where key = value is node alias and static ip for haproxy, (e.g.: --haproxy-ips node1=127.0.0.1,node2=127.0.0.1)  [string]
      --envoy-ips           IP mapping where key = value is node alias and static ip for envoy proxy, (e.g.: --envoy-ips node1=127.0.0.1,node2=127.0.0.1)  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
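
The three dev-node-add subcommands are meant to run in sequence, handing the saved command context from prepare to the later steps. A hedged end-to-end sketch, where the deployment, cluster reference, and context directory are all placeholders:

# 1. prepare the new node and save the command context
solo consensus dev-node-add prepare --deployment solo-deployment --cluster-ref solo-cluster \
  --gossip-keys --tls-keys --pvcs --output-dir ./node-add-context

# 2. submit the NodeCreateTransaction and upgrade transactions
solo consensus dev-node-add submit-transactions --deployment solo-deployment \
  --cluster-ref solo-cluster --input-dir ./node-add-context

# 3. execute the addition of the prepared node
solo consensus dev-node-add execute --deployment solo-deployment \
  --cluster-ref solo-cluster --input-dir ./node-add-context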

consensus dev-node-update

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-update --help

 consensus dev-node-update

Dev operations for updating consensus nodes

Commands:
  consensus dev-node-update prepare               Prepare the deployment to update a node with a specific version of Hedera platform
  consensus dev-node-update submit-transactions   Submit transactions for updating a node with a specific version of Hedera platform
  consensus dev-node-update execute               Executes the updating of a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-update prepare

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-update prepare --help

 consensus dev-node-update prepare

Prepare the deployment to update a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --output-dir          Path to the directory where the command context will be saved to  [string]
      --node-alias          Node alias (e.g. node99)  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --gossip-endpoints    Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external)  [string]
      --grpc-endpoints      Comma separated gRPC endpoints of the node (at most 8)  [string]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
      --new-admin-key       new admin key for the Hedera account  [string]
      --new-account-number  new account number for node update transaction  [string]
      --tls-public-key      path and file name of the public TLS key to be used  [string]
      --gossip-private-key  path and file name of the private key for signing gossip in PEM key format to be used  [string]
      --gossip-public-key   path and file name of the public key for signing gossip in PEM key format to be used  [string]
      --tls-private-key     path and file name of the private TLS key to be used  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-update submit-transactions

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-update submit-transactions --help

 consensus dev-node-update submit-transactions

Submit transactions for updating a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --input-dir           Path to the directory where the command context will be loaded from  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --gossip-endpoints    Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external)  [string]
      --grpc-endpoints      Comma separated gRPC endpoints of the node (at most 8)  [string]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-update execute

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-update execute --help

 consensus dev-node-update execute

Executes the updating of a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --input-dir           Path to the directory where the command context will be loaded from  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --gossip-endpoints    Comma separated gossip endpoints of the node (e.g. first one is internal, second one is external)  [string]
      --grpc-endpoints      Comma separated gRPC endpoints of the node (at most 8)  [string]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name), with multiple nodes comma separated  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
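
dev-node-update follows the same prepare, submit-transactions, and execute pattern. A sketch that updates a single node, with placeholder names and paths:

# 1. prepare the update for node1 and save the command context
solo consensus dev-node-update prepare --deployment solo-deployment --node-alias node1 \
  --output-dir ./node-update-context

# 2. submit the node update transactions
solo consensus dev-node-update submit-transactions --deployment solo-deployment \
  --input-dir ./node-update-context

# 3. execute the prepared update
solo consensus dev-node-update execute --deployment solo-deployment \
  --input-dir ./node-update-context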

consensus dev-node-upgrade

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-upgrade --help

 consensus dev-node-upgrade

Dev operations for upgrading consensus nodes

Commands:
  consensus dev-node-upgrade prepare               Prepares for upgrading the network
  consensus dev-node-upgrade submit-transactions   Submits transactions for upgrading the network
  consensus dev-node-upgrade execute               Executes the upgrade of the network

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-upgrade prepare

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-upgrade prepare --help

 consensus dev-node-upgrade prepare

Prepares for upgrading the network

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-upgrade submit-transactions

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-upgrade submit-transactions --help

 consensus dev-node-upgrade submit-transactions

Submits transactions for upgrading the network

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --input-dir           Path to the directory where the command context will be loaded from  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --upgrade-zip-file    A zipped file used for network upgrade  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-upgrade execute

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-upgrade execute --help

 consensus dev-node-upgrade execute

Executes the upgrade of the network

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --input-dir           Path to the directory where the command context will be loaded from  [string]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --upgrade-zip-file    A zipped file used for network upgrade  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-delete

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-delete --help

 consensus dev-node-delete

Dev operations for delete consensus nodes

Commands:
  consensus dev-node-delete prepare               Prepares the deletion of a node with a specific version of Hedera platform
  consensus dev-node-delete submit-transactions   Submits transactions to the network nodes for deleting a node
  consensus dev-node-delete execute               Executes the deletion of a previously prepared node

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-delete prepare

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-delete prepare --help

 consensus dev-node-delete prepare

Prepares the deletion of a node with a specific version of Hedera platform

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --node-alias          Node alias (e.g. node99)  [string]
      --output-dir          Path to the directory where the command context will be saved to  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name); multiple nodes are comma separated  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-delete submit-transactions

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-delete submit-transactions --help

 consensus dev-node-delete submit-transactions

Submits transactions to the network nodes for deleting a node

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --node-alias          Node alias (e.g. node99)  [string]
      --input-dir           Path to the directory where the command context will be loaded from  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name); multiple nodes are comma separated  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-node-delete execute

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-node-delete execute --help

 consensus dev-node-delete execute

Executes the deletion of a previously prepared node

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --node-alias          Node alias (e.g. node99)  [string]
      --input-dir           Path to the directory where the command context will be loaded from  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --app                 Testing app name  [string] [default: "HederaNode.jar"]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --debug-node-alias    Enable default jvm debug port (5005) for the given node id  [string]
      --endpoint-type       Endpoint type (IP or FQDN)  [string] [default: "FQDN"]
      --solo-chart-version  Solo testing chart version  [string] [default: "0.56.0"]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
      --local-build-path    path of hedera local repo  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
      --domain-names        Custom domain names mapping for consensus nodes (e.g. node0=domain.name, where the key is the node alias and the value is the domain name); multiple nodes are comma separated  [string]
  -t, --release-tag         Release tag to be used (e.g. v0.63.9)  [string] [default: "v0.63.9"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-freeze

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-freeze --help

 consensus dev-freeze

Dev operations for freezing consensus nodes

Commands:
  consensus dev-freeze prepare-upgrade   Prepare the network for a Freeze Upgrade operation
  consensus dev-freeze freeze-upgrade    Performs a Freeze Upgrade operation on the network after it has been prepared with prepare-upgrade

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-freeze prepare-upgrade

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-freeze prepare-upgrade --help

 consensus dev-freeze prepare-upgrade

Prepare the network for a Freeze Upgrade operation

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --skip-node-alias     The node alias to skip, because of a NodeUpdateTransaction or it is down (e.g. node99)  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

consensus dev-freeze freeze-upgrade

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts consensus dev-freeze freeze-upgrade --help

 consensus dev-freeze freeze-upgrade

Performs a Freeze Upgrade operation on the network after it has been prepared with prepare-upgrade

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --skip-node-alias     The node alias to skip, because of a NodeUpdateTransaction or it is down (e.g. node99)  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

deployment

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts deployment --help

 deployment

Create, modify, and delete deployment configurations. Deployments are required for most of the other commands.

Commands:
  deployment cluster   View and manage Solo cluster references used by a deployment.
  deployment config    List, view, create, delete, and import deployments. These commands affect the local configuration only.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

deployment cluster

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts deployment cluster --help

 deployment cluster

View and manage Solo cluster references used by a deployment.

Commands:
  deployment cluster attach   Attaches a cluster reference to a deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

deployment cluster attach

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts deployment cluster attach --help

 deployment cluster attach

Attaches a cluster reference to a deployment.

Options:
      --dev                         Enable developer mode  [boolean] [default: false]
      --force-port-forward          Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment                  The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref                 The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -q, --quiet-mode                  Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --enable-cert-manager         Pass the flag to enable cert manager  [boolean] [default: false]
      --num-consensus-nodes         Used to specify desired number of consensus nodes for pre-genesis deployments  [number]
      --dns-base-domain             Base domain for the DNS is the suffix used to construct the fully qualified domain name (FQDN)  [string] [default: "cluster.local"]
      --dns-consensus-node-pattern  Pattern to construct the prefix for the fully qualified domain name (FQDN) for the consensus node, the suffix is provided by the --dns-base-domain option (ex. network-{nodeAlias}-svc.{namespace}.svc)  [string] [default: "network-{nodeAlias}-svc.{namespace}.svc"]
  -h, --help                        Show help  [boolean]
  -v, --version                     Show version number  [boolean]
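
Example (a minimal sketch, assuming a deployment named solo-deployment and a cluster reference named solo-cluster, as used elsewhere in this guide):

solo deployment cluster attach --deployment solo-deployment --cluster-ref solo-cluster --num-consensus-nodes 2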

deployment config

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts deployment config --help

 deployment config

List, view, create, delete, and import deployments. These commands affect the local configuration only.

Commands:
  deployment config list     Lists all local deployment configurations.
  deployment config create   Creates a new local deployment configuration.
  deployment config delete   Removes a local deployment configuration.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

deployment config list

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts deployment config list --help

 deployment config list

Lists all local deployment configurations.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

deployment config create

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts deployment config create --help

 deployment config create

Creates a new local deployment configuration.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -n, --namespace           Namespace  [string]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --realm               Realm number. Requires network-node > v61.0 for non-zero values  [number] [default: 0]
      --shard               Shard number. Requires network-node > v61.0 for non-zero values  [number] [default: 0]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
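
Example (a minimal sketch, assuming the solo-e2e namespace and solo-deployment name used elsewhere in this guide):

solo deployment config create --namespace solo-e2e --deployment solo-deployment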

deployment config delete

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts deployment config delete --help

 deployment config delete

Removes a local deployment configuration.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

explorer

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts explorer --help

 explorer

Explorer Node operations for creating, modifying, and destroying resources.These commands require the presence of an existing deployment.

Commands:
  explorer node   List, create, manage, or destroy explorer node instances. Operates on a single explorer node instance at a time.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

explorer node

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts explorer node --help

 explorer node

List, create, manage, or destroy explorer node instances. Operates on a single explorer node instance at a time.

Commands:
  explorer node add       Adds and configures a new node instance.
  explorer node destroy   Deletes the specified node from the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

explorer node add

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts explorer node add --help

 explorer node add

Adds and configures a new node instance.

Options:
      --dev                            Enable developer mode  [boolean] [default: false]
      --force-port-forward             Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment                     The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref                    The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --cache-dir                      Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --chart-dir                      Local chart directory path (e.g. ~/solo-charts/charts  [string]
      --enable-ingress                 enable ingress on the component/pod  [boolean] [default: false]
      --ingress-controller-value-file  The value file to use for ingress controller, defaults to ""  [string]
      --enable-explorer-tls            Enable Explorer TLS, defaults to false, requires certManager and certManagerCrds, which can be deployed through solo-cluster-setup chart or standalone  [boolean] [default: false]
      --explorer-tls-host-name         The host name to use for the Explorer TLS, defaults to "explorer.solo.local"  [string] [default: "explorer.solo.local"]
      --explorer-static-ip             The static IP address to use for the Explorer load balancer, defaults to ""  [string]
      --explorer-version               Explorer chart version  [string] [default: "25.1.1"]
      --mirror-namespace               Namespace to use for the Mirror Node deployment, a new one will be created if it does not exist  [string]
  -n, --namespace                      Namespace  [string]
      --profile-file                   Resource profile definition (e.g. custom-spec.yaml)  [string] [default: "profiles/custom-spec.yaml"]
      --profile                        Resource profile (local | tiny | small | medium | large)  [string] [default: "local"]
  -q, --quiet-mode                     Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --solo-chart-version             Solo testing chart version  [string] [default: "0.56.0"]
      --tls-cluster-issuer-type        The TLS cluster issuer type to use for hedera explorer, defaults to "self-signed", the available options are: "acme-staging", "acme-prod", or "self-signed"  [string] [default: "self-signed"]
  -f, --values-file                    Comma separated chart values file  [string]
  -s, --cluster-setup-namespace        Cluster Setup Namespace  [string] [default: "solo-setup"]
      --domain-name                    Custom domain name  [string]
  -h, --help                           Show help  [boolean]
  -v, --version                        Show version number  [boolean]
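
Example (a minimal sketch, assuming an existing deployment named solo-deployment and a cluster reference named solo-cluster):

solo explorer node add --deployment solo-deployment --cluster-ref solo-cluster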

explorer node destroy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts explorer node destroy --help

 explorer node destroy

Deletes the specified node from the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

keys

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts keys --help

 keys

Consensus key generation operations

Commands:
  keys consensus   Generate unique cryptographic keys (gossip or grpc TLS keys) for the Consensus Node instances.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

keys consensus

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts keys consensus --help

 keys consensus

Generate unique cryptographic keys (gossip or grpc TLS keys) for the Consensus Node instances.

Commands:
  keys consensus generate   Generates TLS keys required for consensus node communication.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

keys consensus generate

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts keys consensus generate --help

 keys consensus generate

Generates TLS keys required for consensus node communication.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --gossip-keys         Generate gossip keys for nodes  [boolean] [default: false]
      --tls-keys            Generate gRPC TLS keys for nodes  [boolean] [default: false]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
  -n, --namespace           Namespace  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
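
Example (a minimal sketch, assuming a two-node deployment named solo-deployment):

solo keys consensus generate --deployment solo-deployment --gossip-keys --tls-keys -i node1,node2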

ledger

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts ledger --help

 ledger

System, Account, and Crypto ledger-based management operations. These commands require an operational set of consensus nodes and may require an operational mirror node.

Commands:
  ledger system    Perform a full ledger initialization on a new deployment, rekey privileged/system accounts, or setup network staking parameters.
  ledger account   View, list, create, update, delete, and import ledger accounts.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

ledger system

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts ledger system --help

 ledger system

Perform a full ledger initialization on a new deployment, rekey privileged/system accounts, or setup network staking parameters.

Commands:
  ledger system init   Rekeys system accounts and stake consensus nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

ledger system init

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts ledger system init --help

 ledger system init

Rekeys system accounts and stake consensus nodes.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
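
Example (a minimal sketch, assuming a running deployment named solo-deployment):

solo ledger system init --deployment solo-deployment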

ledger account

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts ledger account --help

 ledger account

View, list, create, update, delete, and import ledger accounts.

Commands:
  ledger account update   Updates an existing ledger account.
  ledger account create   Creates a new ledger account.
  ledger account info     Gets the account info including the current amount of HBAR

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

ledger account update

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts ledger account update --help

 ledger account update

Updates an existing ledger account.

Options:
      --dev                  Enable developer mode  [boolean] [default: false]
      --force-port-forward   Force port forward to access the network services  [boolean] [default: true]
      --account-id           The Hedera account id, e.g.: 0.0.1001  [string]
  -d, --deployment           The name the user will reference locally to link to a deployment  [string]
      --hbar-amount          Amount of HBAR to add  [number] [default: 100]
      --ecdsa-private-key    ECDSA private key for the Hedera account  [string]
      --ed25519-private-key  ED25519 private key for the Hedera account  [string]
  -c, --cluster-ref          The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -h, --help                 Show help  [boolean]
  -v, --version              Show version number  [boolean]

ledger account create

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts ledger account create --help

 ledger account create

Creates a new ledger account.

Options:
      --dev                  Enable developer mode  [boolean] [default: false]
      --force-port-forward   Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment           The name the user will reference locally to link to a deployment  [string]
      --hbar-amount          Amount of HBAR to add  [number] [default: 100]
      --create-amount        Amount of new account to create  [number] [default: 1]
      --ecdsa-private-key    ECDSA private key for the Hedera account  [string]
      --private-key          Show private key information  [boolean] [default: false]
      --ed25519-private-key  ED25519 private key for the Hedera account  [string]
      --generate-ecdsa-key   Generate ECDSA private key for the Hedera account  [boolean] [default: false]
      --set-alias            Sets the alias for the Hedera account when it is created, requires --ecdsa-private-key  [boolean] [default: false]
  -c, --cluster-ref          The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -h, --help                 Show help  [boolean]
  -v, --version              Show version number  [boolean]
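
Example (a minimal sketch, assuming a running deployment named solo-deployment):

solo ledger account create --deployment solo-deployment --hbar-amount 100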

ledger account info

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts ledger account info --help

 ledger account info

Gets the account info including the current amount of HBAR

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
      --account-id          The Hedera account id, e.g.: 0.0.1001  [string]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --private-key         Show private key information  [boolean] [default: false]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
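
Example (a minimal sketch, assuming account 0.0.1007 exists in a deployment named solo-deployment):

solo ledger account info --account-id 0.0.1007 --deployment solo-deployment --private-key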

mirror

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts mirror --help

 mirror

Mirror Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.

Commands:
  mirror node   List, create, manage, or destroy mirror node instances. Operates on a single mirror node instance at a time.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

mirror node

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts mirror node --help

 mirror node

List, create, manage, or destroy mirror node instances. Operates on a single mirror node instance at a time.

Commands:
  mirror node add       Adds and configures a new node instance.
  mirror node destroy   Deletes the specified node from the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

mirror node add

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts mirror node add --help

 mirror node add

Adds and configures a new node instance.

Options:
      --dev                               Enable developer mode  [boolean] [default: false]
      --force-port-forward                Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment                        The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref                       The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --cache-dir                         Local cache directory  [string] [default: "/home/runner/.solo/cache"]
      --chart-dir                         Local chart directory path (e.g. ~/solo-charts/charts  [string]
      --enable-ingress                    enable ingress on the component/pod  [boolean] [default: false]
      --ingress-controller-value-file     The value file to use for ingress controller, defaults to ""  [string]
      --mirror-static-ip                  static IP address for the mirror node  [string]
      --profile-file                      Resource profile definition (e.g. custom-spec.yaml)  [string] [default: "profiles/custom-spec.yaml"]
      --profile                           Resource profile (local | tiny | small | medium | large)  [string] [default: "local"]
  -q, --quiet-mode                        Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -f, --values-file                       Comma separated chart values file  [string]
      --mirror-node-version               Mirror node chart version  [string] [default: "v0.136.0"]
      --pinger                            Enable Pinger service in the Mirror node monitor  [boolean] [default: false]
      --use-external-database             Set to true if you have an external database to use instead of the database that the Mirror Node Helm chart supplies  [boolean] [default: false]
      --operator-id                       Operator ID  [string]
      --operator-key                      Operator Key  [string]
      --storage-type                      storage type for saving stream files, available options are minio_only, aws_only, gcs_only, aws_and_gcs  [default: "minio_only"]
      --storage-read-access-key           storage read access key for mirror node importer  [string]
      --storage-read-secrets              storage read-secret key for mirror node importer  [string]
      --storage-endpoint                  storage endpoint URL for mirror node importer  [string]
      --storage-bucket                    name of storage bucket for mirror node importer  [string]
      --storage-bucket-prefix             path prefix of storage bucket mirror node importer  [string]
      --storage-bucket-region             region of storage bucket mirror node importer  [string]
      --external-database-host            Use to provide the external database host if the '--use-external-database' is passed  [string]
      --external-database-owner-username  Use to provide the external database owner's username if the '--use-external-database' is passed  [string]
      --external-database-owner-password  Use to provide the external database owner's password if the '--use-external-database' is passed  [string]
      --external-database-read-username   Use to provide the external database readonly user's username if the '--use-external-database' is passed  [string]
      --external-database-read-password   Use to provide the external database readonly user's password if the '--use-external-database' is passed  [string]
      --domain-name                       Custom domain name  [string]
  -h, --help                              Show help  [boolean]
  -v, --version                           Show version number  [boolean]
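
Example (a minimal sketch, assuming a running deployment named solo-deployment and a cluster reference named solo-cluster):

solo mirror node add --deployment solo-deployment --cluster-ref solo-cluster --enable-ingress --pinger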

mirror node destroy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts mirror node destroy --help

 mirror node destroy

Deletes the specified node from the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

relay

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts relay --help

 relay

RPC Relay Node operations for creating, modifying, and destroying resources. These commands require the presence of an existing deployment.

Commands:
  relay node   List, create, manage, or destroy relay node instances. Operates on a single relay node instance at a time.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

relay node

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts relay node --help

 relay node

List, create, manage, or destroy relay node instances. Operates on a single relay node instance at a time.

Commands:
  relay node add       Adds and configures a new node instance.
  relay node destroy   Deletes the specified node from the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

relay node add

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts relay node add --help

 relay node add

Adds and configures a new node instance.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -l, --ledger-id           Ledger ID (a.k.a. Chain ID)  [string] [default: "298"]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
      --operator-id         Operator ID  [string]
      --operator-key        Operator Key  [string]
      --profile-file        Resource profile definition (e.g. custom-spec.yaml)  [string] [default: "profiles/custom-spec.yaml"]
      --profile             Resource profile (local | tiny | small | medium | large)  [string] [default: "local"]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --relay-release       Relay release tag to be used (e.g. v0.48.0)  [string] [default: "0.70.0"]
      --replica-count       Replica count  [number] [default: 1]
  -f, --values-file         Comma separated chart values file  [string]
      --domain-name         Custom domain name  [string]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
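
Example (a minimal sketch, assuming a running deployment named solo-deployment with a consensus node aliased node1):

solo relay node add -i node1 --deployment solo-deployment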

relay node destroy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts relay node destroy --help

 relay node destroy

Deletes the specified node from the deployment.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --chart-dir           Local chart directory path (e.g. ~/solo-charts/charts  [string]
  -i, --node-aliases        Comma separated node aliases (empty means all nodes)  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

quick-start

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts quick-start --help

 quick-start

Quick start commands for new and returning users who need a preset environment type. These commands use reasonable defaults to provide a single command out of box experience.

Commands:
  quick-start single   Creates a uniquely named deployment with a single consensus node, mirror node, block node, relay node, and explorer node.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

quick-start single

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts quick-start single --help

 quick-start single

Creates a uniquely named deployment with a single consensus node, mirror node, block node, relay node, and explorer node.

Commands:
  quick-start single deploy    Deploys all required components for the selected quick start configuration.
  quick-start single destroy   Removes the deployed resources for the selected quick start configuration.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]

quick-start single deploy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts quick-start single deploy --help

 quick-start single deploy

Deploys all required components for the selected quick start configuration.

Options:
      --dev                      Enable developer mode  [boolean] [default: false]
      --force-port-forward       Force port forward to access the network services  [boolean] [default: true]
      --cache-dir                Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -c, --cluster-ref              The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
  -s, --cluster-setup-namespace  Cluster Setup Namespace  [string] [default: "solo-setup"]
      --context                  The Kubernetes context name to be used  [string]
  -d, --deployment               The name the user will reference locally to link to a deployment  [string]
  -n, --namespace                Namespace  [string]
      --num-consensus-nodes      Used to specify desired number of consensus nodes for pre-genesis deployments  [number]
      --predefined-accounts      Create predefined accounts on network creation  [boolean] [default: true]
  -q, --quiet-mode               Quiet mode, do not prompt for confirmation  [boolean] [default: false]
  -h, --help                     Show help  [boolean]
  -v, --version                  Show version number  [boolean]
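
Example (a minimal sketch; the flags above are optional and reasonable defaults are used when they are omitted):

solo quick-start single deploy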

quick-start single destroy

> @hashgraph/solo@0.43.0 solo-test
> tsx --no-deprecation --no-warnings solo.ts quick-start single destroy --help

 quick-start single destroy

Removes the deployed resources for the selected quick start configuration.

Options:
      --dev                 Enable developer mode  [boolean] [default: false]
      --force-port-forward  Force port forward to access the network services  [boolean] [default: true]
      --cache-dir           Local cache directory  [string] [default: "/home/runner/.solo/cache"]
  -c, --cluster-ref         The cluster reference that will be used for referencing the Kubernetes cluster and stored in the local and remote configuration for the deployment.  For commands that take multiple clusters they can be separated by commas.  [string]
      --context             The Kubernetes context name to be used  [string]
  -d, --deployment          The name the user will reference locally to link to a deployment  [string]
  -n, --namespace           Namespace  [string]
  -q, --quiet-mode          Quiet mode, do not prompt for confirmation  [boolean] [default: false]
      --force               Force actions even if those can be skipped  [boolean] [default: false]
  -h, --help                Show help  [boolean]
  -v, --version             Show version number  [boolean]
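
Example (a minimal sketch; pass --deployment if you need to target a specific quick-start deployment):

solo quick-start single destroy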

5 - FAQ

Frequently asked questions about the Solo CLI tool.

How can I set up a Solo network in a single command?

You can run npx @hashgraph/solo@latest quick-start single deploy

More documentation can be found here:

How can I tear down a Solo network in a single command?

You can run npx @hashgraph/solo@latest quick-start single destroy

How can I avoid using genesis keys?

You can run solo ledger system init at any time after solo consensus node start.
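
For example (a minimal sketch, assuming a deployment named solo-deployment whose consensus nodes have already been deployed and set up):

solo consensus node start --deployment solo-deployment
solo ledger system init --deployment solo-deployment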

Where can I find the default account keys?

The default genesis key is 302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137. It is the key for the default operator account 0.0.2 of the consensus network and is defined in the Hiero source code: Link

How do I get the key for an account?

Use the following command to get account balance and private key of the account 0.0.1007:

# get account info of 0.0.1007 and also show the private key
solo ledger account info --account-id 0.0.1007 --deployment solo-deployment  --private-key

The output would be similar to the following:

{
 "accountId": "0.0.1007",
 "privateKey": "302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7",
 "privateKeyRaw": "411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"
 "publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
 "balance": 100
}

How do I handle the error “failed to setup chart repositories”?

If, during the installation of solo-charts, you see an error similar to the following:

failed to setup chart repositories,
repository name (hedera-json-rpc-relay) already exists

You need to remove the old Helm repo manually: first run helm repo list to see the list of Helm repos, then run helm repo remove <repo-name> to remove the stale one. For example:

helm repo list

NAME                 	URL                                                       
haproxy-ingress      	https://haproxy-ingress.github.io/charts                  
haproxytech          	https://haproxytech.github.io/helm-charts                 
metrics-server       	https://kubernetes-sigs.github.io/metrics-server/         
metallb              	https://metallb.github.io/metallb                         
mirror               	https://hashgraph.github.io/hedera-mirror-node/charts     
hedera-json-rpc-relay	https://hashgraph.github.io/hedera-json-rpc-relay/charts

Next run the command to remove the repo:

helm repo remove hedera-json-rpc-relay

6 - Using Solo with Mirror Node

This document describes how to use Solo with Mirror Node.

Using Solo with mirror node

You can deploy a Solo network with a Mirror Node by running the following commands:

export SOLO_CLUSTER_NAME=solo-cluster
export SOLO_NAMESPACE=solo-e2e
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster-setup
export SOLO_DEPLOYMENT=solo-deployment

rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup --cluster-setup-namespace "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 2
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo consensus node setup     --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo consensus node start     --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo mirror node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --enable-ingress --pinger
solo explorer node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME}

The --pinger flag in solo mirror node add starts a pinging service that sends transactions to the network at regular intervals. This is needed because the record file is not imported into the mirror node until the next one is created.

Then you can access the Explorer at http://localhost:8080

Alternatively, you can use the Task tool to deploy a Solo network with a Mirror Node in a single command: link

Next, you can try to create a few accounts with Solo and see the transactions in the Explorer.

solo ledger account create --deployment solo-deployment --hbar-amount 100
solo ledger account create --deployment solo-deployment --hbar-amount 100

Or you can use the Hedera JavaScript SDK examples to create a topic, submit a message, and subscribe to the topic.

If you need to access a mirror node service directly, use the following commands to enable port forwarding, or simply use localhost:8081, which should have all of the mirror node services exposed on that port:

kubectl port-forward svc/mirror-grpc -n "${SOLO_NAMESPACE}" 5600:5600 &
grpcurl -plaintext "${GRPC_IP:-127.0.0.1}:5600" list

kubectl port-forward svc/mirror-rest -n "${SOLO_NAMESPACE}" 5551:80 &
curl -s "http://${REST_IP:-127.0.0.1}:5551/api/v1/transactions?limit=1"

kubectl port-forward service/mirror-restjava -n "${SOLO_NAMESPACE}" 8084:80 &
curl -s "http://${REST_IP:-127.0.0.1}:8084/api/v1/accounts/0.0.2/allowances/nfts"

7 - Using Solo with Hiero JavaScript SDK

This page describes how to use Solo with Hiero JavaScript SDK. It includes instructions for setting up a local Solo network, creating test accounts, and running example scripts.

Using Solo with the Hiero JavaScript SDK

First, follow the Solo repository README to install Solo and Docker Desktop. You also need to install the Taskfile tool by following the instructions here.

Then we start with launching a local Solo network with the following commands:

# launch a local Solo network with mirror node and hedera explorer
cd scripts
task default-with-mirror

Then create a new test account with the following command:

npm run solo-test -- ledger account create --deployment solo-deployment --hbar-amount 100

The output would be similar to the following:

 *** new account created ***
-------------------------------------------------------------------------------
{
 "accountId": "0.0.1007",
 "publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
 "balance": 100
}

Then use the following command to get private key of the account 0.0.1007:

 npm run solo-test -- ledger account info --account-id 0.0.1007 --deployment solo-deployment --private-key

The output would be similar to the following:

{
 "accountId": "0.0.1007",
 "privateKey": "302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7",
 "privateKeyRaw": "411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"
 "publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
 "balance": 100
}

Next, clone the Hiero JavaScript SDK repository https://github.com/hiero-ledger/hiero-sdk-js. At the root of the hiero-sdk-js project, create a file named .env and add the following content:

# Hiero Operator Account ID
export OPERATOR_ID="0.0.1007"

# Hiero Operator Private Key
export OPERATOR_KEY="302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"

# Hiero Network
export HEDERA_NETWORK="local-node"

Make sure to assign the value of accountId to OPERATOR_ID and the value of privateKey to OPERATOR_KEY.
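
The .env file above uses export statements, so if an example script does not read the file on its own you can load the values into your shell first (a minimal sketch; the SDK examples may also load .env directly):

# make the operator credentials available as environment variables
source .env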

Then try the following command to run the create-account example:

node examples/create-account.js 

The output should be similar to the following:

private key = 302e020100300506032b6570042204208a3c1093c4df779c4aa980d20731899e0b509c7a55733beac41857a9dd3f1193
public key = 302a300506032b6570032100c55adafae7e85608ea893d0e2c77e2dae3df90ba8ee7af2f16a023ba2258c143
account id = 0.0.1009

Or try the topic creation example:

node scripts/create-topic.js

The output should be similar to the following:

topic id = 0.0.1008
topic sequence number = 1

You can use Hiero Explorer to check transactions and topics created in the Solo network: http://localhost:8080/localnet/dashboard

Finally, when you are done using Solo, use the following command to tear down the Solo network:

task clean

Retrieving Logs

You can find logs for solo command runs under the directory ~/.solo/logs/.

The file solo.log contains the logs for the solo command. The file hashgraph-sdk.log contains the logs from the Solo client when sending transactions to the network nodes.

8 - Hiero Consensus Node Platform Developer

This page provides information for developers who want to build and run Hiero Consensus Node testing application locally.

Use Solo with a Local Built Hiero Consensus Node Testing Application

First, clone the Hiero Consensus Node repo https://github.com/hiero-ledger/hiero-consensus-node/ and build the code with ./gradlew assemble. If you need to run multiple nodes with different versions or releases, duplicate the repo or build directories into multiple locations, check out the respective version in each, and build the code.

Then you can start the custom-built platform testing application with the following command:

SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment

rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}" 
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3

solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3 
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 

# option 1) if all nodes are running the same version of Hiero app
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data/

# option 2) if each node is running a different version of the Hiero app, provide a separate path for each node
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path node1=../hiero-consensus-node/hedera-node/data/,node2=<path2>,node3=<path3>

solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 

Different nodes can run different versions of the Hiero app, as long as each node (node1, node2, and so on) in the setup command above is given the path to its own local build directory.

If you need to provide customized configuration files for the Hedera application, use the following flags with the consensus network deploy command:

  • --settings-txt - to provide custom settings.txt file
  • --api-permission-properties - to provide custom api-permission.properties file
  • --bootstrap-properties - to provide custom bootstrap.properties file
  • --application-properties - to provide custom application.properties file

For example:

solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --settings-txt <path-to-settings-txt> 

9 - Hiero Consensus Node Execution Developer

Use port-forwarding to access Hiero Consensus Node network services.

Hiero Consensus Node Execution Developer

Once the nodes are up, you can expose various services using k9s (shift-f) or kubectl port-forward and access them locally. Below are the most commonly used services that you may want to expose.

  • where the ‘node name’ for Node ID = 0 is node1 (node${nodeId + 1})
  • Node services: network-<node name>-svc
  • HAProxy: haproxy-<node name>-svc
    # enable port forwarding for haproxy
    # node1 grpc port accessed by localhost:51211
    kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 51211:50211 &
    # node2 grpc port accessed by localhost:52211
    kubectl port-forward svc/haproxy-node2-svc -n "${SOLO_NAMESPACE}" 52211:50211 &
    # node3 grpc port accessed by localhost:53211
    kubectl port-forward svc/haproxy-node3-svc -n "${SOLO_NAMESPACE}" 53211:50211 &
    
  • Envoy Proxy: envoy-proxy-<node name>-svc
    # enable port forwarding for envoy proxy
    kubectl port-forward svc/envoy-proxy-node1-svc -n "${SOLO_NAMESPACE}" 8181:8080 &
    kubectl port-forward svc/envoy-proxy-node2-svc -n "${SOLO_NAMESPACE}" 8281:8080 &
    kubectl port-forward svc/envoy-proxy-node3-svc -n "${SOLO_NAMESPACE}" 8381:8080 &
    
  • Hiero explorer: solo-deployment-hiero-explorer
    # enable port forwarding for hiero explorer, can be accessed at http://localhost:8080/
    # check to see if it is already enabled, port forwarding for explorer should be handled by solo automatically
    # kubectl port-forward svc/hiero-explorer -n "${SOLO_NAMESPACE}" 8080:8080 &
    
  • JSON RPC Relays

You can deploy JSON RPC Relays for one or more nodes as below:

# deploy relay node first
solo relay node add -i node1 --deployment "${SOLO_DEPLOYMENT}"

# enable relay for node1
# check to see if it is already enabled, port forwarding for relay should be handled by solo automatically
# kubectl port-forward svc/relay-node1-hedera-json-rpc-relay -n "${SOLO_NAMESPACE}" 7546:7546 &
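
Once the relay is reachable on localhost:7546, you can sanity-check it with a standard Ethereum JSON-RPC request; this is a hedged example that assumes the relay exposes the usual eth_* methods:

# query the chain id through the JSON-RPC relay
curl -s http://localhost:7546 \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'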

10 - Attach JVM Debugger and Retrieve Logs

This document describes how to attach a JVM debugger to a running Hiero Consensus Node and retrieve logs for debugging purposes. It also provides instructions on how to save and reuse network state files.

How to Debug a Hiero Consensus Node

1. Using k9s to access running consensus node logs

Run the command k9s -A in a terminal and select one of the network nodes:

alt text

Next, select the root-container and press the key s to enter the shell of the container.

alt text

Once inside the shell, you can cd to /opt/hgcapp/services-hedera/HapiApp2.0/ to view all Hedera-related logs and properties files.

[root@network-node1-0 hgcapp]# cd /opt/hgcapp/services-hedera/HapiApp2.0/
[root@network-node1-0 HapiApp2.0]# pwd
/opt/hgcapp/services-hedera/HapiApp2.0
[root@network-node1-0 HapiApp2.0]# ls -ltr data/config/
total 0
lrwxrwxrwx 1 root root 27 Dec  4 02:05 bootstrap.properties -> ..data/bootstrap.properties
lrwxrwxrwx 1 root root 29 Dec  4 02:05 application.properties -> ..data/application.properties
lrwxrwxrwx 1 root root 32 Dec  4 02:05 api-permission.properties -> ..data/api-permission.properties
[root@network-node1-0 HapiApp2.0]# ls -ltr output/
total 1148
-rw-r--r-- 1 hedera hedera       0 Dec  4 02:06 hgcaa.log
-rw-r--r-- 1 hedera hedera       0 Dec  4 02:06 queries.log
drwxr-xr-x 2 hedera hedera    4096 Dec  4 02:06 transaction-state
drwxr-xr-x 2 hedera hedera    4096 Dec  4 02:06 state
-rw-r--r-- 1 hedera hedera     190 Dec  4 02:06 swirlds-vmap.log
drwxr-xr-x 2 hedera hedera    4096 Dec  4 16:01 swirlds-hashstream
-rw-r--r-- 1 hedera hedera 1151446 Dec  4 16:07 swirlds.log
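
From the same shell you can also follow the log files listed above while the node is running, for example:

# follow the main platform log
tail -f output/swirlds.log

# or follow the Hedera application log
tail -f output/hgcaa.log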

Alternatively, you can use the following command to download hgcaa.log and swirlds.log for further analysis.

# download logs as a zip file from the nodes and save them in the default location ~/.solo/logs/<namespace>/<timestamp>/
solo consensus diagnostics all --deployment solo-deployment

2. Using IntelliJ remote debug with Solo

NOTE: the hiero-consensus-node path referenced ‘../hiero-consensus-node/hedera-node/data’ may need to be updated based on what directory you are currently in. This also assumes that you have done an assemble/build and the directory contents are up-to-date.

Set up an IntelliJ run/debug configuration for remote JVM debugging as shown in the screenshot below:

alt text

If you are working on a Hiero Consensus Node testing application, you should use the following configuration in IntelliJ:

alt text

Set up a breakpoint if necessary.

From the Solo repo directory, run the following commands in a terminal to launch a three-node network; assume we are trying to attach a debugger to node2. Make sure the path following --local-build-path points to the correct directory.

Example 1: attach jvm debugger to a Hiero Consensus Node

SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment

rm -Rf ~/.solo # to avoid name collision issues if you ran previously with the same deployment name
kind delete cluster -n "${SOLO_CLUSTER_NAME}" 
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3

solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --debug-node-alias node2
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --debug-node-alias node2

Once you see the following message, you can launch the JVM debugger from IntelliJ:

❯ Check all nodes are ACTIVE
  Check node: node1,
  Check node: node2,  Please attach JVM debugger now.
  Check node: node3,
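
The remote debug connection typically targets the standard JDWP port 5005 on localhost. Whether Solo forwards that port automatically can depend on your setup, so the check and manual forward below are a hedged sketch; the port number and pod name are assumptions based on the naming used elsewhere in this guide:

# check whether a forward for the debug port is already in place
ps aux | grep "port-forward" | grep 5005

# if not, forward the JDWP port from the debug node's pod (pod name assumed)
kubectl port-forward pod/network-node2-0 -n "${SOLO_NAMESPACE}" 5005:5005 &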

The Hiero Consensus Node application should stop at the breakpoint you set:

alt text alt text

Example 2: attach a JVM debugger with the consensus node add operation

SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment

rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}" 
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3

solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --pvcs true
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3

solo consensus node add --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys --debug-node-alias node4 --local-build-path ../hiero-consensus-node/hedera-node/data --pvcs true

Example 3: attach a JVM debugger with the consensus node update operation

SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment

rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}" 
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3

solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3

solo consensus node update --deployment "${SOLO_DEPLOYMENT}" --node-alias node2  --debug-node-alias node2 --local-build-path ../hiero-consensus-node/hedera-node/data --new-account-number 0.0.7 --gossip-public-key ./s-public-node2.pem --gossip-private-key ./s-private-node2.pem --release-tag v0.59.5

Example 4: attach a JVM debugger with the consensus node destroy operation

SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment

rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}" 
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3

solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3

solo consensus node destroy --deployment "${SOLO_DEPLOYMENT}" --node-alias node2  --debug-node-alias node3 --local-build-path ../hiero-consensus-node/hedera-node/data

3. Save and reuse network state files

With the following commands, you can save the network state to files.

# must stop hedera node operation first
solo consensus node stop --deployment "${SOLO_DEPLOYMENT}"

# download state file to default location at ~/.solo/logs/<namespace>
solo consensus state download -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"

By default, the state files are saved under the ~/.solo directory:

└── logs
    β”œβ”€β”€ solo-e2e
    β”‚Β Β  β”œβ”€β”€ network-node1-0-state.zip
    β”‚Β Β  └── network-node2-0-state.zip
    └── solo.log

Later, you can use the following commands to upload the state files to the network and restart the Hiero Consensus Nodes.

SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment

rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}" 
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

solo cluster-ref config connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment config create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo keys consensus generate --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3

solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo consensus node stop --deployment "${SOLO_DEPLOYMENT}"

solo consensus state download -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"

# start network with pre-existing state files
solo consensus node start --deployment "${SOLO_DEPLOYMENT}" --state-file network-node1-0-state.zip

11 - Using Environment Variables

Environment variables are used to customize the behavior of Solo. This document provides a list of environment variables that can be configured to change the default behavior.

Environment Variables Used in Solo

You can configure the following environment variables to customize the behavior of Solo.

Table of environment variables

Environment Variable | Description | Default Value
SOLO_HOME | Path to the Solo cache and log files | ~/.solo
SOLO_CACHE_DIR | Path to the Solo cache directory | ~/.solo/cache
SOLO_CHAIN_ID | Chain id of solo network | 298
DEFAULT_START_ID_NUMBER | First node account ID of solo test network | 0.0.3
SOLO_NODE_INTERNAL_GOSSIP_PORT | Internal gossip port number used by hedera network | 50111
SOLO_NODE_EXTERNAL_GOSSIP_PORT | External port number used by hedera network | 50111
SOLO_NODE_DEFAULT_STAKE_AMOUNT | Default stake amount for node | 500
SOLO_OPERATOR_ID | Operator account ID for solo network | 0.0.2
SOLO_OPERATOR_KEY | Operator private key for solo network | 302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137
SOLO_OPERATOR_PUBLIC_KEY | Operator public key for solo network | 302a300506032b65700321000aa8e21064c61eab86e2a9c164565b4e7a9a4146106e0a6cd03a8c395a110e92
FREEZE_ADMIN_ACCOUNT | Freeze admin account ID for solo network | 0.0.58
GENESIS_KEY | Genesis private key for solo network | 302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137
LOCAL_NODE_START_PORT | Local node start port for solo network | 30212
NODE_CLIENT_MIN_BACKOFF | The minimum amount of time to wait between retries. | 1000
NODE_CLIENT_MAX_BACKOFF | The maximum amount of time to wait between retries. | 1000
NODE_CLIENT_REQUEST_TIMEOUT | The period of time a transaction or query request will retry from a “busy” network response | 600000
NODE_COPY_CONCURRENT | The number of concurrent threads to use when copying files to the node. | 4
PODS_RUNNING_MAX_ATTEMPTS | The maximum number of attempts to check if pods are running. | 900
PODS_RUNNING_DELAY | The interval between attempts to check if pods are running, in the unit of milliseconds. | 1000
NETWORK_NODE_ACTIVE_MAX_ATTEMPTS | The maximum number of attempts to check if network nodes are active. | 300
NETWORK_NODE_ACTIVE_DELAY | The interval between attempts to check if network nodes are active, in the unit of milliseconds. | 1000
NETWORK_NODE_ACTIVE_TIMEOUT | The period of time to wait for network nodes to become active, in the unit of milliseconds. | 1000
NETWORK_PROXY_MAX_ATTEMPTS | The maximum number of attempts to check if network proxy is running. | 300
NETWORK_PROXY_DELAY | The interval between attempts to check if network proxy is running, in the unit of milliseconds. | 2000
PODS_READY_MAX_ATTEMPTS | The maximum number of attempts to check if pods are ready. | 300
PODS_READY_DELAY | The interval between attempts to check if pods are ready, in the unit of milliseconds. | 2000
RELAY_PODS_RUNNING_MAX_ATTEMPTS | The maximum number of attempts to check if relay pods are running. | 900
RELAY_PODS_RUNNING_DELAY | The interval between attempts to check if relay pods are running, in the unit of milliseconds. | 1000
RELAY_PODS_READY_MAX_ATTEMPTS | The maximum number of attempts to check if relay pods are ready. | 100
RELAY_PODS_READY_DELAY | The interval between attempts to check if relay pods are ready, in the unit of milliseconds. | 1000
NETWORK_DESTROY_WAIT_TIMEOUT | The period of time to wait for network to be destroyed, in the unit of milliseconds. | 120
SOLO_LEASE_ACQUIRE_ATTEMPTS | The number of attempts to acquire a lock before failing. | 10
SOLO_LEASE_DURATION | The default duration in seconds for which a lock is held before expiration. | 20
ACCOUNT_UPDATE_BATCH_SIZE | The number of accounts to update in a single batch operation. | 10
NODE_CLIENT_PING_INTERVAL | The interval in milliseconds between node health pings. | 30000
NODE_CLIENT_SDK_PING_MAX_RETRIES | The maximum number of retries for node health pings. | 5
NODE_CLIENT_SDK_PING_RETRY_INTERVAL | The interval in milliseconds between node health ping retries. | 10000
GRPC_PORT | The gRPC port used for local node communication. | 50211
LOCAL_BUILD_COPY_RETRY | The number of times to retry local build copy operations. | 3
LOAD_BALANCER_CHECK_DELAY_SECS | The delay in seconds between load balancer status checks. | 5
LOAD_BALANCER_CHECK_MAX_ATTEMPTS | The maximum number of attempts to check load balancer status. | 60
JSON_RPC_RELAY_CHART_URL | The URL for the JSON-RPC relay Helm chart repository. | https://hiero-ledger.github.io/hiero-json-rpc-relay/charts
MIRROR_NODE_CHART_URL | The URL for the Hedera mirror node Helm chart repository. | https://hashgraph.github.io/hedera-mirror-node/charts
NODE_CLIENT_MAX_ATTEMPTS | The maximum number of attempts for node client operations. | 600
EXPLORER_CHART_URL | The URL for the Hedera Explorer Helm chart repository. | oci://ghcr.io/hiero-ledger/hiero-mirror-node-explorer/hiero-explorer-chart
INGRESS_CONTROLLER_CHART_URL | The URL for the ingress controller Helm chart repository. | https://haproxy-ingress.github.io/charts
BLOCK_NODE_VERSION | The release version of the block node to use. | v0.14.0
CONSENSUS_NODE_VERSION | The release version of the consensus node to use. | v0.63.9
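
These variables are read from the environment when Solo runs, so they can be overridden per shell or per invocation. A minimal sketch with illustrative values (the CONSENSUS_NODE_VERSION shown here matches the default in the table above):

# keep Solo's cache and logs in a project-local directory
export SOLO_HOME="$PWD/.solo-local"

# pin the consensus node release Solo deploys
export CONSENSUS_NODE_VERSION=v0.63.9

solo init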

12 - Updated CLI Command Mappings

Updated CLI Command Mappings

The following tables provide a complete mapping of previous CLI commands to their updated three-level structure. Entries marked as No changes retain their original form.
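
For example, an account creation that previously went through the account command group now goes through ledger; the --deployment flag mirrors the ledger account info usage shown earlier in this guide, and other flags are assumed unchanged (see the Account table below):

# old form
# solo account create --deployment "${SOLO_DEPLOYMENT}"

# new three-level form
solo ledger account create --deployment "${SOLO_DEPLOYMENT}"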

Init

Old Command | New Command
init | No changes

Block node

Old Command | New Command
block node add | No changes
block node destroy | No changes
block node upgrade | No changes

Account

Old Command | New Command
account init | ledger system init
account update | ledger account update
account create | ledger account create
account get | ledger account info

Quick Start

Old Command | New Command
quick-start single deploy | one shot deploy
quick-start single destroy | one shot destroy

Cluster Reference

Old Command | New Command
cluster-ref connect | cluster-ref config connect
cluster-ref disconnect | cluster-ref config disconnect
cluster-ref list | cluster-ref config list
cluster-ref info | cluster-ref config info
cluster-ref setup | cluster-ref config setup
cluster-ref reset | cluster-ref config reset

Deployment

Old Command | New Command
deployment add-cluster | deployment cluster attach
deployment list | deployment config list
deployment create | deployment config create
deployment delete | deployment config destroy

Explorer

Old Command | New Command
explorer deploy | explorer node add
explorer destroy | explorer node destroy

Mirror Node

Old Command | New Command
mirror-node deploy | mirror node add
mirror-node destroy | mirror node destroy

Relay

Old Command | New Command
relay deploy | relay node add
relay destroy | relay node destroy

Network

Old Command | New Command
network deploy | consensus network deploy
network destroy | consensus network destroy

Node

Old Command | New Command
node keys | keys consensus generate
node freeze | consensus network freeze
node upgrade | consensus network upgrade
node setup | consensus node setup
node start | consensus node start
node stop | consensus node stop
node upgrade | consensus node upgrade
node restart | consensus node restart
node refresh | consensus node refresh
node add | consensus node add
node update | consensus node update
node delete | consensus node destroy
node add-prepare | consensus dev-node-add prepare
node add-submit-transaction | consensus dev-node-add submit-transaction
node add-execute | consensus dev-node-add execute
node update-prepare | consensus dev-node-update prepare
node update-submit-transaction | consensus dev-node-update submit-transaction
node update-execute | consensus dev-node-update execute
node upgrade-prepare | consensus dev-node-upgrade prepare
node upgrade-submit-transaction | consensus dev-node-upgrade submit-transaction
node upgrade-execute | consensus dev-node-upgrade execute
node delete-prepare | consensus dev-node-delete prepare
node delete-submit-transaction | consensus dev-node-delete submit-transaction
node delete-execute | consensus dev-node-delete execute
node prepare-upgrade | consensus dev-freeze prepare-upgrade
node freeze-upgrade | consensus dev-freeze freeze-upgrade
node download-generated-files | consensus diagnostic configs
node logs | consensus diagnostics all
node states | consensus state download

13 - Legacy Releases

Legacy Releases

Solo Version | Node.js | Kind | Solo Chart | Hedera | Kubernetes | Kubectl | Helm | k9s | Docker Resources | Release Date | End of Support
0.41.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.54.4 | v0.62.10+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-07-24 | 2025-08-24
0.40.1 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.54.4 | v0.61.7+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-07-17 | 2025-08-17
0.40.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.54.4 | v0.61.7+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-07-16 | 2025-08-16
0.39.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.54.3 | v0.61.7+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-07-03 | 2025-08-03
0.38.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.54.3 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-06-26 | 2025-07-26
0.37.1 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.53.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-06-03 | 2025-07-03
0.37.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.53.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-06-02 | 2025-07-02
0.36.1 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.53.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-05-28 | 2025-06-28
0.36.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.52.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-05-23 | 2025-06-23
0.35.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.44.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-02-20 | 2025-03-20
0.34.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.42.10 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-01-24 | 2025-02-24
0.33.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.38.2 | v0.58.1 - <= v0.59.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2025-01-13 | 2025-02-13
0.32.0 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.38.2 | v0.58.1 - <= v0.59.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2024-12-31 | 2025-01-31
0.31.4 | >= 20.18.0 (lts/iron) | >= v0.26.0 | v0.31.4 | v0.54.0 – <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2024-10-23 | 2024-11-23
0.30.0 | >= 20.14.0 (lts/hydrogen) | >= v0.26.0 | v0.30.0 | v0.54.0 – <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2024-09-17 | 2024-10-17
0.29.0 | >= 20.14.0 (lts/hydrogen) | >= v0.26.0 | v0.30.0 | v0.53.0 – <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 12GB, CPU >= 4 | 2024-09-06 | 2024-10-06