The documentation section provides a comprehensive guide to using Solo to launch a Hiero Consensus Node network, including setup instructions, usage guides, and information for developers. It covers everything from installation to advanced features and troubleshooting.
Documentation
- 1: Getting Started
- 2: Step By Step Guide
- 3: Solo CLI User Manual
- 4: Solo CLI Commands
- 5: FAQ
- 6: Using Solo with Mirror Node
- 7: Using Solo with Hiero JavaScript SDK
- 8: Hiero Consensus Node Platform Developer
- 9: Hiero Consensus Node Execution Developer
- 10: Attach JVM Debugger and Retrieve Logs
- 11: Using Environment Variables
1 - Getting Started
[!WARNING]
Any version of Solo prior to v0.35.3 will fail on Apple M3/M4 chipsets due to a known issue with Java 21 and these chipsets.
Solo
An opinionated CLI tool to deploy and manage standalone test networks.
Requirements
| Solo Version | Node.js | Kind | Solo Chart | Hedera | Kubernetes | Kubectl | Helm | k9s | Docker Resources |
|---|---|---|---|---|---|---|---|---|---|
| 0.29.0 | >= 20.14.0 (lts/hydrogen) | >= v1.29.1 | v0.30.0 | v0.53.0 – <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 |
| 0.30.0 | >= 20.14.0 (lts/hydrogen) | >= v1.29.1 | v0.30.0 | v0.54.0 – <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 |
| 0.31.4 | >= 20.18.0 (lts/iron) | >= v1.29.1 | v0.31.4 | v0.54.0 – <= v0.57.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 |
| 0.32.0 | >= 20.18.0 (lts/iron) | >= v1.29.1 | v0.38.2 | v0.58.1 – <= v0.59.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 |
| 0.33.0 | >= 20.18.0 (lts/iron) | >= v1.29.1 | v0.38.2 | v0.58.1 – <= v0.59.0 | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 |
| 0.34.0 | >= 20.18.0 (lts/iron) | >= v1.29.1 | v0.42.10 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 |
| 0.35.0 | >= 20.18.0 (lts/iron) | >= v1.29.1 | v0.44.0 | v0.58.1+ | >= v1.27.3 | >= v1.27.3 | v3.14.2 | >= v0.27.4 | Memory >= 8GB, CPU >= 4 |
Hardware Requirements
To run a three-node network, you will need to set up Docker Desktop with at least 8GB of memory and 4 CPUs.
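As a quick sanity check before starting, you can ask the Docker engine what it currently has available using `docker info` Go-template format fields. This is an illustrative helper, not part of Solo:

```shell
# Print the CPU count and total memory visible to the Docker engine.
# Solo's three-node network needs at least 4 CPUs and 8GB of memory.
if command -v docker >/dev/null 2>&1; then
  docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes' 2>/dev/null \
    || echo "Docker daemon not reachable"
else
  echo "docker not found on PATH"
fi
```

If the reported memory is below 8GB, raise the resource limits in Docker Desktop settings before continuing.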
Setup
# install specific nodejs version
# nvm install <version>
# install nodejs version 20.18.0
nvm install v20.18.0
# lists available node versions already installed
nvm ls
# switch to selected node version
# nvm use <version>
nvm use v20.18.0
Install Solo
- Run
npm install -g @hashgraph/solo
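After installing, a quick way to confirm the prerequisites from the requirements table are on your PATH is a small check loop (an illustrative helper, not a Solo command):

```shell
# Report which of Solo's prerequisite tools are installed and where.
for tool in node solo kind kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: MISSING"
  fi
done
```

Any MISSING entry should be installed before proceeding; `node -v` and `solo --version` then confirm the versions match the requirements table.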
Documentation
Contributing
Contributions are welcome. Please see the contributing guide to see how you can get involved.
Code of Conduct
This project is governed by the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code of conduct.
License
2 - Step By Step Guide
Solo User Guide
Table of Contents
- Setup Kubernetes cluster
- Step by Step Instructions
- Initialize solo directories
- Generate PEM-formatted node keys
- Create a deployment in the specified clusters
- Setup cluster with shared components
- Create a solo deployment
- Deploy helm chart with Hedera network components
- Setup node with Hedera platform software
- Deploy mirror node
- Deploy explorer node
- Deploy a JSON RPC relay
- Execution Developer
- Destroy relay node
- Destroy mirror node
- Destroy explorer node
- Destroy network
For those who would like more control or need a customized setup, here are step-by-step instructions for how to set up and deploy a Solo network.
Setup Kubernetes cluster
Remote cluster
- You may use a remote Kubernetes cluster. In this case, ensure the Kubernetes context is set up correctly:
kubectl config use-context <context-name>
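Before switching, you can list the contexts kubectl knows about; the active one is marked with an asterisk. This is standard kubectl, shown here with a guard so it degrades gracefully when kubectl is absent:

```shell
# Show all known contexts; the active one is marked with '*'.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config get-contexts
  # Print just the name of the active context.
  kubectl config current-context 2>/dev/null || echo "no current context set"
else
  echo "kubectl not found on PATH"
fi
```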
Local cluster
- You may use kind or microk8s to create a cluster. In this case, ensure your Docker engine has enough resources (e.g. Memory >= 8GB, CPU >= 4). Below we show how you can use kind to create a cluster.
First, use the following command to set up the environment variables:
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
Then run the following command to set the kubectl context to the new cluster:
kind create cluster -n "${SOLO_CLUSTER_NAME}"
Example output
Creating cluster "solo-e2e" ...
• Ensuring node image (kindest/node:v1.32.2) 🖼 ...
✓ Ensuring node image (kindest/node:v1.32.2) 🖼
• Preparing nodes 📦 ...
✓ Preparing nodes 📦
• Writing configuration 📜 ...
✓ Writing configuration 📜
• Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
• Installing CNI 🔌 ...
✓ Installing CNI 🔌
• Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
Set kubectl context to "kind-solo-e2e"
You can now use your cluster with:
kubectl cluster-info --context kind-solo-e2e
Have a nice day! 👋
You may now view pods in your cluster using k9s -A as below:
Context: kind-solo <0> all <a> Attach <ctr… ____ __.________
Cluster: kind-solo <ctrl-d> Delete <l> | |/ _/ __ \______
User: kind-solo <d> Describe <p> | < \____ / ___/
K9s Rev: v0.32.5 <e> Edit <shif| | \ / /\___ \
K8s Rev: v1.27.3 <?> Help <z> |____|__ \ /____//____ >
CPU: n/a <shift-j> Jump Owner <s> \/ \/
MEM: n/a
┌───────────────────────────────────────────────── Pods(all)[11] ─────────────────────────────────────────────────┐
│ NAMESPACE↑ NAME PF READY STATUS RESTARTS IP NODE │
│ solo-setup console-557956d575-4r5xm ● 1/1 Running 0 10.244.0.5 solo-con │
│ solo-setup minio-operator-7d575c5f84-8shc9 ● 1/1 Running 0 10.244.0.6 solo-con │
│ kube-system coredns-5d78c9869d-6cfbg ● 1/1 Running 0 10.244.0.4 solo-con │
│ kube-system coredns-5d78c9869d-gxcjz ● 1/1 Running 0 10.244.0.3 solo-con │
│ kube-system etcd-solo-control-plane ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kindnet-k75z6 ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kube-apiserver-solo-control-plane ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kube-controller-manager-solo-control-plane ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kube-proxy-cct7t ● 1/1 Running 0 172.18.0.2 solo-con │
│ kube-system kube-scheduler-solo-control-plane ● 1/1 Running 0 172.18.0.2 solo-con │
│ local-path-storage local-path-provisioner-6bc4bddd6b-gwdp6 ● 1/1 Running 0 10.244.0.2 solo-con │
│ │
│ │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Step by Step Instructions
Initialize solo directories:
# reset .solo directory
rm -rf ~/.solo
solo init
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : init
**********************************************************************************
❯ Setup home directory and cache
✔ Setup home directory and cache
❯ Check dependencies
❯ Check dependency: helm [OS: darwin, Release: 23.6.0, Arch: arm64]
✔ Check dependency: helm [OS: darwin, Release: 23.6.0, Arch: arm64]
✔ Check dependencies
❯ Create local configuration
↓ Create local configuration [SKIPPED: Create local configuration]
❯ Setup chart manager
push repo hedera-json-rpc-relay -> https://hiero-ledger.github.io/hiero-json-rpc-relay/charts
push repo mirror -> https://hashgraph.github.io/hedera-mirror-node/charts
push repo haproxy-ingress -> https://haproxy-ingress.github.io/charts
✔ Setup chart manager
❯ Copy templates in '/Users/user/.solo/cache'
***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /Users/user/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
✔ Copy templates in '/Users/user/.solo/cache'
Create a deployment in the specified clusters, and generate RemoteConfig and LocalConfig objects.
- Associate a cluster reference with a k8s context:
solo cluster-ref connect --cluster-ref kind-${SOLO_CLUSTER_SETUP_NAMESPACE} --context kind-${SOLO_CLUSTER_NAME}
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : cluster-ref connect --cluster-ref kind-solo-e2e --context kind-solo-e2e
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Validating cluster ref:
✔ kind-solo-e2e
❯ Test connection to cluster:
✔ Test connection to cluster: kind-solo-e2e
❯ Associate a context with a cluster reference:
✔ Associate a context with a cluster reference: kind-solo-e2e
- Create a deployment
solo deployment create -n "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : deployment create --namespace solo --deployment solo-deployment --realm 0 --shard 0
Kubernetes Namespace : solo
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Add deployment to local config
✔ Adding deployment: solo-deployment with namespace: solo to local config
- Add a cluster to deployment
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_SETUP_NAMESPACE} --num-consensus-nodes 3
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : deployment add-cluster --deployment solo-deployment --cluster-ref kind-solo-e2e --num-consensus-nodes 3
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Verify args
✔ Verify args
❯ check network state
✔ check network state
❯ Test cluster connection
✔ Test cluster connection: kind-solo-e2e, context: kind-solo-e2e
❯ Verify prerequisites
✔ Verify prerequisites
❯ add cluster-ref in local config deployments
✔ add cluster-ref: kind-solo-e2e for deployment: solo-deployment in local config
❯ create remote config for deployment
✔ create remote config for deployment: solo-deployment in cluster: kind-solo-e2e
Generate PEM-formatted node keys
solo node keys --gossip-keys --tls-keys -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : node keys --gossip-keys --tls-keys --node-aliases node1,node2,node3 --deployment solo-deployment
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Generate gossip keys
❯ Backup old files
✔ Backup old files
❯ Gossip key for node: node1
✔ Gossip key for node: node1
❯ Gossip key for node: node2
✔ Gossip key for node: node2
❯ Gossip key for node: node3
✔ Gossip key for node: node3
✔ Generate gossip keys
❯ Generate gRPC TLS Keys
❯ Backup old files
❯ TLS key for node: node1
❯ TLS key for node: node2
❯ TLS key for node: node3
✔ Backup old files
✔ TLS key for node: node3
✔ TLS key for node: node2
✔ TLS key for node: node1
✔ Generate gRPC TLS Keys
❯ Finalize
✔ Finalize
PEM key files are generated in the ~/.solo/cache/keys directory.
hedera-node1.crt hedera-node3.crt s-private-node1.pem s-public-node1.pem unused-gossip-pem
hedera-node1.key hedera-node3.key s-private-node2.pem s-public-node2.pem unused-tls
hedera-node2.crt hedera-node4.crt s-private-node3.pem s-public-node3.pem
hedera-node2.key hedera-node4.key s-private-node4.pem s-public-node4.pem
Setup cluster with shared components
solo cluster-ref setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : cluster-ref setup --cluster-setup-namespace solo-cluster
**********************************************************************************
❯ Initialize
✔ Initialize
❯ Prepare chart values
✔ Prepare chart values
❯ Install 'solo-cluster-setup' chart
********************** Installed solo-cluster-setup chart **********************
Version : 0.50.0
********************************************************************************
✔ Install 'solo-cluster-setup' chart
In a separate terminal, you may run k9s to view the pod status.
Deploy helm chart with Hedera network components
It may take a while (5~15 minutes depending on your internet speed) to download various Docker images and get the pods started.
If it fails, ensure you have enough resources allocated for the Docker engine and retry the command.
solo network deploy -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : network deploy --node-aliases node1,node2,node3 --deployment solo-deployment
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Copy gRPC TLS Certificates
↓ Copy gRPC TLS Certificates [SKIPPED: Copy gRPC TLS Certificates]
❯ Check if cluster setup chart is installed
✔ Check if cluster setup chart is installed
❯ Prepare staging directory
❯ Copy Gossip keys to staging
✔ Copy Gossip keys to staging
❯ Copy gRPC TLS keys to staging
✔ Copy gRPC TLS keys to staging
✔ Prepare staging directory
❯ Copy node keys to secrets
❯ Copy TLS keys
❯ Node: node1, cluster: kind-solo-e2e
❯ Node: node2, cluster: kind-solo-e2e
❯ Node: node3, cluster: kind-solo-e2e
❯ Copy Gossip keys
❯ Copy Gossip keys
❯ Copy Gossip keys
✔ Copy Gossip keys
✔ Node: node1, cluster: kind-solo-e2e
✔ Copy Gossip keys
✔ Node: node3, cluster: kind-solo-e2e
✔ Copy Gossip keys
✔ Node: node2, cluster: kind-solo-e2e
✔ Copy TLS keys
✔ Copy node keys to secrets
❯ Install chart 'solo-deployment'
*********************** Installed solo-deployment chart ************************
Version : 0.50.0
********************************************************************************
✔ Install chart 'solo-deployment'
❯ Check for load balancer
↓ Check for load balancer [SKIPPED: Check for load balancer]
❯ Redeploy chart with external IP address config
↓ Redeploy chart with external IP address config [SKIPPED: Redeploy chart with external IP address config]
❯ Check node pods are running
❯ Check Node: node1, Cluster: kind-solo-e2e
✔ Check Node: node1, Cluster: kind-solo-e2e
❯ Check Node: node2, Cluster: kind-solo-e2e
✔ Check Node: node2, Cluster: kind-solo-e2e
❯ Check Node: node3, Cluster: kind-solo-e2e
✔ Check Node: node3, Cluster: kind-solo-e2e
✔ Check node pods are running
❯ Check proxy pods are running
❯ Check HAProxy for: node1, cluster: kind-solo-e2e
❯ Check HAProxy for: node2, cluster: kind-solo-e2e
❯ Check HAProxy for: node3, cluster: kind-solo-e2e
❯ Check Envoy Proxy for: node1, cluster: kind-solo-e2e
❯ Check Envoy Proxy for: node2, cluster: kind-solo-e2e
❯ Check Envoy Proxy for: node3, cluster: kind-solo-e2e
✔ Check HAProxy for: node3, cluster: kind-solo-e2e
✔ Check Envoy Proxy for: node1, cluster: kind-solo-e2e
✔ Check HAProxy for: node1, cluster: kind-solo-e2e
✔ Check Envoy Proxy for: node2, cluster: kind-solo-e2e
✔ Check Envoy Proxy for: node3, cluster: kind-solo-e2e
✔ Check HAProxy for: node2, cluster: kind-solo-e2e
✔ Check proxy pods are running
❯ Check auxiliary pods are ready
❯ Check MinIO
✔ Check MinIO
✔ Check auxiliary pods are ready
❯ Add node and proxies to remote config
✔ Add node and proxies to remote config
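Independently of Solo, Helm can confirm which charts were installed into the deployment namespace. The namespace below falls back to `solo`, matching the environment variables set earlier:

```shell
# List the Helm releases Solo installed (e.g. solo-deployment).
if command -v helm >/dev/null 2>&1; then
  helm list -n "${SOLO_NAMESPACE:-solo}" 2>/dev/null || echo "cluster not reachable"
else
  echo "helm not found on PATH"
fi
```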
Setup node with Hedera platform software.
- It may take a while as it downloads the Hedera platform code from https://builds.hedera.com/
solo node setup -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : node setup --node-aliases node1,node2,node3 --deployment solo-deployment
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Validate nodes states
❯ Validating state for node node1
✔ Validating state for node node1 - valid state: requested
❯ Validating state for node node2
✔ Validating state for node node2 - valid state: requested
❯ Validating state for node node3
✔ Validating state for node node3 - valid state: requested
✔ Validate nodes states
❯ Identify network pods
❯ Check network pod: node1
❯ Check network pod: node2
❯ Check network pod: node3
✔ Check network pod: node1
✔ Check network pod: node3
✔ Check network pod: node2
✔ Identify network pods
❯ Fetch platform software into network nodes
❯ Update node: node1 [ platformVersion = v0.59.5, context = kind-solo-e2e ]
❯ Update node: node2 [ platformVersion = v0.59.5, context = kind-solo-e2e ]
❯ Update node: node3 [ platformVersion = v0.59.5, context = kind-solo-e2e ]
✔ Update node: node2 [ platformVersion = v0.59.5, context = kind-solo-e2e ]
✔ Update node: node1 [ platformVersion = v0.59.5, context = kind-solo-e2e ]
✔ Update node: node3 [ platformVersion = v0.59.5, context = kind-solo-e2e ]
✔ Fetch platform software into network nodes
❯ Setup network nodes
❯ Node: node1
❯ Node: node2
❯ Node: node3
❯ Copy configuration files
❯ Copy configuration files
❯ Copy configuration files
✔ Copy configuration files
❯ Set file permissions
✔ Copy configuration files
❯ Set file permissions
✔ Copy configuration files
❯ Set file permissions
✔ Set file permissions
✔ Node: node2
✔ Set file permissions
✔ Node: node3
✔ Set file permissions
✔ Node: node1
✔ Setup network nodes
❯ Change node state to setup in remote config
✔ Change node state to setup in remote config
- Start the nodes
solo node start -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : node start --node-aliases node1,node2,node3 --deployment solo-deployment
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Validate nodes states
❯ Validating state for node node1
✔ Validating state for node node1 - valid state: setup
❯ Validating state for node node2
✔ Validating state for node node2 - valid state: setup
❯ Validating state for node node3
✔ Validating state for node node3 - valid state: setup
✔ Validate nodes states
❯ Identify existing network nodes
❯ Check network pod: node1
❯ Check network pod: node2
❯ Check network pod: node3
✔ Check network pod: node1
✔ Check network pod: node2
✔ Check network pod: node3
✔ Identify existing network nodes
❯ Upload state files network nodes
↓ Upload state files network nodes [SKIPPED: Upload state files network nodes]
❯ Starting nodes
❯ Start node: node1
❯ Start node: node2
❯ Start node: node3
✔ Start node: node2
✔ Start node: node3
✔ Start node: node1
✔ Starting nodes
❯ Enable port forwarding for JVM debugger
↓ Enable port forwarding for JVM debugger [SKIPPED: Enable port forwarding for JVM debugger]
❯ Check all nodes are ACTIVE
❯ Check network pod: node1
❯ Check network pod: node2
❯ Check network pod: node3
✔ Check network pod: node1 - status ACTIVE, attempt: 17/300
✔ Check network pod: node3 - status ACTIVE, attempt: 17/300
✔ Check network pod: node2 - status ACTIVE, attempt: 18/300
✔ Check all nodes are ACTIVE
❯ Check node proxies are ACTIVE
❯ Check proxy for node: node1
✔ Check proxy for node: node1
❯ Check proxy for node: node2
✔ Check proxy for node: node2
❯ Check proxy for node: node3
✔ Check proxy for node: node3
✔ Check node proxies are ACTIVE
❯ Change node state to started in remote config
✔ Change node state to started in remote config
❯ Add node stakes
❯ Adding stake for node: node1
✔ Adding stake for node: node1
❯ Adding stake for node: node2
✔ Adding stake for node: node2
❯ Adding stake for node: node3
✔ Adding stake for node: node3
✔ Add node stakes
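As a cross-check after start-up, listing the pods in the deployment namespace should show the three `network-nodeX-0` pods as `Running`. The namespace is assumed from the environment variables set earlier:

```shell
# The consensus node pods network-node1-0 ... network-node3-0 should be Running.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n "${SOLO_NAMESPACE:-solo}" 2>/dev/null || echo "cluster not reachable"
else
  echo "kubectl not found on PATH"
fi
```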
Deploy mirror node
solo mirror-node deploy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_SETUP_NAMESPACE}
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : mirror-node deploy --deployment solo-deployment --cluster-ref kind-solo-e2e --quiet-mode
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Enable mirror-node
❯ Prepare address book
✔ Prepare address book
❯ Install mirror ingress controller
↓ Install mirror ingress controller [SKIPPED: Install mirror ingress controller]
❯ Deploy mirror-node
**************************** Installed mirror chart ****************************
Version : v0.126.0
********************************************************************************
✔ Deploy mirror-node
✔ Enable mirror-node
❯ Check pods are ready
❯ Check Postgres DB
❯ Check REST API
❯ Check GRPC
❯ Check Monitor
❯ Check Importer
✔ Check Postgres DB
✔ Check Importer
✔ Check GRPC
✔ Check REST API
✔ Check Monitor
✔ Check pods are ready
❯ Seed DB data
❯ Insert data in public.file_data
✔ Insert data in public.file_data
✔ Seed DB data
❯ Add mirror node to remote config
✔ Add mirror node to remote config
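Once the mirror node is up, its REST API can be reached through a port-forward. The service name (`solo-deployment-rest`) and port below are assumptions for a default deployment; confirm them with `kubectl get svc -n solo` before relying on this:

```shell
# Port-forward the mirror node REST API and fetch the most recent transaction.
# Service name and port are assumptions; verify with: kubectl get svc -n solo
if command -v kubectl >/dev/null 2>&1; then
  kubectl port-forward svc/solo-deployment-rest 5551:80 \
    -n "${SOLO_NAMESPACE:-solo}" >/dev/null 2>&1 &
  PF_PID=$!
  sleep 2
  curl -s "http://localhost:5551/api/v1/transactions?limit=1" \
    || echo "mirror REST API not reachable"
  kill "$PF_PID" 2>/dev/null || true
else
  echo "kubectl not found on PATH"
fi
```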
Deploy explorer node
solo explorer deploy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_SETUP_NAMESPACE}
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : explorer deploy --deployment solo-deployment --quiet-mode
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Load remote config
✔ Load remote config
❯ Install cert manager
↓ Install cert manager [SKIPPED: Install cert manager]
❯ Install explorer
*********************** Installed hiero-explorer chart ************************
Version : 24.12.1
********************************************************************************
✔ Install explorer
❯ Install explorer ingress controller
↓ Install explorer ingress controller [SKIPPED: Install explorer ingress controller]
❯ Check explorer pod is ready
✔ Check explorer pod is ready
❯ Check haproxy ingress controller pod is ready
↓ Check haproxy ingress controller pod is ready [SKIPPED: Check haproxy ingress controller pod is ready]
❯ Add explorer to remote config
*********************************** ERROR *****************************************
Explorer deployment failed: Error deploying explorer: Invalid cluster: undefined
***********************************************************************************
Deploy a JSON RPC relay
solo relay deploy -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : relay deploy --node-aliases node1,node2,node3 --deployment solo-deployment
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Check chart is installed
✔ Check chart is installed
❯ Prepare chart values
✔ Prepare chart values
❯ Deploy JSON RPC Relay
******************* Installed relay-node1-node2-node3 chart ********************
Version : v0.67.0
********************************************************************************
✔ Deploy JSON RPC Relay
❯ Check relay is running
✔ Check relay is running
❯ Check relay is ready
✔ Check relay is ready
❯ Add relay component in remote config
✔ Add relay component in remote config
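The relay can be smoke-tested with a standard Ethereum JSON-RPC call such as `eth_chainId`. The service name below is taken from the chart name in the output above, and 7546 is the relay's conventional port; both are assumptions to verify with `kubectl get svc -n solo`:

```shell
# Send an eth_chainId request to the deployed JSON RPC relay.
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'
if command -v kubectl >/dev/null 2>&1; then
  # Service name and port are assumptions for a default deployment.
  kubectl port-forward svc/relay-node1-node2-node3-hedera-json-rpc-relay 7546:7546 \
    -n "${SOLO_NAMESPACE:-solo}" >/dev/null 2>&1 &
  PF_PID=$!
  sleep 2
  curl -s -X POST -H 'Content-Type: application/json' \
    -d "$PAYLOAD" http://localhost:7546 || echo "relay not reachable"
  kill "$PF_PID" 2>/dev/null || true
else
  echo "kubectl not found on PATH"
fi
```

A healthy relay responds with a JSON body containing a hex `result` chain ID.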
Execution Developer
Next: Execution Developer
Destroy relay node
solo relay destroy --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : relay destroy --node-aliases node1,node2,node3 --deployment solo-deployment
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Destroy JSON RPC Relay
*** Destroyed Relays ***
-------------------------------------------------------------------------------
- hiero-explorer [hiero-explorer-chart-24.12.1]
- mirror [hedera-mirror-0.126.0]
- solo-deployment [solo-deployment-0.50.0]
✔ Destroy JSON RPC Relay
❯ Remove relay component from remote config
✔ Remove relay component from remote config
Destroy mirror node
solo mirror-node destroy --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : mirror-node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Destroy mirror-node
✔ Destroy mirror-node
❯ Delete PVCs
✔ Delete PVCs
❯ Uninstall mirror ingress controller
✔ Uninstall mirror ingress controller
❯ Remove mirror node from remote config
✔ Remove mirror node from remote config
Destroy explorer node
solo explorer destroy --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : explorer destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Load remote config
✔ Load remote config
❯ Destroy explorer
✔ Destroy explorer
❯ Uninstall explorer ingress controller
✔ Uninstall explorer ingress controller
❯ Remove explorer from remote config
*********************************** ERROR *****************************************
Explorer destruction failed: Error destroy explorer: Component mirrorNodeExplorer of type mirrorNodeExplorers not found while attempting to remove
***********************************************************************************
Destroy network
solo network destroy --deployment "${SOLO_DEPLOYMENT}"
- Example output
******************************* Solo *********************************************
Version : 0.36.0
Kubernetes Context : kind-solo-e2e
Kubernetes Cluster : kind-solo-e2e
Current Command : network destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
❯ Initialize
❯ Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
❯ Remove deployment from local configuration
✔ Remove deployment from local configuration
❯ Running sub-tasks to destroy network
✔ Deleting the RemoteConfig configmap in namespace solo
You may view the list of pods using k9s as below:
Context: kind-solo <0> all <a> Attach <ctr… ____ __.________
Cluster: kind-solo <ctrl-d> Delete <l> | |/ _/ __ \______
User: kind-solo <d> Describe <p> | < \____ / ___/
K9s Rev: v0.32.5 <e> Edit <shif| | \ / /\___ \
K8s Rev: v1.27.3 <?> Help <z> |____|__ \ /____//____ >
CPU: n/a <shift-j> Jump Owner <s> \/ \/
MEM: n/a
┌───────────────────────────────────────────────── Pods(all)[31] ─────────────────────────────────────────────────┐
│ NAMESPACE↑ NAME PF READY STATUS RESTARTS I │
│ kube-system coredns-5d78c9869d-994t4 ● 1/1 Running 0 1 │
│ kube-system coredns-5d78c9869d-vgt4q ● 1/1 Running 0 1 │
│ kube-system etcd-solo-control-plane ● 1/1 Running 0 1 │
│ kube-system kindnet-q26c9 ● 1/1 Running 0 1 │
│ kube-system kube-apiserver-solo-control-plane ● 1/1 Running 0 1 │
│ kube-system kube-controller-manager-solo-control-plane ● 1/1 Running 0 1 │
│ kube-system kube-proxy-9b27j ● 1/1 Running 0 1 │
│ kube-system kube-scheduler-solo-control-plane ● 1/1 Running 0 1 │
│ local-path-storage local-path-provisioner-6bc4bddd6b-4mv8c ● 1/1 Running 0 1 │
│ solo envoy-proxy-node1-65f8879dcc-rwg97 ● 1/1 Running 0 1 │
│ solo envoy-proxy-node2-667f848689-628cx ● 1/1 Running 0 1 │
│ solo envoy-proxy-node3-6bb4b4cbdf-dmwtr ● 1/1 Running 0 1 │
│ solo solo-deployment-grpc-75bb9c6c55-l7kvt ● 1/1 Running 0 1 │
│ solo solo-deployment-hiero-explorer-6565ccb4cb-9dbw2 ● 1/1 Running 0 1 │
│ solo solo-deployment-importer-dd74fd466-vs4mb ● 1/1 Running 0 1 │
│ solo solo-deployment-monitor-54b8f57db9-fn5qq ● 1/1 Running 0 1 │
│ solo solo-deployment-postgres-postgresql-0 ● 1/1 Running 0 1 │
│ solo solo-deployment-redis-node-0 ● 2/2 Running 0 1 │
│ solo solo-deployment-rest-6d48f8dbfc-plbp2 ● 1/1 Running 0 1 │
│ solo solo-deployment-restjava-5d6c4cb648-r597f ● 1/1 Running 0 1 │
│ solo solo-deployment-web3-55fdfbc7f7-lzhfl ● 1/1 Running 0 1 │
│ solo haproxy-node1-785b9b6f9b-676mr ● 1/1 Running 1 1 │
│ solo haproxy-node2-644b8c76d-v9mg6 ● 1/1 Running 1 1 │
│ solo haproxy-node3-fbffdb64-272t2 ● 1/1 Running 1 1 │
│ solo minio-pool-1-0 ● 2/2 Running 1 1 │
│ solo network-node1-0 ● 5/5 Running 2 1 │
│ solo network-node2-0 ● 5/5 Running 2 1 │
│ solo network-node3-0 ● 5/5 Running 2 1 │
│ solo relay-node1-node2-node3-hedera-json-rpc-relay-ddd4c8d8b-hdlpb ● 1/1 Running 0 1 │
│ solo-cluster console-557956d575-c5qp7 ● 1/1 Running 0 1 │
│ solo-cluster minio-operator-7d575c5f84-xdwwz ● 1/1 Running 0 1 │
│ │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
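With the network destroyed, the kind cluster itself can be deleted for a completely clean slate; optionally remove Solo's home directory too (this wipes configs, keys, and logs):

```shell
# Delete the kind cluster created at the start of this guide.
if command -v kind >/dev/null 2>&1; then
  kind delete cluster -n "${SOLO_CLUSTER_NAME:-solo}" || echo "cluster already deleted"
else
  echo "kind not found on PATH"
fi
# Uncomment to also remove Solo's local state (configs, keys, logs):
# rm -rf ~/.solo
```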
3 - Solo CLI User Manual
Solo Command Line User Manual
Solo has a series of commands to use, and some commands have subcommands. Users can get help information with the following methods:
solo --help
will return the help information for the solo command to show which commands are available.
solo command --help
will return the help information for the specific command to show which options are available, e.g.:
solo account --help
Manage Hedera accounts in solo network
Commands:
account init Initialize system accounts with new keys
account create Creates a new account with a new key and stores the key in th
e Kubernetes secrets, if you supply no key one will be genera
ted for you, otherwise you may supply either a ECDSA or ED255
19 private key
account update Updates an existing account with the provided info, if you wa
nt to update the private key, you can supply either ECDSA or
ED25519 but not both
account get Gets the account info including the current amount of HBAR
Options:
--dev Enable developer mode [boolean]
--force-port-forward Force port forward to access the network services
[boolean]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
solo command subcommand --help
will return the help information for the specific subcommand to show which options are available, e.g.:
solo account create --help
Creates a new account with a new key and stores the key in the Kubernetes secret
s, if you supply no key one will be generated for you, otherwise you may supply
either a ECDSA or ED25519 private key
Options:
--dev Enable developer mode [boolean]
--force-port-forward Force port forward to access the network services
[boolean]
--hbar-amount Amount of HBAR to add [number]
--create-amount Amount of new account to create [number]
--ecdsa-private-key ECDSA private key for the Hedera account [string]
-d, --deployment The name the user will reference locally to link to
a deployment [string]
--ed25519-private-key ED25519 private key for the Hedera account [string]
--generate-ecdsa-key Generate ECDSA private key for the Hedera account
[boolean]
--set-alias Sets the alias for the Hedera account when it is cr
eated, requires --ecdsa-private-key [boolean]
-c, --cluster-ref The cluster reference that will be used for referen
cing the Kubernetes cluster and stored in the local
and remote configuration for the deployment. For
commands that take multiple clusters they can be se
parated by commas. [string]
-h, --help Show help [boolean]
-v, --version Show version number [boolean]
For more information see: Solo CLI Commands
4 - Solo CLI Commands
Solo Command Reference
Table of Contents
Root Help Output
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js --help
Select a command
Usage:
solo <command> [options]
Commands:
init Initialize local environment
account Manage Hedera accounts in solo network
cluster-ref Manage solo testing cluster
network Manage solo network deployment
node Manage Hedera platform node in solo network
relay Manage JSON RPC relays in solo network
mirror-node Manage Hedera Mirror Node in solo network
explorer Manage Explorer in solo network
deployment Manage solo network deployment
block Manage block related components in solo network
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
init
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js init --help
init
Initialize local environment
Options:
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-u, --user Optional user name used for [string]
local configuration. Only
accepts letters and numbers.
Defaults to the username
provided by the OS
-v, --version Show version number [boolean]
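As a minimal sketch of the command above, assuming the default cache directory and OS username are acceptable, initializing the local environment looks like:

```shell
# Initialize Solo's local configuration and cache directory;
# -q skips interactive confirmation prompts.
solo init -q
```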
account
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js account --help
Select an account command
account
Manage Hedera accounts in solo network
Commands:
account init Initialize system accounts with new keys
account create Creates a new account with a new key and stores the key in the Kubernetes secrets, if you supply no key one will be generated for you, otherwise you may supply either a ECDSA or ED25519 private key
account update Updates an existing account with the provided info, if you want to update the private key, you can supply either ECDSA or ED25519 but not both
account get Gets the account info including the current amount of HBAR
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
account init
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js account init --help
account init
Initialize system accounts with new keys
Options:
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-v, --version Show version number [boolean]
account create
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js account create --help
account create
Creates a new account with a new key and stores the key in the Kubernetes secrets, if you supply no key one will be generated for you, otherwise you may supply either a ECDSA or ED25519 private key
Options:
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--create-amount Amount of new account to [number] [default: 1]
create
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--ecdsa-private-key ECDSA private key for the [string]
Hedera account
--ed25519-private-key ED25519 private key for the [string]
Hedera account
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--generate-ecdsa-key Generate ECDSA private key for [boolean] [default: false]
the Hedera account
--hbar-amount Amount of HBAR to add [number] [default: 100]
--set-alias Sets the alias for the Hedera [boolean] [default: false]
account when it is created,
requires --ecdsa-private-key
-v, --version Show version number [boolean]
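A typical invocation might look like the following; "solo-deployment" is a placeholder deployment name, and the key value is illustrative:

```shell
# Create an account with an auto-generated key and the default
# 100 HBAR balance.
solo account create --deployment solo-deployment

# Create an account from a supplied ECDSA private key and set it
# as the account alias (requires --ecdsa-private-key).
solo account create --deployment solo-deployment \
  --ecdsa-private-key <hex-encoded-key> --set-alias
```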
account update
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js account update --help
account update
Updates an existing account with the provided info, if you want to update the private key, you can supply either ECDSA or ED25519 but not both
Options:
--account-id The Hedera account id, e.g.: [string]
0.0.1001
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--ecdsa-private-key ECDSA private key for the [string]
Hedera account
--ed25519-private-key ED25519 private key for the [string]
Hedera account
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--hbar-amount Amount of HBAR to add [number] [default: 100]
-v, --version Show version number [boolean]
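For example, to top up an existing account, one might run the following; the deployment name and account id are placeholders:

```shell
# Add 50 HBAR to account 0.0.1001 in the "solo-deployment" deployment.
solo account update --deployment solo-deployment \
  --account-id 0.0.1001 --hbar-amount 50
```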
account get
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js account get --help
account get
Gets the account info including the current amount of HBAR
Options:
--account-id The Hedera account id, e.g.: [string]
0.0.1001
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--private-key Show private key information [boolean] [default: false]
-v, --version Show version number [boolean]
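A sketch of fetching account info, with placeholder deployment name and account id:

```shell
# Show account info for 0.0.1001; add --private-key to also
# display the stored private key.
solo account get --deployment solo-deployment --account-id 0.0.1001
```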
cluster-ref
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js cluster-ref --help
Select a context command
cluster-ref
Manage solo testing cluster
Commands:
cluster-ref connect associates a cluster reference to a k8s context
cluster-ref disconnect dissociates a cluster reference from a k8s context
cluster-ref list List all available clusters
cluster-ref info Get cluster info
cluster-ref setup Setup cluster with shared components
cluster-ref reset Uninstall shared components from cluster
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
cluster-ref connect
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js cluster-ref connect --help
Missing required argument: cluster-ref
cluster-ref connect
associates a cluster reference to a k8s context
Options:
-c, --cluster-ref The cluster reference that [string] [required]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--context The Kubernetes context name to [string]
be used
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
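For example, to associate a cluster reference with an existing kubectl context (here "kind-solo" is a placeholder for both the reference name and the context name):

```shell
# Map the cluster reference "kind-solo" to the kubectl context
# of the same name.
solo cluster-ref connect --cluster-ref kind-solo --context kind-solo
```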
cluster-ref disconnect
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js cluster-ref disconnect --help
Missing required argument: cluster-ref
cluster-ref disconnect
dissociates a cluster reference from a k8s context
Options:
-c, --cluster-ref The cluster reference that [string] [required]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
cluster-ref list
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js cluster-ref list --help
cluster-ref list
List all available clusters
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
cluster-ref info
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js cluster-ref info --help
Missing required argument: cluster-ref
cluster-ref info
Get cluster info
Options:
-c, --cluster-ref The cluster reference that [string] [required]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
cluster-ref setup
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js cluster-ref setup --help
cluster-ref setup
Setup cluster with shared components
Options:
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-s, --cluster-setup-namespace Cluster Setup Namespace [string] [default: "solo-setup"]
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--minio Deploy minio operator [boolean] [default: true]
--prometheus-stack Deploy prometheus stack [boolean] [default: false]
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
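A minimal setup run, assuming a cluster reference named "kind-solo" has already been connected:

```shell
# Install shared components (the minio operator by default) into
# the "solo-setup" namespace of the referenced cluster.
solo cluster-ref setup --cluster-ref kind-solo
```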
cluster-ref reset
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js cluster-ref reset --help
cluster-ref reset
Uninstall shared components from cluster
Options:
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-s, --cluster-setup-namespace Cluster Setup Namespace [string] [default: "solo-setup"]
--dev Enable developer mode [boolean] [default: false]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
network
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js network --help
Select a chart command
network
Manage solo network deployment
Commands:
network deploy Deploy solo network. Requires the chart `solo-cluster-setup` to have been installed in the cluster. If it hasn't the following command can be ran: `solo cluster-ref setup`
network destroy Destroy solo network. If both --delete-pvcs and --delete-secrets are set to true, the namespace will be deleted.
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
network deploy
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js network deploy --help
network deploy
Deploy solo network. Requires the chart `solo-cluster-setup` to have been installed in the cluster. If it hasn't the following command can be ran: `solo cluster-ref setup`
Options:
--api-permission-properties api-permission.properties file [string] [default: "templates/api-permission.properties"]
for node
--app Testing app name [string] [default: "HederaNode.jar"]
--application-env the application.env file for [string] [default: "templates/application.env"]
the node provides environment
variables to the
solo-container to be used when
the hedera platform is started
--application-properties application.properties file [string] [default: "templates/application.properties"]
for node
--aws-bucket name of aws storage bucket [string]
--aws-bucket-prefix path prefix of aws storage [string]
bucket
--aws-endpoint aws storage endpoint URL [string]
--aws-write-access-key aws storage access key for [string]
write access
--aws-write-secrets aws storage secret key for [string]
write access
--backup-bucket name of bucket for backing up [string]
state files
--backup-endpoint backup storage endpoint URL [string]
--backup-provider backup storage service [string] [default: "GCS"]
provider, GCS or AWS
--backup-region backup storage region [string] [default: "us-central1"]
--backup-write-access-key backup storage access key for [string]
write access
--backup-write-secrets backup storage secret key for [string]
write access
--bootstrap-properties bootstrap.properties file for [string] [default: "templates/bootstrap.properties"]
node
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping for
the(e.g. node0=domain.name
where key is node alias and
value is domain name)with
multiple nodes comma seperated
--envoy-ips IP mapping where key = value [string]
is node alias and static ip
for envoy proxy, (e.g.:
--envoy-ips
node1=127.0.0.1,node2=127.0.0.1)
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gcs-bucket name of gcs storage bucket [string]
--gcs-bucket-prefix path prefix of google storage [string]
bucket
--gcs-endpoint gcs storage endpoint URL [string]
--gcs-write-access-key gcs storage access key for [string]
write access
--gcs-write-secrets gcs storage secret key for [string]
write access
--genesis-throttles-file throttles.json file used [string]
during network genesis
--grpc-tls-cert TLS Certificate path for the [string]
gRPC (e.g.
"node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)
--grpc-tls-key TLS Certificate key path for [string]
the gRPC (e.g.
"node1=/Users/username/node1-grpc.key" with multiple nodes comma seperated)
--grpc-web-tls-cert TLS Certificate path for gRPC [string]
Web (e.g.
"node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)
--grpc-web-tls-key TLC Certificate key path for [string]
gRPC Web (e.g.
"node1=/Users/username/node1-grpc-web.key" with multiple nodes comma seperated)
--haproxy-ips IP mapping where key = value [string]
is node alias and static ip
for haproxy, (e.g.:
--haproxy-ips
node1=127.0.0.1,node2=127.0.0.1)
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--load-balancer Enable load balancer for [boolean] [default: false]
network node proxies
--log4j2-xml log4j2.xml file for node [string] [default: "templates/log4j2.xml"]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
--profile Resource profile (local | tiny [string] [default: "local"]
| small | medium | large)
--profile-file Resource profile definition [string] [default: "profiles/custom-spec.yaml"]
(e.g. custom-spec.yaml)
--prometheus-svc-monitor Enable prometheus service [boolean] [default: false]
monitor for the network nodes
--pvcs Enable persistent volume [boolean] [default: false]
claims to store data outside
the pod, required for node add
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--settings-txt settings.txt file for node [string] [default: "templates/settings.txt"]
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
--storage-type storage type for saving stream [default: "minio_only"]
files, available options are
minio_only, aws_only,
gcs_only, aws_and_gcs
-f, --values-file Comma separated chart values [string]
file paths for each cluster
(e.g.
values.yaml,cluster-1=./a/b/values1.yaml,cluster-2=./a/b/values2.yaml)
-v, --version Show version number [boolean]
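As a sketch, deploying a three-node network might look like the following; "solo-deployment" is a placeholder deployment name, and `solo cluster-ref setup` is assumed to have been run already:

```shell
# Deploy consensus nodes node1, node2, and node3.
solo network deploy --deployment solo-deployment \
  --node-aliases node1,node2,node3
```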
network destroy
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js network destroy --help
network destroy
Destroy solo network. If both --delete-pvcs and --delete-secrets are set to true, the namespace will be deleted.
Options:
--delete-pvcs Delete the persistent volume [boolean] [default: false]
claims. If both --delete-pvcs
and --delete-secrets are
set to true, the namespace
will be deleted.
--delete-secrets Delete the network secrets. If [boolean] [default: false]
both --delete-pvcs and
--delete-secrets are set to
true, the namespace will be
deleted.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--enable-timeout enable time out for running a [boolean] [default: false]
command
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
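For example, to tear everything down including the namespace (per the note above, passing both flags deletes the namespace); the deployment name is a placeholder:

```shell
# Destroy the network, its PVCs, its secrets, and the namespace.
solo network destroy --deployment solo-deployment \
  --delete-pvcs --delete-secrets
```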
node
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node --help
Select a node command
node
Manage Hedera platform node in solo network
Commands:
node setup Setup node with a specific version of Hedera platform
node start Start a node
node stop Stop a node
node freeze Freeze all nodes of the network
node restart Restart all nodes of the network
node keys Generate node keys
node refresh Reset and restart a node
node logs Download application logs from the network nodes and stores them in <SOLO_LOGS_DIR>/<namespace>/<podName>/ directory
node states Download hedera states from the network nodes and stores them in <SOLO_LOGS_DIR>/<namespace>/<podName>/ directory
node add Adds a node with a specific version of Hedera platform
node add-prepare Prepares the addition of a node with a specific version of Hedera platform
node add-submit-transactions Submits NodeCreateTransaction and Upgrade transactions to the network nodes
node add-execute Executes the addition of a previously prepared node
node update Update a node with a specific version of Hedera platform
node update-prepare Prepare the deployment to update a node with a specific version of Hedera platform
node update-submit-transactions Submit transactions for updating a node with a specific version of Hedera platform
node update-execute Executes the updating of a node with a specific version of Hedera platform
node delete Delete a node with a specific version of Hedera platform
node delete-prepare Prepares the deletion of a node with a specific version of Hedera platform
node delete-submit-transactions Submits transactions to the network nodes for deleting a node
node delete-execute Executes the deletion of a previously prepared node
node prepare-upgrade Prepare the network for a Freeze Upgrade operation
node freeze-upgrade Performs a Freeze Upgrade operation with on the network after it has been prepared with prepare-upgrade
node upgrade upgrades all nodes on the network
node upgrade-prepare Prepare the deployment to upgrade network
node upgrade-submit-transactions Submit transactions for upgrading network
node upgrade-execute Executes the upgrading the network
node download-generated-files Downloads the generated files from an existing node
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
node setup
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node setup --help
Missing required argument: deployment
node setup
Setup node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--admin-public-keys Comma separated list of DER [string]
encoded ED25519 public keys
and must match the order of
the node aliases
--app Testing app name [string] [default: "HederaNode.jar"]
--app-config json config file of testing [string]
app
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping for
the(e.g. node0=domain.name
where key is node alias and
value is domain name)with
multiple nodes comma seperated
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--local-build-path path of hedera local repo [string]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
-v, --version Show version number [boolean]
node start
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node start --help
Missing required argument: deployment
node start
Start a node
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--app Testing app name [string] [default: "HederaNode.jar"]
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--stake-amounts The amount to be staked in the [string]
same order you list the node
aliases with multiple node
staked values comma seperated
--state-file A zipped state file to be used [string]
for the network
-v, --version Show version number [boolean]
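The usual sequence is `node setup` followed by `node start`. A sketch, with placeholder deployment name and the default release tag shown above:

```shell
# Stage the Hedera platform software on the nodes, then start them.
solo node setup --deployment solo-deployment --release-tag v0.60.1
solo node start --deployment solo-deployment
```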
node stop
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node stop --help
Missing required argument: deployment
node stop
Stop a node
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
node freeze
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node freeze --help
Missing required argument: deployment
node freeze
Freeze all nodes of the network
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
node restart
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node restart --help
Missing required argument: deployment
node restart
Restart all nodes of the network
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
node keys
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node keys --help
Missing required argument: deployment
node keys
Generate node keys
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
-n, --namespace Namespace [string]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--tls-keys Generate gRPC TLS keys for [boolean] [default: false]
nodes
-v, --version Show version number [boolean]
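For example, to generate both key types for every node (an empty --node-aliases means all nodes); the deployment name is a placeholder:

```shell
# Generate gossip keys and gRPC TLS keys for all nodes.
solo node keys --deployment solo-deployment --gossip-keys --tls-keys
```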
node refresh
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node refresh --help
Missing required argument: deployment
node refresh
Reset and restart a node
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping for
the(e.g. node0=domain.name
where key is node alias and
value is domain name)with
multiple nodes comma seperated
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--local-build-path path of hedera local repo [string]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
-v, --version Show version number [boolean]
node logs
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node logs --help
Missing required arguments: deployment, node-aliases
node logs
Download application logs from the network nodes and stores them in <SOLO_LOGS_DIR>/<namespace>/<podName>/ directory
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
-i, --node-aliases Comma separated node aliases [string] [required]
(empty means all nodes)
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
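A sketch of downloading logs from two nodes into the <SOLO_LOGS_DIR>/<namespace>/<podName>/ directory described above; names are placeholders:

```shell
# Fetch application logs from node1 and node2.
solo node logs --deployment solo-deployment --node-aliases node1,node2
```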
node states
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node states --help
Missing required arguments: deployment, node-aliases
node states
Download hedera states from the network nodes and stores them in <SOLO_LOGS_DIR>/<namespace>/<podName>/ directory
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
-i, --node-aliases Comma separated node aliases [string] [required]
(empty means all nodes)
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
node add
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node add --help
Missing required argument: deployment
node add
Adds a node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--admin-key Admin key [string] [default: "***"]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping for
the(e.g. node0=domain.name
where key is node alias and
value is domain name)with
multiple nodes comma seperated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--envoy-ips IP mapping where key = value [string]
is node alias and static ip
for envoy proxy, (e.g.:
--envoy-ips
node1=127.0.0.1,node2=127.0.0.1)
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-endpoints Comma separated gossip [string]
endpoints of the node (e.g.
first one is internal, second
one is external)
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--grpc-endpoints Comma separated gRPC endpoints [string]
of the node (at most 8)
--grpc-tls-cert TLS Certificate path for the [string]
gRPC (e.g.
"node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)
--grpc-tls-key TLS Certificate key path for [string]
the gRPC (e.g.
"node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)
--grpc-web-tls-cert TLS Certificate path for gRPC [string]
Web (e.g.
"node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)
--grpc-web-tls-key TLS Certificate key path for [string]
gRPC Web (e.g.
"node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)
--haproxy-ips IP mapping where key is node [string]
alias and value is static IP
for haproxy (e.g.:
--haproxy-ips
node1=127.0.0.1,node2=127.0.0.1)
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--local-build-path path of the Hedera local repo [string]
--pvcs Enable persistent volume [boolean] [default: false]
claims to store data outside
the pod, required for node add
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
--tls-keys Generate gRPC TLS keys for [boolean] [default: false]
nodes
-v, --version Show version number [boolean]
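For example, a node can be added in one shot with a command like the following. This is an illustrative sketch: the deployment name `solo-deployment` is a placeholder, and only flags documented above are used.

```shell
# Add a node to the deployment, generating gossip and gRPC TLS keys.
# --pvcs is enabled because persistent volume claims are required for
# node add (see the option description above).
solo node add --deployment solo-deployment \
  --gossip-keys --tls-keys --pvcs
```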
node add-prepare
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node add-prepare --help
Missing required arguments: deployment, output-dir
node add-prepare
Prepares the addition of a node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--output-dir Path to the directory where [string] [required]
the command context will be
saved to
--admin-key Admin key [string] [default: "***"]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-endpoints Comma separated gossip [string]
endpoints of the node (e.g.
first one is internal, second
one is external)
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--grpc-endpoints Comma separated gRPC endpoints [string]
of the node (at most 8)
--grpc-tls-cert TLS Certificate path for the [string]
gRPC (e.g.
"node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)
--grpc-tls-key TLS Certificate key path for [string]
the gRPC (e.g.
"node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)
--grpc-web-tls-cert TLS Certificate path for gRPC [string]
Web (e.g.
"node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)
--grpc-web-tls-key TLS Certificate key path for [string]
gRPC Web (e.g.
"node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--local-build-path path of the Hedera local repo [string]
--pvcs Enable persistent volume [boolean] [default: false]
claims to store data outside
the pod, required for node add
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
--tls-keys Generate gRPC TLS keys for [boolean] [default: false]
nodes
-v, --version Show version number [boolean]
node add-submit-transactions
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node add-submit-transactions --help
Missing required arguments: deployment, input-dir
node add-submit-transactions
Submits NodeCreateTransaction and Upgrade transactions to the network nodes
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--input-dir Path to the directory where [string] [required]
the command context will be
loaded from
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-endpoints Comma separated gossip [string]
endpoints of the node (e.g.
first one is internal, second
one is external)
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--grpc-endpoints Comma separated gRPC endpoints [string]
of the node (at most 8)
--grpc-tls-cert TLS Certificate path for the [string]
gRPC (e.g.
"node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)
--grpc-tls-key TLS Certificate key path for [string]
the gRPC (e.g.
"node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)
--grpc-web-tls-cert TLS Certificate path for gRPC [string]
Web (e.g.
"node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)
--grpc-web-tls-key TLS Certificate key path for [string]
gRPC Web (e.g.
"node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--local-build-path path of the Hedera local repo [string]
--pvcs Enable persistent volume [boolean] [default: false]
claims to store data outside
the pod, required for node add
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
--tls-keys Generate gRPC TLS keys for [boolean] [default: false]
nodes
-v, --version Show version number [boolean]
node add-execute
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node add-execute --help
Missing required arguments: deployment, input-dir
node add-execute
Executes the addition of a previously prepared node
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--input-dir Path to the directory where [string] [required]
the command context will be
loaded from
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
--envoy-ips IP mapping where key is node [string]
alias and value is static IP
for the envoy proxy (e.g.:
--envoy-ips
node1=127.0.0.1,node2=127.0.0.1)
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-endpoints Comma separated gossip [string]
endpoints of the node (e.g.
first one is internal, second
one is external)
--gossip-keys Generate gossip keys for nodes [boolean] [default: false]
--grpc-endpoints Comma separated gRPC endpoints [string]
of the node (at most 8)
--grpc-tls-cert TLS Certificate path for the [string]
gRPC (e.g.
"node1=/Users/username/node1-grpc.cert" with multiple nodes comma separated)
--grpc-tls-key TLS Certificate key path for [string]
the gRPC (e.g.
"node1=/Users/username/node1-grpc.key" with multiple nodes comma separated)
--grpc-web-tls-cert TLS Certificate path for gRPC [string]
Web (e.g.
"node1=/Users/username/node1-grpc-web.cert" with multiple nodes comma separated)
--grpc-web-tls-key TLS Certificate key path for [string]
gRPC Web (e.g.
"node1=/Users/username/node1-grpc-web.key" with multiple nodes comma separated)
--haproxy-ips IP mapping where key is node [string]
alias and value is static IP
for haproxy (e.g.:
--haproxy-ips
node1=127.0.0.1,node2=127.0.0.1)
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--local-build-path path of the Hedera local repo [string]
--pvcs Enable persistent volume [boolean] [default: false]
claims to store data outside
the pod, required for node add
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
--tls-keys Generate gRPC TLS keys for [boolean] [default: false]
nodes
-v, --version Show version number [boolean]
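The three staged commands above can be chained to perform the same addition in separate steps, sharing one command-context directory. A sketch, with placeholder deployment name and paths:

```shell
# 1. Prepare the addition and save the command context.
solo node add-prepare --deployment solo-deployment \
  --gossip-keys --tls-keys --output-dir ./node-add-ctx
# 2. Submit NodeCreateTransaction and upgrade transactions.
solo node add-submit-transactions --deployment solo-deployment \
  --input-dir ./node-add-ctx
# 3. Execute the addition of the prepared node.
solo node add-execute --deployment solo-deployment \
  --input-dir ./node-add-ctx
```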
node update
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node update --help
Missing required arguments: deployment, node-alias
node update
Update a node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--node-alias Node alias (e.g. node99) [string] [required]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-endpoints Comma separated gossip [string]
endpoints of the node (e.g.
first one is internal, second
one is external)
--gossip-private-key path and file name of the [string]
private key for signing gossip
in PEM key format to be used
--gossip-public-key path and file name of the [string]
public key for signing gossip
in PEM key format to be used
--grpc-endpoints Comma separated gRPC endpoints [string]
of the node (at most 8)
--local-build-path path of the Hedera local repo [string]
--new-account-number new account number for node [string]
update transaction
--new-admin-key new admin key for the Hedera [string]
account
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
--tls-private-key path and file name of the [string]
private TLS key to be used
--tls-public-key path and file name of the [string]
public TLS key to be used
-v, --version Show version number [boolean]
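As an illustration, an existing node's account number could be rotated with a single command (deployment name, node alias, and account number below are placeholders):

```shell
# Update node2 in place, assigning a new account number.
solo node update --deployment solo-deployment --node-alias node2 \
  --new-account-number 0.0.1234
```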
node update-prepare
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node update-prepare --help
Missing required arguments: deployment, output-dir, node-alias
node update-prepare
Prepare the deployment to update a node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--node-alias Node alias (e.g. node99) [string] [required]
--output-dir Path to the directory where [string] [required]
the command context will be
saved to
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-endpoints Comma separated gossip [string]
endpoints of the node (e.g.
first one is internal, second
one is external)
--gossip-private-key path and file name of the [string]
private key for signing gossip
in PEM key format to be used
--gossip-public-key path and file name of the [string]
public key for signing gossip
in PEM key format to be used
--grpc-endpoints Comma separated gRPC endpoints [string]
of the node (at most 8)
--local-build-path path of the Hedera local repo [string]
--new-account-number new account number for node [string]
update transaction
--new-admin-key new admin key for the Hedera [string]
account
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
--tls-private-key path and file name of the [string]
private TLS key to be used
--tls-public-key path and file name of the [string]
public TLS key to be used
-v, --version Show version number [boolean]
node update-submit-transactions
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node update-submit-transactions --help
Missing required arguments: deployment, input-dir
node update-submit-transactions
Submit transactions for updating a node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--input-dir Path to the directory where [string] [required]
the command context will be
loaded from
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-endpoints Comma separated gossip [string]
endpoints of the node (e.g.
first one is internal, second
one is external)
--grpc-endpoints Comma separated gRPC endpoints [string]
of the node (at most 8)
--local-build-path path of the Hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
node update-execute
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node update-execute --help
Missing required arguments: deployment, input-dir
node update-execute
Executes the updating of a node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--input-dir Path to the directory where [string] [required]
the command context will be
loaded from
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--gossip-endpoints Comma separated gossip [string]
endpoints of the node (e.g.
first one is internal, second
one is external)
--grpc-endpoints Comma separated gRPC endpoints [string]
of the node (at most 8)
--local-build-path path of the Hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
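The staged variant splits the same update into prepare, submit, and execute steps that share one context directory. A sketch with placeholder names and paths:

```shell
# Prepare, submit, and execute an update of node2 in three steps.
solo node update-prepare --deployment solo-deployment \
  --node-alias node2 --output-dir ./node-update-ctx
solo node update-submit-transactions --deployment solo-deployment \
  --input-dir ./node-update-ctx
solo node update-execute --deployment solo-deployment \
  --input-dir ./node-update-ctx
```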
node delete
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node delete --help
Missing required arguments: deployment, node-alias
node delete
Delete a node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--node-alias Node alias (e.g. node99) [string] [required]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--local-build-path path of the Hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
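For example, removing a node takes only the deployment name and node alias (both placeholders here):

```shell
# Remove node3 from the deployment.
solo node delete --deployment solo-deployment --node-alias node3
```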
node delete-prepare
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node delete-prepare --help
Missing required arguments: deployment, node-alias, output-dir
node delete-prepare
Prepares the deletion of a node with a specific version of Hedera platform
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--node-alias Node alias (e.g. node99) [string] [required]
--output-dir Path to the directory where [string] [required]
the command context will be
saved to
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--local-build-path path of the Hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
node delete-submit-transactions
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node delete-submit-transactions --help
Missing required arguments: deployment, node-alias, input-dir
node delete-submit-transactions
Submits transactions to the network nodes for deleting a node
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--input-dir Path to the directory where [string] [required]
the command context will be
loaded from
--node-alias Node alias (e.g. node99) [string] [required]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--local-build-path path of the Hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
node delete-execute
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node delete-execute --help
Missing required arguments: deployment, node-alias, input-dir
node delete-execute
Executes the deletion of a previously prepared node
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--input-dir Path to the directory where [string] [required]
the command context will be
loaded from
--node-alias Node alias (e.g. node99) [string] [required]
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
--domain-names Custom domain names for [string]
consensus nodes mapping (e.g.
node0=domain.name where key is
node alias and value is domain
name) with multiple nodes
comma separated
--endpoint-type Endpoint type (IP or FQDN) [string] [default: "FQDN"]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
--local-build-path path of the Hedera local repo [string]
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
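As with add and update, a staged deletion shares one context directory across the three steps; note that delete-submit-transactions and delete-execute also require the node alias. A sketch with placeholder values:

```shell
solo node delete-prepare --deployment solo-deployment \
  --node-alias node3 --output-dir ./node-delete-ctx
solo node delete-submit-transactions --deployment solo-deployment \
  --node-alias node3 --input-dir ./node-delete-ctx
solo node delete-execute --deployment solo-deployment \
  --node-alias node3 --input-dir ./node-delete-ctx
```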
node prepare-upgrade
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node prepare-upgrade --help
Missing required argument: deployment
node prepare-upgrade
Prepare the network for a Freeze Upgrade operation
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
-v, --version Show version number [boolean]
node freeze-upgrade
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node freeze-upgrade --help
Missing required argument: deployment
node freeze-upgrade
Performs a Freeze Upgrade operation on the network after it has been prepared with prepare-upgrade
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
-v, --version Show version number [boolean]
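prepare-upgrade and freeze-upgrade run as a pair: the first stages the Freeze Upgrade, the second performs it. For example (placeholder deployment name):

```shell
solo node prepare-upgrade --deployment solo-deployment
solo node freeze-upgrade --deployment solo-deployment
```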
node upgrade
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node upgrade --help
Missing required arguments: deployment, release-tag, upgrade-zip-file, upgrade-version
node upgrade
Upgrades all nodes on the network
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
-t, --release-tag Release tag to be used (e.g. [string] [required]
v0.60.1)
--upgrade-version Version to be used for the [string] [required]
upgrade
--upgrade-zip-file A zipped file used for network [string] [required]
upgrade
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--local-build-path path of the Hedera local repo [string]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
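A one-shot upgrade of every node might look like the following sketch, where the version tag and archive path are placeholders:

```shell
# Upgrade all nodes using a prebuilt upgrade archive.
solo node upgrade --deployment solo-deployment \
  --release-tag v0.60.1 --upgrade-version v0.60.1 \
  --upgrade-zip-file ./upgrade.zip
```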
node upgrade-prepare
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node upgrade-prepare --help
Missing required arguments: deployment, release-tag, upgrade-zip-file, output-dir
node upgrade-prepare
Prepare the deployment to upgrade the network
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--output-dir Path to the directory where [string] [required]
the command context will be
saved to
-t, --release-tag Release tag to be used (e.g. [string] [required]
v0.60.1)
--upgrade-zip-file A zipped file used for network [string] [required]
upgrade
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--local-build-path path of the Hedera local repo [string]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
node upgrade-submit-transactions
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node upgrade-submit-transactions --help
Missing required arguments: deployment, release-tag, input-dir
node upgrade-submit-transactions
Submit transactions for upgrading the network
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--input-dir Path to the directory where [string] [required]
the command context will be
loaded from
-t, --release-tag Release tag to be used (e.g. [string] [required]
v0.60.1)
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts)
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--local-build-path path of hedera local repo [string]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
node upgrade-execute
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node upgrade-execute --help
Missing required arguments: deployment, release-tag, input-dir
node upgrade-execute
Executes the upgrading the network
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--input-dir Path to the directory where [string] [required]
the command context will be
loaded from
-t, --release-tag Release tag to be used (e.g. [string] [required]
v0.60.1)
--app Testing app name [string] [default: "HederaNode.jar"]
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
--debug-node-alias Enable default jvm debug port [string]
(5005) for the given node id
--dev Enable developer mode [boolean] [default: false]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--local-build-path path of hedera local repo [string]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
-v, --version Show version number [boolean]
node download-generated-files
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js node download-generated-files --help
Missing required argument: deployment
node download-generated-files
Downloads the generated files from an existing node
Options:
-d, --deployment The name the user will [string] [required]
reference locally to link to a
deployment
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-t, --release-tag Release tag to be used (e.g. [string] [default: "v0.60.1"]
v0.60.1)
-v, --version Show version number [boolean]
relay
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js relay --help
Select a relay command
relay
Manage JSON RPC relays in solo network
Commands:
relay deploy Deploy a JSON RPC relay
relay destroy Destroy JSON RPC relay
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
relay deploy
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js relay deploy --help
relay deploy
Deploy a JSON RPC relay
Options:
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--domain-name Custom domain name [string]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-l, --ledger-id Ledger ID (a.k.a. Chain ID) [string] [default: "298"]
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
--operator-id Operator ID [string]
--operator-key Operator Key [string]
--profile Resource profile (local | tiny [string] [default: "local"]
| small | medium | large)
--profile-file Resource profile definition [string] [default: "profiles/custom-spec.yaml"]
(e.g. custom-spec.yaml)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--relay-release Relay release tag to be used [string] [default: "v0.67.0"]
(e.g. v0.48.0)
--replica-count Replica count [number] [default: 1]
-f, --values-file Comma separated chart values [string]
file
-v, --version Show version number [boolean]
relay destroy
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js relay destroy --help
relay destroy
Destroy JSON RPC relay
Options:
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-i, --node-aliases Comma separated node aliases [string]
(empty means all nodes)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
mirror-node
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js mirror-node --help
Select a mirror-node command
mirror-node
Manage Hedera Mirror Node in solo network
Commands:
mirror-node deploy Deploy mirror-node and its components
mirror-node destroy Destroy mirror-node components and database
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
mirror-node deploy
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js mirror-node deploy --help
mirror-node deploy
Deploy mirror-node and its components
Options:
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--domain-name Custom domain name [string]
--enable-ingress enable ingress on the [boolean] [default: false]
component/pod
--external-database-host Use to provide the external [string]
database host if the '
--use-external-database ' is
passed
--external-database-owner-password Use to provide the external [string]
database owner's password if
the ' --use-external-database
' is passed
--external-database-owner-username Use to provide the external [string]
database owner's username if
the ' --use-external-database
' is passed
--external-database-read-password Use to provide the external [string]
database readonly user's
password if the '
--use-external-database ' is
passed
--external-database-read-username Use to provide the external [string]
database readonly user's
username if the '
--use-external-database ' is
passed
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--ingress-controller-value-file The value file to use for [string]
ingress controller, defaults
to ""
--mirror-node-version Mirror node chart version [string] [default: "v0.129.1"]
--mirror-static-ip static IP address for the [string]
mirror node
--operator-id Operator ID [string]
--operator-key Operator Key [string]
--pinger Enable Pinger service in the [boolean] [default: false]
Mirror node monitor
--profile Resource profile (local | tiny [string] [default: "local"]
| small | medium | large)
--profile-file Resource profile definition [string] [default: "profiles/custom-spec.yaml"]
(e.g. custom-spec.yaml)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--storage-bucket name of storage bucket for [string]
mirror node importer
--storage-bucket-prefix path prefix of storage bucket [string]
mirror node importer
--storage-bucket-region region of storage bucket [string]
mirror node importer
--storage-endpoint storage endpoint URL for [string]
mirror node importer
--storage-read-access-key storage read access key for [string]
mirror node importer
--storage-read-secrets storage read-secret key for [string]
mirror node importer
--storage-type storage type for saving stream [default: "minio_only"]
files, available options are
minio_only, aws_only,
gcs_only, aws_and_gcs
--use-external-database Set to true if you have an [boolean] [default: false]
external database to use
instead of the database that
the Mirror Node Helm chart
supplies
-f, --values-file Comma separated chart values [string]
file
-v, --version Show version number [boolean]
mirror-node destroy
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js mirror-node destroy --help
mirror-node destroy
Destroy mirror-node components and database
Options:
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
explorer
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js explorer --help
Select a explorer command
explorer
Manage Explorer in solo network
Commands:
explorer deploy Deploy explorer
explorer destroy Destroy explorer
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
explorer deploy
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js explorer deploy --help
explorer deploy
Deploy explorer
Options:
--cache-dir Local cache directory [string] [default: "/Users/user/.solo/cache"]
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-s, --cluster-setup-namespace Cluster Setup Namespace [string] [default: "solo-setup"]
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--domain-name Custom domain name [string]
--enable-explorer-tls Enable Explorer TLS, defaults [boolean] [default: false]
to false, requires certManager
and certManagerCrds, which can
be deployed through
solo-cluster-setup chart or
standalone
--enable-ingress enable ingress on the [boolean] [default: false]
component/pod
--explorer-static-ip The static IP address to use [string]
for the Explorer load
balancer, defaults to ""
--explorer-tls-host-name The host name to use for the [string] [default: "explorer.solo.local"]
Explorer TLS, defaults to
"explorer.solo.local"
--explorer-version Explorer chart version [string] [default: "24.15.0"]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--ingress-controller-value-file The value file to use for [string]
ingress controller, defaults
to ""
--mirror-namespace Namespace to use for the [string]
Mirror Node deployment, a new
one will be created if it does
not exist
-n, --namespace Namespace [string]
--profile Resource profile (local | tiny [string] [default: "local"]
| small | medium | large)
--profile-file Resource profile definition [string] [default: "profiles/custom-spec.yaml"]
(e.g. custom-spec.yaml)
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--solo-chart-version Solo testing chart version [string] [default: "0.53.0"]
--tls-cluster-issuer-type The TLS cluster issuer type to [string] [default: "self-signed"]
use for hedera explorer,
defaults to "self-signed", the
available options are:
"acme-staging", "acme-prod",
or "self-signed"
-f, --values-file Comma separated chart values [string]
file
-v, --version Show version number [boolean]
explorer destroy
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js explorer destroy --help
explorer destroy
Destroy explorer
Options:
--chart-dir Local chart directory path [string]
(e.g. ~/solo-charts/charts
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
-f, --force Force actions even if those [boolean] [default: false]
can be skipped
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
deployment
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js deployment --help
Select a chart command
deployment
Manage solo network deployment
Commands:
deployment create Creates a solo deployment
deployment delete Deletes a solo deployment
deployment list List solo deployments inside a cluster
deployment add-cluster Adds cluster to solo deployments
Options:
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-v, --version Show version number [boolean]
deployment create
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js deployment create --help
deployment create
Creates a solo deployment
Options:
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-n, --namespace Namespace [string]
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
--realm Realm number. Requires [number] [default: 0]
network-node > v61.0 for
non-zero values
--shard Shard number. Requires [number] [default: 0]
network-node > v61.0 for
non-zero values
-v, --version Show version number [boolean]
deployment delete
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js deployment delete --help
deployment delete
Deletes a solo deployment
Options:
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
deployment list
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js deployment list --help
deployment list
List solo deployments inside a cluster
Options:
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
--dev Enable developer mode [boolean] [default: false]
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
deployment add-cluster
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js deployment add-cluster --help
deployment add-cluster
Adds cluster to solo deployments
Options:
-c, --cluster-ref The cluster reference that [string]
will be used for referencing
the Kubernetes cluster and
stored in the local and remote
configuration for the
deployment. For commands that
take multiple clusters they
can be separated by commas.
-d, --deployment The name the user will [string]
reference locally to link to a
deployment
--dev Enable developer mode [boolean] [default: false]
--dns-base-domain Base domain for the DNS is the [string] [default: "cluster.local"]
suffix used to construct the
fully qualified domain name
(FQDN)
--dns-consensus-node-pattern Pattern to construct the [string] [default: "network-{nodeAlias}-svc.{namespace}.svc"]
prefix for the fully qualified
domain name (FQDN) for the
consensus node, the suffix is
provided by the
--dns-base-domain option (ex.
network-{nodeAlias}-svc.{namespace}.svc)
--enable-cert-manager Pass the flag to enable cert [boolean] [default: false]
manager
--force-port-forward Force port forward to access [boolean] [default: true]
the network services
--num-consensus-nodes Used to specify desired number [number]
of consensus nodes for
pre-genesis deployments
-q, --quiet-mode Quiet mode, do not prompt for [boolean] [default: false]
confirmation
-v, --version Show version number [boolean]
block
> @hashgraph/solo@0.36.1 solo
> node --no-deprecation --no-warnings dist/solo.js block --help
*********************************** ERROR *****************************************
Error running Solo CLI, failure occurred: select a block command
***********************************************************************************
5 - FAQ
How can I avoid using genesis keys?
You can run solo account init anytime after solo node start.
Where can I find the default account keys?
It is the well-known default genesis key Link
How do I get the key for an account?
Use the following command to get the account balance and private key of account 0.0.1007:
# get account info of 0.0.1007 and also show the private key
solo account get --account-id 0.0.1007 --deployment solo-deployment --private-key
The output would be similar to the following:
{
"accountId": "0.0.1007",
"privateKey": "302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7",
  "privateKeyRaw": "411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7",
"publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
"balance": 100
}
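If you want to feed that key into a script, the raw key can be pulled out of the JSON with standard tools. A minimal sketch, assuming the output above was saved to a hypothetical file named account.json:

```shell
# Save the illustrative output from above to a file, then extract the
# raw private key with sed (no extra tooling required).
cat > account.json <<'EOF'
{
  "accountId": "0.0.1007",
  "privateKeyRaw": "411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"
}
EOF
sed -n 's/.*"privateKeyRaw": "\([0-9a-f]*\)".*/\1/p' account.json
```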
How to handle error “failed to setup chart repositories”
If during the installation of solo-charts you see an error similar to the following:
failed to setup chart repositories,
repository name (hedera-json-rpc-relay) already exists
You need to remove the old Helm repo manually. First run helm repo list to see the list of Helm repos, then run helm repo remove <repo-name> to remove the conflicting repo.
For example:
helm repo list
NAME URL
haproxy-ingress https://haproxy-ingress.github.io/charts
haproxytech https://haproxytech.github.io/helm-charts
metrics-server https://kubernetes-sigs.github.io/metrics-server/
metallb https://metallb.github.io/metallb
mirror https://hashgraph.github.io/hedera-mirror-node/charts
hedera-json-rpc-relay https://hashgraph.github.io/hedera-json-rpc-relay/charts
Next run the command to remove the repo:
helm repo remove hedera-json-rpc-relay
6 - Using Solo with Mirror Node
Using Solo with mirror node
You can deploy a Solo network with a Mirror Node by running the following commands:
export SOLO_CLUSTER_NAME=solo-cluster
export SOLO_NAMESPACE=solo-e2e
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster-setup
export SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref setup --cluster-setup-namespace "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 2
solo node keys --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2
solo network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2
solo mirror-node deploy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME}
solo explorer deploy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME}
kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 50211:50211 > /dev/null 2>&1 &
kubectl port-forward svc/hiero-explorer -n "${SOLO_NAMESPACE}" 8080:80 > /dev/null 2>&1 &
Then you can access the Explorer at http://localhost:8080
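The kubectl port-forward commands above run in the background. When you are done, you can stop them all at once; a small sketch (note this kills every kubectl port-forward process owned by your user, so skip it if you have other forwards running):

```shell
# Stop all background kubectl port-forward processes; `|| true` keeps
# the command from failing when no matching process exists.
pkill -f "kubectl port-forward" || true
```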
Alternatively, you can use the Task tool to deploy a Solo network with a Mirror Node in a single command: link
Next, you can try to create a few accounts with Solo and see the transactions in the Explorer.
solo account create -n solo-e2e --hbar-amount 100
solo account create -n solo-e2e --hbar-amount 100
Or you can use the Hedera JavaScript SDK examples to create a topic, submit a message, and subscribe to the topic.
7 - Using Solo with Hiero JavaScript SDK
Using Solo with the Hiero JavaScript SDK
First, follow the Solo repository README to install Solo and Docker Desktop. You also need to install the Taskfile tool following the instructions here.
Then launch a local Solo network with the following commands:
# launch a local Solo network with mirror node and hedera explorer
cd examples
task default-with-mirror
Then create a new test account with the following command:
npm run solo-test -- account create --deployment solo-deployment --hbar-amount 100
The output would be similar to the following:
*** new account created ***
-------------------------------------------------------------------------------
{
"accountId": "0.0.1007",
"publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
"balance": 100
}
Then use the following command to get the private key of account 0.0.1007:
npm run solo-test -- account get --account-id 0.0.1007 --deployment solo-deployment --private-key
The output would be similar to the following:
{
"accountId": "0.0.1007",
"privateKey": "302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7",
  "privateKeyRaw": "411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7",
"publicKey": "302a300506032b65700321001d8978e647aca1195c54a4d3d5dc469b95666de14e9b6edde8ed337917b96013",
"balance": 100
}
Next, clone the Hiero JavaScript SDK repository https://github.com/hiero-ledger/hiero-sdk-js.
At the root of the hiero-sdk-js project, create a file named .env and add the following content:
# Hiero Operator Account ID
export OPERATOR_ID="0.0.1007"
# Hiero Operator Private Key
export OPERATOR_KEY="302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"
# Hiero Network
export HEDERA_NETWORK="local-node"
Make sure to assign the value of accountId to OPERATOR_ID and the value of privateKey to OPERATOR_KEY.
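Because the .env file uses plain export statements, you can also source it in a shell session to make the variables available to other tools. A sketch using the illustrative account values from the earlier output:

```shell
# Write the .env file with the illustrative operator values, then
# source it and confirm the variables are visible in this shell.
cat > .env <<'EOF'
export OPERATOR_ID="0.0.1007"
export OPERATOR_KEY="302e020100300506032b657004220420411a561013bceabb8cb83e3dc5558d052b9bd6a8977b5a7348bf9653034a29d7"
export HEDERA_NETWORK="local-node"
EOF
. ./.env
echo "operator=${OPERATOR_ID} network=${HEDERA_NETWORK}"
```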
Then run the following command to try the account creation example:
node examples/create-account.js
The output should be similar to the following:
private key = 302e020100300506032b6570042204208a3c1093c4df779c4aa980d20731899e0b509c7a55733beac41857a9dd3f1193
public key = 302a300506032b6570032100c55adafae7e85608ea893d0e2c77e2dae3df90ba8ee7af2f16a023ba2258c143
account id = 0.0.1009
Or try the topic creation example:
node examples/create-topic.js
The output should be similar to the following:
topic id = 0.0.1008
topic sequence number = 1
You can use Hiero Explorer to check transactions and topics created in the Solo network: http://localhost:8080/localnet/dashboard
Finally, when you are done with Solo, use the following command to tear down the network:
task clean
Retrieving Logs
Logs for solo commands are written under the directory ~/.solo/logs/.
The file solo.log contains the logs for the solo command itself, while hashgraph-sdk.log contains the logs from the Solo client when sending transactions to network nodes.
8 - Hiero Consensus Node Platform Developer
Use Solo with a Local Built Hiero Consensus Node Testing Application
First, clone the Hiero Consensus Node repo https://github.com/hiero-ledger/hiero-consensus-node/ and build the code with ./gradlew assemble. If you need to run multiple nodes with different versions or releases, duplicate the repo into multiple directories, check out the respective version in each, and build the code.
Then you can start the custom-built platform testing application with the following commands:
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo node keys --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
# option 1) if all nodes are running the same version of Hiero app
solo node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data/
# option 2) if each node is running different version of Hiero app, please provide different paths to the local repositories
solo node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path node1=../hiero-consensus-node/hedera-node/data/,node2=<path2>,node3=<path3>
solo node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
Different nodes can run different versions of the Hiero app, as long as each node in the setup command above is given a path to the matching local build.
If you need to provide customized configuration files for the Hedera application, use the following flags with the network deploy command:
- --settings-txt - to provide a custom settings.txt file
- --api-permission-properties - to provide a custom api-permission.properties file
- --bootstrap-properties - to provide a custom bootstrap.properties file
- --application-properties - to provide a custom application.properties file
For example:
solo network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --settings-txt <path-to-settings-txt>
9 - Hiero Consensus Node Execution Developer
Hiero Consensus Node Execution Developer
Once the nodes are up, you can expose various services (using k9s (shift-f) or kubectl port-forward) and access them. Below are the most commonly used services that you may expose.
- where the 'node name' for Node ID = 0 is node1 (node${ nodeId + 1 })
- Node services: network-<node name>-svc
- HAProxy: haproxy-<node name>-svc
# enable port forwarding for haproxy
# node1 grpc port accessed by localhost:50211
kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 50211:50211 &
# node2 grpc port accessed by localhost:51211
kubectl port-forward svc/haproxy-node2-svc -n "${SOLO_NAMESPACE}" 51211:50211 &
# node3 grpc port accessed by localhost:52211
kubectl port-forward svc/haproxy-node3-svc -n "${SOLO_NAMESPACE}" 52211:50211 &
- Envoy Proxy:
envoy-proxy-<node name>-svc
# enable port forwarding for envoy proxy
kubectl port-forward svc/envoy-proxy-node1-svc -n "${SOLO_NAMESPACE}" 8181:8080 &
kubectl port-forward svc/envoy-proxy-node2-svc -n "${SOLO_NAMESPACE}" 8281:8080 &
kubectl port-forward svc/envoy-proxy-node3-svc -n "${SOLO_NAMESPACE}" 8381:8080 &
- Hiero explorer:
solo-deployment-hiero-explorer
# enable port forwarding for hiero explorer, which can be accessed at http://localhost:8080/
kubectl port-forward svc/hiero-explorer -n "${SOLO_NAMESPACE}" 8080:80 &
- JSON RPC Relays
You can deploy JSON RPC Relays for one or more nodes as below:
# deploy relay node first
solo relay deploy -i node1 --deployment "${SOLO_DEPLOYMENT}"
# enable relay for node1
kubectl port-forward svc/relay-node1-hedera-json-rpc-relay -n "${SOLO_NAMESPACE}" 7546:7546 &
10 - Attach JVM Debugger and Retrieve Logs
How to Debug a Hiero Consensus Node
1. Using k9s to access running consensus node logs
Run the command k9s -A in a terminal and select one of the network nodes.
Next, select the root-container and press the key s to enter the shell of the container.
Once inside the shell, change to the directory /opt/hgcapp/services-hedera/HapiApp2.0/ to view all Hedera-related logs and properties files.
[root@network-node1-0 hgcapp]# cd /opt/hgcapp/services-hedera/HapiApp2.0/
[root@network-node1-0 HapiApp2.0]# pwd
/opt/hgcapp/services-hedera/HapiApp2.0
[root@network-node1-0 HapiApp2.0]# ls -ltr data/config/
total 0
lrwxrwxrwx 1 root root 27 Dec 4 02:05 bootstrap.properties -> ..data/bootstrap.properties
lrwxrwxrwx 1 root root 29 Dec 4 02:05 application.properties -> ..data/application.properties
lrwxrwxrwx 1 root root 32 Dec 4 02:05 api-permission.properties -> ..data/api-permission.properties
[root@network-node1-0 HapiApp2.0]# ls -ltr output/
total 1148
-rw-r--r-- 1 hedera hedera 0 Dec 4 02:06 hgcaa.log
-rw-r--r-- 1 hedera hedera 0 Dec 4 02:06 queries.log
drwxr-xr-x 2 hedera hedera 4096 Dec 4 02:06 transaction-state
drwxr-xr-x 2 hedera hedera 4096 Dec 4 02:06 state
-rw-r--r-- 1 hedera hedera 190 Dec 4 02:06 swirlds-vmap.log
drwxr-xr-x 2 hedera hedera 4096 Dec 4 16:01 swirlds-hashstream
-rw-r--r-- 1 hedera hedera 1151446 Dec 4 16:07 swirlds.log
Alternatively, you can use the following command to download hgcaa.log and swirlds.log for further analysis.
# download logs as zip file from node1 and save in default ~/.solo/logs/solo-e2e/<timestamp/
solo node logs -i node1 -n solo-e2e
2. Using IntelliJ remote debug with Solo
NOTE: the hiero-consensus-node path referenced ‘../hiero-consensus-node/hedera-node/data’ may need to be updated based on what directory you are currently in. This also assumes that you have done an assemble/build and the directory contents are up-to-date.
Set up an Intellij run/debug configuration for remote JVM debug as shown in the below screenshot:
If you are working on a Hiero Consensus Node testing application, you should use the following configuration in Intellij:
Set up a breakpoint if necessary.
From the Solo repo directory, run the following commands in a terminal to launch a three-node network. Assume we are trying to attach the debugger to node2.
Make sure the path following local-build-path points to the correct directory.
Example 1: attach jvm debugger to a Hiero Consensus Node
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo # to avoid name collision issues if you ran previously with the same deployment name
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo node keys --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --debug-node-alias node2
solo node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --debug-node-alias node2
Once you see the following message, you can launch the JVM debugger from IntelliJ:
❯ Check all nodes are ACTIVE
Check node: node1,
Check node: node2, Please attach JVM debugger now.
Check node: node3,
The Hiero Consensus Node application should stop at the breakpoint you set:
Example 2: attach a JVM debugger with the node add operation
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo node keys --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --pvcs true
solo node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo node add --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys --debug-node-alias node4 --local-build-path ../hiero-consensus-node/hedera-node/data --pvcs true
Example 3: attach a JVM debugger with the node update operation
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo node keys --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo node update --deployment "${SOLO_DEPLOYMENT}" --node-alias node2 --debug-node-alias node2 --local-build-path ../hiero-consensus-node/hedera-node/data --new-account-number 0.0.7 --gossip-public-key ./s-public-node2.pem --gossip-private-key ./s-private-node2.pem --release-tag v0.59.5
Example 4: attach a JVM debugger with the node delete operation
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo node keys --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo node delete --deployment "${SOLO_DEPLOYMENT}" --node-alias node2 --debug-node-alias node3 --local-build-path ../hiero-consensus-node/hedera-node/data
3. Save and reuse network state files
With the following commands you can save the network state to a file.
# the Hedera nodes must be stopped first
solo node stop --deployment "${SOLO_DEPLOYMENT}"
# download state files to the default location at ~/.solo/logs/<namespace>
solo node states -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
By default, the state files are saved under the ~/.solo directory:
└── logs
├── solo-e2e
│ ├── network-node1-0-state.zip
│ └── network-node2-0-state.zip
└── solo.log
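A quick way to confirm what was captured is to list the archives. This is a sketch using the default location shown above, not a Solo command:

```shell
# List state archives saved by `solo node states` for the solo-e2e namespace.
STATE_DIR="${HOME}/.solo/logs/solo-e2e"
ls -lh "${STATE_DIR}"/*-state.zip 2>/dev/null \
  || echo "no state archives found in ${STATE_DIR}"
```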
Later, you can use the following commands to upload the state files to the network and restart the Hiero Consensus Nodes.
SOLO_CLUSTER_NAME=solo-cluster
SOLO_NAMESPACE=solo-e2e
SOLO_CLUSTER_SETUP_NAMESPACE=solo-setup
SOLO_DEPLOYMENT=solo-deployment
rm -Rf ~/.solo
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"
solo init
solo cluster-ref setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
solo cluster-ref connect --cluster-ref ${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
solo deployment create --namespace "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref ${SOLO_CLUSTER_NAME} --num-consensus-nodes 3
solo node keys --deployment "${SOLO_DEPLOYMENT}" --gossip-keys --tls-keys -i node1,node2,node3
solo network deploy --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo node setup --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3 --local-build-path ../hiero-consensus-node/hedera-node/data
solo node start --deployment "${SOLO_DEPLOYMENT}" -i node1,node2,node3
solo node stop --deployment "${SOLO_DEPLOYMENT}"
solo node states -i node1,node2,node3 --deployment "${SOLO_DEPLOYMENT}"
# start network with pre-existing state files
solo node start --deployment "${SOLO_DEPLOYMENT}" --state-file network-node1-0-state.zip
11 - Using Environment Variables
Environment Variables Used in Solo
You can configure the following environment variables to customize the behavior of Solo.
Table of environment variables
Environment Variable | Description | Default Value |
---|---|---|
SOLO_HOME | Path to the Solo cache and log files | ~/.solo |
SOLO_CHAIN_ID | Chain ID of the Solo network | 298 |
DEFAULT_START_ID_NUMBER | First node account ID of the Solo test network | 0.0.3 |
SOLO_NODE_INTERNAL_GOSSIP_PORT | Internal gossip port number used by the Hedera network | 50111 |
SOLO_NODE_EXTERNAL_GOSSIP_PORT | External gossip port number used by the Hedera network | 50111 |
SOLO_NODE_DEFAULT_STAKE_AMOUNT | Default stake amount for a node | 500 |
SOLO_OPERATOR_ID | Operator account ID for the Solo network | 0.0.2 |
SOLO_OPERATOR_KEY | Operator private key for the Solo network | 302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137 |
SOLO_OPERATOR_PUBLIC_KEY | Operator public key for the Solo network | 302a300506032b65700321000aa8e21064c61eab86e2a9c164565b4e7a9a4146106e0a6cd03a8c395a110e92 |
FREEZE_ADMIN_ACCOUNT | Freeze admin account ID for the Solo network | 0.0.58 |
GENESIS_KEY | Genesis private key for the Solo network | 302e020100300506032b65700422042091132178e72057a1d7528025956fe39b0b847f200ab59b2fdd367017f3087137 |
LOCAL_NODE_START_PORT | Local node start port for solo network | 30212 |
NODE_CLIENT_MIN_BACKOFF | The minimum amount of time to wait between node client retries. | 1000 |
NODE_CLIENT_MAX_BACKOFF | The maximum amount of time to wait between node client retries. | 1000 |
NODE_CLIENT_REQUEST_TIMEOUT | The period of time a transaction or query request will be retried after a "busy" network response. | 600000 |
NODE_COPY_CONCURRENT | The number of concurrent threads to use when copying files to a node. | 4 |
PODS_RUNNING_MAX_ATTEMPTS | The maximum number of attempts to check if pods are running. | 900 |
PODS_RUNNING_DELAY | The interval between attempts to check if pods are running, in milliseconds. | 1000 |
NETWORK_NODE_ACTIVE_MAX_ATTEMPTS | The maximum number of attempts to check if network nodes are active. | 300 |
NETWORK_NODE_ACTIVE_DELAY | The interval between attempts to check if network nodes are active, in milliseconds. | 1000 |
NETWORK_NODE_ACTIVE_TIMEOUT | The period of time to wait for network nodes to become active, in milliseconds. | 1000 |
NETWORK_PROXY_MAX_ATTEMPTS | The maximum number of attempts to check if the network proxy is running. | 300 |
NETWORK_PROXY_DELAY | The interval between attempts to check if the network proxy is running, in milliseconds. | 2000 |
PODS_READY_MAX_ATTEMPTS | The maximum number of attempts to check if pods are ready. | 300 |
PODS_READY_DELAY | The interval between attempts to check if pods are ready, in milliseconds. | 2000 |
RELAY_PODS_RUNNING_MAX_ATTEMPTS | The maximum number of attempts to check if relay pods are running. | 900 |
RELAY_PODS_RUNNING_DELAY | The interval between attempts to check if relay pods are running, in milliseconds. | 1000 |
RELAY_PODS_READY_MAX_ATTEMPTS | The maximum number of attempts to check if relay pods are ready. | 100 |
RELAY_PODS_READY_DELAY | The interval between attempts to check if relay pods are ready, in milliseconds. | 1000 |
NETWORK_DESTROY_WAIT_TIMEOUT | The period of time to wait for the network to be destroyed, in milliseconds. | 120 |
SOLO_LEASE_ACQUIRE_ATTEMPTS | The number of attempts to acquire a lock before failing. | 10 |
SOLO_LEASE_DURATION | The default duration in seconds for which a lock is held before expiration. | 20 |
ACCOUNT_UPDATE_BATCH_SIZE | The number of accounts to update in a single batch operation. | 10 |
NODE_CLIENT_PING_INTERVAL | The interval in milliseconds between node health pings. | 30000 |
NODE_CLIENT_PING_MAX_RETRIES | The maximum number of retries for node health pings. | 5 |
NODE_CLIENT_PING_RETRY_INTERVAL | The interval in milliseconds between node health ping retries. | 10000 |
GRPC_PORT | The gRPC port used for local node communication. | 50211 |
LOCAL_BUILD_COPY_RETRY | The number of times to retry local build copy operations. | 3 |
LOAD_BALANCER_CHECK_DELAY_SECS | The delay in seconds between load balancer status checks. | 5 |
LOAD_BALANCER_CHECK_MAX_ATTEMPTS | The maximum number of attempts to check load balancer status. | 60 |
JSON_RPC_RELAY_CHART_URL | The URL for the JSON-RPC relay Helm chart repository. | https://hiero-ledger.github.io/hiero-json-rpc-relay/charts |
MIRROR_NODE_CHART_URL | The URL for the Hedera mirror node Helm chart repository. | https://hashgraph.github.io/hedera-mirror-node/charts |
NODE_CLIENT_MAX_ATTEMPTS | The maximum number of attempts for node client operations. | 600 |
EXPLORER_CHART_URL | The URL for the Hedera Explorer Helm chart repository. | oci://ghcr.io/hiero-ledger/hiero-mirror-node-explorer/hiero-explorer-chart |
INGRESS_CONTROLLER_CHART_URL | The URL for the ingress controller Helm chart repository. | https://haproxy-ingress.github.io/charts |
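These variables are read from the environment, so they can be exported in your shell before invoking any `solo` command. A minimal sketch (the values below are illustrative, not recommendations):

```shell
# Override Solo defaults for the current shell session; every `solo`
# command run afterwards will pick these up from the environment.
export SOLO_HOME="${HOME}/.solo-alt"       # alternate cache/log directory
export NODE_CLIENT_PING_INTERVAL=15000     # ping every 15s instead of 30s
export PODS_RUNNING_MAX_ATTEMPTS=300       # give up on stuck pods sooner
echo "Solo will use ${SOLO_HOME} for cache and log files"
```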