Solo User Guide
📝 If you have less than 16 GB of memory to dedicate to Docker, please skip the block node add and destroy steps.
📝 There should be a table of contents on the right side of your screen if your browser width is large enough.
Introduction
Welcome to the world of Hedera development! If you’re looking to build and test applications on the Hedera network but don’t want to spend HBAR on testnet or mainnet transactions, you’ve come to the right place. Solo is your gateway to running your own local Hedera test network, giving you complete control over your development environment.
Solo is an opinionated command-line interface (CLI) tool designed to deploy and manage standalone Hedera test networks. Think of it as your personal Hedera sandbox where you can experiment, test features, and develop applications without any external dependencies or costs. Whether you’re building smart contracts, testing consensus mechanisms, or developing DApps, Solo provides the infrastructure you need.
By the end of this tutorial, you’ll have your own Hedera test network running locally, complete with consensus nodes, mirror nodes, and all the infrastructure needed to submit transactions and test your applications. Let’s dive in!
Prerequisites
Before we begin, let’s ensure your system meets the requirements and has all the necessary software installed. Don’t worry if this seems like a lot – we’ll walk through each step together.
System Requirements (for a bare minimum install running 1 node)
First, check that your computer meets these minimum specifications (a quick way to check from a terminal is sketched after this list):
- Memory: At least 12GB of RAM (16GB recommended for smoother performance)
- CPU: Minimum 4 cores (8 cores recommended)
- Storage: At least 20GB of free disk space
- Operating System: macOS, Linux, or Windows with WSL2
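If you're not sure what your machine has, here's a quick way to check from a terminal (a small sketch; Linux commands shown, with macOS equivalents in the comment):
# Check RAM, CPU cores, and free disk space (Linux)
free -h
nproc
df -h .
# macOS equivalents: sysctl hw.memsize hw.ncpu && df -h .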
Required Software
You’ll need to install a few tools before we can set up Solo. Here’s what you need and how to get it:
1. Node.js (≥20.18.0)
Solo is built on Node.js, so you'll need version 20.18.0 or higher. We recommend using Node Version Manager (nvm) for easy version management:
# Install nvm (macOS/Linux)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Install nvm (Windows - use nvm-windows)
# Download from: https://github.com/coreybutler/nvm-windows
# Install Node.js
nvm install 20.18.0
nvm use 20.18.0
# Verify installation
node --version
2. Docker Desktop
Docker is essential for running the containerized Hedera network components. After installation, ensure Docker is running:
docker --version
docker ps
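Because several later steps (the block node and mirror node in particular) are memory hungry, it's also worth confirming how much memory Docker itself can use. A minimal check that should work with any Docker install:
# Total memory (in bytes) available to Docker; compare against the 12-16 GB guidance above
docker info --format '{{.MemTotal}}'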
Preparing Your Environment
Now that we have all prerequisites in place, let’s install Solo and set up our environment.
One thing to consider, old installs can really hamper your ability to get a new install up and running. If you have an old install of Solo, or if you are having issues with the install, please run the following commands to clean up your environment before proceeding.
1. Installing Solo
Open your terminal and install Solo globally using npm. You should see output showing the latest version, which should match our npm package version: https://www.npmjs.com/package/@hashgraph/solo
npm install -g @hashgraph/solo
# Verify the installation
solo --version
*Cleaning up an old install
The team is presently working on a number of fixes and automation that will remove the need for this, but as currently deployed Solo can be finicky with artifacts from prior installs. A quick command to prep your workstation for a new install is a good idea.
for cluster in $(kind get clusters);do kind delete cluster -n $cluster;done
rm -Rf ~/.solo
2. Setting up your environment variables
You need to declare some environment variables. Keep in mind that unless you intentionally add these to your shell config, they may be lost when you close your terminal. *Throughout the remainder of this walkthrough, for simplicity's sake, the commands assume these are the values in your environment.
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
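If you want these variables to survive new terminal sessions, one option (assuming zsh; use ~/.bashrc or similar for other shells) is to append them to your shell config:
# Optional: persist the Solo variables across terminal sessions
cat <<'EOF' >> ~/.zshrc
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
EOF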
3. Create a cluster
kind create cluster -n "${SOLO_CLUSTER_NAME}"
Example output:
Creating cluster "solo" ...
Ensuring node image (kindest/node:v1.32.2) 🖼 ...
✓ Ensuring node image (kindest/node:v1.32.2) 🖼
Preparing nodes 📦 ...
✓ Preparing nodes 📦
Writing configuration 📜 ...
✓ Writing configuration 📜
Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
Installing CNI 🔌 ...
✓ Installing CNI 🔌
Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
Set kubectl context to "kind-solo"
You can now use your cluster with:
kubectl cluster-info --context kind-solo
Have a nice day! 👋
*Connecting to a remote cluster
kubectl config get-contexts
kubectl config use-context <context-name>
Quick Start Deployment
For a simple setup with a single node with a mirror node, explorer, and JSON RPC relay, you can follow these quick steps. This is ideal for testing and development purposes.
solo quick-start single deploy
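Once the deploy finishes, you can sanity-check that everything came up before moving on; a simple look across namespaces (exact pod and namespace names vary by Solo version):
# Confirm the quick-start pods are Running
kubectl get pods -A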
When you’re finished, you can tear down your Solo network just as easily:
solo quick-start single destroy
Step-by-Step Solo Network Deployment
If you have a more complex setup in mind, such as multiple nodes or specific configurations, follow these detailed steps to deploy your Solo network.
1. Initialize solo:
Reset the .solo directory before initializing Solo. This step is crucial to ensure a clean setup without any leftover artifacts from previous installations. See: *Cleaning up an old install.
solo init
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : init
**********************************************************************************
Setup home directory and cache
✔ Setup home directory and cache
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Create local configuration
✔ Create local configuration
Setup chart manager
✔ Setup chart manager
Copy templates in '/home/runner/.solo/cache'
***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /home/runner/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
✔ Copy templates in '/home/runner/.solo/cache'
2. Connect the cluster and create a deployment
This command will create a deployment in the specified clusters, and generate the LocalConfig and RemoteConfig used by k8s.
📝 Notice that the --cluster-ref value is kind-solo: when you created the Kind cluster, it created a cluster reference in the Kubernetes config with the name kind-solo. If you used a different name, replace kind-solo with your cluster name, prefixed with kind-. If you are working with a remote cluster, you can use the name of your cluster reference, which can be gathered with the command: kubectl config get-contexts
📝 Note: Solo stores various artifacts (config, logs, keys, etc.) in its home directory: ~/.solo. If you need a full reset, delete this directory before running solo init again.
# connect to the cluster you created in a previous command
solo cluster-ref config connect --cluster-ref kind-${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
# create the deployment
solo deployment config create -n "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : cluster-ref config connect --cluster-ref kind-solo --context kind-solo
**********************************************************************************
Initialize
✔ Initialize
Validating cluster ref:
✔ kind-solo
Test connection to cluster:
✔ Test connection to cluster: kind-solo
Associate a context with a cluster reference:
✔ Associate a context with a cluster reference: kind-solo
3. Add a cluster to the deployment you created
*This command is the first command that specifies how many nodes you want to add to your deployment. To keep resource usage low, this walkthrough uses a single consensus node.
# Add a cluster to the deployment you created
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --num-consensus-nodes 1
# If the command is unresponsive, there's also a handy configurator: run `solo deployment cluster attach` without any arguments to get a guided setup.
4. Generate keys
You need to generate keys for your nodes, or in this case a single node.
solo keys consensus generate --gossip-keys --tls-keys --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : keys consensus generate --gossip-keys --tls-keys --deployment solo-deployment
**********************************************************************************
Initialize
✔ Initialize
Generate gossip keys
Backup old files
✔ Backup old files
Gossip key for node: node1
✔ Gossip key for node: node1
✔ Generate gossip keys
Generate gRPC TLS Keys
Backup old files
TLS key for node: node1
✔ Backup old files
✔ TLS key for node: node1
✔ Generate gRPC TLS Keys
Finalize
✔ Finalize
PEM key files are generated in the ~/.solo/cache/keys directory:
hedera-node1.crt hedera-node3.crt s-private-node1.pem s-public-node1.pem unused-gossip-pem
hedera-node1.key hedera-node3.key s-private-node2.pem s-public-node2.pem unused-tls
hedera-node2.crt hedera-node4.crt s-private-node3.pem s-public-node3.pem
hedera-node2.key hedera-node4.key s-private-node4.pem s-public-node4.pem
5. Setup cluster with shared components
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : cluster-ref config setup --cluster-setup-namespace solo-cluster
**********************************************************************************
Initialize
✔ Initialize
Prepare chart values
✔ Prepare chart values
Install 'solo-cluster-setup' chart
- Installed solo-cluster-setup chart, version: 0.56.0
✔ Install 'solo-cluster-setup' chart
Deploying Helm chart with network components
Now comes the exciting part – deploying your Hedera test network!
*Deploy a block node (experimental)
⚠️ Block Node is experimental in Solo. It requires a minimum of 16 GB of memory allocated to Docker; if you have less than 16 GB, skip this step. As mentioned in the warning, Block Node uses a lot of memory. In addition, it requires the Consensus Node version to be at least v0.62.3. You will need to augment the solo consensus network deploy and solo consensus node setup commands with the --release-tag v0.62.6 option to ensure that the Consensus Node is at the correct version. *note: v0.62.6 is the latest patch for v0.62
solo block node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-"${SOLO_CLUSTER_NAME}" --release-tag v0.62.6
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : block node add --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Prepare release name
✔ Prepare release name
Prepare chart values
✔ Prepare chart values
Deploy block node
- Installed block-node-0 chart, version: v0.14.0
✔ Deploy block node
Check block node pod is running
✔ Check block node pod is running
Check software
✔ Check software
Check block node pod is ready
✔ Check block node pod is ready
Check block node readiness
✔ Check block node readiness - [1/100] success
Add block node component in remote config
✔ Add block node component in remote config
1. Deploy the network
Deploying the network runs the risk of timeouts as images are downloaded and pods start. If you experience a failure, double-check the resources you've allocated in Docker Engine and give it another try.
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus network deploy --deployment solo-deployment
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Copy gRPC TLS Certificates
Copy gRPC TLS Certificates [SKIPPED: Copy gRPC TLS Certificates]
Check if cluster setup chart is installed
✔ Check if cluster setup chart is installed
Prepare staging directory
Copy Gossip keys to staging
✔ Copy Gossip keys to staging
Copy gRPC TLS keys to staging
✔ Copy gRPC TLS keys to staging
✔ Prepare staging directory
Copy node keys to secrets
Copy TLS keys
Node: node1, cluster: kind-solo
Copy Gossip keys
✔ Copy Gossip keys
✔ Node: node1, cluster: kind-solo
✔ Copy TLS keys
✔ Copy node keys to secrets
Install chart 'solo-deployment'
- Installed solo-deployment chart, version: 0.56.0
✔ Install chart 'solo-deployment'
Check for load balancer
Check for load balancer [SKIPPED: Check for load balancer]
Redeploy chart with external IP address config
Redeploy chart with external IP address config [SKIPPED: Redeploy chart with external IP address config]
Check node pods are running
Check Node: node1, Cluster: kind-solo
✔ Check Node: node1, Cluster: kind-solo
✔ Check node pods are running
Check proxy pods are running
Check HAProxy for: node1, cluster: kind-solo
Check Envoy Proxy for: node1, cluster: kind-solo
✔ Check Envoy Proxy for: node1, cluster: kind-solo
✔ Check HAProxy for: node1, cluster: kind-solo
✔ Check proxy pods are running
Check auxiliary pods are ready
Check MinIO
✔ Check MinIO
✔ Check auxiliary pods are ready
Add node and proxies to remote config
✔ Add node and proxies to remote config
Copy block-nodes.json
✔ Copy block-nodes.json
2. Set up a node with Hedera platform software
This step downloads the Hedera platform code and sets up your node/nodes.
# consensus node setup
export CONSENSUS_NODE_VERSION=v0.63.9 # or whatever version you are trying to deploy, starting with a `v`
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" --release-tag "${CONSENSUS_NODE_VERSION}"
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus node setup --deployment solo-deployment
**********************************************************************************
Load configuration
✔ Load configuration
Initialize
✔ Initialize
Validate nodes states
Validating state for node node1
✔ Validating state for node node1 - valid state: requested
✔ Validate nodes states
Identify network pods
Check network pod: node1
✔ Check network pod: node1
✔ Identify network pods
Fetch platform software into network nodes
Update node: node1 [ platformVersion = v0.63.9, context = kind-solo ]
✔ Update node: node1 [ platformVersion = v0.63.9, context = kind-solo ]
✔ Fetch platform software into network nodes
Setup network nodes
Node: node1
Copy configuration files
✔ Copy configuration files
Set file permissions
✔ Set file permissions
✔ Node: node1
✔ Setup network nodes
setup network node folders
✔ setup network node folders
Change node state to configured in remote config
✔ Change node state to configured in remote config
3. Start the nodes up!
Now that everything is set up, you need to start the nodes.
# start your node/nodes
solo consensus node start --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus node start --deployment solo-deployment
**********************************************************************************
Load configuration
✔ Load configuration
Initialize
✔ Initialize
Validate nodes states
Validating state for node node1
✔ Validating state for node node1 - valid state: configured
✔ Validate nodes states
Identify existing network nodes
Check network pod: node1
✔ Check network pod: node1
✔ Identify existing network nodes
Upload state files network nodes
Upload state files network nodes [SKIPPED: Upload state files network nodes]
Starting nodes
Start node: node1
✔ Start node: node1
✔ Starting nodes
Enable port forwarding for debug port and/or GRPC port
Using requested port 50211
✔ Enable port forwarding for debug port and/or GRPC port
Check all nodes are ACTIVE
Check network pod: node1
✔ Check network pod: node1 - status ACTIVE, attempt: 17/300
✔ Check all nodes are ACTIVE
Check node proxies are ACTIVE
Check proxy for node: node1
✔ Check proxy for node: node1
✔ Check node proxies are ACTIVE
Change node state to started in remote config
✔ Change node state to started in remote config
Add node stakes
Adding stake for node: node1
Using requested port 30212
✔ Adding stake for node: node1
✔ Add node stakes
set gRPC Web endpoint
✔ set gRPC Web endpoint
Stopping port-forwarder for port [30212]
4. Deploy a mirror node
This is the most memory-intensive step from a resource perspective. If you have issues at this step, check your local resource utilization and make sure there's memory available for Docker (close all unessential applications). Likewise, you can consider lowering your swap in Docker settings to ease the swap demand, and try again. The --pinger flag starts a pinging service that sends transactions to the network at regular intervals; this is needed because a record file is not imported into the mirror node until the next one is created.
# Deploy with explicit configuration
solo mirror node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --enable-ingress --pinger
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : mirror node add --deployment solo-deployment --cluster-ref kind-solo --enable-ingress --quiet-mode
**********************************************************************************
Initialize
Using requested port 30212
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Enable mirror-node
Prepare address book
✔ Prepare address book
Install mirror ingress controller
- Installed haproxy-ingress chart, version: 0.14.5
✔ Install mirror ingress controller
Deploy mirror-node
- Installed mirror chart, version: v0.136.0
✔ Deploy mirror-node
✔ Enable mirror-node
Check pods are ready
Check Postgres DB
Check REST API
Check GRPC
Check Monitor
Check Web3
Check Importer
✔ Check Postgres DB
✔ Check Web3
✔ Check GRPC
✔ Check REST API
✔ Check Monitor
✔ Check Importer
✔ Check pods are ready
Seed DB data
Insert data in public.file_data
✔ Insert data in public.file_data
✔ Seed DB data
Add mirror node to remote config
✔ Add mirror node to remote config
Enable port forwarding for mirror ingress controller
Using requested port 8081
✔ Enable port forwarding for mirror ingress controller
Stopping port-forwarder for port [30212]
5. Deploy the explorer
Watch the deployment progress:
# deploy explorer
solo explorer node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : explorer node add --deployment solo-deployment --cluster-ref kind-solo --quiet-mode
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Load remote config
✔ Load remote config
Install cert manager
Install cert manager [SKIPPED: Install cert manager]
Install explorer
- Installed hiero-explorer chart, version: 25.1.1
✔ Install explorer
Install explorer ingress controller
Install explorer ingress controller [SKIPPED: Install explorer ingress controller]
Check explorer pod is ready
✔ Check explorer pod is ready
Check haproxy ingress controller pod is ready
Check haproxy ingress controller pod is ready [SKIPPED: Check haproxy ingress controller pod is ready]
Add explorer to remote config
✔ Add explorer to remote config
Enable port forwarding for explorer
Using requested port 8080
✔ Enable port forwarding for explorer
6. Deploy a JSON RPC relay
The JSON RPC relay allows you to interact with your Hedera network using standard JSON RPC calls. This is useful for integrating with existing tools and libraries.
# deploy a solo JSON RPC relay
solo relay node add -i node1 --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : relay node add --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Check chart is installed
✔ Check chart is installed
Prepare chart values
Using requested port 30212
✔ Prepare chart values
Deploy JSON RPC Relay
- Installed relay-node1 chart, version: 0.70.0
✔ Deploy JSON RPC Relay
Check relay is running
✔ Check relay is running
Check relay is ready
✔ Check relay is ready
Add relay component in remote config
✔ Add relay component in remote config
Enable port forwarding for relay node
Using requested port 7546
✔ Enable port forwarding for relay node
Stopping port-forwarder for port [30212]
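As a quick smoke test once the relay's port forward on localhost:7546 is active (shown in the output above), you can issue a standard JSON-RPC request; a minimal sketch using curl:
# Ask the relay for the chain ID with a standard eth_chainId call
curl -s -X POST http://localhost:7546 -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'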
*Check Pod Status
Here is a command if you want to check the status of your Solo Kubernetes pods:
# Check pod status
kubectl get pods -n solo
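If you'd prefer to watch the pods converge in real time, the same check works with the namespaces defined earlier (a small variation on the command above):
# Watch pods in the Solo namespace (Ctrl+C to stop)
kubectl get pods -n "${SOLO_NAMESPACE}" -w
# Shared cluster components live in the setup namespace
kubectl get pods -n "${SOLO_CLUSTER_SETUP_NAMESPACE}"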
Working with Your Network
Network Endpoints
Port forwarding is now automatic for many endpoints. However, you can set up your own using the kubectl port-forward command:
# Consensus Service for node1 (node ID = 0): localhost:50211
# should be automatic: kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 50211:50211 > /dev/null 2>&1 &
# Explorer UI: http://localhost:8080
# should be automatic: kubectl port-forward svc/hiero-explorer -n "${SOLO_NAMESPACE}" 8080:8080 > /dev/null 2>&1 &
# Mirror Node gRPC, REST, REST Java, Web3 will be automatic on `localhost:8081` if you passed `--enable-ingress` to the `solo mirror node add` command
# Mirror Node gRPC: localhost:5600
kubectl port-forward svc/mirror-grpc -n "${SOLO_NAMESPACE}" 5600:5600 > /dev/null 2>&1 &
# Mirror Node REST API: http://localhost:5551
kubectl port-forward svc/mirror-rest -n "${SOLO_NAMESPACE}" 5551:80 > /dev/null 2>&1 &
# Mirror Node REST Java API http://localhost:8084
kubectl port-forward svc/mirror-restjava -n "${SOLO_NAMESPACE}" 8084:80 > /dev/null 2>&1 &
# JSON RPC Relay: localhost:7546
# should be automatic: kubectl port-forward svc/relay-node1-hedera-json-rpc-relay -n "${SOLO_NAMESPACE}" 7546:7546 > /dev/null 2>&1 &
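With the forwards in place (or the automatic ones active), you can verify the endpoints actually respond; a sketch assuming the default ports listed above:
# Mirror Node REST API: list the network's nodes
curl -s http://localhost:5551/api/v1/network/nodes
# Explorer UI: expect an HTTP 200 status line
curl -sI http://localhost:8080 | head -n 1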
Managing Your Network
Stopping and Starting Nodes
You can control individual nodes or the entire network:
# Stop all nodes
solo consensus node stop --deployment solo-deployment
# Stop a specific node
solo consensus node stop --node-aliases node1 --deployment solo-deployment
# Restart nodes
solo consensus node restart --deployment solo-deployment
# Start nodes again
solo consensus node start --deployment solo-deployment
Viewing Logs
Access Solo and Consensus Node logs for troubleshooting:
# Download logs from all nodes
# Logs are saved to ~/.solo/logs/<namespace>/<pod-name>/
solo consensus diagnostics all --deployment solo-deployment
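You can also use kubectl directly; a minimal sketch (the pod name follows the network-node-0 pattern used in the Troubleshooting section below — run kubectl get pods -n solo to confirm the actual name):
# You can also use kubectl directly:
kubectl logs -n solo network-node-0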
Updating the Network
To update nodes to a new Hedera version, you need to upgrade one minor version at a time:
solo consensus network upgrade --deployment solo-deployment --upgrade-version v0.62.6
Updating a single node
To update a single node to a new Hedera version, you likewise need to move one minor version at a time. It is also possible to update a single node through a process with separate steps; this is only useful in very specific cases, such as when testing the update process.
solo consensus node update --deployment solo-deployment --node-alias node1 --release-tag v0.62.6
solo consensus dev-node-update prepare --deployment solo-deployment --node-alias node1 --release-tag v0.62.6 --output-dir context
solo consensus dev-node-update submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-update execute --deployment solo-deployment --input-dir context
Adding a new node to the network
Adding a new node to an existing Solo network: it is possible to add a new node through a process with separate steps. This is only useful in very specific cases, such as when testing the node-adding process.
TODO solo consensus node add
solo consensus dev-node-add prepare --gossip-keys true --tls-keys true --deployment solo-deployment --pvcs true --admin-key ***** --node-alias node1 --output-dir context
solo consensus dev-node-add submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-add execute --deployment solo-deployment --input-dir context
Deleting a node from the network
This command is used to delete a node from an existing Solo network: it is possible to delete a node through a process with separate steps. This is only useful in very specific cases, such as when testing the delete process.
TODO solo consensus node destroy
solo consensus dev-node-delete prepare --deployment solo-deployment --node-alias node1 --output-dir context
solo consensus dev-node-delete submit-transaction --deployment solo-deployment --input-dir context
solo consensus dev-node-delete execute --deployment solo-deployment --input-dir context
Troubleshooting: Common Issues and Solutions
1. Pods Not Starting
If pods remain in Pending or CrashLoopBackOff state:
# Check pod events
kubectl describe pod -n solo network-node-0
# Common fixes:
# - Increase Docker resources (memory/CPU)
# - Check disk space
# - Restart Docker and the kind cluster
2. Connection Refused Errors
If you can't connect to network endpoints:
# Check service endpoints
kubectl get svc -n solo
# Manually forward ports if needed
kubectl port-forward -n solo svc/network-node-0 50211:50211
3. Node Synchronization Issues
If nodes aren't forming consensus:
# Check node status
solo consensus state download --deployment solo-deployment --node-aliases node1
# Look for gossip connectivity issues
kubectl logs -n solo network-node-0 | grep -i gossip
# Restart problematic nodes
solo consensus node refresh --node-aliases node1 --deployment solo-deployment
Getting Help
When you need assistance, run:
solo consensus diagnostics all --deployment solo-deployment
and examine ~/.solo/logs/.
Cleanup
When you're done with your test network, you can tear it down component by component as shown below, or use the fast clean up to remove all resources (all Kind clusters!) at once. Be aware you will lose all your logs and data from prior runs. Note that block node destroy should run prior to consensus network destroy, since consensus network destroy removes the remote config.
*Fast clean up
for cluster in $(kind get clusters);do kind delete cluster -n $cluster;done
rm -Rf ~/.solo
1. Destroy relay node
solo relay node destroy -i node1 --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : relay node destroy --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Destroy JSON RPC Relay
✔ Destroy JSON RPC Relay
Remove relay component from remote config
✔ Remove relay component from remote config
2. Destroy mirror node
solo mirror node destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : mirror node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Destroy mirror-node
✔ Destroy mirror-node
Delete PVCs
✔ Delete PVCs
Uninstall mirror ingress controller
✔ Uninstall mirror ingress controller
Remove mirror node from remote config
✔ Remove mirror node from remote config
3. Destroy explorer node
solo explorer node destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : explorer node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Load remote config
✔ Load remote config
Destroy explorer
✔ Destroy explorer
Uninstall explorer ingress controller
✔ Uninstall explorer ingress controller
Remove explorer from remote config
✔ Remove explorer from remote config
*Destroy block node (Experimental)
To destroy the block node (if you deployed it), use the following command:
solo block node destroy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : block node destroy --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Look-up block node
✔ Look-up block node
Destroy block node
✔ Destroy block node
Disable block node component in remote config
✔ Disable block node component in remote config
4. Destroy network
solo consensus network destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.43.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus network destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Remove deployment from local configuration
✔ Remove deployment from local configuration
Running sub-tasks to destroy network
✔ Deleting the RemoteConfig configmap in namespace solo
Next Steps
Congratulations! You now have a working Hedera test network. Here are some suggestions for what to explore next: submit transactions through the JSON RPC relay on localhost:7546, browse them in the Explorer at http://localhost:8080, or query the Mirror Node REST API at http://localhost:5551. Remember, this is your personal Hedera playground. Experiment freely, break things, learn, and have fun building on Hedera! Happy coding with Solo! 🚀