Solo User Guide
📝 If you have less than 16 GB of memory available for Docker, skip the Block Node add/destroy steps in this guide.
📝 There should be a table of contents on the right side of your screen if your browser width is large enough.
Introduction
Welcome to the world of Hedera development! If you’re looking to build and test applications on the Hedera network but don’t want to spend HBAR on testnet or mainnet transactions, you’ve come to the right place. Solo is your gateway to running your own local Hedera test network, giving you complete control over your development environment.
Solo is an opinionated command-line interface (CLI) tool designed to deploy and manage standalone Hedera test networks. Think of it as your personal Hedera sandbox where you can experiment, test features, and develop applications without any external dependencies or costs. Whether you’re building smart contracts, testing consensus mechanisms, or developing dApps, Solo provides the infrastructure you need.
By the end of this tutorial, you’ll have your own Hedera test network running locally, complete with consensus nodes, mirror nodes, and all the infrastructure needed to submit transactions and test your applications. Let’s dive in!
Prerequisites
Before we begin, let’s ensure your system meets the requirements and has all the necessary software installed. Don’t worry if this seems like a lot – we’ll walk through each step together.
System Requirements (for a bare minimum install running 1 node)
First, check that your computer meets these minimum specifications:
- Memory: At least 12 GB (16 GB recommended for smoother performance)
- CPU: Minimum 6 cores (8 cores recommended)
- Storage: At least 20 GB of free disk space
- Operating System: macOS, Linux, or Windows with WSL2
Platform notes (click to expand/collapse)
- Windows (WSL2) – Enable Virtual Machine Platform and Windows Subsystem for Linux from Turn Windows features on or off, reboot, then run
wsl --install Ubuntu in PowerShell. For the rest of this guide, run all commands from the Ubuntu (WSL2) terminal so Docker and Kubernetes share the same Linux environment.
- Linux – Use a recent LTS distribution (for example Ubuntu 22.04+, Debian 12, or Fedora 40+) with cgroup v2 enabled.
- macOS – Apple silicon is fully supported. Intel-based Macs should use macOS 12 or later.
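If you’re on Windows, it’s worth confirming that your Ubuntu distribution actually runs under WSL 2 before continuing. A quick check from PowerShell (assuming the distribution is named Ubuntu):
wsl -l -v
# The VERSION column for Ubuntu should read 2; if it reads 1, convert it with:
# wsl --set-version Ubuntu 2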
Required Software
You’ll need to install a few tools before we can set up Solo. Here’s what you need and how to get it:
1. Node.js (≥ 22.0.0)
Details (click to expand/collapse)
Solo is built on Node.js, so you’ll need version 22.0.0 or higher. We recommend using Node Version Manager (nvm) for easy version management.
macOS / Linux (nvm):
# Install nvm (macOS/Linux)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Restart your shell, then:
nvm install 22.0.0
nvm use 22.0.0
# Verify installation
node --version
Windows (WSL2 + nvm in Ubuntu):
In your Ubuntu (WSL2) terminal:
# Install nvm in WSL2 (Ubuntu)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Restart your shell, then:
nvm install 22.0.0
nvm use 22.0.0
# Verify installation
node --version
If you prefer to install Node.js directly in Windows (outside WSL2), you can use nvm-windows. See: https://github.com/coreybutler/nvm-windows
In that case, run Solo commands from the same environment where Node.js is installed.
2. Docker Desktop
Details (click to expand/collapse)
Docker is essential for running the containerized Hedera network components:
- macOS/Windows: Download Docker Desktop from https://www.docker.com/products/docker-desktop
- Linux: Follow the installation guide for your distribution at https://docs.docker.com/engine/install/
After installation, ensure Docker is running and reachable:
docker --version
docker ps
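To confirm Docker actually has the resources Solo needs, you can also ask the daemon how many CPUs and how much memory it sees (a quick sanity check; adjust Docker Desktop’s resource settings if the numbers are below the requirements above):
# Show the CPU count and total memory visible to Docker
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'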
3. kubectl (Linux & WSL2)
Details (click to expand/collapse)
On macOS, Docker Desktop already ships a kubectl client, so you usually don’t need to install it separately.
On Linux and inside WSL2, you must install kubectl yourself.
For Ubuntu/Debian-based shells (including Ubuntu on WSL2):
sudo apt update && sudo apt install -y ca-certificates curl
ARCH="$(dpkg --print-architecture)"
curl -fsSLo kubectl "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/${ARCH}/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client
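This guide also relies on Kind (Kubernetes in Docker) to create the local cluster, so install the kind CLI if you don’t already have it. A minimal sketch for Linux/WSL2 follows (the version shown is only an example; check the Kind releases page for the latest, and on macOS brew install kind works as well):
KIND_VERSION=v0.27.0   # example version; use the latest release
[ "$(uname -m)" = "x86_64" ] && KIND_ARCH=amd64 || KIND_ARCH=arm64
curl -fsSLo ./kind "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-${KIND_ARCH}"
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
kind version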
Preparing Your Environment
Now that we have all prerequisites in place, let’s install Solo and set up our environment.
One thing to consider: old installs can really hamper your ability to get a new install up and running. If you have an old install of Solo, or if you are having issues with the install, please run the following commands to clean up your environment before proceeding.
1. Installing Solo
Details (click to expand/collapse)
Open your terminal and install Solo using npm:
npm install -g @hashgraph/solo
# Verify the installation
solo --version
# Or use different output formats (Kubernetes-style)
solo --version -o json # JSON format: {"version": "0.46.1"}
solo --version -o yaml # YAML format: version: 0.46.1
solo --version -o wide # Plain text: 0.46.1
You should see output showing the installed version, which should match the latest version of the NPM package: https://www.npmjs.com/package/@hashgraph/solo
The --output (or -o) flag can be used with various Solo commands to produce machine-readable output in formats like json, yaml, or wide.
*Cleaning up an old install
Details (click to expand/collapse)
⚠️ Warning: The commands below will:
- Delete all Kind clusters on your machine (kind delete cluster for every cluster returned by kind get clusters), and
- Remove your Solo home directory (~/.solo), including cached charts, logs, keys, and configuration.
Only run this if you are sure you no longer need any existing Solo or Kind environments.
The team is presently working on a number of fixes and automation that will remove the need for this, but currently Solo can be finicky with artifacts from prior installs. Running a quick cleanup to prepare your machine for a fresh install is a good idea:
for cluster in $(kind get clusters); do
kind delete cluster -n "$cluster"
done
rm -rf ~/.solo
2. Setting up your environment variables
Details (click to expand/collapse)
You need to declare some environment variables. Unless you intentionally include these in your shell config (for example, .zshrc or .bashrc), you will lose them when you close your terminal.
Throughout the remainder of this walkthrough, we’ll assume these values:
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
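If you want these variables to survive new terminal sessions, you can optionally append them to your shell config instead of re-exporting them each time (the example below assumes bash and ~/.bashrc; use ~/.zshrc for zsh):
cat >> ~/.bashrc <<'EOF'
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
EOF
source ~/.bashrc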
3. Create a cluster
Details (click to expand/collapse)
kind create cluster -n "${SOLO_CLUSTER_NAME}"
Example output:
Creating cluster "solo" ...
Ensuring node image (kindest/node:v1.32.2) 🖼 ...
✓ Ensuring node image (kindest/node:v1.32.2) 🖼
Preparing nodes 📦 ...
✓ Preparing nodes 📦
Writing configuration 📜 ...
✓ Writing configuration 📜
Starting control-plane 🕹️ ...
✓ Starting control-plane 🕹️
Installing CNI 🔌 ...
✓ Installing CNI 🔌
Installing StorageClass 💾 ...
✓ Installing StorageClass 💾
Set kubectl context to "kind-solo"
You can now use your cluster with:
kubectl cluster-info --context kind-solo
Have a nice day! 👋
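Before moving on, it’s worth confirming the new cluster is reachable (using the kind-<cluster-name> context convention):
kubectl cluster-info --context "kind-${SOLO_CLUSTER_NAME}"
kubectl get nodes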
*Connecting to a remote cluster
Details (click to expand/collapse)
You may use a remote Kubernetes cluster. In this case, ensure the Kubernetes context is set up correctly.
kubectl config get-contexts
kubectl config use-context <context-name>
One Shot Deployment
Solo provides three one-shot deployment options to quickly set up your Hedera test network:
Single Node Deployment (Recommended for Development)
For a simple setup with a single node plus mirror node, explorer, and JSON RPC relay, you can follow these quick steps. This is ideal for testing and development purposes.
solo one-shot single deploy
When you’re finished, you can tear down your Solo network just as easily:
solo one-shot single destroy
Multiple Node Deployment (For Consensus Testing)
For testing consensus scenarios or multi-node behavior, you can deploy a network with multiple consensus nodes. This setup includes all the same components as the single node deployment but with multiple consensus nodes for testing consensus mechanisms.
solo one-shot multi deploy
This command will:
- Deploy multiple consensus nodes
- Set up mirror node, explorer, and JSON RPC relay
- Generate appropriate keys for all nodes
- Create predefined accounts for testing
When you’re finished with the multiple node network:
solo one-shot multi destroy
📝 Note: Multiple node deployments require more system resources. Ensure you have adequate memory and CPU allocated to Docker (recommended: 16 GB+ of memory, 8+ CPU cores).
Falcon Deployment (Advanced Configuration)
For advanced users who need fine-grained control over all network components, the Falcon deployment uses a YAML configuration file to customize every aspect of the network.
solo one-shot falcon deploy --values-file falcon-values.yaml
The Falcon deployment allows you to:
- Configure all network components through a single YAML file
- Customize consensus nodes, mirror node, explorer, relay, and block node settings
- Set specific versions, resource allocations, and feature flags
- Integrate cleanly into CI/CD pipelines and automated testing scenarios
Example configuration file (falcon-values.yaml):
network:
  --deployment: "my-network"
  --release-tag: "v0.65.0"
  --node-aliases: "node1"
setup:
  --release-tag: "v0.65.0"
  --node-aliases: "node1"
consensusNode:
  --deployment: "my-network"
  --node-aliases: "node1"
  --force-port-forward: true
mirrorNode:
  --enable-ingress: true
  --pinger: true
explorerNode:
  --enable-ingress: true
relayNode:
  --node-aliases: "node1"
See the Falcon example in the repository for a complete configuration template.
When you’re finished with the Falcon deployment:
solo one-shot falcon destroy
📝 Note: The Falcon deployment reads the deployment name and other shared settings from the values file, so you don’t need to specify --deployment on the command line.
Step-by-Step Solo Network Deployment
If you have a more complex setup in mind, such as multiple nodes or specific configurations, follow these detailed steps to deploy your Solo network.
It is recommended to reset the ~/.solo directory before creating a new Solo deployment. This step is crucial to ensure a clean setup without any leftover artifacts from previous installations. See: *Cleaning up an old install
1. Connect the cluster and create a deployment
Details (click to expand/collapse)
This command will create a deployment in the specified clusters, and generate the LocalConfig and RemoteConfig used by Kubernetes.
The deployment will:
- Create a namespace (usually matching the deployment name)
- Set up ConfigMaps and secrets
- Deploy network infrastructure
- Create persistent volumes if needed
📝 Notice that the --cluster-ref value is kind-solo. When you created the Kind cluster it created a cluster reference in the Kubernetes config with the name kind-solo. If you used a different name, replace kind-solo with your cluster name, but prefix it with kind-.
📝 Solo stores various artifacts (config, logs, keys etc.) in its home directory: ~/.solo. If you need a full reset, delete this directory before running solo init or other commands again.
# Connect to the cluster you created in a previous command
solo cluster-ref config connect --cluster-ref kind-${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}
# Create the deployment
solo deployment config create -n "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : cluster-ref config connect --cluster-ref kind-solo --context kind-solo
**********************************************************************************
Initialize
✔ Initialize
Validating cluster ref:
✔ Validating cluster ref: kind-solo
Test connection to cluster:
✔ Test connection to cluster: kind-solo
Associate a context with a cluster reference:
✔ Associate a context with a cluster reference: kind-solo
solo-deployment_CREATE_OUTPUT
2. Add a cluster to the deployment you created
Details (click to expand/collapse)
This command is the first time you specify how many consensus nodes you want to add to your deployment. For the sake of resource usage in this guide, we’ll use 1 consensus node.
# Add a cluster to the deployment you created
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --num-consensus-nodes 1
# Tip: if you prefer to be prompted for values, there’s a guided mode:
# solo deployment cluster attach
Example output:
solo-deployment_ADD_CLUSTER_OUTPUT
3. Generate keys
Details (click to expand/collapse)
You need to generate keys for your nodes — in this example, a single node.
solo keys consensus generate --gossip-keys --tls-keys --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : keys consensus generate --gossip-keys --tls-keys --deployment solo-deployment
**********************************************************************************
Initialize
✔ Initialize
Generate gossip keys
Backup old files
✔ Backup old files
Gossip key for node: node1
✔ Gossip key for node: node1 [0.3s]
✔ Generate gossip keys [0.3s]
Generate gRPC TLS Keys
Backup old files
TLS key for node: node1
✔ Backup old files
✔ TLS key for node: node1 [0.8s]
✔ Generate gRPC TLS Keys [0.8s]
Finalize
✔ Finalize
PEM key files are generated in the ~/.solo/cache/keys directory:
hedera-node1.crt hedera-node3.crt s-private-node1.pem s-public-node1.pem unused-gossip-pem
hedera-node1.key hedera-node3.key s-private-node2.pem s-public-node2.pem unused-tls
hedera-node2.crt hedera-node4.crt s-private-node3.pem s-public-node3.pem
hedera-node2.key hedera-node4.key s-private-node4.pem s-public-node4.pem
4. Set up cluster with shared components
Details (click to expand/collapse)
solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : cluster-ref config setup --cluster-setup-namespace solo-cluster
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
✔ Initialize
Install cluster charts
Skipping Grafana Agent chart installation
Install pod-monitor-role ClusterRole
✅ ClusterRole pod-monitor-role installed successfully in context kind-solo
✔ Install pod-monitor-role ClusterRole
Install MinIO Operator chart
✅ MinIO Operator chart installed successfully on context kind-solo
✔ Install MinIO Operator chart [0.9s]
✔ Install cluster charts [1s]
Deploying Helm chart with network components
Now comes the exciting part – deploying your Hedera test network!
*Deploy a Block Node (experimental)
Details (click to expand/collapse)
⚠️ Block Node is experimental in Solo. It requires a minimum of 16 GB of memory allocated to Docker. If you have less than 16 GB of memory, skip this step.
Block Node uses a lot of memory. In addition, it requires the Consensus Node version to be at least v0.62.3. You will need to augment the solo consensus network deploy and solo consensus node setup commands with the --release-tag v0.62.6 option to ensure the Consensus Node is at a compatible version.
Note: v0.62.6 is the latest patch for v0.62.
solo block node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-"${SOLO_CLUSTER_NAME}" --release-tag v0.62.6
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : block node add --deployment solo-deployment --cluster-ref kind-solo --release-tag v0.66.0
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Prepare release name and block node name
✔ Prepare release name and block node name
Prepare chart values
✔ Prepare chart values
Deploy block node
- Installed block-node-1 chart, version: 0.23.2
✔ Deploy block node [1s]
Check block node pod is running
✔ Check block node pod is running [14s]
Check software
✔ Check software
Check block node pod is ready
✔ Check block node pod is ready [40s]
Check block node readiness
✔ Check block node readiness - [1/100] success [0.1s]
Add block node component in remote config
✔ Add block node component in remote config
1. Deploy the network
Details (click to expand/collapse)
Deploying the network can sometimes time out as images are downloaded and pods start. If you experience a failure, double-check the resources you’ve allocated in Docker and try again.
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus network deploy --deployment solo-deployment --release-tag v0.66.0
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [0.1s]
Copy gRPC TLS Certificates
Copy gRPC TLS Certificates [SKIPPED: Copy gRPC TLS Certificates]
Prepare staging directory
Copy Gossip keys to staging
✔ Copy Gossip keys to staging
Copy gRPC TLS keys to staging
✔ Copy gRPC TLS keys to staging
✔ Prepare staging directory
Copy node keys to secrets
Copy TLS keys
Node: node1, cluster: kind-solo
Copy Gossip keys
✔ Copy Gossip keys
✔ Node: node1, cluster: kind-solo
✔ Copy TLS keys
✔ Copy node keys to secrets
Install monitoring CRDs
Pod Logs CRDs
✔ Pod Logs CRDs [0.5s]
Prometheus Operator CRDs
- Installed prometheus-operator-crds chart, version: 24.0.2
✔ Prometheus Operator CRDs [3s]
✔ Install monitoring CRDs [3s]
Install chart 'solo-deployment'
- Installed solo-deployment chart, version: 0.58.1
✔ Install chart 'solo-deployment' [2s]
Check for load balancer
Check for load balancer [SKIPPED: Check for load balancer]
Redeploy chart with external IP address config
Redeploy chart with external IP address config [SKIPPED: Redeploy chart with external IP address config]
Check node pods are running
Check Node: node1, Cluster: kind-solo
✔ Check Node: node1, Cluster: kind-solo [28s]
✔ Check node pods are running [28s]
Check proxy pods are running
Check HAProxy for: node1, cluster: kind-solo
Check Envoy Proxy for: node1, cluster: kind-solo
✔ Check Envoy Proxy for: node1, cluster: kind-solo
✔ Check HAProxy for: node1, cluster: kind-solo
✔ Check proxy pods are running
Check auxiliary pods are ready
Check MinIO
✔ Check MinIO
✔ Check auxiliary pods are ready
Add node and proxies to remote config
✔ Add node and proxies to remote config
Copy block-nodes.json
✔ Copy block-nodes.json [0.4s]
2. Set up a node with Hedera platform software
Details (click to expand/collapse)
This step downloads the Hedera platform code and sets up your node(s).
# Consensus node setup
export CONSENSUS_NODE_VERSION=v0.66.0 # or whatever version you are trying to deploy, starting with a `v`
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" --release-tag "${CONSENSUS_NODE_VERSION}"
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus node setup --deployment solo-deployment --release-tag v0.66.0
**********************************************************************************
Load configuration
✔ Load configuration [0.2s]
Initialize
✔ Initialize [0.1s]
Validate nodes states
Validating state for node node1
✔ Validating state for node node1 - valid state: requested
✔ Validate nodes states
Identify network pods
Check network pod: node1
✔ Check network pod: node1
✔ Identify network pods
Fetch platform software into network nodes
Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ]
✔ Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ] [3s]
✔ Fetch platform software into network nodes [3s]
Setup network nodes
Node: node1
Copy configuration files
✔ Copy configuration files [0.2s]
Set file permissions
✔ Set file permissions [0.3s]
✔ Node: node1 [0.6s]
✔ Setup network nodes [0.7s]
setup network node folders
✔ setup network node folders [0.1s]
Change node state to configured in remote config
✔ Change node state to configured in remote config
3. Start the nodes
Details (click to expand/collapse)
Now that everything is set up, start your consensus node(s):
solo consensus node start --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus node start --deployment solo-deployment
**********************************************************************************
Check dependencies
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Load configuration
✔ Load configuration [0.2s]
Initialize
✔ Initialize [0.2s]
Validate nodes states
Validating state for node node1
✔ Validating state for node node1 - valid state: configured
✔ Validate nodes states
Identify existing network nodes
Check network pod: node1
✔ Check network pod: node1
✔ Identify existing network nodes
Upload state files network nodes
Upload state files network nodes [SKIPPED: Upload state files network nodes]
Starting nodes
Start node: node1
✔ Start node: node1 [0.1s]
✔ Starting nodes [0.1s]
Enable port forwarding for debug port and/or GRPC port
Using requested port 50211
✔ Enable port forwarding for debug port and/or GRPC port
Check all nodes are ACTIVE
Check network pod: node1
✔ Check network pod: node1 - status ACTIVE, attempt: 17/300 [21s]
✔ Check all nodes are ACTIVE [21s]
Check node proxies are ACTIVE
Check proxy for node: node1
✔ Check proxy for node: node1 [6s]
✔ Check node proxies are ACTIVE [6s]
set gRPC Web endpoint
Using requested port 30212
✔ set gRPC Web endpoint [3s]
Change node state to started in remote config
✔ Change node state to started in remote config
Add node stakes
Adding stake for node: node1
✔ Adding stake for node: node1 [4s]
✔ Add node stakes [4s]
Stopping port-forwarder for port [30212]
4. Deploy a mirror node
Details (click to expand/collapse)
This is the most memory-intensive step. If you have issues here, check your local resource utilization and make sure there is memory available for Docker (close all non-essential applications). You can also reduce Docker’s swap usage in its settings to ease memory pressure.
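Before running this step, one way to spot-check how much headroom you have is a one-off snapshot of container resource usage (output columns vary slightly by Docker version):
# One-off snapshot of CPU and memory usage for all running containers
docker stats --no-stream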
The --pinger flag starts a pinging service that sends transactions to the network at regular intervals. This is needed because the record file is not imported into the mirror node until the next one is created.
# Deploy with explicit configuration
solo mirror node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --enable-ingress --pinger
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : mirror node add --deployment solo-deployment --cluster-ref kind-solo --enable-ingress --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Using requested port 30212
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [1s]
Enable mirror-node
Prepare address book
✔ Prepare address book
Install mirror ingress controller
- Installed haproxy-ingress-1 chart, version: 0.14.5
✔ Install mirror ingress controller [0.7s]
Deploy mirror-node
- Installed mirror chart, version: v0.143.0
✔ Deploy mirror-node [3s]
✔ Enable mirror-node [3s]
Check pods are ready
Check Postgres DB
Check REST API
Check GRPC
Check Monitor
Check Web3
Check Importer
✔ Check Postgres DB [38s]
✔ Check Web3 [50s]
✔ Check GRPC [56s]
✔ Check REST API [1m10s]
✔ Check Monitor [1m22s]
✔ Check Importer [1m52s]
✔ Check pods are ready [1m52s]
Seed DB data
Insert data in public.file_data
✔ Insert data in public.file_data [0.3s]
✔ Seed DB data [0.3s]
Add mirror node to remote config
✔ Add mirror node to remote config
Enable port forwarding for mirror ingress controller
Using requested port 8081
✔ Enable port forwarding for mirror ingress controller
Stopping port-forwarder for port [30212]
5. Deploy the explorer
Details (click to expand/collapse)
The explorer gives you a UI to inspect accounts, transactions, and network status.
# Deploy explorer
solo explorer node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : explorer node add --deployment solo-deployment --cluster-ref kind-solo --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [0.4s]
Load remote config
✔ Load remote config [0.1s]
Install cert manager
Install cert manager [SKIPPED: Install cert manager]
Install explorer
- Installed hiero-explorer-1 chart, version: 25.1.1
✔ Install explorer [0.5s]
Install explorer ingress controller
Install explorer ingress controller [SKIPPED: Install explorer ingress controller]
Check explorer pod is ready
✔ Check explorer pod is ready [16s]
Check haproxy ingress controller pod is ready
Check haproxy ingress controller pod is ready [SKIPPED: Check haproxy ingress controller pod is ready]
Add explorer to remote config
✔ Add explorer to remote config
Enable port forwarding for explorer
Using requested port 8080
✔ Enable port forwarding for explorer [0.1s]
6. Deploy a JSON RPC relay
Details (click to expand/collapse)
The JSON RPC relay allows you to interact with your Hedera network using standard JSON RPC calls. This is useful for integrating with existing tools and libraries.
# Deploy a Solo JSON RPC relay
solo relay node add -i node1 --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : relay node add --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [0.5s]
Check chart is installed
✔ Check chart is installed [0.1s]
Prepare chart values
Using requested port 30212
✔ Prepare chart values [1s]
Deploy JSON RPC Relay
- Installed relay-1 chart, version: 0.73.0
✔ Deploy JSON RPC Relay [41s]
Check relay is running
✔ Check relay is running
Check relay is ready
✔ Check relay is ready
Add relay component in remote config
✔ Add relay component in remote config
Enable port forwarding for relay node
Using requested port 7546
✔ Enable port forwarding for relay node [0.1s]
Stopping port-forwarder for port [30212]
*Check pod status
Details (click to expand/collapse)
To check the status of your Solo Kubernetes pods:
kubectl get pods -n "${SOLO_NAMESPACE}"
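If pods are still starting up, you can also watch them until everything reports Running (press Ctrl+C to stop):
kubectl get pods -n "${SOLO_NAMESPACE}" -w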
Working with Your Network
Network Endpoints
Details (click to expand/collapse)
Some port forwarding is automatic, but in other cases you may want to configure your own using kubectl port-forward.
# Consensus Service for node1 (node ID = 0): localhost:50211
# (Usually automatic)
# kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 50211:50211 > /dev/null 2>&1 &
# Explorer UI: http://localhost:8080
# (Usually automatic)
# kubectl port-forward svc/hiero-explorer -n "${SOLO_NAMESPACE}" 8080:8080 > /dev/null 2>&1 &
# Mirror Node gRPC, REST, REST Java, Web3 are usually exposed on `localhost:8081`
# when you passed `--enable-ingress` to the `solo mirror node add` command.
# Mirror Node gRPC: localhost:5600
kubectl port-forward svc/mirror-1-grpc -n "${SOLO_NAMESPACE}" 5600:5600 > /dev/null 2>&1 &
# Mirror Node REST API: http://localhost:5551
kubectl port-forward svc/mirror-1-rest -n "${SOLO_NAMESPACE}" 5551:80 > /dev/null 2>&1 &
# Mirror Node REST Java API: http://localhost:8084
kubectl port-forward svc/mirror-1-restjava -n "${SOLO_NAMESPACE}" 8084:80 > /dev/null 2>&1 &
# JSON RPC Relay: localhost:7546
# (Usually automatic)
# kubectl port-forward svc/relay-node1-hedera-json-rpc-relay -n "${SOLO_NAMESPACE}" 7546:7546 > /dev/null 2>&1 &
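With the forwards above in place, you can sanity-check the endpoints from the command line. For example (assuming the default ports shown above; jq is optional and only used for pretty-printing):
# Mirror Node REST API: fetch the most recent transaction
curl -s "http://localhost:5551/api/v1/transactions?limit=1" | jq .
# JSON RPC relay: standard eth_chainId request
curl -s -X POST http://localhost:7546 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'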
Managing Your Network
Stopping and starting nodes
Details (click to expand/collapse)
You can control individual nodes or the entire network:
# Stop all nodes
solo consensus node stop --deployment "${SOLO_DEPLOYMENT}"
# Stop a specific node
solo consensus node stop --deployment "${SOLO_DEPLOYMENT}" --node-aliases node1
# Restart nodes
solo consensus node restart --deployment "${SOLO_DEPLOYMENT}"
# Start nodes again
solo consensus node start --deployment "${SOLO_DEPLOYMENT}"
Viewing logs
Details (click to expand/collapse)
Access Solo and Consensus Node logs for troubleshooting:
# Capture logs, configs, and diagnostic artifacts from all consensus nodes and test connections
solo consensus diagnostics all --deployment "${SOLO_DEPLOYMENT}"
You can also use kubectl logs directly if you prefer.
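For example, to tail logs directly from a consensus node pod (the pod name below matches the single-node setup in this guide; check kubectl get pods if yours differs):
kubectl logs -n "${SOLO_NAMESPACE}" network-node-0 --tail=100 -f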
Updating the network
Details (click to expand/collapse)
To update nodes to a new Hedera version, you typically upgrade one minor version at a time:
solo consensus network upgrade --deployment "${SOLO_DEPLOYMENT}" --upgrade-version v0.62.6
Updating a single node
Details (click to expand/collapse)
To update a single node to a new Hedera version (again, usually one minor version at a time):
solo consensus node update --deployment "${SOLO_DEPLOYMENT}" --node-alias node1 --release-tag v0.62.6
It is also possible to update a single node through a process with separated steps. This is only useful in very specific cases, such as when testing the update process itself:
solo consensus dev-node-update prepare --deployment "${SOLO_DEPLOYMENT}" --node-alias node1 --release-tag v0.62.6 --output-dir context
solo consensus dev-node-update submit-transaction --deployment "${SOLO_DEPLOYMENT}" --input-dir context
solo consensus dev-node-update execute --deployment "${SOLO_DEPLOYMENT}" --input-dir context
Adding a new node to the network
Details (click to expand/collapse)
Adding a new node to an existing Solo network (high-level overview):
TODO solo consensus node add
It is possible to add a new node through a process with separated steps. This is only useful in very specific cases, such as when testing the node-adding process:
solo consensus dev-node-add prepare --gossip-keys true --tls-keys true --deployment "${SOLO_DEPLOYMENT}" --pvcs true --admin-key ***** --node-alias node1 --output-dir context
solo consensus dev-node-add submit-transaction --deployment "${SOLO_DEPLOYMENT}" --input-dir context
solo consensus dev-node-add execute --deployment "${SOLO_DEPLOYMENT}" --input-dir context
Deleting a node from the network
Details (click to expand/collapse)
This command is used to delete a node from an existing Solo network:
TODO solo consensus node destroy
It is possible to delete a node through a process with separated steps. This is only useful in very specific cases, such as when testing the delete process:
solo consensus dev-node-delete prepare --deployment "${SOLO_DEPLOYMENT}" --node-alias node1 --output-dir context
solo consensus dev-node-delete submit-transaction --deployment "${SOLO_DEPLOYMENT}" --input-dir context
solo consensus dev-node-delete execute --deployment "${SOLO_DEPLOYMENT}" --input-dir context
Troubleshooting: Common Issues and Solutions
1. Pods not starting
Details (click to expand/collapse)
If pods remain in Pending or CrashLoopBackOff state:
# Check pod events
kubectl describe pod -n "${SOLO_NAMESPACE}" <pod-name>
Common fixes:
- Increase Docker resources (memory/CPU)
- Check disk space
- Restart Docker and the Kind cluster
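If restarting Docker alone doesn’t help, recreating the Kind cluster gives you a clean slate (note: this destroys the cluster and everything deployed in it):
kind delete cluster -n "${SOLO_CLUSTER_NAME}"
kind create cluster -n "${SOLO_CLUSTER_NAME}"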
2. Connection refused errors
Details (click to expand/collapse)
If you can’t connect to network endpoints:
# Check service endpoints
kubectl get svc -n "${SOLO_NAMESPACE}"
# Manually forward ports if needed (example)
kubectl port-forward -n "${SOLO_NAMESPACE}" svc/network-node-0 50211:50211
3. Node synchronization issues
Details (click to expand/collapse)
If nodes aren’t forming consensus:
# Check node status
solo consensus state download --deployment "${SOLO_DEPLOYMENT}" --node-aliases node1
# Look for gossip connectivity issues
kubectl logs -n "${SOLO_NAMESPACE}" network-node-0 | grep -i gossip
# Restart problematic nodes
solo consensus node refresh --node-aliases node1 --deployment "${SOLO_DEPLOYMENT}"
Getting Help
Details (click to expand/collapse)
When you need assistance:
- Check the logs – Use solo consensus diagnostics all --deployment "${SOLO_DEPLOYMENT}", then examine ~/.solo/logs/.
- Documentation – Visit the Solo docs site (linked from the repository README).
- GitHub Issues – Report bugs at: https://github.com/hiero-ledger/solo/issues
- Community Support – Join the Hedera Discord community (linked from the Hedera docs / website).
Cleanup
Details (click to expand/collapse)
When you’re done with your test network, you can clean up resources.
*Fast clean up
Details (click to expand/collapse)
To quickly clean up your Solo network and remove all resources (all Kind clusters!), you can use the following commands. Be aware you will lose all your logs and data from prior runs:
for cluster in $(kind get clusters); do
kind delete cluster -n "$cluster"
done
rm -rf ~/.solo
1. Destroy relay node
Details (click to expand/collapse)
solo relay node destroy -i node1 --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : relay node destroy --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [0.6s]
Destroy JSON RPC Relay
*** Destroyed Relays ***
-------------------------------------------------------------------------------
- block-node-1 [block-node-server-0.23.2]
- haproxy-ingress-1 [haproxy-ingress-0.14.5]
- hiero-explorer-1 [hiero-explorer-chart-25.1.1]
- mirror-1 [hedera-mirror-0.143.0]
- prometheus-operator-crds [prometheus-operator-crds-24.0.2]
- solo-deployment [solo-deployment-0.58.1]
✔ Destroy JSON RPC Relay [0.5s]
Remove relay component from remote config
✔ Remove relay component from remote config
2. Destroy mirror node
Details (click to expand/collapse)
solo mirror node destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : mirror node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Using requested port 30212
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [1s]
Destroy mirror-node
✔ Destroy mirror-node [0.4s]
Delete PVCs
✔ Delete PVCs
Uninstall mirror ingress controller
✔ Uninstall mirror ingress controller [0.2s]
Remove mirror node from remote config
✔ Remove mirror node from remote config
Stopping port-forwarder for port [30212]
3. Destroy explorer node
Details (click to expand/collapse)
solo explorer node destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : explorer node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [0.5s]
Load remote config
✔ Load remote config [0.1s]
Destroy explorer
✔ Destroy explorer [0.2s]
Uninstall explorer ingress controller
✔ Uninstall explorer ingress controller [0.1s]
Remove explorer from remote config
✔ Remove explorer from remote config
*Destroy block node (experimental)
Details (click to expand/collapse)
Block Node destroy should run before consensus network destroy, since consensus network destroy removes the remote config. To destroy the block node (if you deployed it):
solo block node destroy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : block node destroy --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [0.5s]
Destroy block node
✔ Destroy block node [0.3s]
Disable block node component in remote config
✔ Disable block node component in remote config
4. Destroy network
Details (click to expand/collapse)
solo consensus network destroy --deployment "${SOLO_DEPLOYMENT}" --force
Example output:
******************************* Solo *********************************************
Version : 0.52.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus network destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
Check dependencies
Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 5.15.0-151-generic, Arch: x64]
✔ Check dependencies
Setup chart manager
✔ Setup chart manager [5s]
Initialize
Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
Running sub-tasks to destroy network
✔ Deleting the RemoteConfig configmap in namespace solo [0.4s]
Next Steps
Details (click to expand/collapse)
Congratulations! You now have a working Hedera test network. Here are some suggestions for what to explore next:
- Deploy Smart Contracts – Test your Solidity contracts on the local network.
- Mirror Node Queries – Explore the REST API at http://localhost:5551 (or your configured port).
- Multi-Node Testing – Add more nodes to test scalability and consensus behavior.
- Network Upgrades – Practice upgrading the Hedera platform version using Solo’s upgrade commands.
- Integration Testing – Connect your applications to the local network and build end-to-end tests.
Remember, this is your personal Hedera playground. Experiment freely, break things, learn, and have fun building on Hedera!
Happy coding with Solo! 🚀