Solo User Guide

Learn how to set up your first Hedera test network using Solo. This step-by-step guide covers installation, deployment, and your first transaction.

📝 If you have less than 16 GB of memory to dedicate to Docker, please skip the block node add and destroy steps.


Introduction

Welcome to the world of Hedera development! If you’re looking to build and test applications on the Hedera network but don’t want to spend HBAR on testnet or mainnet transactions, you’ve come to the right place. Solo is your gateway to running your own local Hedera test network, giving you complete control over your development environment.

Solo is an opinionated command-line interface (CLI) tool designed to deploy and manage standalone Hedera test networks. Think of it as your personal Hedera sandbox where you can experiment, test features, and develop applications without any external dependencies or costs. Whether you’re building smart contracts, testing consensus mechanisms, or developing DApps, Solo provides the infrastructure you need.

By the end of this tutorial, you’ll have your own Hedera test network running locally, complete with consensus nodes, mirror nodes, and all the infrastructure needed to submit transactions and test your applications. Let’s dive in!

Prerequisites

Before we begin, let’s ensure your system meets the requirements and has all the necessary software installed. Don’t worry if this seems like a lot – we’ll walk through each step together.

System Requirements (for a bare minimum install running 1 node)

First, check that your computer meets these minimum specifications:

  • Memory: At least 8GB of RAM (16GB recommended for smoother performance)
  • CPU: Minimum 4 cores (8 cores recommended)
  • Storage: At least 20GB of free disk space
  • Operating System: macOS, Linux, or Windows with WSL2
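Since the network runs inside Docker, also make sure Docker itself is allowed to use enough of these resources. A quick sanity check (a minimal sketch; assumes a recent Docker version):

# Show the CPUs and memory (in bytes) available to the Docker engine
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}}'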

Required Software

You’ll need to install a few tools before we can set up Solo. Here’s what you need and how to get it:

1. Node.js (≥20.18.0)


Solo is built on Node.js, so you’ll need version 20.18.0 or higher. We recommend using Node Version Manager (nvm) for easy version management:

# Install nvm (macOS/Linux)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash

# Install nvm (Windows - use nvm-windows)
# Download from: https://github.com/coreybutler/nvm-windows

# Install Node.js
nvm install 20.18.0
nvm use 20.18.0

# Verify installation
node --version

2. Docker Desktop


Docker is essential for running the containerized Hedera network components:

  • macOS/Windows: Download Docker Desktop from docker.com
  • Linux: Follow the installation guide for your distribution at docs.docker.com

After installation, ensure Docker is running:

docker --version
docker ps

Preparing Your Environment

Now that we have all prerequisites in place, let’s install Solo and set up our environment.

One thing to consider: old installs can really hamper your ability to get a new install up and running. If you have an old install of Solo, or if you are having issues with the install, please run the following commands to clean up your environment before proceeding.

1. Installing Solo


Open your terminal and install Solo globally using npm:

npm install -g @hashgraph/solo

# Verify the installation
solo --version

You should see output showing the installed version, which should match the latest NPM package version: https://www.npmjs.com/package/@hashgraph/solo


*Cleaning up an old install


The team is presently working on a number of fixes and automation that will remove the need for this, but as currently deployed, Solo can be finicky with artifacts from prior installs. A quick command to prep your workstation for a new install is a good idea.

for cluster in $(kind get clusters); do kind delete cluster -n "$cluster"; done
rm -Rf ~/.solo

2. Setting up your environment variables


You need to declare some environment variables. Keep in mind that unless you intentionally add these to your shell config, they will be lost when you close your terminal.

*Throughout the remainder of this walkthrough, for simplicity's sake, the commands assume these are the values in your environment:

export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
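If you'd rather not re-export these in every new session, one option is to append them to your shell config (a sketch for zsh; adapt the path for your shell):

# Persist the Solo variables for future zsh sessions
cat <<'EOF' >> ~/.zshrc
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
EOF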

3. Create a cluster

kind create cluster -n "${SOLO_CLUSTER_NAME}"

Example output:

Creating cluster "solo" ...
 ✓ Ensuring node image (kindest/node:v1.32.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-solo"
You can now use your cluster with:

kubectl cluster-info --context kind-solo

Have a nice day! 👋

*Connecting to a remote cluster

  • You may use a remote Kubernetes cluster. In this case, ensure your Kubernetes context is set up correctly:
kubectl config get-contexts
kubectl config use-context <context-name>

Quick Start Deployment

For a simple setup with a single consensus node plus a mirror node, explorer, and JSON RPC relay, you can follow these quick steps. This is ideal for testing and development purposes.

solo quick-start single deploy
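Once the command completes, you can watch the components come up (assuming the default solo namespace):

# Watch the Solo pods start
kubectl get pods -n solo --watch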

Step-by-Step Solo Network Deployment

If you have a more complex setup in mind, such as multiple nodes or specific configurations, follow these detailed steps to deploy your Solo network.

1. Initialize Solo


Reset the .solo directory before initializing Solo. This step is crucial to ensure a clean setup without any leftover artifacts from previous installations. See: *Cleaning up an old install

solo init

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: init
**********************************************************************************
✔ Setup home directory and cache
✔ Check dependency: helm [OS: linux, Release: 5.15.0-131-generic, Arch: x64]
✔ Check dependency: kind [OS: linux, Release: 5.15.0-131-generic, Arch: x64]
✔ Check dependencies
✔ Create local configuration
✔ Setup chart manager

***************************************************************************************
Note: solo stores various artifacts (config, logs, keys etc.) in its home directory: /home/runner/.solo
If a full reset is needed, delete the directory or relevant sub-directories before running 'solo init'.
***************************************************************************************
✔ Copy templates in '/home/runner/.solo/cache'

2. Connect the cluster and create a deployment


This command will create a deployment in the specified clusters, and generate the LocalConfig and RemoteConfig used by k8s.

The deployment will:

  • Create a namespace (usually matching the deployment name)
  • Set up ConfigMaps and secrets
  • Deploy network infrastructure
  • Create persistent volumes if needed

📝 Notice that the --cluster-ref value is kind-solo: when you created the Kind cluster, a context was added to your Kubernetes config with the name kind-solo. If you used a different name, replace kind-solo with your cluster name, prefixed with kind-. If you are working with a remote cluster, use the name of your cluster reference, which you can find with the command: kubectl config get-contexts.

📝 Note: Solo stores various artifacts (config, logs, keys, etc.) in its home directory: ~/.solo. If you need a full reset, delete this directory before running solo init again.

# connect to the cluster you created in a previous command
solo cluster-ref connect --cluster-ref kind-${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}

# create the deployment
solo deployment create -n "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: cluster-ref connect --cluster-ref kind-solo --context kind-solo
**********************************************************************************
✔ Initialize
✔ kind-solo
✔ Test connection to cluster: kind-solo
✔ Associate a context with a cluster reference: kind-solo

******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: deployment create --namespace solo --deployment solo-deployment --realm 0 --shard 0
Kubernetes Namespace	: solo
**********************************************************************************
✔ Initialize
✔ Adding deployment: solo-deployment with namespace: solo to local config

3. Add a cluster to the deployment you created


*This is the first command where you specify how many consensus nodes to add to your deployment. For the sake of resource usage, this guide adds a single node.

# Add a cluster to the deployment you created
solo deployment add-cluster --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --num-consensus-nodes 1
# If the command above is unresponsive, there's also a handy guided configurator:
# run `solo deployment add-cluster` without any arguments.

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: deployment add-cluster --deployment solo-deployment --cluster-ref kind-solo --num-consensus-nodes 1
**********************************************************************************
✔ Initialize
✔ Verify args
✔ check ledger phase
✔ Test cluster connection: kind-solo, context: kind-solo
✔ Verify prerequisites
✔ add cluster-ref: kind-solo for deployment: solo-deployment in local config
✔ create remote config for deployment: solo-deployment in cluster: kind-solo

4. Generate keys


You need to generate keys for your nodes, or in this case, your single node.

solo node keys --gossip-keys --tls-keys --deployment "${SOLO_DEPLOYMENT}"

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: node keys --gossip-keys --tls-keys --deployment solo-deployment
**********************************************************************************
✔ Initialize
✔ Backup old files
✔ Gossip key for node: node1
✔ Generate gossip keys
✔ Backup old files
✔ TLS key for node: node1
✔ Generate gRPC TLS Keys
✔ Finalize

PEM key files are generated in the ~/.solo/cache/keys directory. (The listing below is from a four-node example; with a single node you will only see the node1 files.)
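You can list the directory yourself:

# List the generated keys
ls ~/.solo/cache/keys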

hedera-node1.crt    hedera-node3.crt    s-private-node1.pem s-public-node1.pem  unused-gossip-pem
hedera-node1.key    hedera-node3.key    s-private-node2.pem s-public-node2.pem  unused-tls
hedera-node2.crt    hedera-node4.crt    s-private-node3.pem s-public-node3.pem
hedera-node2.key    hedera-node4.key    s-private-node4.pem s-public-node4.pem

5. Setup cluster with shared components

solo cluster-ref setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: cluster-ref setup --cluster-setup-namespace solo-cluster
**********************************************************************************
✔ Initialize
✔ Prepare chart values
 - Installed solo-cluster-setup chart, version: 0.54.4
✔ Install 'solo-cluster-setup' chart

Deploying the Helm chart with network components

Now comes the exciting part – deploying your Hedera test network!

*Deploy a block node (experimental)


⚠️ Block Node is experimental in Solo. It requires a minimum of 16 GB of memory allocated to Docker. If you have less than 16 GB of memory, skip this step.

As mentioned in the warning, Block Node uses a lot of memory. In addition, it requires the Consensus Node version to be at least v0.62.3. You will need to augment the `solo network deploy` and `solo node setup` commands with the `--release-tag v0.62.6` option to ensure that the Consensus Node is at a compatible version. *Note: v0.62.6 is the latest patch release for v0.62.

solo block node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-"${SOLO_CLUSTER_NAME}" --release-tag v0.62.6

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: block node add --deployment solo-deployment --cluster-ref kind-solo --release-tag v0.62.6
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Prepare release name
✔ Prepare chart values
 - Installed block-node-0 chart, version: 0.11.0
✔ Deploy block node
✔ Check block node pod is running
✔ Check software
✔ Check block node pod is ready
✔ Check block node readiness - [1/100] success
✔ Add block node component in remote config

1. Deploy the network


Deploying the network runs the risk of timeouts while images are downloaded and pods are starting. If you experience a failure, double-check the resources you've allocated in Docker Engine and give it another try.

solo network deploy --deployment "${SOLO_DEPLOYMENT}"
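# If you deployed the experimental block node above, pin the consensus
# version as described in that step, e.g.:
# solo network deploy --deployment "${SOLO_DEPLOYMENT}" --release-tag v0.62.6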

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: network deploy --deployment solo-deployment --release-tag v0.62.6
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Check if cluster setup chart is installed
✔ Copy Gossip keys to staging
✔ Copy gRPC TLS keys to staging
✔ Prepare staging directory
✔ Copy TLS keys
✔ Copy Gossip keys
✔ Node: node1, cluster: kind-solo
✔ Copy node keys to secrets
 - Installed solo-deployment chart, version: 0.54.4
✔ Install chart 'solo-deployment'
✔ Check Node: node1, Cluster: kind-solo
✔ Check node pods are running
✔ Check HAProxy for: node1, cluster: kind-solo
✔ Check Envoy Proxy for: node1, cluster: kind-solo
✔ Check proxy pods are running
✔ Check MinIO
✔ Check auxiliary pods are ready
✔ Add node and proxies to remote config
✔ Copy block-nodes.json

2. Set up a node with Hedera platform software


This step downloads the Hedera platform code and sets up your node (or nodes).

# node setup
export CONSENSUS_NODE_VERSION=v0.63.9 # or whatever version you are deploying, prefixed with a `v`; use v0.62.6 if you deployed the block node
solo node setup --deployment "${SOLO_DEPLOYMENT}" --release-tag "${CONSENSUS_NODE_VERSION}"

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: node setup --deployment solo-deployment --release-tag v0.62.6
**********************************************************************************
✔ Load configuration
✔ Initialize
✔ Validating state for node node1 - valid state: requested
✔ Validate nodes states
✔ Check network pod: node1
✔ Identify network pods
✔ Update node: node1 [ platformVersion = v0.62.6, context = kind-solo ]
✔ Fetch platform software into network nodes
✔ Copy configuration files
✔ Set file permissions
✔ Node: node1
✔ Setup network nodes
✔ Change node state to configured in remote config

3. Start the nodes up!


Now that everything is set up, you need to start your node (or nodes).

# start your node/nodes
solo node start --deployment "${SOLO_DEPLOYMENT}"

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: node start --deployment solo-deployment
**********************************************************************************
✔ Load configuration
✔ Initialize
✔ Validating state for node node1 - valid state: configured
✔ Validate nodes states
✔ Check network pod: node1
✔ Identify existing network nodes
✔ Start node: node1
✔ Starting nodes
✔ Enable port forwarding
✔ Check network pod: node1  - status ACTIVE, attempt: 18/300
✔ Check all nodes are ACTIVE
✔ Check proxy for node: node1
✔ Check node proxies are ACTIVE
✔ Change node state to started in remote config
✔ Adding stake for node: node1
✔ Add node stakes
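If you want to confirm node status at any point, the solo node states command (also used under Troubleshooting below) reports each node's state:

# Check the state of your node(s)
solo node states --deployment "${SOLO_DEPLOYMENT}" --node-aliases node1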

4. Deploy a mirror node


This is the most memory-intensive step from a resource perspective. If you have issues at this step, check your local resource utilization and make sure there's memory available for Docker (close all nonessential applications). Likewise, you can consider lowering your swap in Docker settings to ease the swap demand, and try again.

# Deploy with explicit configuration
solo mirror-node deploy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --enable-ingress

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: mirror-node deploy --deployment solo-deployment --cluster-ref kind-solo --enable-ingress --quiet-mode
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Prepare address book
 - Installed haproxy-ingress chart, version: 0.14.5
✔ Install mirror ingress controller
 - Installed mirror chart, version: v0.131.0
✔ Deploy mirror-node
✔ Enable mirror-node
✔ Check Postgres DB
✔ Check GRPC
✔ Check Importer
✔ Check REST API
✔ Check Monitor
✔ Check pods are ready
✔ Insert data in public.file_data
✔ Seed DB data
✔ Add mirror node to remote config
✔ Enable port forwarding

5. Deploy the explorer


Deploy the Explorer and watch the deployment progress:

# deploy explorer
solo explorer deploy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: explorer deploy --deployment solo-deployment --cluster-ref kind-solo --quiet-mode
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Load remote config
 - Installed hiero-explorer chart, version: 25.0.0
✔ Install explorer
✔ Check explorer pod is ready
✔ Add explorer to remote config
✔ Enable port forwarding

6. Deploy a JSON RPC relay


The JSON RPC relay allows you to interact with your Hedera network using standard JSON RPC calls. This is useful for integrating with existing tools and libraries.

# deploy a Solo JSON RPC relay
solo relay deploy -i node1 --deployment "${SOLO_DEPLOYMENT}"

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: relay deploy --node-aliases node1 --deployment solo-deployment
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Check chart is installed
✔ Prepare chart values
 - Installed relay-node1 chart, version: v0.67.0
✔ Deploy JSON RPC Relay
✔ Check relay is running
✔ Check relay is ready
✔ Add relay component in remote config
✔ Enable port forwarding

*Check Pod Status


Here is a command if you want to check the status of your Solo Kubernetes pods:

# Check pod status
kubectl get pods -n solo

Working with Your Network

Network Endpoints


Port forwarding is now automatic for many endpoints. However, you can set up additional forwards yourself using the kubectl port-forward command:

# Consensus Service for node1 (node ID = 0): localhost:50211
# should be automatic: kubectl port-forward svc/haproxy-node1-svc -n "${SOLO_NAMESPACE}" 50211:50211 > /dev/null 2>&1 &
# Explorer UI: http://localhost:8080
# should be automatic: kubectl port-forward svc/hiero-explorer -n "${SOLO_NAMESPACE}" 8080:8080 > /dev/null 2>&1 &
# Mirror Node gRPC, REST, REST Java, Web3 will be automatic on `localhost:8081` if you passed `--enable-ingress` to the `solo mirror-node deploy` command
# Mirror Node gRPC: localhost:5600
kubectl port-forward svc/mirror-grpc -n "${SOLO_NAMESPACE}" 5600:5600 > /dev/null 2>&1 &
# Mirror Node REST API: http://localhost:5551
kubectl port-forward svc/mirror-rest -n "${SOLO_NAMESPACE}" 5551:80 > /dev/null 2>&1 &
# Mirror Node REST Java API http://localhost:8084
kubectl port-forward service/mirror-restjava -n "${SOLO_NAMESPACE}" 8084:80 > /dev/null 2>&1 &
# JSON RPC Relay: localhost:7546
# should be automatic: kubectl port-forward svc/relay-node1-hedera-json-rpc-relay -n "${SOLO_NAMESPACE}" 7546:7546 > /dev/null 2>&1 &
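With the ports forwarded, you can sanity-check the endpoints from your terminal. A minimal sketch, assuming the default ports above:

# Query the Mirror Node REST API for the network's node list
curl -s "http://localhost:5551/api/v1/network/nodes"

# Ask the JSON RPC relay for the chain ID (eth_chainId is a standard JSON-RPC method)
curl -s -X POST http://localhost:7546 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'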

Managing Your Network

Stopping and Starting Nodes


You can control individual nodes or the entire network:

# Stop all nodes
solo node stop --deployment solo-deployment

# Stop a specific node
solo node stop --node-aliases node1 --deployment solo-deployment

# Restart nodes
solo node restart --deployment solo-deployment

# Start nodes again
solo node start --deployment solo-deployment

Viewing Logs


Access Solo and Consensus Node logs for troubleshooting:

# Download logs from all nodes
solo node logs --node-aliases node1 --deployment solo-deployment

# Logs are saved to ~/.solo/logs/<namespace>/<pod-name>/
# You can also use kubectl directly:
# kubectl logs -n solo <pod-name>

Updating the Network


To update nodes to a new Hedera version, you can only upgrade one minor version at a time:

solo node upgrade --deployment solo-deployment --upgrade-version v0.62.6
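For example, stepping a network from v0.61.x to v0.63.x takes two upgrade calls (a sketch; the exact patch versions are illustrative):

# v0.61.x -> v0.62.x
solo node upgrade --deployment solo-deployment --upgrade-version v0.62.6
# v0.62.x -> v0.63.x
solo node upgrade --deployment solo-deployment --upgrade-version v0.63.9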

Updating a single node


To update a single node to a new Hedera version, you can likewise only move one minor version at a time:

solo node update --deployment solo-deployment --node-alias node1 --release-tag v0.62.6

It is possible to update a single node to a new Hedera version through a process with separated steps. This is only useful in very specific cases, such as when testing the updating process.

solo node update-prepare --deployment solo-deployment --node-alias node1 --release-tag v0.62.6 --output-dir context
solo node update-submit-transactions --deployment solo-deployment --input-dir context
solo node update-execute --deployment solo-deployment --input-dir context

Adding a new node to the network


Adding a new node to an existing Solo network:

TODO solo node add

It is possible to add a new node through a process with separated steps. This is only useful in very specific cases, such as when testing the node adding process.

solo node add-prepare --gossip-keys true --tls-keys true --deployment solo-deployment --pvcs true --admin-key ***** --node-alias node1 --output-dir context
solo node add-submit-transactions --deployment solo-deployment --input-dir context
solo node add-execute --deployment solo-deployment --input-dir context

Deleting a node from the network


This command is used to delete a node from an existing Solo network:

TODO solo node delete

It is possible to delete a node through a process with separated steps. This is only useful in very specific cases, such as when testing the delete process.

solo node delete-prepare --deployment solo-deployment --node-alias node1 --output-dir context
solo node delete-submit-transactions --deployment solo-deployment --input-dir context
solo node delete-execute --deployment solo-deployment --input-dir context

Troubleshooting: Common Issues and Solutions

1. Pods Not Starting


If pods remain in Pending or CrashLoopBackOff state:

# Check pod events
kubectl describe pod -n solo network-node-0

# Common fixes:
# - Increase Docker resources (memory/CPU)
# - Check disk space
# - Restart Docker and the kind cluster

2. Connection Refused Errors


If you can’t connect to network endpoints:

# Check service endpoints
kubectl get svc -n solo

# Manually forward ports if needed
kubectl port-forward -n solo svc/network-node-0 50211:50211

3. Node Synchronization Issues


If nodes aren’t forming consensus:

# Check node status
solo node states --deployment solo-deployment --node-aliases node1

# Look for gossip connectivity issues
kubectl logs -n solo network-node-0 | grep -i gossip

# Restart problematic nodes
solo node refresh --node-aliases node1 --deployment solo-deployment

Getting Help


When you need assistance:

  1. Check the logs: Use solo node logs --deployment solo-deployment --node-aliases node1 and examine ~/.solo/logs/
  2. Documentation: Visit https://solo.hiero.org/latest/docs/
  3. GitHub Issues: Report bugs at https://github.com/hiero-ledger/solo/issues
  4. Community Support: Join the Hedera Discord community: https://discord.gg/Ysruf53q

Cleanup


When you’re done with your test network:

*Fast clean up


To quickly clean up your Solo network and remove all resources (this deletes all Kind clusters!), you can use the following commands. Be aware that you will lose all your logs and data from prior runs:

for cluster in $(kind get clusters); do kind delete cluster -n "$cluster"; done
rm -Rf ~/.solo

1. Destroy relay node

solo relay destroy -i node1 --deployment "${SOLO_DEPLOYMENT}"

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: relay destroy --node-aliases node1 --deployment solo-deployment
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Destroy JSON RPC Relay
✔ Remove relay component from remote config

2. Destroy mirror node

solo mirror-node destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: mirror-node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Destroy mirror-node
✔ Delete PVCs
✔ Uninstall mirror ingress controller
✔ Remove mirror node from remote config

3. Destroy explorer node

solo explorer destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: explorer destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Load remote config
✔ Destroy explorer
✔ Uninstall explorer ingress controller
✔ Remove explorer from remote config

*Destroy block node (Experimental)


Block Node destroy should run prior to network destroy, since network destroy removes the remote config. To destroy the block node (if you deployed it), you can use the following command:

solo block node destroy --deployment "${SOLO_DEPLOYMENT}"

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: block node destroy --deployment solo-deployment
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Look-up block node
✔ Destroy block node
✔ Disable block node component in remote config

4. Destroy network

solo network destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:


******************************* Solo *********************************************
Version			: 0.40.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: network destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize
✔ Remove deployment from local configuration
✔ Deleting the RemoteConfig configmap in namespace solo


Next Steps


Congratulations! You now have a working Hedera test network. Here are some suggestions for what to explore next:

  1. Deploy Smart Contracts: Test your Solidity contracts on the local network
  2. Mirror Node Queries: Explore the REST API at http://localhost:5551 (see the example query after this list)
  3. Multi-Node Testing: Add more nodes to test scalability
  4. Network Upgrades: Practice upgrading the Hedera platform version
  5. Integration Testing: Connect your applications to the local network
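For example, here is a quick query against your local mirror node REST API (account 0.0.2 is the treasury account on a fresh network; the query is just an illustration):

# Fetch the treasury account from your local mirror node
curl -s "http://localhost:5551/api/v1/accounts/0.0.2"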

Remember, this is your personal Hedera playground. Experiment freely, break things, learn, and have fun building on Hedera!

Happy coding with Solo! 🚀