Advanced Network Deployments

Advanced deployment options for Solo networks, including Falcon configuration, manual step-by-step deployment, Helm chart customization, and dynamic node management.

This guide covers advanced deployment scenarios for users who need more control over their Solo network configuration.

Prerequisites

Before using advanced deployment options, ensure you have completed the Solo User Guide and have:

  • Solo installed (solo --version)
  • Docker running with adequate resources
  • kubectl configured
  • A Kind cluster created

Set up your environment variables if not already done:

export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
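Later commands silently expand empty variables, so it is worth verifying all four are set first. This is a generic shell sketch, not a Solo command:

```shell
# Count how many of the required variables are unset or empty.
missing=0
for var in SOLO_CLUSTER_NAME SOLO_NAMESPACE SOLO_CLUSTER_SETUP_NAMESPACE SOLO_DEPLOYMENT; do
  value=$(eval "printf '%s' \"\${$var:-}\"")
  if [ -z "$value" ]; then
    echo "missing: $var"
    missing=$((missing + 1))
  fi
done
echo "unset variables: $missing"
```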

Falcon Deployment

Falcon deployment provides fine-grained control over all network components through a YAML configuration file. This is ideal for CI/CD pipelines, automated testing, and complex deployment scenarios.

Basic Falcon Deployment

solo one-shot falcon deploy --values-file falcon-values.yaml

Example Configuration File

Create a file named falcon-values.yaml:

network:
  --deployment: "my-network"
  --release-tag: "v0.65.0"
  --node-aliases: "node1"

setup:
  --release-tag: "v0.65.0"
  --node-aliases: "node1"

consensusNode:
  --deployment: "my-network"
  --node-aliases: "node1"
  --force-port-forward: true

mirrorNode:
  --enable-ingress: true
  --pinger: true

explorerNode:
  --enable-ingress: true

relayNode:
  --node-aliases: "node1"

Multi-Node Falcon Configuration

For multiple consensus nodes:

network:
  --deployment: "my-multi-network"
  --release-tag: "v0.65.0"
  --node-aliases: "node1,node2,node3"

setup:
  --release-tag: "v0.65.0"
  --node-aliases: "node1,node2,node3"

consensusNode:
  --deployment: "my-multi-network"
  --node-aliases: "node1,node2,node3"
  --force-port-forward: true

mirrorNode:
  --enable-ingress: true
  --pinger: true

explorerNode:
  --enable-ingress: true

relayNode:
  --node-aliases: "node1"
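The --node-aliases value follows the pattern node1,node2,…; for larger networks it is easier to generate the list than to type it out. A small helper sketch (node_aliases is a local convenience function, not a Solo command):

```shell
# Build "node1,node2,...,nodeN" for a given node count.
node_aliases() {
  count=$1
  result=""
  i=1
  while [ "$i" -le "$count" ]; do
    result="${result}${result:+,}node${i}"
    i=$((i + 1))
  done
  printf '%s\n' "$result"
}

node_aliases 3   # node1,node2,node3
```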

Falcon with Block Node

Note: Block Node is experimental and requires at least 16 GB of memory allocated to Docker.

network:
  --deployment: "block-node-network"
  --release-tag: "v0.62.6"
  --node-aliases: "node1"

setup:
  --release-tag: "v0.62.6"
  --node-aliases: "node1"

consensusNode:
  --deployment: "block-node-network"
  --node-aliases: "node1"
  --force-port-forward: true

blockNode:
  --deployment: "block-node-network"
  --release-tag: "v0.62.6"

mirrorNode:
  --enable-ingress: true
  --pinger: true

explorerNode:
  --enable-ingress: true

relayNode:
  --node-aliases: "node1"

Tearing Down Falcon Deployment

solo one-shot falcon destroy

See the Falcon example for a complete configuration template.
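In CI/CD pipelines it can be convenient to generate the values file from the same environment variables used elsewhere in this guide. An abridged sketch (only two sections are shown; fill in the rest as in the examples above, and the deploy line is commented out because it needs a live cluster):

```shell
# Generate an abridged single-node falcon-values.yaml from the environment.
cat > falcon-values.yaml <<EOF
network:
  --deployment: "${SOLO_DEPLOYMENT:-solo-deployment}"
  --release-tag: "v0.65.0"
  --node-aliases: "node1"

consensusNode:
  --deployment: "${SOLO_DEPLOYMENT:-solo-deployment}"
  --node-aliases: "node1"
  --force-port-forward: true
EOF

# Deploy with the generated file (requires a running cluster):
# solo one-shot falcon deploy --values-file falcon-values.yaml
```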

Step-by-Step Manual Deployment

For maximum control, you can deploy each component individually. This is useful for debugging, custom configurations, or when you need to modify specific deployment steps.

1. Connect Cluster and Create Deployment

# Connect to the Kind cluster
solo cluster-ref config connect --cluster-ref kind-${SOLO_CLUSTER_NAME} --context kind-${SOLO_CLUSTER_NAME}

# Create a new deployment
solo deployment config create -n "${SOLO_NAMESPACE}" --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: cluster-ref config connect --cluster-ref kind-solo --context kind-solo
**********************************************************************************
 Initialize
✔ Initialize 
 Validating cluster ref: 
✔ Validating cluster ref: kind-solo 
 Test connection to cluster: 
✔ Test connection to cluster: kind-solo 
 Associate a context with a cluster reference: 
✔ Associate a context with a cluster reference: kind-solo

2. Add Cluster to Deployment

Specify the number of consensus nodes:

# For a single node
solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --num-consensus-nodes 1

# For multiple nodes (e.g., 3 nodes)
# solo deployment cluster attach --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --num-consensus-nodes 3


3. Generate Keys

solo keys consensus generate --gossip-keys --tls-keys --deployment "${SOLO_DEPLOYMENT}"

PEM key files are generated in ~/.solo/cache/keys/.

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: keys consensus generate --gossip-keys --tls-keys --deployment solo-deployment
**********************************************************************************
 Initialize
✔ Initialize 
 Generate gossip keys
 Backup old files
✔ Backup old files 
 Gossip key for node: node1
✔ Gossip key for node: node1 [0.1s]
✔ Generate gossip keys [0.1s]
 Generate gRPC TLS Keys
 Backup old files
 TLS key for node: node1
✔ Backup old files 
✔ TLS key for node: node1 [0.4s]
✔ Generate gRPC TLS Keys [0.4s]
 Finalize
✔ Finalize
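To confirm key material was generated for every alias, you can count the files per node. A generic sketch (the exact filenames are Solo internals, so it only matches on the alias):

```shell
# Count the .pem files generated for each node alias under the key cache.
KEYS_DIR="${KEYS_DIR:-$HOME/.solo/cache/keys}"
count_keys() {
  find "$KEYS_DIR" -name "*${1}*.pem" 2>/dev/null | wc -l | tr -d ' '
}

for n in node1; do
  echo "$n: $(count_keys "$n") key file(s)"
done
```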

4. Set Up Cluster with Shared Components

solo cluster-ref config setup -s "${SOLO_CLUSTER_SETUP_NAMESPACE}"

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: cluster-ref config setup --cluster-setup-namespace solo-cluster
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
✔ Initialize 
 Install cluster charts
Skipping Grafana Agent chart installation
 Install pod-monitor-role ClusterRole
⏭️  ClusterRole pod-monitor-role already exists in context kind-solo, skipping
✔ Install pod-monitor-role ClusterRole 
 Install MinIO Operator chart
✅ MinIO Operator chart installed successfully on context kind-solo
✔ Install MinIO Operator chart [0.9s]
✔ Install cluster charts [0.9s]

5. Deploy the Network

solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: consensus network deploy --deployment solo-deployment --release-tag v0.66.0
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
 Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize [0.1s]
 Copy gRPC TLS Certificates
 Copy gRPC TLS Certificates [SKIPPED: Copy gRPC TLS Certificates]
 Prepare staging directory
 Copy Gossip keys to staging
✔ Copy Gossip keys to staging 
 Copy gRPC TLS keys to staging
✔ Copy gRPC TLS keys to staging 
✔ Prepare staging directory 
 Copy node keys to secrets
 Copy TLS keys
 Node: node1, cluster: kind-solo
 Copy Gossip keys
✔ Copy Gossip keys 
✔ Node: node1, cluster: kind-solo 
✔ Copy TLS keys 
✔ Copy node keys to secrets 
 Install monitoring CRDs
 Pod Logs CRDs
✔ Pod Logs CRDs 
 Prometheus Operator CRDs
 - Installed prometheus-operator-crds chart, version: 24.0.2
✔ Prometheus Operator CRDs
✔ Install monitoring CRDs
 Install chart 'solo-deployment'
 - Installed solo-deployment chart, version: 0.60.2
✔ Install chart 'solo-deployment'
 Check for load balancer
 Check for load balancer [SKIPPED: Check for load balancer]
 Redeploy chart with external IP address config
 Redeploy chart with external IP address config [SKIPPED: Redeploy chart with external IP address config]
 Check node pods are running
 Check Node: node1, Cluster: kind-solo
✔ Check Node: node1, Cluster: kind-solo
✔ Check node pods are running
 Check proxy pods are running
 Check HAProxy for: node1, cluster: kind-solo
 Check Envoy Proxy for: node1, cluster: kind-solo
✔ Check HAProxy for: node1, cluster: kind-solo 
✔ Check Envoy Proxy for: node1, cluster: kind-solo 
✔ Check proxy pods are running 
 Check auxiliary pods are ready
 Check MinIO
✔ Check MinIO 
✔ Check auxiliary pods are ready 
 Add node and proxies to remote config
✔ Add node and proxies to remote config 
 Copy block-nodes.json
✔ Copy block-nodes.json

6. Set Up Consensus Nodes

export CONSENSUS_NODE_VERSION=v0.66.0
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" --release-tag "${CONSENSUS_NODE_VERSION}"

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: consensus node setup --deployment solo-deployment --release-tag v0.66.0
**********************************************************************************
 Load configuration
✔ Load configuration [0.2s]
 Initialize
✔ Initialize [0.1s]
 Validate nodes states
 Validating state for node node1
✔ Validating state for node node1 - valid state: requested 
✔ Validate nodes states 
 Identify network pods
 Check network pod: node1
✔ Check network pod: node1 
✔ Identify network pods 
 Fetch platform software into network nodes
 Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ]
✔ Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ]
✔ Fetch platform software into network nodes
 Setup network nodes
 Node: node1
 Copy configuration files
✔ Copy configuration files [0.3s]
 Set file permissions
✔ Set file permissions [0.4s]
✔ Node: node1 [0.8s]
✔ Setup network nodes [0.9s]
 setup network node folders
✔ setup network node folders [0.1s]
 Change node state to configured in remote config
✔ Change node state to configured in remote config

7. Start Consensus Nodes

solo consensus node start --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: consensus node start --deployment solo-deployment
**********************************************************************************
 Check dependencies
 Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Load configuration
✔ Load configuration [0.2s]
 Initialize
✔ Initialize [0.2s]
 Validate nodes states
 Validating state for node node1
✔ Validating state for node node1 - valid state: configured 
✔ Validate nodes states 
 Identify existing network nodes
 Check network pod: node1
✔ Check network pod: node1 
✔ Identify existing network nodes 
 Upload state files network nodes
 Upload state files network nodes [SKIPPED: Upload state files network nodes]
 Starting nodes
 Start node: node1
✔ Start node: node1 [0.2s]
✔ Starting nodes [0.2s]
 Enable port forwarding for debug port and/or GRPC port
Using requested port 50211
✔ Enable port forwarding for debug port and/or GRPC port 
 Check all nodes are ACTIVE
 Check network pod: node1 
✔ Check network pod: node1  - status ACTIVE, attempt: 16/300
✔ Check all nodes are ACTIVE
 Check node proxies are ACTIVE
 Check proxy for node: node1
✔ Check proxy for node: node1
✔ Check node proxies are ACTIVE
 set gRPC Web endpoint
Using requested port 30212
✔ set gRPC Web endpoint
 Change node state to started in remote config
✔ Change node state to started in remote config
 Add node stakes
 Adding stake for node: node1
✔ Adding stake for node: node1
✔ Add node stakes
Stopping port-forwarder for port [30212]

8. Deploy Mirror Node

solo mirror node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME} --enable-ingress --pinger

The --pinger flag ensures record files are imported regularly.

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: mirror node add --deployment solo-deployment --cluster-ref kind-solo --enable-ingress --quiet-mode
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
 Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
Using requested port 30212
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize
 Enable mirror-node
 Prepare address book
✔ Prepare address book 
 Install mirror ingress controller
 - Installed haproxy-ingress-1 chart, version: 0.14.5
✔ Install mirror ingress controller [0.5s]
 Deploy mirror-node
 - Installed mirror chart, version: v0.146.0
✔ Deploy mirror-node
✔ Enable mirror-node
 Check pods are ready
 Check Postgres DB
 Check REST API
 Check GRPC
 Check Monitor
 Check Web3
 Check Importer
✔ Check Postgres DB [38s]
✔ Check Web3 [1m4s]
✔ Check GRPC [1m6s]
✔ Check Monitor [1m14s]
✔ Check REST API [1m18s]
✔ Check Importer [1m52s]
✔ Check pods are ready [1m52s]
 Seed DB data
 Insert data in public.file_data
✔ Insert data in public.file_data [0.4s]
✔ Seed DB data [0.4s]
 Add mirror node to remote config
✔ Add mirror node to remote config 
 Enable port forwarding for mirror ingress controller
Using requested port 8081
✔ Enable port forwarding for mirror ingress controller 
Stopping port-forwarder for port [30212]

9. Deploy Explorer

solo explorer node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: explorer node add --deployment solo-deployment --cluster-ref kind-solo --quiet-mode
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
 Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize [0.4s]
 Load remote config
✔ Load remote config [0.2s]
 Install cert manager
 Install cert manager [SKIPPED: Install cert manager]
 Install explorer
 - Installed hiero-explorer-1 chart, version: 26.0.0
✔ Install explorer [0.8s]
 Install explorer ingress controller
 Install explorer ingress controller [SKIPPED: Install explorer ingress controller]
 Check explorer pod is ready
✔ Check explorer pod is ready
 Check haproxy ingress controller pod is ready
 Check haproxy ingress controller pod is ready [SKIPPED: Check haproxy ingress controller pod is ready]
 Add explorer to remote config
✔ Add explorer to remote config 
 Enable port forwarding for explorer
No port forward config found for Explorer
Using requested port 8080
✔ Enable port forwarding for explorer [0.1s]

10. Deploy JSON RPC Relay

solo relay node add -i node1 --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: relay node add --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
 Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize [0.4s]
 Check chart is installed
✔ Check chart is installed [0.1s]
 Prepare chart values
Using requested port 30212
✔ Prepare chart values
 Deploy JSON RPC Relay
 - Installed relay-1 chart, version: 0.73.0
✔ Deploy JSON RPC Relay [40s]
 Check relay is running
✔ Check relay is running 
 Check relay is ready
✔ Check relay is ready
 Add relay component in remote config
✔ Add relay component in remote config 
 Enable port forwarding for relay node
Using requested port 7546
✔ Enable port forwarding for relay node [0.1s]
Stopping port-forwarder for port [30212]

Deploying Block Node (Experimental)

Warning: Block Node requires at least 16 GB of memory and Consensus Node version v0.62.3 or higher.

Block Node must be deployed before the network:

# Deploy Block Node first
solo block node add --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-"${SOLO_CLUSTER_NAME}" --release-tag v0.62.6

# Then deploy the network with the matching version
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}"
solo consensus node setup --deployment "${SOLO_DEPLOYMENT}" --release-tag v0.62.6
solo consensus node start --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: block node add --deployment solo-deployment --cluster-ref kind-solo --release-tag v0.66.0
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
 Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize 
 Prepare release name and block node name
✔ Prepare release name and block node name 
 Prepare chart values
✔ Prepare chart values 
 Deploy block node
 - Installed block-node-1 chart, version: 0.26.2
✔ Deploy block node
 Check block node pod is running
✔ Check block node pod is running
 Check software
✔ Check software 
 Check block node pod is ready
✔ Check block node pod is ready [41s]
 Check block node readiness
✔ Check block node readiness - [1/100] success [0.1s]
 Add block node component in remote config
✔ Add block node component in remote config 
 Update consensus nodes
 Update consensus nodes in remote config
✔ Update consensus nodes in remote config 
✔ Update consensus nodes
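Because the ordering matters here (Block Node before the network), it can help to script the sequence. A sketch with a dry-run guard so the order can be reviewed before execution; run and DRY_RUN are local conventions in this sketch, not Solo features:

```shell
# Wrap the ordered steps; DRY_RUN defaults to 1 here so the script only
# prints the commands. Set DRY_RUN=0 to actually execute them.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

TAG=v0.62.6  # same release tag for Block Node and Consensus Node, as above

run solo block node add --deployment "${SOLO_DEPLOYMENT:-solo-deployment}" --cluster-ref "kind-${SOLO_CLUSTER_NAME:-solo}" --release-tag "$TAG"
run solo consensus network deploy --deployment "${SOLO_DEPLOYMENT:-solo-deployment}"
run solo consensus node setup --deployment "${SOLO_DEPLOYMENT:-solo-deployment}" --release-tag "$TAG"
run solo consensus node start --deployment "${SOLO_DEPLOYMENT:-solo-deployment}"
```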

To destroy Block Node (must be done before network destruction):

solo block node destroy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}

Connecting to a Remote Cluster

Solo can deploy to any Kubernetes cluster, not just local Kind clusters.

Setting Up Remote Cluster Connection

# View available contexts
kubectl config get-contexts

# Switch to your remote cluster context
kubectl config use-context <context-name>

# Connect Solo to the remote cluster
solo cluster-ref config connect --cluster-ref <cluster-ref-name> --context <context-name>

Remote Cluster Requirements

  • Kubernetes 1.24 or higher
  • Sufficient resources for network components
  • Network access to pull container images
  • Storage class available for persistent volumes
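The 1.24 minimum can be checked before connecting. A small comparison helper sketch (version_ge is a hypothetical helper, and the kubectl line assumes jq is installed):

```shell
# Succeed when version $1 >= version $2, comparing major.minor numerically.
# Assumes plain numeric components (no "27+"-style suffixes).
version_ge() {
  maj1=${1%%.*}; min1=${1#*.}; min1=${min1%%.*}
  maj2=${2%%.*}; min2=${2#*.}; min2=${min2%%.*}
  [ "$maj1" -gt "$maj2" ] || { [ "$maj1" -eq "$maj2" ] && [ "$min1" -ge "$min2" ]; }
}

# With a live cluster you would feed in the server version, e.g.:
# server=$(kubectl version -o json | jq -r '"\(.serverVersion.major).\(.serverVersion.minor)"')
version_ge "1.27" "1.24" && echo "meets the 1.24 minimum"
```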

Adding Nodes to an Existing Network

You can dynamically add new consensus nodes to a running network.

Quick Add (When Available)

# TODO: solo consensus node add (coming soon)

Step-by-Step Node Addition

For precise control over the node addition process:

# Prepare the new node
solo consensus dev-node-add prepare \
  --gossip-keys true \
  --tls-keys true \
  --deployment "${SOLO_DEPLOYMENT}" \
  --pvcs true \
  --admin-key <admin-key> \
  --node-alias node2 \
  --output-dir context

# Submit the transaction to add the node
solo consensus dev-node-add submit-transaction \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

# Execute the node addition
solo consensus dev-node-add execute \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

See the node-create-transaction example for a complete walkthrough.
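The three phases communicate only through the context directory, so they must run in order against the same --output-dir/--input-dir. A sketch that chains them and stops on the first failure (phase is a hypothetical helper; the echo stubs stand in for the real solo commands above):

```shell
# Run the three phases in order, aborting on the first failure so later
# phases never act on a stale context directory.
phase() {
  name=$1; shift
  echo "running phase: $name"
  "$@" || { echo "phase '$name' failed" >&2; return 1; }
}

phase prepare echo "solo consensus dev-node-add prepare ... --output-dir context" &&
  phase submit-transaction echo "solo consensus dev-node-add submit-transaction ... --input-dir context" &&
  phase execute echo "solo consensus dev-node-add execute ... --input-dir context"
```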

Deleting Nodes from a Network

You can dynamically remove consensus nodes from a running network.

Quick Delete (When Available)

# TODO: solo consensus node destroy (coming soon)

Step-by-Step Node Deletion

For precise control over the node deletion process:

# Prepare the node for deletion
solo consensus dev-node-delete prepare \
  --deployment "${SOLO_DEPLOYMENT}" \
  --node-alias node2 \
  --output-dir context

# Submit the transaction to delete the node
solo consensus dev-node-delete submit-transaction \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

# Execute the node deletion
solo consensus dev-node-delete execute \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

See the node-delete-transaction example for a complete walkthrough.

Step-by-Step Node Update

For testing the update process or granular control:

# Prepare the update
solo consensus dev-node-update prepare \
  --deployment "${SOLO_DEPLOYMENT}" \
  --node-alias node1 \
  --release-tag v0.66.0 \
  --output-dir context

# Submit the update transaction
solo consensus dev-node-update submit-transaction \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

# Execute the update
solo consensus dev-node-update execute \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

See the node-update-transaction example for a complete walkthrough.

Complete Cleanup for Manual Deployments

When using manual deployment, clean up in reverse order:

# 1. Destroy relay node
solo relay node destroy -i node1 --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: relay node destroy --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize [0.7s]
 Destroy JSON RPC Relay

 *** Destroyed Relays ***
-------------------------------------------------------------------------------
 - block-node-1 [block-node-server-0.26.2]
 - haproxy-ingress-1 [haproxy-ingress-0.14.5]
 - hiero-explorer-1 [hiero-explorer-chart-26.0.0]
 - mirror-1 [hedera-mirror-0.146.0]
 - prometheus-operator-crds [prometheus-operator-crds-24.0.2]
 - solo-deployment [solo-deployment-0.60.2]


✔ Destroy JSON RPC Relay [0.6s]
 Remove relay component from remote config
✔ Remove relay component from remote config

# 2. Destroy mirror node
solo mirror node destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: mirror node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
Using requested port 30212
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize
 Destroy mirror-node
✔ Destroy mirror-node [0.5s]
 Delete PVCs
✔ Delete PVCs 
 Uninstall mirror ingress controller
✔ Uninstall mirror ingress controller [0.3s]
 Remove mirror node from remote config
✔ Remove mirror node from remote config 
Stopping port-forwarder for port [30212]

# 3. Destroy explorer node
solo explorer node destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: explorer node destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize [0.6s]
 Load remote config
✔ Load remote config [0.1s]
 Destroy explorer
✔ Destroy explorer [0.2s]
 Uninstall explorer ingress controller
✔ Uninstall explorer ingress controller [0.1s]
 Remove explorer from remote config
✔ Remove explorer from remote config

# 4. Destroy block node (if deployed) - BEFORE network destruction
solo block node destroy --deployment "${SOLO_DEPLOYMENT}" --cluster-ref kind-${SOLO_CLUSTER_NAME}

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: block node destroy --deployment solo-deployment --cluster-ref kind-solo
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
 Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize [0.6s]
 Destroy block node
✔ Destroy block node [0.4s]
 Disable block node component in remote config
✔ Disable block node component in remote config 
 Rebuild 'block.nodes.json' for consensus nodes
✔ Rebuild 'block.nodes.json' for consensus nodes

# 5. Destroy the network
solo consensus network destroy --deployment "${SOLO_DEPLOYMENT}" --force

Example output:

******************************* Solo *********************************************
Version			: 0.57.0
Kubernetes Context	: kind-solo
Kubernetes Cluster	: kind-solo
Current Command		: consensus network destroy --deployment solo-deployment --quiet-mode
**********************************************************************************
 Check dependencies
 Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
 Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64]
✔ Check dependency: helm [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-94-generic, Arch: x64] 
✔ Check dependencies 
 Setup chart manager
✔ Setup chart manager 
 Initialize
 Acquire lock
✔ Acquire lock - lock acquired successfully, attempt: 1/10 
✔ Initialize 
 Running sub-tasks to destroy network
✔ Deleting the RemoteConfig configmap in namespace solo [0.4s]

Additional Examples

Explore more deployment scenarios in the Examples section: