Network Deployments
1 - One-shot Falcon Deployment
Overview
One-shot Falcon deployment is Solo’s YAML-driven one-shot workflow. It uses the same core
deployment pipeline as solo one-shot single deploy, but lets you inject
component-specific flags through a single values file.
Use Falcon deployment when you need a repeatable advanced setup, want to check a complete deployment into source control, or need to customise component flags without running every Solo command manually.
Falcon is especially useful for:
- CI/CD pipelines and automated test environments.
- Reproducible local developer setups.
- Advanced deployments that need custom chart paths, image versions, ingress, storage, TLS, or node startup options.
Important: Falcon is an orchestration layer over Solo’s standard commands. It does not introduce a separate deployment model. Solo still creates a deployment, attaches clusters, deploys the network, configures nodes, and then adds optional components such as mirror node, explorer, and relay.
Prerequisites
Before proceeding, ensure you have completed the following:
System Readiness — your local environment meets the hardware and software requirements for Solo, Kubernetes, Docker, Kind, kubectl, and Helm.
Quickstart — you are already familiar with the standard one-shot deployment workflow.
Set your environment variables if you have not already done so:
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
How Falcon Works
When you run Falcon deployment, Solo executes the same end-to-end deployment sequence used by its one-shot workflows:
- Connect to the Kubernetes cluster.
- Create a deployment and attach the cluster reference.
- Set up shared cluster components.
- Generate gossip and TLS keys.
- Deploy the consensus network and, if enabled, the block node (in parallel).
- Set up and start consensus nodes.
- Optionally, deploy mirror node, explorer, and relay in parallel for faster startup.
- Create predefined test accounts.
- Write deployment notes, versions, port-forward details, and account data to a local output directory.
The difference is that Falcon reads a YAML file and maps its top-level sections to the underlying Solo subcommands.
| Values file section | Solo subcommand invoked |
|---|---|
network | solo consensus network deploy |
setup | solo consensus node setup |
consensusNode | solo consensus node start |
mirrorNode | solo mirror node add |
explorerNode | solo explorer node add |
relayNode | solo relay node add |
blockNode | solo block node add (when ONE_SHOT_WITH_BLOCK_NODE=true) |
For the full list of supported CLI flags per section, see the Falcon Values File Reference.
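Conceptually, the section-to-subcommand mapping is a flattening step: each YAML section becomes an argument list for its subcommand. The following Python sketch illustrates the idea only; it is not Solo's actual implementation, which also drops flags matching their defaults and ignores --deployment inside sections:

```python
def expand_section(section: dict) -> list[str]:
    """Flatten one Falcon values-file section into CLI arguments.

    Illustrative sketch only: Solo's real expansion additionally skips
    flags that match their defaults and ignores --deployment entries.
    """
    args: list[str] = []
    for flag, value in section.items():
        if value == "" or value is None:
            continue  # empty values are ignored at expansion time
        # booleans render as lowercase true/false, everything else as-is
        args += [flag, str(value).lower() if isinstance(value, bool) else str(value)]
    return args

network = {"--release-tag": "v0.71.0", "--pvcs": False, "--chart-dir": ""}
print(expand_section(network))
# ['--release-tag', 'v0.71.0', '--pvcs', 'false']
```

The resulting list is what ends up appended to the corresponding `solo` subcommand invocation.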
Create a Falcon Values File
Create a YAML file to control every component of your Solo deployment. The file can have any name; falcon-values.yaml is used throughout this guide as a convention.
Note: Keys within each section must be the full CLI flag name, including the -- prefix: for example, --release-tag, not release-tag or -r. Any section you omit from the file is skipped, and Solo uses the built-in defaults for that component.
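One way to catch malformed keys before deploying is to scan each section for entries that are not written as long-form flags. This is a hypothetical helper for your own tooling, not part of Solo:

```python
def find_bad_flag_keys(values: dict) -> list[str]:
    """Return 'section: key' strings for keys that are not full --long-form flags."""
    bad = []
    for section, flags in values.items():
        for key in flags:
            if not key.startswith("--"):
                bad.append(f"{section}: {key}")
    return bad

values = {"network": {"--release-tag": "v0.71.0", "release-tag": "x", "-r": "y"}}
print(find_bad_flag_keys(values))
# ['network: release-tag', 'network: -r']
```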
Example: Single-Node Falcon Deployment
The following falcon-values.yaml example deploys a standard single-node network with mirror node,
explorer, and relay enabled:
network:
--release-tag: "v0.71.0"
--pvcs: false
setup:
--release-tag: "v0.71.0"
consensusNode:
--force-port-forward: true
mirrorNode:
--enable-ingress: true
--pinger: true
--force-port-forward: true
explorerNode:
--enable-ingress: true
--force-port-forward: true
relayNode:
--node-aliases: "node1"
--force-port-forward: true
Deploy with Falcon one-shot
Run Falcon deployment by pointing Solo at the values file:
solo one-shot falcon deploy --values-file falcon-values.yaml
Solo creates a one-shot deployment, applies the values from the YAML file to the appropriate subcommands, and then deploys the full environment.
What Falcon Does Not Read from the File
Some Falcon settings are controlled directly by the top-level command flags, not by section entries in the values file:
- --values-file selects the YAML file to load.
- --deploy-mirror-node, --deploy-explorer, and --deploy-relay control whether those optional components are deployed at all.
- --deployment, --namespace, --cluster-ref, and --num-consensus-nodes are top-level one-shot inputs.
Important: Do not rely on --deployment inside falcon-values.yaml. Solo intentionally ignores --deployment values from section content during Falcon argument expansion. Set the deployment name on the command line if you need a specific name.
Tip: When not specified, Falcon uses these defaults:
--deployment one-shot, --namespace one-shot, --cluster-ref one-shot, and --num-consensus-nodes 1. Pass any of these explicitly on the command line to override them.
Example:
solo one-shot falcon deploy \
--deployment falcon-demo \
--cluster-ref one-shot \
--values-file falcon-values.yaml
Multi-Node Falcon Deployment
For multiple consensus nodes, set the node count on the Falcon command and then provide matching per-node settings where required.
Example:
solo one-shot falcon deploy \
  --deployment falcon-multi \
  --num-consensus-nodes 3 \
  --values-file falcon-values.yaml

Example multi-node values file:

network:
  --release-tag: "v0.71.0"
  --pvcs: true
setup:
  --release-tag: "v0.71.0"
consensusNode:
  --force-port-forward: true
  --stake-amounts: "100,100,100"
mirrorNode:
  --enable-ingress: true
  --pinger: true
explorerNode:
  --enable-ingress: true
relayNode:
  --node-aliases: "node1,node2,node3"

The --node-aliases value in the relayNode section must match the node aliases generated by --num-consensus-nodes. Nodes are auto-named node1, node2, node3, and so on. Setting this to only node1 is valid if you want the relay to serve a single node, but specifying all aliases is typical for full coverage.

Use this pattern when you need a repeatable multi-node deployment but do not want to manage each step manually.
Note: Multi-node deployments require more host resources than single-node deployments. Follow the resource guidance in System Readiness, and increase Docker memory and CPU allocation before deploying.
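Because Solo's auto-generated aliases follow a simple node<N> pattern, a matching --node-aliases value can be derived from the node count. A small illustrative helper (not part of Solo) for scripts that template the values file:

```python
def default_node_aliases(num_consensus_nodes: int) -> str:
    """Build the comma-separated alias list matching Solo's node1..nodeN naming."""
    return ",".join(f"node{i}" for i in range(1, num_consensus_nodes + 1))

print(default_node_aliases(3))  # node1,node2,node3
```

Generating the alias list from the same variable you pass to --num-consensus-nodes keeps the relay configuration and node count from drifting apart.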
(Optional) Component Toggles
Falcon can skip optional components at the command line without requiring a second YAML file.
For example, to deploy only the consensus network and mirror node:
solo one-shot falcon deploy \
--values-file falcon-values.yaml \
--deploy-explorer=false \
--deploy-relay=false
Available toggles and their defaults:
| Flag | Default | Description |
|---|---|---|
--deploy-mirror-node | true | Include the mirror node in the deployment. |
--deploy-explorer | true | Include the explorer in the deployment. |
--deploy-relay | true | Include the JSON RPC relay in the deployment. |
Important: The explorer and relay both depend on the mirror node. Setting
--deploy-mirror-node=falsewhile keeping--deploy-explorer=trueor--deploy-relay=trueis not a supported configuration and will result in a failed deployment.
This is useful when you want to:
- Reduce resource usage in CI jobs.
- Isolate one component during testing.
- Reuse the same YAML file across multiple deployment profiles.
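The mirror node dependency rule above can be expressed as a small pre-flight check in automation. This is a hypothetical sketch, not a Solo API:

```python
def validate_toggles(mirror_node: bool, explorer: bool, relay: bool) -> list[str]:
    """Check component toggles: explorer and relay both require the mirror node."""
    errors = []
    if explorer and not mirror_node:
        errors.append("--deploy-explorer=true requires --deploy-mirror-node=true")
    if relay and not mirror_node:
        errors.append("--deploy-relay=true requires --deploy-mirror-node=true")
    return errors

print(validate_toggles(mirror_node=False, explorer=True, relay=False))
# ['--deploy-explorer=true requires --deploy-mirror-node=true']
```

Running a check like this in CI before invoking `solo one-shot falcon deploy` fails fast instead of waiting for the deployment itself to error out.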
Common Falcon Customisations
Because each YAML section maps directly to the corresponding Solo subcommand, you can use Falcon to centralise advanced options such as:
- Custom release tags for the consensus node platform.
- Local chart directories for mirror node, relay, explorer, or block node.
- Local consensus node build paths for development workflows.
- Ingress and domain settings.
- Mirror node external database settings.
- Node startup settings such as state files, port forwarding, and stake amounts.
- Storage backends and credentials for stream file handling.
Example: Local Development with Local Chart Directories
setup:
--local-build-path: "/path/to/hiero-consensus-node/hedera-node/data"
mirrorNode:
--mirror-node-chart-dir: "/path/to/hiero-mirror-node/charts"
relayNode:
--relay-chart-dir: "/path/to/hiero-json-rpc-relay/charts"
explorerNode:
--explorer-chart-dir: "/path/to/hiero-mirror-node-explorer/charts"
This pattern is useful for local integration testing against unpublished component builds.
Falcon with Block Node
Falcon can also include block node configuration.
Note: Block node workflows are advanced and require higher resource allocation and version compatibility across consensus node, block node, and related components. Docker memory must be set to at least 16 GB before deploying with block node enabled.
Block node support also requires the ONE_SHOT_WITH_BLOCK_NODE=true environment variable to be set before running falcon deploy. Without it, Solo skips the block node add step even if a blockNode section is present in the values file.
Block node deployment is subject to version compatibility requirements. Minimum versions are consensus node ≥ v0.72.0 and block node ≥ 0.29.0. Mixing incompatible versions will cause the deployment to fail. Check the Version Compatibility Reference before enabling block node.
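The version floor can be checked mechanically before a deploy. A sketch assuming simple vMAJOR.MINOR.PATCH release tags, using the minimums stated above:

```python
def parse_tag(tag: str) -> tuple[int, ...]:
    """Parse a vX.Y.Z (or X.Y.Z) release tag into a comparable tuple."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def block_node_versions_ok(consensus_tag: str, block_node_tag: str) -> bool:
    # Minimums from this guide: consensus node >= v0.72.0, block node >= 0.29.0
    return parse_tag(consensus_tag) >= (0, 72, 0) and parse_tag(block_node_tag) >= (0, 29, 0)

print(block_node_versions_ok("v0.72.0", "v0.29.0"))  # True
print(block_node_versions_ok("v0.71.0", "v0.29.0"))  # False
```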
Example:
network:
--release-tag: "v0.72.0"
setup:
--release-tag: "v0.72.0"
consensusNode:
--force-port-forward: true
blockNode:
--release-tag: "v0.29.0"
--enable-ingress: false
mirrorNode:
--enable-ingress: true
--pinger: true
explorerNode:
--enable-ingress: true
relayNode:
--node-aliases: "node1"
--force-port-forward: true
Use block node settings only when your target Solo and component versions are known to be compatible.
Rollback and Failure Behaviour
Falcon deployment enables automatic rollback by default.
If deployment fails after resources have already been created, Solo attempts to destroy the one-shot deployment automatically and clean up the namespace.
If you want to preserve the failed deployment for debugging, disable rollback:
solo one-shot falcon deploy \
--values-file falcon-values.yaml \
--no-rollback
Use --no-rollback only when you explicitly want to inspect partial resources,
logs, or Kubernetes objects after a failed run.
Deployment Output
After a successful Falcon deployment, Solo writes deployment metadata to
~/.solo/one-shot-<deployment>/ where <deployment> is the value of the
--deployment flag (default: one-shot).
This directory typically contains:
- notes — human-readable deployment summary
- versions — component versions recorded at deploy time
- forwards — port-forward configuration
- accounts.json — predefined test account keys and IDs. All accounts are ECDSA Alias accounts (EVM-compatible) and include a publicAddress field. The file also includes the system operator account.
This makes Falcon especially useful for automation, because the deployment artifacts are written to a predictable path after each run.
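Because the output path is predictable, automation can pick up the generated test accounts directly. The exact accounts.json schema is not documented here, so the sketch below assumes a list of account objects each carrying a publicAddress field; adjust it to the file your Solo version actually writes:

```python
import json
from pathlib import Path

def evm_addresses(accounts: list[dict]) -> list[str]:
    """Extract publicAddress values from parsed account records."""
    return [acct["publicAddress"] for acct in accounts if "publicAddress" in acct]

def load_evm_addresses(deployment: str = "one-shot") -> list[str]:
    """Read accounts.json from the Falcon deployment output directory.

    Assumes the file holds a JSON list of account objects (an assumption,
    not a documented schema).
    """
    path = Path.home() / ".solo" / f"one-shot-{deployment}" / "accounts.json"
    return evm_addresses(json.loads(path.read_text()))
```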
To inspect the latest one-shot deployment metadata later, run:
solo one-shot show deployment
If port-forwards are interrupted after deployment - for example after a system restart or network disruption - restore them without redeploying:
solo deployment refresh port-forwards
Destroy a Falcon Deployment
Destroy the Falcon deployment with:
solo one-shot falcon destroy

Solo removes deployed extensions first, then destroys the mirror node, network, cluster references, and local deployment metadata.
If multiple deployments exist locally, Solo prompts you to choose which one to destroy unless you pass --deployment explicitly:

solo one-shot falcon destroy --deployment falcon-demo
When to Use Falcon vs. Manual Deployment
Use Falcon deployment when you want a single, repeatable command backed by a versioned YAML file.
Use Step-by-Step Manual Deployment when you need to pause between steps, inspect intermediate state, or debug a specific deployment phase in isolation.
In practice:
- Falcon is better for automation and repeatability.
- Manual deployment is better for debugging and low-level control.
Reference
- Falcon Values File Reference - full list of supported CLI flags, types, and defaults for every section.
- Upstream example values file - working reference from the Solo repository.
Tip: If you are creating a values file for the first time, start from the annotated template in the Solo repository rather than writing one from scratch:
examples/one-shot-falcon/falcon-values.yaml

This file includes all supported sections and flags with inline comments explaining each option. Copy it, remove what you do not need, and adjust the values for your environment.
2 - Falcon Values File Reference
Overview
This page catalogs the Solo CLI flags accepted under each top-level section of a Falcon values file. Each entry corresponds to the command-line flag that the underlying Solo subcommand accepts.
Sections map to subcommands as follows:
| Section | Solo subcommand |
|---|---|
network | solo consensus network deploy |
setup | solo consensus node setup |
consensusNode | solo consensus node start |
mirrorNode | solo mirror node add |
explorerNode | solo explorer node add |
relayNode | solo relay node add |
blockNode | solo block node add |
All flag names must be written in long form with double dashes (for example,
--release-tag). Flags left empty ("") or matching their default value are
ignored by Solo at argument expansion time.
Note: Not every flag listed here is relevant to every deployment. Use this page as a lookup when writing or debugging a values file. For a working example file, see the upstream reference at https://github.com/hiero-ledger/solo/tree/main/examples/one-shot-falcon.
Consensus Network Deploy — network
Flags passed to solo consensus network deploy.
| Flag | Type | Default | Description |
|---|---|---|---|
--release-tag | string | current Hedera platform version | Consensus node release tag (e.g. v0.71.0). |
--pvcs | boolean | false | Enable Persistent Volume Claims for consensus node storage. Required for node add operations. |
--load-balancer | boolean | false | Enable load balancer for network node proxies. |
--chart-dir | string | — | Path to a local Helm chart directory for the Solo network chart. |
--solo-chart-version | string | current chart version | Specific Solo testing chart version to deploy. |
--haproxy-ips | string | — | Static IP mapping for HAProxy pods (e.g. node1=127.0.0.1,node2=127.0.0.2). |
--envoy-ips | string | — | Static IP mapping for Envoy proxy pods. |
--debug-node-alias | string | — | Enable the default JVM debug port (5005) for the specified node alias. |
--domain-names | string | — | Custom domain name mapping per node alias (e.g. node1=node1.example.com). |
--grpc-tls-cert | string | — | TLS certificate path for gRPC, per node alias (e.g. node1=/path/to/cert). |
--grpc-web-tls-cert | string | — | TLS certificate path for gRPC Web, per node alias. |
--grpc-tls-key | string | — | TLS certificate key path for gRPC, per node alias. |
--grpc-web-tls-key | string | — | TLS certificate key path for gRPC Web, per node alias. |
--storage-type | string | minio_only | Stream file storage backend. Options: minio_only, aws_only, gcs_only, aws_and_gcs. |
--gcs-write-access-key | string | — | GCS write access key. |
--gcs-write-secrets | string | — | GCS write secret key. |
--gcs-endpoint | string | — | GCS storage endpoint URL. |
--gcs-bucket | string | — | GCS bucket name. |
--gcs-bucket-prefix | string | — | GCS bucket path prefix. |
--aws-write-access-key | string | — | AWS write access key. |
--aws-write-secrets | string | — | AWS write secret key. |
--aws-endpoint | string | — | AWS storage endpoint URL. |
--aws-bucket | string | — | AWS bucket name. |
--aws-bucket-region | string | — | AWS bucket region. |
--aws-bucket-prefix | string | — | AWS bucket path prefix. |
--settings-txt | string | template | Path to a custom settings.txt file for consensus nodes. |
--application-properties | string | template | Path to a custom application.properties file. |
--application-env | string | template | Path to a custom application.env file. |
--api-permission-properties | string | template | Path to a custom api-permission.properties file. |
--bootstrap-properties | string | template | Path to a custom bootstrap.properties file. |
--log4j2-xml | string | template | Path to a custom log4j2.xml file. |
--genesis-throttles-file | string | — | Path to a custom throttles.json file for network genesis. |
--service-monitor | boolean | false | Install a ServiceMonitor custom resource for Prometheus metrics. |
--pod-log | boolean | false | Install a PodLog custom resource for node pod log monitoring. |
--quiet-mode | boolean | false | Suppress confirmation prompts. |
--values-file | string | — | Comma-separated Helm chart values file paths (not the Falcon values file). |
Consensus Node Setup — setup
Flags passed to solo consensus node setup.
| Flag | Type | Default | Description |
|---|---|---|---|
--release-tag | string | current Hedera platform version | Consensus node release tag. Must match network.--release-tag. |
--local-build-path | string | — | Path to a local Hiero consensus node build (e.g. ~/hiero-consensus-node/hedera-node/data). Used for local development workflows. |
--app | string | HederaNode.jar | Name of the consensus node application binary. |
--app-config | string | — | Path to a JSON configuration file for the testing app. |
--admin-public-keys | string | — | Comma-separated DER-encoded ED25519 public keys in node alias order. |
--domain-names | string | — | Custom domain name mapping per node alias. |
--dev | boolean | false | Enable developer mode. |
--quiet-mode | boolean | false | Suppress confirmation prompts. |
--cache-dir | string | ~/.solo/cache | Local cache directory for downloaded artifacts. |
Consensus Node Start — consensusNode
Flags passed to solo consensus node start.
| Flag | Type | Default | Description |
|---|---|---|---|
--force-port-forward | boolean | true | Force port forwarding to access network services locally. |
--stake-amounts | string | — | Comma-separated stake amounts in node alias order (e.g. 100,100,100). Required for multi-node deployments that need non-default stakes. |
--state-file | string | — | Path to a zipped state file to restore the network from. |
--debug-node-alias | string | — | Enable JVM debug port (5005) for the specified node alias. |
--app | string | HederaNode.jar | Name of the consensus node application binary. |
--quiet-mode | boolean | false | Suppress confirmation prompts. |
Mirror Node Add — mirrorNode
Flags passed to solo mirror node add.
| Flag | Type | Default | Description |
|---|---|---|---|
--mirror-node-version | string | current version | Mirror node Helm chart version to deploy. |
--enable-ingress | boolean | false | Deploy an ingress controller for the mirror node. |
--force-port-forward | boolean | true | Enable port forwarding for mirror node services. |
--pinger | boolean | false | Enable the mirror node Pinger service. |
--mirror-static-ip | string | — | Static IP address for the mirror node load balancer. |
--domain-name | string | — | Custom domain name for the mirror node. |
--ingress-controller-value-file | string | — | Path to a Helm values file for the ingress controller. |
--mirror-node-chart-dir | string | — | Path to a local mirror node Helm chart directory. |
--use-external-database | boolean | false | Connect to an external PostgreSQL database instead of the chart-bundled one. |
--external-database-host | string | — | Hostname of the external database. Requires --use-external-database. |
--external-database-owner-username | string | — | Owner username for the external database. |
--external-database-owner-password | string | — | Owner password for the external database. |
--external-database-read-username | string | — | Read-only username for the external database. |
--external-database-read-password | string | — | Read-only password for the external database. |
--storage-type | string | minio_only | Stream file storage backend for the mirror node importer. |
--storage-read-access-key | string | — | Storage read access key for the mirror node importer. |
--storage-read-secrets | string | — | Storage read secret key for the mirror node importer. |
--storage-endpoint | string | — | Storage endpoint URL for the mirror node importer. |
--storage-bucket | string | — | Storage bucket name for the mirror node importer. |
--storage-bucket-prefix | string | — | Storage bucket path prefix. |
--storage-bucket-region | string | — | Storage bucket region. |
--operator-id | string | — | Operator account ID for the mirror node. |
--operator-key | string | — | Operator private key for the mirror node. |
--quiet-mode | boolean | false | Suppress confirmation prompts. |
--values-file | string | — | Comma-separated Helm chart values file paths for the mirror node chart. |
Explorer Add — explorerNode
Flags passed to solo explorer node add.
| Flag | Type | Default | Description |
|---|---|---|---|
--explorer-version | string | current version | Hiero Explorer Helm chart version to deploy. |
--enable-ingress | boolean | false | Deploy an ingress controller for the explorer. |
--force-port-forward | boolean | true | Enable port forwarding for the explorer service. |
--domain-name | string | — | Custom domain name for the explorer. |
--ingress-controller-value-file | string | — | Path to a Helm values file for the ingress controller. |
--explorer-chart-dir | string | — | Path to a local Hiero Explorer Helm chart directory. |
--explorer-static-ip | string | — | Static IP address for the explorer load balancer. |
--enable-explorer-tls | boolean | false | Enable TLS for the explorer. Requires cert-manager. |
--explorer-tls-host-name | string | explorer.solo.local | Hostname used for the explorer TLS certificate. |
--tls-cluster-issuer-type | string | self-signed | TLS cluster issuer type. Options: self-signed, acme-staging, acme-prod. |
--mirror-node-id | number | — | ID of the mirror node instance to connect the explorer to. |
--mirror-namespace | string | — | Kubernetes namespace of the mirror node. |
--solo-chart-version | string | current version | Solo chart version used for explorer cluster setup. |
--quiet-mode | boolean | false | Suppress confirmation prompts. |
--values-file | string | — | Comma-separated Helm chart values file paths for the explorer chart. |
JSON-RPC Relay Add — relayNode
Flags passed to solo relay node add.
| Flag | Type | Default | Description |
|---|---|---|---|
--relay-release | string | current version | Hiero JSON-RPC Relay Helm chart release to deploy. |
--node-aliases | string | — | Comma-separated node aliases the relay will observe (e.g. node1 or node1,node2). |
--replica-count | number | 1 | Number of relay replicas to deploy. |
--chain-id | string | 298 | EVM chain ID exposed by the relay (298 is the Hedera local network default). |
--force-port-forward | boolean | true | Enable port forwarding for the relay service. |
--domain-name | string | — | Custom domain name for the relay. |
--relay-chart-dir | string | — | Path to a local Hiero JSON-RPC Relay Helm chart directory. |
--operator-id | string | — | Operator account ID for relay transaction signing. |
--operator-key | string | — | Operator private key for relay transaction signing. |
--mirror-node-id | number | — | ID of the mirror node instance the relay will query. |
--mirror-namespace | string | — | Kubernetes namespace of the mirror node. |
--quiet-mode | boolean | false | Suppress confirmation prompts. |
--values-file | string | — | Comma-separated Helm chart values file paths for the relay chart. |
Block Node Add — blockNode
Flags passed to solo block node add.
Important: The blockNode section is only read when ONE_SHOT_WITH_BLOCK_NODE=true is set in the environment. Otherwise Solo skips the block node add step regardless of whether a blockNode section is present. Version requirements: consensus node ≥ v0.72.0 and block node ≥ 0.29.0. Use --force to bypass version gating during testing.
| Flag | Type | Default | Description |
|---|---|---|---|
--release-tag | string | current version | Hiero block node release tag. |
--image-tag | string | — | Docker image tag to override the Helm chart default. |
--enable-ingress | boolean | false | Deploy an ingress controller for the block node. |
--domain-name | string | — | Custom domain name for the block node. |
--dev | boolean | false | Enable developer mode for the block node. |
--block-node-chart-dir | string | — | Path to a local Hiero block node Helm chart directory. |
--quiet-mode | boolean | false | Suppress confirmation prompts. |
--values-file | string | — | Comma-separated Helm chart values file paths for the block node chart. |
Top-Level Falcon Command Flags
The following flags are passed directly on the solo one-shot falcon deploy command
line. They are not read from the values file sections.
| Flag | Type | Default | Description |
|---|---|---|---|
--values-file | string | — | Path to the Falcon values YAML file. |
--deployment | string | one-shot | Deployment name for Solo’s internal state. |
--namespace | string | one-shot | Kubernetes namespace to deploy into. |
--cluster-ref | string | one-shot | Cluster reference name. |
--num-consensus-nodes | number | 1 | Number of consensus nodes to deploy. |
--deploy-mirror-node | boolean | true | Deploy or skip the mirror node. |
--deploy-explorer | boolean | true | Deploy or skip the explorer. |
--deploy-relay | boolean | true | Deploy or skip the JSON-RPC relay. |
--no-rollback | boolean | false | Disable automatic cleanup on deployment failure. Preserves partial resources for inspection. |
--quiet-mode | boolean | false | Suppress all interactive prompts. |
--force | boolean | false | Force actions that would otherwise be skipped. |
3 - Step-by-Step Manual Deployment
Overview
Manual deployment lets you deploy each Solo network component individually, giving you full control over configuration, sequencing, and troubleshooting. Use this approach when you need to customise specific steps, debug a component in isolation, or integrate Solo into a bespoke automation pipeline.
Prerequisites
Before proceeding, ensure you have completed the following:
System Readiness — your local environment meets all hardware and software requirements (Docker, kind, kubectl, helm, Solo).
Quickstart — you have a running Kind cluster and have run solo init at least once.
Set your environment variables if you have not already done so:
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
Deployment Steps
1. Connect Cluster and Create Deployment
Connect Solo to the Kind cluster and create a new deployment configuration:
# Connect to the Kind cluster
solo cluster-ref config connect \
  --cluster-ref kind-${SOLO_CLUSTER_NAME} \
  --context kind-${SOLO_CLUSTER_NAME}

# Create a new deployment
solo deployment config create \
  -n "${SOLO_NAMESPACE}" \
  --deployment "${SOLO_DEPLOYMENT}"

Expected Output:

******************************* Solo *********************************************
Version : 0.63.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : cluster-ref config connect --cluster-ref kind-solo --context kind-solo
**********************************************************************************
✔ Initialize
✔ Validating cluster ref: kind-solo
✔ Test connection to cluster: kind-solo
✔ Associate a context with a cluster reference: kind-solo
2. Add Cluster to Deployment
Attach the cluster to your deployment and specify the number of consensus nodes:
1. Single node:
solo deployment cluster attach \
  --deployment "${SOLO_DEPLOYMENT}" \
  --cluster-ref kind-${SOLO_CLUSTER_NAME} \
  --num-consensus-nodes 1

2. Multiple nodes (e.g., --num-consensus-nodes 3):

solo deployment cluster attach \
  --deployment "${SOLO_DEPLOYMENT}" \
  --cluster-ref kind-${SOLO_CLUSTER_NAME} \
  --num-consensus-nodes 3

Expected Output:
solo-deployment_ADD_CLUSTER_OUTPUT
3. Generate Keys
Generate the gossip and TLS keys for your consensus nodes:
solo keys consensus generate \
  --gossip-keys \
  --tls-keys \
  --deployment "${SOLO_DEPLOYMENT}"

PEM key files are written to ~/.solo/cache/keys/.

Example output:

******************************* Solo *********************************************
Version : 0.63.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : keys consensus generate --gossip-keys --tls-keys --deployment solo-deployment
**********************************************************************************
✔ Initialize
Generate gossip keys
  ✔ Backup old files
  ✔ Gossip key for node: node1 [0.2s]
✔ Generate gossip keys [0.2s]
Generate gRPC TLS Keys
  ✔ Backup old files
  ✔ TLS key for node: node1 [0.3s]
✔ Generate gRPC TLS Keys [0.3s]
✔ Finalize
4. Set Up Cluster with Shared Components
Install shared cluster-level components (MinIO Operator, Prometheus CRDs, etc.) into the cluster setup namespace:
solo cluster-ref config setup --cluster-setup-namespace "${SOLO_CLUSTER_SETUP_NAMESPACE}"

Example output:

******************************* Solo *********************************************
Version : 0.63.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : cluster-ref config setup --cluster-setup-namespace solo-cluster
**********************************************************************************
Check dependencies
  ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
  ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
  ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
✔ Check dependencies
✔ Setup chart manager [0.6s]
✔ Initialize
Install cluster charts
  - ClusterRole pod-monitor-role already exists in context kind-solo, skipping
  ✔ Install pod-monitor-role ClusterRole
  ✔ MinIO Operator chart installed successfully on context kind-solo
  ✔ Install MinIO Operator chart [0.8s]
✔ Install cluster charts [0.8s]
5. Deploy the Network
Deploy the Solo network Helm chart, which provisions the consensus node pods, HAProxy, Envoy, and MinIO:
solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}"

Example output:

******************************* Solo *********************************************
Version : 0.63.0
Kubernetes Context : kind-solo
Kubernetes Cluster : kind-solo
Current Command : consensus network deploy --deployment solo-deployment --release-tag v0.66.0
**********************************************************************************
Check dependencies
  ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
  ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
  ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
✔ Check dependencies
✔ Setup chart manager [0.7s]
Initialize
  ✔ Acquire lock - lock acquired successfully, attempt: 1/10
✔ Initialize [0.2s]
Copy gRPC TLS Certificates [SKIPPED: Copy gRPC TLS Certificates]
Prepare staging directory
  ✔ Copy Gossip keys to staging
  ✔ Copy gRPC TLS keys to staging
✔ Prepare staging directory
Copy node keys to secrets
  Node: node1, cluster: kind-solo
    ✔ Copy TLS keys
    ✔ Copy Gossip keys
  ✔ Node: node1, cluster: kind-solo
✔ Copy node keys to secrets
Install monitoring CRDs
  ✔ Pod Logs CRDs
  ✔ Prometheus Operator CRDs [4s] - Installed prometheus-operator-crds chart, version: 24.0.2
✔ Install monitoring CRDs [4s]
✔ Install chart 'solo-deployment' [2s] - Installed solo-deployment chart, version: 0.62.0
Check for load balancer [SKIPPED: Check for load balancer]
Redeploy chart with external IP address config [SKIPPED: Redeploy chart with external IP address config]
Check node pods are running
  ✔ Check Node: node1, Cluster: kind-solo [24s]
✔ Check node pods are running [24s]
Check proxy pods are running
  ✔ Check HAProxy for: node1, cluster: kind-solo
  ✔ Check Envoy Proxy for: node1, cluster: kind-solo
✔ Check proxy pods are running
Check auxiliary pods are ready
  ✔ Check MinIO
✔ Check auxiliary pods are ready
✔ Add node and proxies to remote config
Copy wraps lib into consensus node [SKIPPED: Copy wraps lib into consensus node]
✔ Copy block-nodes.json [1s]
Copy JFR config file to nodes [SKIPPED: Copy JFR config file to nodes]
6. Set Up Consensus Nodes
Download the consensus node platform software and configure each node:
export CONSENSUS_NODE_VERSION=v0.66.0
solo consensus node setup \
  --deployment "${SOLO_DEPLOYMENT}" \
  --release-tag "${CONSENSUS_NODE_VERSION}"
Example output:
******************************* Solo ********************************************* Version : 0.63.0 Kubernetes Context : kind-solo Kubernetes Cluster : kind-solo Current Command : consensus node setup --deployment solo-deployment --release-tag v0.66.0 ********************************************************************************** Load configuration ✔ Load configuration [0.2s] Initialize ✔ Initialize [0.2s] Validate nodes states Validating state for node node1 ✔ Validating state for node node1 - valid state: requested ✔ Validate nodes states Identify network pods Check network pod: node1 ✔ Check network pod: node1 ✔ Identify network pods Fetch platform software into network nodes Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ] ✔ Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ] [3s] ✔ Fetch platform software into network nodes [3s] Setup network nodes Node: node1 Copy configuration files ✔ Copy configuration files [0.3s] Set file permissions ✔ Set file permissions [0.4s] ✔ Node: node1 [0.8s] ✔ Setup network nodes [0.9s] setup network node folders ✔ setup network node folders [0.1s] Change node state to configured in remote config ✔ Change node state to configured in remote config
7. Start Consensus Nodes
Start all configured nodes and wait for them to reach ACTIVE status:
solo consensus node start --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo ********************************************* Version : 0.63.0 Kubernetes Context : kind-solo Kubernetes Cluster : kind-solo Current Command : consensus node start --deployment solo-deployment ********************************************************************************** Check dependencies Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependencies Setup chart manager ✔ Setup chart manager [0.7s] Load configuration ✔ Load configuration [0.2s] Initialize ✔ Initialize [0.2s] Validate nodes states Validating state for node node1 ✔ Validating state for node node1 - valid state: configured ✔ Validate nodes states Identify existing network nodes Check network pod: node1 ✔ Check network pod: node1 ✔ Identify existing network nodes Upload state files network nodes Upload state files network nodes [SKIPPED: Upload state files network nodes] Starting nodes Start node: node1 ✔ Start node: node1 [0.1s] ✔ Starting nodes [0.1s] Enable port forwarding for debug port and/or GRPC port Using requested port 50211 ✔ Enable port forwarding for debug port and/or GRPC port Check all nodes are ACTIVE Check network pod: node1 ✔ Check network pod: node1 - status ACTIVE, attempt: 16/300 [20s] ✔ Check all nodes are ACTIVE [20s] Check node proxies are ACTIVE Check proxy for node: node1 ✔ Check proxy for node: node1 [6s] ✔ Check node proxies are ACTIVE [6s] Wait for TSS Wait for TSS [SKIPPED: Wait for TSS] set gRPC Web endpoint Using requested port 30212 ✔ set gRPC Web endpoint [3s] Change node state to started in remote config ✔ Change node 
state to started in remote config Add node stakes Adding stake for node: node1 ✔ Adding stake for node: node1 [4s] ✔ Add node stakes [4s] Stopping port-forward for port [30212]
8. Deploy Mirror Node
Deploy the Hedera Mirror Node, which indexes all transaction data and exposes a REST API and gRPC endpoint:
solo mirror node add \
  --deployment "${SOLO_DEPLOYMENT}" \
  --cluster-ref kind-${SOLO_CLUSTER_NAME} \
  --enable-ingress \
  --pinger
The --pinger flag keeps the mirror node’s importer active by regularly submitting record files. The --enable-ingress flag installs the HAProxy ingress controller for the mirror node REST API.
Example output:
******************************* Solo ********************************************* Version : 0.63.0 Kubernetes Context : kind-solo Kubernetes Cluster : kind-solo Current Command : mirror node add --deployment solo-deployment --cluster-ref kind-solo --enable-ingress --quiet-mode ********************************************************************************** Check dependencies Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependencies Setup chart manager ✔ Setup chart manager [0.6s] Initialize Using requested port 30212 Acquire lock ✔ Acquire lock - lock acquired successfully, attempt: 1/10 [0.1s] ✔ Initialize [1s] Enable mirror-node Prepare address book ✔ Prepare address book Install mirror ingress controller - Installed haproxy-ingress-1 chart, version: 0.14.5 ✔ Install mirror ingress controller [0.7s] Deploy mirror-node - Installed mirror chart, version: v0.149.0 ✔ Deploy mirror-node [3s] ✔ Enable mirror-node [4s] Check pods are ready Check Postgres DB Check REST API Check GRPC Check Monitor Check Web3 Check Importer ✔ Check Postgres DB [32s] ✔ Check Web3 [46s] ✔ Check REST API [52s] ✔ Check GRPC [58s] ✔ Check Monitor [1m16s] ✔ Check Importer [1m32s] ✔ Check pods are ready [1m32s] Seed DB data Insert data in public.file_data ✔ Insert data in public.file_data [0.6s] ✔ Seed DB data [0.6s] Add mirror node to remote config ✔ Add mirror node to remote config Enable port forwarding for mirror ingress controller Using requested port 8081 ✔ Enable port forwarding for mirror ingress controller Stopping port-forward for port [30212]
9. Deploy Explorer
Deploy the Hiero Explorer, a web UI for browsing transactions and accounts:
solo explorer node add \
  --deployment "${SOLO_DEPLOYMENT}" \
  --cluster-ref kind-${SOLO_CLUSTER_NAME}
Example output:
******************************* Solo ********************************************* Version : 0.63.0 Kubernetes Context : kind-solo Kubernetes Cluster : kind-solo Current Command : explorer node add --deployment solo-deployment --cluster-ref kind-solo --quiet-mode ********************************************************************************** Check dependencies Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependencies Setup chart manager ✔ Setup chart manager [0.7s] Initialize Acquire lock ✔ Acquire lock - lock acquired successfully, attempt: 1/10 ✔ Initialize [0.5s] Load remote config ✔ Load remote config [0.2s] Install cert manager Install cert manager [SKIPPED: Install cert manager] Install explorer - Installed hiero-explorer-1 chart, version: 26.0.0 ✔ Install explorer [0.8s] Install explorer ingress controller Install explorer ingress controller [SKIPPED: Install explorer ingress controller] Check explorer pod is ready ✔ Check explorer pod is ready [18s] Check haproxy ingress controller pod is ready Check haproxy ingress controller pod is ready [SKIPPED: Check haproxy ingress controller pod is ready] Add explorer to remote config ✔ Add explorer to remote config Enable port forwarding for explorer No port forward config found for Explorer Using requested port 8080 ✔ Enable port forwarding for explorer [0.1s]
10. Deploy JSON-RPC Relay
Deploy the Hiero JSON-RPC Relay to expose an Ethereum-compatible JSON-RPC endpoint for EVM tooling (MetaMask, Hardhat, Foundry, etc.):
solo relay node add \
  -i node1 \
  --deployment "${SOLO_DEPLOYMENT}"
Example output:
******************************* Solo ********************************************* Version : 0.63.0 Kubernetes Context : kind-solo Kubernetes Cluster : kind-solo Current Command : relay node add --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo ********************************************************************************** Check dependencies Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] ✔ Check dependencies Setup chart manager ✔ Setup chart manager [0.7s] Initialize Acquire lock ✔ Acquire lock - lock acquired successfully, attempt: 1/10 ✔ Initialize [0.4s] Check chart is installed ✔ Check chart is installed [0.1s] Prepare chart values Using requested port 30212 ✔ Prepare chart values [1s] Deploy JSON RPC Relay - Installed relay-1 chart, version: 0.73.0 ✔ Deploy JSON RPC Relay [0.7s] Check relay is running ✔ Check relay is running [16s] Check relay is ready ✔ Check relay is ready [21s] Add relay component in remote config ✔ Add relay component in remote config Enable port forwarding for relay node Using requested port 7546 ✔ Enable port forwarding for relay node [0.1s] Stopping port-forward for port [30212]
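Once all components are up, the deployment exposes several services on localhost. The summary below is a sketch assembled from the port-forward lines in the example outputs above; it assumes the default Solo port-forwards are still active in your environment:

```shell
# Local service endpoints after the full deployment above.
# Ports are taken from the example outputs (an assumption: your
# port-forwards may differ if you customized them).
ENDPOINTS="consensus gRPC:  localhost:50211
mirror REST API: localhost:8081
explorer UI:     localhost:8080
JSON-RPC relay:  localhost:7546"
echo "${ENDPOINTS}"
```

The JSON-RPC relay endpoint (localhost:7546) is the one to paste into MetaMask, Hardhat, or Foundry configuration.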
Cleanup
When you are done, destroy the optional components first, then the network itself.
Important: Always destroy components before destroying the network. Skipping this order can leave orphaned Helm releases and PVCs in your cluster.
1. Destroy JSON-RPC Relay
solo relay node destroy \
-i node1 \
--deployment "${SOLO_DEPLOYMENT}" \
--cluster-ref kind-${SOLO_CLUSTER_NAME}
2. Destroy Mirror Node
solo mirror node destroy \
--deployment "${SOLO_DEPLOYMENT}" \
--force
3. Destroy Explorer
solo explorer node destroy \
--deployment "${SOLO_DEPLOYMENT}" \
--force
4. Destroy the Network
solo consensus network destroy \
--deployment "${SOLO_DEPLOYMENT}" \
--force
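The four teardown commands above can be combined into one script. The sketch below stubs out `solo` with a shell function so the ordering is visible without a live cluster; delete the stub to run it against a real deployment:

```shell
#!/usr/bin/env bash
set -euo pipefail
export SOLO_DEPLOYMENT=solo-deployment
export SOLO_CLUSTER_NAME=solo

# Stub `solo` so the teardown order can be shown without a cluster;
# remove this function to run the real CLI.
TEARDOWN_LOG=""
solo() { TEARDOWN_LOG="${TEARDOWN_LOG}$1 "; echo "solo $*"; }

# Components first, the network last.
solo relay node destroy -i node1 --deployment "${SOLO_DEPLOYMENT}" --cluster-ref "kind-${SOLO_CLUSTER_NAME}"
solo mirror node destroy --deployment "${SOLO_DEPLOYMENT}" --force
solo explorer node destroy --deployment "${SOLO_DEPLOYMENT}" --force
solo consensus network destroy --deployment "${SOLO_DEPLOYMENT}" --force
```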
4 - Dynamically add, update, and remove Consensus Nodes
Overview
This guide covers how to dynamically manage consensus nodes in a running Solo network - adding new nodes, updating existing ones, and removing nodes that are no longer needed. All three operations can be performed without taking the network offline.
Prerequisites
Before proceeding, ensure you have:
A running Solo network. If you don’t have one, deploy using one of the following methods:
- Quickstart - single-command deployment using solo one-shot single deploy.
- Manual Deployment - step-by-step deployment with full control over each component.
Set the required environment variables as described below:
export SOLO_CLUSTER_NAME=solo
export SOLO_NAMESPACE=solo
export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
export SOLO_DEPLOYMENT=solo-deployment
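Several commands in this guide pass a cluster reference of the form kind-${SOLO_CLUSTER_NAME}. Kind prefixes cluster names with "kind-" when it registers the kubeconfig context, so the reference is derived from the variables above like this:

```shell
# Derive the cluster reference used by --cluster-ref flags.
# Kind names the kubeconfig context "kind-<cluster-name>".
export SOLO_CLUSTER_NAME=solo
CLUSTER_REF="kind-${SOLO_CLUSTER_NAME}"
echo "${CLUSTER_REF}"
```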
Key and Storage Concepts
Before running any node operation, it helps to understand two concepts that
appear in the prepare step.
Cryptographic Keys
Solo generates two types of keys for each consensus node:
- Gossip keys — used for encrypted node-to-node communication within the network. Stored as s-private-node*.pem and s-public-node*.pem under ~/.solo/cache/keys/.
- TLS keys — used to secure gRPC connections to the node. Stored as hedera-node*.crt and hedera-node*.key under ~/.solo/cache/keys/.

When adding a new node, Solo generates a fresh key pair and stores it alongside the keys for existing nodes in the same directory. For more detail, see Where are my keys stored?.
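The naming scheme can be illustrated with a mock of the keys directory. This is only a sketch: the real files are generated by Solo, and the paths below are temporary stand-ins for ~/.solo/cache/keys/:

```shell
# Mock keys directory illustrating the naming scheme described above.
KEYS_DIR="$(mktemp -d)"
touch "${KEYS_DIR}/s-private-node1.pem" "${KEYS_DIR}/s-public-node1.pem"  # gossip key pair
touch "${KEYS_DIR}/hedera-node1.crt"    "${KEYS_DIR}/hedera-node1.key"    # gRPC TLS pair

# Gossip keys match s-*-node*.pem; TLS keys match hedera-node*.*
GOSSIP_KEYS="$(ls "${KEYS_DIR}"/s-*-node*.pem | wc -l | tr -d ' ')"
TLS_KEYS="$(ls "${KEYS_DIR}"/hedera-node*.* | wc -l | tr -d ' ')"
echo "gossip: ${GOSSIP_KEYS}, tls: ${TLS_KEYS}"
```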
Persistent Volume Claims (PVCs)
By default, consensus node storage is ephemeral - data stored by a node is lost if its pod crashes or is restarted. This is intentional for lightweight local testing where persistence is not required.
The --pvcs true flag creates Persistent Volume Claims (PVCs) for the node, ensuring its state survives pod restarts. Enable this flag for any node that needs to persist state across restarts or that will participate in longer-running test scenarios.
Note: PVCs are not enabled by default. Enable them only if your node needs to persist state across pod restarts.
Staging Directory
The --output-dir context flag specifies a local staging directory where Solo writes all artifacts produced during prepare. Solo’s working files are stored under ~/.solo/ — if you use a relative path like context, the directory is created in your current working directory. Do not delete it until execute has completed successfully.
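The relative-path behaviour can be verified in isolation. The sketch below mimics what passing --output-dir context does with a plain mkdir, to show where the directory lands:

```shell
# A relative --output-dir resolves against the current working
# directory, not ~/.solo/. Demonstrated with a plain mkdir stand-in.
WORKDIR="$(mktemp -d)"
cd "${WORKDIR}"
mkdir -p context                 # what a relative --output-dir creates
STAGED="$(cd context && pwd)"    # absolute path of the staging directory
echo "${STAGED}"
```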
Adding a Node to an Existing Network
You can dynamically add a new consensus node to a running network without taking the network offline. This process involves three stages: preparing the node’s keys and configuration, submitting the on-chain transaction, and executing the addition.
Step 1: Prepare the new node
Generate the new node’s gossip and TLS keys, create its persistent volumes, and stage its configuration into an output directory:
solo consensus dev-node-add prepare \
--gossip-keys true \
--tls-keys true \
--deployment "${SOLO_DEPLOYMENT}" \
--pvcs true \
--admin-key <admin-key> \
--node-alias node2 \
--output-dir context
| Flag | Description |
|---|---|
| --gossip-keys | Generate gossip keys for the new node. |
| --tls-keys | Generate gRPC TLS keys for the new node. |
| --pvcs | Create persistent volume claims for the new node. |
| --admin-key | The admin key used to authorize the node addition transaction. |
| --node-alias | Alias for the new node (e.g., node2). |
| --output-dir | Directory where prepared context files are saved for use in subsequent steps. |
Step 2: Submit the transaction to add the node
Submit the on-chain transaction to register the new node with the network:
solo consensus dev-node-add submit-transaction \
--deployment "${SOLO_DEPLOYMENT}" \
--input-dir context
Step 3: Execute the node addition
Apply the node addition and bring the new node online:
solo consensus dev-node-add execute \
--deployment "${SOLO_DEPLOYMENT}" \
--input-dir context
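The three stages hand off through the shared staging directory: prepare writes into --output-dir, and the later stages read the same directory back via --input-dir. A sketch with `solo` stubbed so the handoff is visible without a cluster (the admin key placeholder is left as-is, exactly as in Step 1):

```shell
# Stub `solo` so the three-stage flow can be shown without a cluster;
# remove this function to run the real CLI.
solo() { echo "solo $*"; }

# One staging directory shared by all three stages.
CONTEXT_DIR="context"
solo consensus dev-node-add prepare \
  --gossip-keys true --tls-keys true --pvcs true \
  --deployment solo-deployment \
  --admin-key "<admin-key>" \
  --node-alias node2 \
  --output-dir "${CONTEXT_DIR}"
solo consensus dev-node-add submit-transaction \
  --deployment solo-deployment --input-dir "${CONTEXT_DIR}"
solo consensus dev-node-add execute \
  --deployment solo-deployment --input-dir "${CONTEXT_DIR}"
```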
Note: For a complete walkthrough with expected outputs, see the Node Create Transaction example.
Updating a Node
You can update an existing consensus node - for example, to upgrade its software version or modify its configuration - without removing it from the network.
Step 1: Prepare the update
Stage the updated configuration and any new software version for the target node:
solo consensus dev-node-update prepare \
--deployment "${SOLO_DEPLOYMENT}" \
--node-alias node1 \
--release-tag v0.61.0 \
--output-dir context
| Flag | Description |
|---|---|
| --node-alias | Alias of the node to update (e.g., node1). |
| --release-tag | The consensus node software version to update to. |
| --output-dir | Directory where prepared context files are saved for use in subsequent steps. |
Step 2: Submit the update transaction
Submit the on-chain transaction to register the node update with the network:
solo consensus dev-node-update submit-transaction \
--deployment "${SOLO_DEPLOYMENT}" \
--input-dir context
Step 3: Execute the update
Apply the update and restart the node with the new configuration:
solo consensus dev-node-update execute \
--deployment "${SOLO_DEPLOYMENT}" \
--input-dir context
Note: For a complete walkthrough with expected outputs, see the Node Update Transaction example.
Removing a Node from a Network
You can dynamically remove a consensus node from a running network without taking the remaining nodes offline.
Note: Removing a node permanently reduces the number of consensus nodes in the network. Ensure the remaining nodes meet the minimum threshold required for consensus before proceeding.
Step 1: Prepare the Node for Deletion
Stage the deletion context for the target node:
solo consensus dev-node-delete prepare \
--deployment "${SOLO_DEPLOYMENT}" \
--node-alias node2 \
--output-dir context
| Flag | Description |
|---|---|
| --node-alias | Alias of the node to remove (e.g., node2). |
| --output-dir | Directory where prepared context files are saved for use in subsequent steps. |
Step 2: Submit the delete transaction
Submit the on-chain transaction to deregister the node from the network:
solo consensus dev-node-delete submit-transaction \
--deployment "${SOLO_DEPLOYMENT}" \
--input-dir context
Step 3: Execute the deletion
Remove the node and clean up its associated resources:
solo consensus dev-node-delete execute \
--deployment "${SOLO_DEPLOYMENT}" \
--input-dir context
Note: For a complete walkthrough with expected outputs, see the Node Delete Transaction example.