Network Deployments

Step-by-step workflows and component-level customization for users who need full control over network initialization, configuration, and scaling. Explore YAML-driven deployments, manual orchestration, dynamic node management, and extensive reference documentation.

1 - One-shot Falcon Deployment

Deploy a complete Solo network from a single YAML file for repeatable advanced setups, CI pipelines, and custom component configuration. Falcon combines simplicity with full customization using the Solo values file format.

Overview

Falcon is Solo’s YAML-driven one-shot deployment workflow. It uses the same core deployment pipeline as solo one-shot single deploy, but lets you inject component-specific flags through a single values file.

Use Falcon deployment when you need a repeatable advanced setup, want to check a complete deployment configuration into source control, or need to customise component flags without running every Solo command manually.

Falcon is especially useful for:

  • CI/CD pipelines and automated test environments.
  • Reproducible local developer setups.
  • Advanced deployments that need custom chart paths, image versions, ingress, storage, TLS, or node startup options.

Important: Falcon is an orchestration layer over Solo’s standard commands. It does not introduce a separate deployment model. Solo still creates a deployment, attaches clusters, deploys the network, configures nodes, and then adds optional components such as mirror node, explorer, and relay.

Prerequisites

Before proceeding, ensure you have completed the following:

  • System Readiness — your local environment meets the hardware and software requirements for Solo, Kubernetes, Docker, Kind, kubectl, and Helm.

  • Quickstart — you are already familiar with the standard one-shot deployment workflow.

  • Set your environment variables if you have not already done so:

    export SOLO_CLUSTER_NAME=solo
    export SOLO_NAMESPACE=solo
    export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
    export SOLO_DEPLOYMENT=solo-deployment
    

How Falcon Works

When you run a Falcon deployment, Solo executes the same end-to-end deployment sequence used by its one-shot workflows:

  1. Connect to the Kubernetes cluster.
  2. Create a deployment and attach the cluster reference.
  3. Set up shared cluster components.
  4. Generate gossip and TLS keys.
  5. Deploy the consensus network and, if enabled, the block node (in parallel).
  6. Set up and start consensus nodes.
  7. Optionally, deploy mirror node, explorer, and relay in parallel for faster startup.
  8. Create predefined test accounts.
  9. Write deployment notes, versions, port-forward details, and account data to a local output directory.

The difference is that Falcon reads a YAML file and maps its top-level sections to the underlying Solo subcommands.

| Values file section | Solo subcommand invoked |
|---|---|
| network | solo consensus network deploy |
| setup | solo consensus node setup |
| consensusNode | solo consensus node start |
| mirrorNode | solo mirror node add |
| explorerNode | solo explorer node add |
| relayNode | solo relay node add |
| blockNode | solo block node add (when ONE_SHOT_WITH_BLOCK_NODE=true) |

For the full list of supported CLI flags per section, see the Falcon Values File Reference.
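The section-to-subcommand mapping can be pictured as a simple expansion step. The following is an illustrative sketch, not Solo's actual implementation; the section names and subcommands come from the table above:

```python
# Illustrative sketch of how a Falcon values file drives Solo
# subcommands. Mirrors the mapping table above; not Solo's own code.
SECTION_TO_SUBCOMMAND = {
    "network": "solo consensus network deploy",
    "setup": "solo consensus node setup",
    "consensusNode": "solo consensus node start",
    "mirrorNode": "solo mirror node add",
    "explorerNode": "solo explorer node add",
    "relayNode": "solo relay node add",
    "blockNode": "solo block node add",
}

def expand(section: str, flags: dict) -> str:
    """Turn one values-file section into the CLI call it drives."""
    args = " ".join(f"{flag} {value}" for flag, value in flags.items())
    return f"{SECTION_TO_SUBCOMMAND[section]} {args}".strip()

print(expand("network", {"--release-tag": "v0.71.0", "--pvcs": "false"}))
# solo consensus network deploy --release-tag v0.71.0 --pvcs false
```

An omitted section simply never produces a call, which is why Solo falls back to built-in defaults for that component.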

Create a Falcon Values File

Create a YAML file to control every component of your Solo deployment. The file can have any name; falcon-values.yaml is used throughout this guide as a convention.

Note: Keys within each section must be the full CLI flag name including the -- prefix (for example, --release-tag, not release-tag or -r). Any section you omit from the file is skipped, and Solo uses the built-in defaults for that component.

Example: Single-Node Falcon Deployment

The following falcon-values.yaml example deploys a standard single-node network with mirror node, explorer, and relay enabled:

network:
  --release-tag: "v0.71.0"
  --pvcs: false

setup:
  --release-tag: "v0.71.0"

consensusNode:
  --force-port-forward: true

mirrorNode:
  --enable-ingress: true
  --pinger: true
  --force-port-forward: true

explorerNode:
  --enable-ingress: true
  --force-port-forward: true

relayNode:
  --node-aliases: "node1"
  --force-port-forward: true
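Because every key must be a full long-form flag, a quick pre-flight check can catch malformed values files before a deploy. This is an illustrative sketch that operates on the already-parsed YAML as a plain dict (loading the YAML itself is out of scope here); it is not part of Solo:

```python
# Hypothetical pre-flight check: every key inside a section must be a
# full long-form flag ("--..."). The parsed values file is shown as a
# plain dict for illustration.
values = {
    "network": {"--release-tag": "v0.71.0", "--pvcs": False},
    "relayNode": {"node-aliases": "node1"},  # missing "--" prefix: invalid
}

def bad_keys(values: dict) -> list:
    """Return 'section.key' entries that are not long-form flags."""
    return [
        f"{section}.{key}"
        for section, flags in values.items()
        for key in flags
        if not key.startswith("--")
    ]

print(bad_keys(values))  # ['relayNode.node-aliases']
```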

Deploy with Falcon one-shot

Run Falcon deployment by pointing Solo at the values file:

solo one-shot falcon deploy --values-file falcon-values.yaml

Solo creates a one-shot deployment, applies the values from the YAML file to the appropriate subcommands, and then deploys the full environment.

What Falcon Does Not Read from the File

Some Falcon settings are controlled directly by the top-level command flags, not by section entries in the values file:

  • --values-file selects the YAML file to load.
  • --deploy-mirror-node, --deploy-explorer, and --deploy-relay control whether those optional components are deployed at all.
  • --deployment, --namespace, --cluster-ref, and --num-consensus-nodes are top-level one-shot inputs.

Important: Do not rely on --deployment inside falcon-values.yaml. Solo intentionally ignores --deployment values from section content during Falcon argument expansion. Set the deployment name on the command line if you need a specific name.


Tip: When not specified, Falcon uses these defaults: --deployment one-shot, --namespace one-shot, --cluster-ref one-shot, and --num-consensus-nodes 1. Pass any of these explicitly on the command line to override them.

Example:

solo one-shot falcon deploy \
  --deployment falcon-demo \
  --cluster-ref one-shot \
  --values-file falcon-values.yaml

Multi-Node Falcon Deployment

For multiple consensus nodes, set the node count on the Falcon command and then provide matching per-node settings where required.

  • Example:

    solo one-shot falcon deploy \
      --deployment falcon-multi \
      --num-consensus-nodes 3 \
      --values-file falcon-values.yaml
    
  • Example multi-node values file:

    network:
      --release-tag: "v0.71.0"
      --pvcs: true
    
    setup:
      --release-tag: "v0.71.0"
    
    consensusNode:
      --force-port-forward: true
      --stake-amounts: "100,100,100"
    
    mirrorNode:
      --enable-ingress: true
      --pinger: true
    
    explorerNode:
      --enable-ingress: true
    
    relayNode:
      --node-aliases: "node1,node2,node3"
    
  • The --node-aliases value in the relayNode section must match the node aliases generated by --num-consensus-nodes. Nodes are auto-named node1, node2, node3, and so on. Setting this to only node1 is valid if you want the relay to serve a single node, but specifying all aliases is typical for full coverage.

  • Use this pattern when you need a repeatable multi-node deployment but do not want to manage each step manually.
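Since nodes are auto-named node1, node2, and so on, a --node-aliases value covering all nodes can be derived directly from the node count. A small sketch of that naming convention (illustrative, not Solo's own code):

```python
# Auto-generated node aliases follow the pattern node1..nodeN, so a
# full-coverage --node-aliases value follows from --num-consensus-nodes.
def node_aliases(num_consensus_nodes: int) -> str:
    return ",".join(f"node{i}" for i in range(1, num_consensus_nodes + 1))

print(node_aliases(3))  # node1,node2,node3
```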

Note: Multi-node deployments require more host resources than single-node deployments. Follow the resource guidance in System Readiness, and increase Docker memory and CPU allocation before deploying.

(Optional) Component Toggles

Falcon can skip optional components at the command line without requiring a second YAML file.

For example, to deploy only the consensus network and mirror node:

solo one-shot falcon deploy \
  --values-file falcon-values.yaml \
  --deploy-explorer=false \
  --deploy-relay=false

Available toggles and their defaults:

| Flag | Default | Description |
|---|---|---|
| --deploy-mirror-node | true | Include the mirror node in the deployment. |
| --deploy-explorer | true | Include the explorer in the deployment. |
| --deploy-relay | true | Include the JSON RPC relay in the deployment. |

Important: The explorer and relay both depend on the mirror node. Setting --deploy-mirror-node=false while keeping --deploy-explorer=true or --deploy-relay=true is not a supported configuration and will result in a failed deployment.

This is useful when you want to:

  • Reduce resource usage in CI jobs.
  • Isolate one component during testing.
  • Reuse the same YAML file across multiple deployment profiles.
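The dependency rule noted above (explorer and relay both require the mirror node) can be sketched as a simple validity check. The flag names match the Falcon toggles; the check itself is illustrative and not part of Solo:

```python
# Sketch of the toggle dependency rule: the explorer and relay both
# depend on the mirror node, so disabling the mirror node while
# keeping either dependent enabled is unsupported.
def toggles_valid(deploy_mirror: bool, deploy_explorer: bool, deploy_relay: bool) -> bool:
    if not deploy_mirror and (deploy_explorer or deploy_relay):
        return False  # unsupported: dependents without the mirror node
    return True

print(toggles_valid(False, True, False))   # False (explorer needs mirror)
print(toggles_valid(False, False, False))  # True (consensus network only)
```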

Common Falcon Customisations

Because each YAML section maps directly to the corresponding Solo subcommand, you can use Falcon to centralise advanced options such as:

  • Custom release tags for the consensus node platform.
  • Local chart directories for mirror node, relay, explorer, or block node.
  • Local consensus node build paths for development workflows.
  • Ingress and domain settings.
  • Mirror node external database settings.
  • Node startup settings such as state files, port forwarding, and stake amounts.
  • Storage backends and credentials for stream file handling.

Example: Local Development with Local Chart Directories

setup:
  --local-build-path: "/path/to/hiero-consensus-node/hedera-node/data"

mirrorNode:
  --mirror-node-chart-dir: "/path/to/hiero-mirror-node/charts"

relayNode:
  --relay-chart-dir: "/path/to/hiero-json-rpc-relay/charts"

explorerNode:
  --explorer-chart-dir: "/path/to/hiero-mirror-node-explorer/charts"

This pattern is useful for local integration testing against unpublished component builds.

Falcon with Block Node

Falcon can also include block node configuration.

Note: Block node workflows are advanced and require higher resource allocation and version compatibility across consensus node, block node, and related components. Docker memory must be set to at least 16 GB before deploying with block node enabled.

Block node support also requires the ONE_SHOT_WITH_BLOCK_NODE=true environment variable to be set before running falcon deploy. Without it, Solo skips the block node add step even if a blockNode section is present in the values file.

Block node deployment is subject to version compatibility requirements. Minimum versions are consensus node ≥ v0.72.0 and block node ≥ 0.29.0. Mixing incompatible versions will cause the deployment to fail. Check the Version Compatibility Reference before enabling block node.

Example:

network:
  --release-tag: "v0.72.0"

setup:
  --release-tag: "v0.72.0"

consensusNode:
  --force-port-forward: true

blockNode:
  --release-tag: "v0.29.0"
  --enable-ingress: false

mirrorNode:
  --enable-ingress: true
  --pinger: true

explorerNode:
  --enable-ingress: true

relayNode:
  --node-aliases: "node1"
  --force-port-forward: true

Use block node settings only when your target Solo and component versions are known to be compatible.
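The minimum-version gate described above (consensus node ≥ v0.72.0, block node ≥ 0.29.0) can be sketched as a simple comparison. Version parsing here is a naive illustration; Solo's real compatibility checks are more involved:

```python
# Naive sketch of the block node version gate. Assumes plain
# dotted versions with an optional leading "v".
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.lstrip("v").split("."))

def block_node_compatible(consensus: str, block: str) -> bool:
    return parse(consensus) >= (0, 72, 0) and parse(block) >= (0, 29, 0)

print(block_node_compatible("v0.72.0", "0.29.0"))  # True
print(block_node_compatible("v0.71.0", "0.29.0"))  # False
```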

Rollback and Failure Behaviour

Falcon deployment enables automatic rollback by default.

If deployment fails after resources have already been created, Solo attempts to destroy the one-shot deployment automatically and clean up the namespace.

If you want to preserve the failed deployment for debugging, disable rollback:

solo one-shot falcon deploy \
  --values-file falcon-values.yaml \
  --no-rollback

Use --no-rollback only when you explicitly want to inspect partial resources, logs, or Kubernetes objects after a failed run.

Deployment Output

After a successful Falcon deployment, Solo writes deployment metadata to ~/.solo/one-shot-<deployment>/ where <deployment> is the value of the --deployment flag (default: one-shot).

This directory typically contains:

  • notes - human-readable deployment summary
  • versions - component versions recorded at deploy time
  • forwards - port-forward configuration
  • accounts.json - predefined test account keys and IDs. All accounts are ECDSA Alias accounts (EVM-compatible) and include a publicAddress field. The file also includes the system operator account.

This makes Falcon especially useful for automation, because the deployment artifacts are written to a predictable path after each run.
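Because the artifacts land at a predictable path, automation can pick them up directly after each run. A hedged sketch: the directory layout follows the description above, but the exact JSON shape of accounts.json may differ, so verify it against your own deployment output:

```python
# Sketch: read the predefined test accounts written by a one-shot
# deploy. Path layout follows the docs above; the accounts.json
# structure is assumed and should be verified against real output.
import json
from pathlib import Path

def load_accounts(deployment: str = "one-shot"):
    path = Path.home() / ".solo" / f"one-shot-{deployment}" / "accounts.json"
    with path.open() as f:
        return json.load(f)

# Example (hypothetical field access):
# for account in load_accounts("falcon-demo"):
#     print(account["publicAddress"])
```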

To inspect the latest one-shot deployment metadata later, run:

solo one-shot show deployment

If port-forwards are interrupted after deployment, for example after a system restart or network disruption, restore them without redeploying:

solo deployment refresh port-forwards

Destroy a Falcon Deployment

  • Destroy the Falcon deployment with:

    solo one-shot falcon destroy
    
  • Solo removes deployed extensions first, then destroys the mirror node, network, cluster references, and local deployment metadata.

  • If multiple deployments exist locally, Solo prompts you to choose which one to destroy unless you pass --deployment explicitly.

    solo one-shot falcon destroy --deployment falcon-demo
    

When to Use Falcon vs. Manual Deployment

Use Falcon deployment when you want a single, repeatable command backed by a versioned YAML file.

Use Step-by-Step Manual Deployment when you need to pause between steps, inspect intermediate state, or debug a specific deployment phase in isolation.

In practice:

  • Falcon is better for automation and repeatability.
  • Manual deployment is better for debugging and low-level control.

Reference

Tip: If you are creating a values file for the first time, start from the annotated template in the Solo repository rather than writing one from scratch:

examples/one-shot-falcon/falcon-values.yaml

This file includes all supported sections and flags with inline comments explaining each option. Copy it, remove what you do not need, and adjust the values for your environment.

2 - Falcon Values File Reference

Comprehensive reference for all supported CLI flags per section of a Falcon values file, including defaults, types, and descriptions. Use this as your source of truth when customizing Falcon deployments.

Overview

This page catalogs the Solo CLI flags accepted under each top-level section of a Falcon values file. Each entry corresponds to the command-line flag that the underlying Solo subcommand accepts.

Sections map to subcommands as follows:

| Section | Solo subcommand |
|---|---|
| network | solo consensus network deploy |
| setup | solo consensus node setup |
| consensusNode | solo consensus node start |
| mirrorNode | solo mirror node add |
| explorerNode | solo explorer node add |
| relayNode | solo relay node add |
| blockNode | solo block node add |

All flag names must be written in long form with double dashes (for example, --release-tag). Flags left empty ("") or matching their default value are ignored by Solo at argument expansion time.
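The expansion rule above can be sketched as a filter step: flags that are empty ("") or equal to their documented default are dropped before the subcommand is invoked. This is illustrative only; the defaults shown are examples drawn from the network table in this reference:

```python
# Sketch of default/empty flag filtering at argument expansion time.
# DEFAULTS holds example defaults from this reference, not Solo's
# internal table.
DEFAULTS = {"--pvcs": "false", "--storage-type": "minio_only"}

def effective_flags(section_flags: dict) -> dict:
    """Keep only flags that would actually change the subcommand call."""
    return {
        flag: value
        for flag, value in section_flags.items()
        if value != "" and DEFAULTS.get(flag) != value
    }

print(effective_flags({"--pvcs": "false", "--storage-type": "aws_only", "--gcs-bucket": ""}))
# {'--storage-type': 'aws_only'}
```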

Note: Not every flag listed here is relevant to every deployment. Use this page as a lookup when writing or debugging a values file. For a working example file, see the upstream reference at https://github.com/hiero-ledger/solo/tree/main/examples/one-shot-falcon.


Consensus Network Deploy — network

Flags passed to solo consensus network deploy.

| Flag | Type | Default | Description |
|---|---|---|---|
| --release-tag | string | current Hedera platform version | Consensus node release tag (e.g. v0.71.0). |
| --pvcs | boolean | false | Enable Persistent Volume Claims for consensus node storage. Required for node add operations. |
| --load-balancer | boolean | false | Enable load balancer for network node proxies. |
| --chart-dir | string | | Path to a local Helm chart directory for the Solo network chart. |
| --solo-chart-version | string | current chart version | Specific Solo testing chart version to deploy. |
| --haproxy-ips | string | | Static IP mapping for HAProxy pods (e.g. node1=127.0.0.1,node2=127.0.0.2). |
| --envoy-ips | string | | Static IP mapping for Envoy proxy pods. |
| --debug-node-alias | string | | Enable the default JVM debug port (5005) for the specified node alias. |
| --domain-names | string | | Custom domain name mapping per node alias (e.g. node1=node1.example.com). |
| --grpc-tls-cert | string | | TLS certificate path for gRPC, per node alias (e.g. node1=/path/to/cert). |
| --grpc-web-tls-cert | string | | TLS certificate path for gRPC Web, per node alias. |
| --grpc-tls-key | string | | TLS certificate key path for gRPC, per node alias. |
| --grpc-web-tls-key | string | | TLS certificate key path for gRPC Web, per node alias. |
| --storage-type | string | minio_only | Stream file storage backend. Options: minio_only, aws_only, gcs_only, aws_and_gcs. |
| --gcs-write-access-key | string | | GCS write access key. |
| --gcs-write-secrets | string | | GCS write secret key. |
| --gcs-endpoint | string | | GCS storage endpoint URL. |
| --gcs-bucket | string | | GCS bucket name. |
| --gcs-bucket-prefix | string | | GCS bucket path prefix. |
| --aws-write-access-key | string | | AWS write access key. |
| --aws-write-secrets | string | | AWS write secret key. |
| --aws-endpoint | string | | AWS storage endpoint URL. |
| --aws-bucket | string | | AWS bucket name. |
| --aws-bucket-region | string | | AWS bucket region. |
| --aws-bucket-prefix | string | | AWS bucket path prefix. |
| --settings-txt | string | template | Path to a custom settings.txt file for consensus nodes. |
| --application-properties | string | template | Path to a custom application.properties file. |
| --application-env | string | template | Path to a custom application.env file. |
| --api-permission-properties | string | template | Path to a custom api-permission.properties file. |
| --bootstrap-properties | string | template | Path to a custom bootstrap.properties file. |
| --log4j2-xml | string | template | Path to a custom log4j2.xml file. |
| --genesis-throttles-file | string | | Path to a custom throttles.json file for network genesis. |
| --service-monitor | boolean | false | Install a ServiceMonitor custom resource for Prometheus metrics. |
| --pod-log | boolean | false | Install a PodLog custom resource for node pod log monitoring. |
| --quiet-mode | boolean | false | Suppress confirmation prompts. |
| --values-file | string | | Comma-separated Helm chart values file paths (not the Falcon values file). |

Consensus Node Setup — setup

Flags passed to solo consensus node setup.

| Flag | Type | Default | Description |
|---|---|---|---|
| --release-tag | string | current Hedera platform version | Consensus node release tag. Must match network.--release-tag. |
| --local-build-path | string | | Path to a local Hiero consensus node build (e.g. ~/hiero-consensus-node/hedera-node/data). Used for local development workflows. |
| --app | string | HederaNode.jar | Name of the consensus node application binary. |
| --app-config | string | | Path to a JSON configuration file for the testing app. |
| --admin-public-keys | string | | Comma-separated DER-encoded ED25519 public keys in node alias order. |
| --domain-names | string | | Custom domain name mapping per node alias. |
| --dev | boolean | false | Enable developer mode. |
| --quiet-mode | boolean | false | Suppress confirmation prompts. |
| --cache-dir | string | ~/.solo/cache | Local cache directory for downloaded artifacts. |

Consensus Node Start — consensusNode

Flags passed to solo consensus node start.

| Flag | Type | Default | Description |
|---|---|---|---|
| --force-port-forward | boolean | true | Force port forwarding to access network services locally. |
| --stake-amounts | string | | Comma-separated stake amounts in node alias order (e.g. 100,100,100). Required for multi-node deployments that need non-default stakes. |
| --state-file | string | | Path to a zipped state file to restore the network from. |
| --debug-node-alias | string | | Enable JVM debug port (5005) for the specified node alias. |
| --app | string | HederaNode.jar | Name of the consensus node application binary. |
| --quiet-mode | boolean | false | Suppress confirmation prompts. |

Mirror Node Add — mirrorNode

Flags passed to solo mirror node add.

| Flag | Type | Default | Description |
|---|---|---|---|
| --mirror-node-version | string | current version | Mirror node Helm chart version to deploy. |
| --enable-ingress | boolean | false | Deploy an ingress controller for the mirror node. |
| --force-port-forward | boolean | true | Enable port forwarding for mirror node services. |
| --pinger | boolean | false | Enable the mirror node Pinger service. |
| --mirror-static-ip | string | | Static IP address for the mirror node load balancer. |
| --domain-name | string | | Custom domain name for the mirror node. |
| --ingress-controller-value-file | string | | Path to a Helm values file for the ingress controller. |
| --mirror-node-chart-dir | string | | Path to a local mirror node Helm chart directory. |
| --use-external-database | boolean | false | Connect to an external PostgreSQL database instead of the chart-bundled one. |
| --external-database-host | string | | Hostname of the external database. Requires --use-external-database. |
| --external-database-owner-username | string | | Owner username for the external database. |
| --external-database-owner-password | string | | Owner password for the external database. |
| --external-database-read-username | string | | Read-only username for the external database. |
| --external-database-read-password | string | | Read-only password for the external database. |
| --storage-type | string | minio_only | Stream file storage backend for the mirror node importer. |
| --storage-read-access-key | string | | Storage read access key for the mirror node importer. |
| --storage-read-secrets | string | | Storage read secret key for the mirror node importer. |
| --storage-endpoint | string | | Storage endpoint URL for the mirror node importer. |
| --storage-bucket | string | | Storage bucket name for the mirror node importer. |
| --storage-bucket-prefix | string | | Storage bucket path prefix. |
| --storage-bucket-region | string | | Storage bucket region. |
| --operator-id | string | | Operator account ID for the mirror node. |
| --operator-key | string | | Operator private key for the mirror node. |
| --quiet-mode | boolean | false | Suppress confirmation prompts. |
| --values-file | string | | Comma-separated Helm chart values file paths for the mirror node chart. |

Explorer Add — explorerNode

Flags passed to solo explorer node add.

| Flag | Type | Default | Description |
|---|---|---|---|
| --explorer-version | string | current version | Hiero Explorer Helm chart version to deploy. |
| --enable-ingress | boolean | false | Deploy an ingress controller for the explorer. |
| --force-port-forward | boolean | true | Enable port forwarding for the explorer service. |
| --domain-name | string | | Custom domain name for the explorer. |
| --ingress-controller-value-file | string | | Path to a Helm values file for the ingress controller. |
| --explorer-chart-dir | string | | Path to a local Hiero Explorer Helm chart directory. |
| --explorer-static-ip | string | | Static IP address for the explorer load balancer. |
| --enable-explorer-tls | boolean | false | Enable TLS for the explorer. Requires cert-manager. |
| --explorer-tls-host-name | string | explorer.solo.local | Hostname used for the explorer TLS certificate. |
| --tls-cluster-issuer-type | string | self-signed | TLS cluster issuer type. Options: self-signed, acme-staging, acme-prod. |
| --mirror-node-id | number | | ID of the mirror node instance to connect the explorer to. |
| --mirror-namespace | string | | Kubernetes namespace of the mirror node. |
| --solo-chart-version | string | current version | Solo chart version used for explorer cluster setup. |
| --quiet-mode | boolean | false | Suppress confirmation prompts. |
| --values-file | string | | Comma-separated Helm chart values file paths for the explorer chart. |

JSON-RPC Relay Add — relayNode

Flags passed to solo relay node add.

| Flag | Type | Default | Description |
|---|---|---|---|
| --relay-release | string | current version | Hiero JSON-RPC Relay Helm chart release to deploy. |
| --node-aliases | string | | Comma-separated node aliases the relay will observe (e.g. node1 or node1,node2). |
| --replica-count | number | 1 | Number of relay replicas to deploy. |
| --chain-id | string | 298 | EVM chain ID exposed by the relay (Hedera testnet default). |
| --force-port-forward | boolean | true | Enable port forwarding for the relay service. |
| --domain-name | string | | Custom domain name for the relay. |
| --relay-chart-dir | string | | Path to a local Hiero JSON-RPC Relay Helm chart directory. |
| --operator-id | string | | Operator account ID for relay transaction signing. |
| --operator-key | string | | Operator private key for relay transaction signing. |
| --mirror-node-id | number | | ID of the mirror node instance the relay will query. |
| --mirror-namespace | string | | Kubernetes namespace of the mirror node. |
| --quiet-mode | boolean | false | Suppress confirmation prompts. |
| --values-file | string | | Comma-separated Helm chart values file paths for the relay chart. |

Block Node Add — blockNode

Flags passed to solo block node add.

Important: The blockNode section is only read when ONE_SHOT_WITH_BLOCK_NODE=true is set in the environment. Otherwise Solo skips the block node add step regardless of whether a blockNode section is present. Version requirements: Consensus node ≥ v0.72.0 and block node ≥ 0.29.0. Use --force to bypass version gating during testing.

| Flag | Type | Default | Description |
|---|---|---|---|
| --release-tag | string | current version | Hiero block node release tag. |
| --image-tag | string | | Docker image tag to override the Helm chart default. |
| --enable-ingress | boolean | false | Deploy an ingress controller for the block node. |
| --domain-name | string | | Custom domain name for the block node. |
| --dev | boolean | false | Enable developer mode for the block node. |
| --block-node-chart-dir | string | | Path to a local Hiero block node Helm chart directory. |
| --quiet-mode | boolean | false | Suppress confirmation prompts. |
| --values-file | string | | Comma-separated Helm chart values file paths for the block node chart. |

Top-Level Falcon Command Flags

The following flags are passed directly on the solo one-shot falcon deploy command line. They are not read from the values file sections.

| Flag | Type | Default | Description |
|---|---|---|---|
| --values-file | string | | Path to the Falcon values YAML file. |
| --deployment | string | one-shot | Deployment name for Solo’s internal state. |
| --namespace | string | one-shot | Kubernetes namespace to deploy into. |
| --cluster-ref | string | one-shot | Cluster reference name. |
| --num-consensus-nodes | number | 1 | Number of consensus nodes to deploy. |
| --deploy-mirror-node | boolean | true | Deploy or skip the mirror node. |
| --deploy-explorer | boolean | true | Deploy or skip the explorer. |
| --deploy-relay | boolean | true | Deploy or skip the JSON-RPC relay. |
| --no-rollback | boolean | false | Disable automatic cleanup on deployment failure. Preserves partial resources for inspection. |
| --quiet-mode | boolean | false | Suppress all interactive prompts. |
| --force | boolean | false | Force actions that would otherwise be skipped. |

3 - Step-by-Step Manual Deployment

Deploy each Solo network component individually for maximum control over configuration and debugging. Execute each step manually through the Solo CLI and integrate Solo into bespoke automation pipelines.

Overview

Manual deployment lets you deploy each Solo network component individually, giving you full control over configuration, sequencing, and troubleshooting. Use this approach when you need to customise specific steps, debug a component in isolation, or integrate Solo into a bespoke automation pipeline.


Prerequisites

Before proceeding, ensure you have completed the following:

  • System Readiness — your local environment meets all hardware and software requirements (Docker, kind, kubectl, helm, Solo).

  • Quickstart — you have a running Kind cluster and have run solo init at least once.

  • Set your environment variables if you have not already done so:

    export SOLO_CLUSTER_NAME=solo
    export SOLO_NAMESPACE=solo
    export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
    export SOLO_DEPLOYMENT=solo-deployment
    

Deployment Steps

1. Connect Cluster and Create Deployment

  • Connect Solo to the Kind cluster and create a new deployment configuration:

    # Connect to the Kind cluster
    solo cluster-ref config connect \
      --cluster-ref kind-${SOLO_CLUSTER_NAME} \
      --context kind-${SOLO_CLUSTER_NAME}
    
    # Create a new deployment
    solo deployment config create \
      -n "${SOLO_NAMESPACE}" \
      --deployment "${SOLO_DEPLOYMENT}"
    
  • Expected Output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : cluster-ref config connect --cluster-ref kind-solo --context kind-solo
    **********************************************************************************
    Initialize
    ✔ Initialize 
    Validating cluster ref: 
    ✔ Validating cluster ref: kind-solo 
    Test connection to cluster: 
    ✔ Test connection to cluster: kind-solo 
    Associate a context with a cluster reference: 
    ✔ Associate a context with a cluster reference: kind-solo
    

2. Add Cluster to Deployment

  • Attach the cluster to your deployment and specify the number of consensus nodes:

    1. Single node:

    solo deployment cluster attach \
      --deployment "${SOLO_DEPLOYMENT}" \
      --cluster-ref kind-${SOLO_CLUSTER_NAME} \
      --num-consensus-nodes 1
    

    2. Multiple nodes (e.g., --num-consensus-nodes 3):

    solo deployment cluster attach \
      --deployment "${SOLO_DEPLOYMENT}" \
      --cluster-ref kind-${SOLO_CLUSTER_NAME} \
      --num-consensus-nodes 3
    

3. Generate Keys

  • Generate the gossip and TLS keys for your consensus nodes:

    solo keys consensus generate \
      --gossip-keys \
      --tls-keys \
      --deployment "${SOLO_DEPLOYMENT}"
    

    PEM key files are written to ~/.solo/cache/keys/.

  • Example output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : keys consensus generate --gossip-keys --tls-keys --deployment solo-deployment
    **********************************************************************************
    Initialize
    ✔ Initialize 
    Generate gossip keys
    Backup old files
    ✔ Backup old files 
    Gossip key for node: node1
    ✔ Gossip key for node: node1 [0.2s]
    ✔ Generate gossip keys [0.2s]
    Generate gRPC TLS Keys
    Backup old files
    TLS key for node: node1
    ✔ Backup old files 
    ✔ TLS key for node: node1 [0.3s]
    ✔ Generate gRPC TLS Keys [0.3s]
    Finalize
    ✔ Finalize
    

4. Set Up Cluster with Shared Components

  • Install shared cluster-level components (MinIO Operator, Prometheus CRDs, etc.) into the cluster setup namespace:

    solo cluster-ref config setup --cluster-setup-namespace "${SOLO_CLUSTER_SETUP_NAMESPACE}"
    
  • Example output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : cluster-ref config setup --cluster-setup-namespace solo-cluster
    **********************************************************************************
    Check dependencies
    Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependencies 
    Setup chart manager
    ✔ Setup chart manager [0.6s]
    Initialize
    ✔ Initialize 
    Install cluster charts
    Install pod-monitor-role ClusterRole
    -  ClusterRole pod-monitor-role already exists in context kind-solo, skipping
    ✔ Install pod-monitor-role ClusterRole 
    Install MinIO Operator chart
    ✔ MinIO Operator chart installed successfully on context kind-solo
    ✔ Install MinIO Operator chart [0.8s]
    ✔ Install cluster charts [0.8s]
    

5. Deploy the Network

  • Deploy the Solo network Helm chart, which provisions the consensus node pods, HAProxy, Envoy, and MinIO:

    solo consensus network deploy --deployment "${SOLO_DEPLOYMENT}"
    
  • Example output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : consensus network deploy --deployment solo-deployment --release-tag v0.66.0
    **********************************************************************************
    Check dependencies
    Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependencies 
    Setup chart manager
    ✔ Setup chart manager [0.7s]
    Initialize
    Acquire lock
    ✔ Acquire lock - lock acquired successfully, attempt: 1/10 
    ✔ Initialize [0.2s]
    Copy gRPC TLS Certificates
    Copy gRPC TLS Certificates [SKIPPED: Copy gRPC TLS Certificates]
    Prepare staging directory
    Copy Gossip keys to staging
    ✔ Copy Gossip keys to staging 
    Copy gRPC TLS keys to staging
    ✔ Copy gRPC TLS keys to staging 
    ✔ Prepare staging directory 
    Copy node keys to secrets
    Copy TLS keys
    Node: node1, cluster: kind-solo
    Copy Gossip keys
    ✔ Copy TLS keys 
    ✔ Copy Gossip keys 
    ✔ Node: node1, cluster: kind-solo 
    ✔ Copy node keys to secrets 
    Install monitoring CRDs
    Pod Logs CRDs
    ✔ Pod Logs CRDs 
    Prometheus Operator CRDs
    - Installed prometheus-operator-crds chart, version: 24.0.2
    ✔ Prometheus Operator CRDs [4s]
    ✔ Install monitoring CRDs [4s]
    Install chart 'solo-deployment'
    - Installed solo-deployment chart, version: 0.62.0
    ✔ Install chart 'solo-deployment' [2s]
    Check for load balancer
    Check for load balancer [SKIPPED: Check for load balancer]
    Redeploy chart with external IP address config
    Redeploy chart with external IP address config [SKIPPED: Redeploy chart with external IP address config]
    Check node pods are running
    Check Node: node1, Cluster: kind-solo
    ✔ Check Node: node1, Cluster: kind-solo [24s]
    ✔ Check node pods are running [24s]
    Check proxy pods are running
    Check HAProxy for: node1, cluster: kind-solo
    Check Envoy Proxy for: node1, cluster: kind-solo
    ✔ Check HAProxy for: node1, cluster: kind-solo 
    ✔ Check Envoy Proxy for: node1, cluster: kind-solo 
    ✔ Check proxy pods are running 
    Check auxiliary pods are ready
    Check MinIO
    ✔ Check MinIO 
    ✔ Check auxiliary pods are ready 
    Add node and proxies to remote config
    ✔ Add node and proxies to remote config 
    Copy wraps lib into consensus node
    Copy wraps lib into consensus node [SKIPPED: Copy wraps lib into consensus node]
    Copy block-nodes.json
    ✔ Copy block-nodes.json [1s]
    Copy JFR config file to nodes
    Copy JFR config file to nodes [SKIPPED: Copy JFR config file to nodes]
    

6. Set Up Consensus Nodes

  • Download the consensus node platform software and configure each node:

    export CONSENSUS_NODE_VERSION=v0.66.0
    
    solo consensus node setup \
      --deployment "${SOLO_DEPLOYMENT}" \
      --release-tag "${CONSENSUS_NODE_VERSION}"
    
  • Example output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : consensus node setup --deployment solo-deployment --release-tag v0.66.0
    **********************************************************************************
    Load configuration
    ✔ Load configuration [0.2s]
    Initialize
    ✔ Initialize [0.2s]
    Validate nodes states
    Validating state for node node1
    ✔ Validating state for node node1 - valid state: requested 
    ✔ Validate nodes states 
    Identify network pods
    Check network pod: node1
    ✔ Check network pod: node1 
    ✔ Identify network pods 
    Fetch platform software into network nodes
    Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ]
    ✔ Update node: node1 [ platformVersion = v0.66.0, context = kind-solo ] [3s]
    ✔ Fetch platform software into network nodes [3s]
    Setup network nodes
    Node: node1
    Copy configuration files
    ✔ Copy configuration files [0.3s]
    Set file permissions
    ✔ Set file permissions [0.4s]
    ✔ Node: node1 [0.8s]
    ✔ Setup network nodes [0.9s]
    setup network node folders
    ✔ setup network node folders [0.1s]
    Change node state to configured in remote config
    ✔ Change node state to configured in remote config
    

7. Start Consensus Nodes

  • Start all configured nodes and wait for them to reach ACTIVE status:

    solo consensus node start --deployment "${SOLO_DEPLOYMENT}"
    
  • Example output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : consensus node start --deployment solo-deployment
    **********************************************************************************
    Check dependencies
    Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependencies 
    Setup chart manager
    ✔ Setup chart manager [0.7s]
    Load configuration
    ✔ Load configuration [0.2s]
    Initialize
    ✔ Initialize [0.2s]
    Validate nodes states
    Validating state for node node1
    ✔ Validating state for node node1 - valid state: configured 
    ✔ Validate nodes states 
    Identify existing network nodes
    Check network pod: node1
    ✔ Check network pod: node1 
    ✔ Identify existing network nodes 
    Upload state files network nodes
    Upload state files network nodes [SKIPPED: Upload state files network nodes]
    Starting nodes
    Start node: node1
    ✔ Start node: node1 [0.1s]
    ✔ Starting nodes [0.1s]
    Enable port forwarding for debug port and/or GRPC port
    Using requested port 50211
    ✔ Enable port forwarding for debug port and/or GRPC port 
    Check all nodes are ACTIVE
    Check network pod: node1 
    ✔ Check network pod: node1  - status ACTIVE, attempt: 16/300 [20s]
    ✔ Check all nodes are ACTIVE [20s]
    Check node proxies are ACTIVE
    Check proxy for node: node1
    ✔ Check proxy for node: node1 [6s]
    ✔ Check node proxies are ACTIVE [6s]
    Wait for TSS
    Wait for TSS [SKIPPED: Wait for TSS]
    set gRPC Web endpoint
    Using requested port 30212
    set gRPC Web endpoint [3s]
    Change node state to started in remote config
    ✔ Change node state to started in remote config 
    Add node stakes
    Adding stake for node: node1
    ✔ Adding stake for node: node1 [4s]
    ✔ Add node stakes [4s]
    Stopping port-forward for port [30212]
    

8. Deploy Mirror Node

  • Deploy the Hedera Mirror Node, which indexes all transaction data and exposes a REST API and gRPC endpoint:

    solo mirror node add \
      --deployment "${SOLO_DEPLOYMENT}" \
      --cluster-ref kind-${SOLO_CLUSTER_NAME} \
      --enable-ingress \
      --pinger
    

    The --pinger flag deploys a pinger that periodically submits transactions to the network, giving the mirror node’s importer a steady stream of record files to ingest. The --enable-ingress flag installs an HAProxy ingress controller in front of the mirror node REST API.

  • Example output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : mirror node add --deployment solo-deployment --cluster-ref kind-solo --enable-ingress --quiet-mode
    **********************************************************************************
    Check dependencies
    Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependencies 
    Setup chart manager
    ✔ Setup chart manager [0.6s]
    Initialize
    Using requested port 30212
    Acquire lock
    ✔ Acquire lock - lock acquired successfully, attempt: 1/10 [0.1s]
    ✔ Initialize [1s]
    Enable mirror-node
    Prepare address book
    ✔ Prepare address book 
    Install mirror ingress controller
    - Installed haproxy-ingress-1 chart, version: 0.14.5
    ✔ Install mirror ingress controller [0.7s]
    Deploy mirror-node
    - Installed mirror chart, version: v0.149.0
    ✔ Deploy mirror-node [3s]
    ✔ Enable mirror-node [4s]
    Check pods are ready
    Check Postgres DB
    Check REST API
    Check GRPC
    Check Monitor
    Check Web3
    Check Importer
    ✔ Check Postgres DB [32s]
    ✔ Check Web3 [46s]
    ✔ Check REST API [52s]
    ✔ Check GRPC [58s]
    ✔ Check Monitor [1m16s]
    ✔ Check Importer [1m32s]
    ✔ Check pods are ready [1m32s]
    Seed DB data
    Insert data in public.file_data
    ✔ Insert data in public.file_data [0.6s]
    ✔ Seed DB data [0.6s]
    Add mirror node to remote config
    ✔ Add mirror node to remote config 
    Enable port forwarding for mirror ingress controller
    Using requested port 8081
    ✔ Enable port forwarding for mirror ingress controller 
    Stopping port-forward for port [30212]
    
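  • Optional sanity check: with the ingress port-forward on 8081 (shown in the output above) still active, you can query the mirror node REST API directly. The /api/v1/network/nodes path is a standard mirror node REST endpoint; the fallback message is just a convenience for when the port-forward is not running:

    ```shell
    # Query the mirror node REST API through the forwarded ingress port (8081).
    # /api/v1/network/nodes lists the consensus nodes the mirror node knows about.
    NODES_JSON=$(curl -s --max-time 5 http://localhost:8081/api/v1/network/nodes \
      || echo '{"error":"mirror node not reachable on port 8081"}')
    echo "${NODES_JSON}"
    ```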

9. Deploy Explorer

  • Deploy the Hiero Explorer, a web UI for browsing transactions and accounts:

    solo explorer node add \
      --deployment "${SOLO_DEPLOYMENT}" \
      --cluster-ref kind-${SOLO_CLUSTER_NAME}
    
  • Example output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : explorer node add --deployment solo-deployment --cluster-ref kind-solo --quiet-mode
    **********************************************************************************
    Check dependencies
    Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependencies 
    Setup chart manager
    ✔ Setup chart manager [0.7s]
    Initialize
    Acquire lock
    ✔ Acquire lock - lock acquired successfully, attempt: 1/10 
    ✔ Initialize [0.5s]
    Load remote config
    ✔ Load remote config [0.2s]
    Install cert manager
    Install cert manager [SKIPPED: Install cert manager]
    Install explorer
    - Installed hiero-explorer-1 chart, version: 26.0.0
    ✔ Install explorer [0.8s]
    Install explorer ingress controller
    Install explorer ingress controller [SKIPPED: Install explorer ingress controller]
    Check explorer pod is ready
    ✔ Check explorer pod is ready [18s]
    Check haproxy ingress controller pod is ready
    Check haproxy ingress controller pod is ready [SKIPPED: Check haproxy ingress controller pod is ready]
    Add explorer to remote config
    ✔ Add explorer to remote config 
    Enable port forwarding for explorer
    No port forward config found for Explorer
    Using requested port 8080
    ✔ Enable port forwarding for explorer [0.1s]
    
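  • With the port-forward on 8080 active (see the output above), the Explorer UI is available in a browser at http://localhost:8080. A quick reachability check, as a sketch:

    ```shell
    # Check that the Explorer responds on the forwarded port (8080, per the output above).
    STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 http://localhost:8080 2>/dev/null) \
      || STATUS="000"
    echo "Explorer HTTP status: ${STATUS}"   # 000 means not reachable
    ```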

10. Deploy JSON-RPC Relay

  • Deploy the Hiero JSON-RPC Relay to expose an Ethereum-compatible JSON-RPC endpoint for EVM tooling (MetaMask, Hardhat, Foundry, etc.):

    solo relay node add \
      -i node1 \
      --deployment "${SOLO_DEPLOYMENT}"
    
  • Example output:

    ******************************* Solo *********************************************
    Version   : 0.63.0
    Kubernetes Context : kind-solo
    Kubernetes Cluster : kind-solo
    Current Command  : relay node add --node-aliases node1 --deployment solo-deployment --cluster-ref kind-solo
    **********************************************************************************
    Check dependencies
    Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64]
    ✔ Check dependency: kind [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: helm [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependency: kubectl [OS: linux, Release: 6.8.0-106-generic, Arch: x64] 
    ✔ Check dependencies 
    Setup chart manager
    ✔ Setup chart manager [0.7s]
    Initialize
    Acquire lock
    ✔ Acquire lock - lock acquired successfully, attempt: 1/10 
    ✔ Initialize [0.4s]
    Check chart is installed
    ✔ Check chart is installed [0.1s]
    Prepare chart values
    Using requested port 30212
    ✔ Prepare chart values [1s]
    Deploy JSON RPC Relay
    - Installed relay-1 chart, version: 0.73.0
    ✔ Deploy JSON RPC Relay [0.7s]
    Check relay is running
    ✔ Check relay is running [16s]
    Check relay is ready
    ✔ Check relay is ready [21s]
    Add relay component in remote config
    ✔ Add relay component in remote config 
    Enable port forwarding for relay node
    Using requested port 7546
    ✔ Enable port forwarding for relay node [0.1s]
    Stopping port-forward for port [30212]
    
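  • Optional smoke test: eth_chainId is a standard Ethereum JSON-RPC method that any compatible endpoint must answer, so it is a safe way to confirm the relay is responding on the forwarded port 7546 (from the output above). This sketch assumes curl is available:

    ```shell
    # Ask the relay for its chain ID over Ethereum JSON-RPC.
    # Port 7546 comes from the port-forward line in the output above.
    PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'
    RESPONSE=$(curl -s --max-time 5 -X POST -H 'Content-Type: application/json' \
      -d "${PAYLOAD}" http://localhost:7546 2>/dev/null) \
      || RESPONSE='{"error":"relay not reachable on port 7546"}'
    echo "${RESPONSE}"
    ```

    The same http://localhost:7546 endpoint can be configured as a custom network RPC URL in MetaMask, Hardhat, or Foundry.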


Cleanup

When you are done, destroy components in the reverse order of deployment.

Important: Always destroy components before destroying the network. Skipping this order can leave orphaned Helm releases and PVCs in your cluster.

1. Destroy JSON-RPC Relay

solo relay node destroy \
  -i node1 \
  --deployment "${SOLO_DEPLOYMENT}" \
  --cluster-ref kind-${SOLO_CLUSTER_NAME}

2. Destroy Mirror Node

solo mirror node destroy \
  --deployment "${SOLO_DEPLOYMENT}" \
  --force

3. Destroy Explorer

solo explorer node destroy \
  --deployment "${SOLO_DEPLOYMENT}" \
  --force

4. Destroy the Network

solo consensus network destroy \
  --deployment "${SOLO_DEPLOYMENT}" \
  --force

4 - Dynamically add, update, and remove Consensus Nodes

Learn how to dynamically add, update, and remove consensus nodes in a running Solo network without taking the network offline. Execute operations independently while the network remains operational.

Overview

This guide covers how to dynamically manage consensus nodes in a running Solo network - adding new nodes, updating existing ones, and removing nodes that are no longer needed. All three operations can be performed without taking the network offline.

Prerequisites

Before proceeding, ensure you have:

  • A running Solo network. If you don’t have one, deploy using one of the following methods:

    1. Quickstart - single command deployment using solo one-shot single deploy.
    2. Manual Deployment - step-by-step deployment with full control over each component.
  • Set the required environment variables as described below:

      export SOLO_CLUSTER_NAME=solo
      export SOLO_NAMESPACE=solo
      export SOLO_CLUSTER_SETUP_NAMESPACE=solo-cluster
      export SOLO_DEPLOYMENT=solo-deployment
    

Key and Storage Concepts

Before running any node operation, it helps to understand three concepts that appear in the prepare step.

  1. Cryptographic Keys

    Solo generates two types of keys for each consensus node:

    • Gossip keys — used for encrypted node-to-node communication within the network. Stored as s-private-node*.pem and s-public-node*.pem under ~/.solo/cache/keys/.
    • TLS keys — used to secure gRPC connections to the node. Stored as hedera-node*.crt and hedera-node*.key under ~/.solo/cache/keys/.

    When adding a new node, Solo generates a fresh key pair and stores it alongside the keys for existing nodes in the same directory. For more detail, see Where are my keys stored?.

  2. Persistent Volume Claims (PVCs)

    By default, consensus node storage is ephemeral - data stored by a node is lost if its pod crashes or is restarted. This is intentional for lightweight local testing where persistence is not required.

    The --pvcs true flag creates Persistent Volume Claims (PVCs) for the node, ensuring its state survives pod restarts. Enable this flag for any node that needs to persist across restarts or that will participate in longer-running test scenarios.

    Note: PVCs are not enabled by default. Enable them only if your node needs to persist state across pod restarts.

  3. Staging Directory

    The --output-dir flag (given the value context in the examples below) specifies a local staging directory where Solo writes all artifacts produced during prepare. Note that while Solo’s working files live under ~/.solo/, a relative path such as context is created in your current working directory instead. Do not delete this directory until execute has completed successfully.
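To tie these concepts together, the sketch below lists the key files Solo is expected to write for a given node alias (node2 is used purely as an illustration) and shows, in a comment, how to inspect any PVCs. The file names follow the conventions described above; which files actually exist depends on which nodes you have created:

```shell
# Expected key files for a hypothetical node alias "node2", following the
# naming conventions described above. All live under ~/.solo/cache/keys/.
KEYS_DIR="${HOME}/.solo/cache/keys"
for f in s-private-node2.pem s-public-node2.pem hedera-node2.crt hedera-node2.key; do
  if [ -e "${KEYS_DIR}/${f}" ]; then
    echo "found:   ${KEYS_DIR}/${f}"
  else
    echo "missing: ${KEYS_DIR}/${f}"
  fi
done
# With --pvcs true, the node's persistent volume claims can be listed with:
#   kubectl get pvc -n "${SOLO_NAMESPACE}"
```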

Adding a Node to an Existing Network

You can dynamically add a new consensus node to a running network without taking the network offline. This process involves three stages: preparing the node’s keys and configuration, submitting the on-chain transaction, and executing the addition.

Step 1: Prepare the new node

Generate the new node’s gossip and TLS keys, create its persistent volumes, and stage its configuration into an output directory:

  solo consensus dev-node-add prepare \
  --gossip-keys true \
  --tls-keys true \
  --deployment "${SOLO_DEPLOYMENT}" \
  --pvcs true \
  --admin-key <admin-key> \
  --node-alias node2 \
  --output-dir context

Flags:

  • --gossip-keys: Generate gossip keys for the new node.
  • --tls-keys: Generate gRPC TLS keys for the new node.
  • --pvcs: Create persistent volume claims for the new node.
  • --admin-key: The admin key used to authorize the node addition transaction.
  • --node-alias: Alias for the new node (e.g., node2).
  • --output-dir: Directory where prepared context files are saved for use in subsequent steps.

Step 2: Submit the transaction to add the node

Submit the on-chain transaction to register the new node with the network:

  solo consensus dev-node-add submit-transaction \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

Step 3: Execute the node addition

Apply the node addition and bring the new node online:

  solo consensus dev-node-add execute \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

Note: For a complete walkthrough with expected outputs, see the Node Create Transaction example.
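To confirm the addition from the Kubernetes side, you can list the pods in the deployment namespace and look for the new node. The grep filter on the alias and the namespace default are assumptions; inspect the full pod list if the filter matches nothing:

```shell
# List pods in the deployment namespace and filter for the new node alias.
# The "node2" filter and the "solo" namespace default are assumptions;
# adjust them to match your deployment.
PODS=$(kubectl get pods -n "${SOLO_NAMESPACE:-solo}" 2>/dev/null | grep node2) \
  || PODS="no node2 pod found (is kubectl pointed at the right cluster?)"
echo "${PODS}"
```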

Updating a Node

You can update an existing consensus node - for example, to upgrade its software version or modify its configuration - without removing it from the network.

Step 1: Prepare the update

Stage the updated configuration and any new software version for the target node:

  solo consensus dev-node-update prepare \
  --deployment "${SOLO_DEPLOYMENT}" \
  --node-alias node1 \
  --release-tag v0.61.0 \
  --output-dir context

Flags:

  • --node-alias: Alias of the node to update (e.g., node1).
  • --release-tag: The consensus node software version to update to.
  • --output-dir: Directory where prepared context files are saved for use in subsequent steps.

Step 2: Submit the update transaction

Submit the on-chain transaction to register the node update with the network:

  solo consensus dev-node-update submit-transaction \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

Step 3: Execute the update

Apply the update and restart the node with the new configuration:

  solo consensus dev-node-update execute \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

Note: For a complete walkthrough with expected outputs, see the Node Update Transaction example.

Removing a Node from a Network

You can dynamically remove a consensus node from a running network without taking the remaining nodes offline.

Note: Removing a node permanently reduces the number of consensus nodes in the network. Ensure the remaining nodes meet the minimum threshold required for consensus before proceeding.

Step 1: Prepare the Node for Deletion

Stage the deletion context for the target node:

  solo consensus dev-node-delete prepare \
  --deployment "${SOLO_DEPLOYMENT}" \
  --node-alias node2 \
  --output-dir context

Flags:

  • --node-alias: Alias of the node to remove (e.g., node2).
  • --output-dir: Directory where prepared context files are saved for use in subsequent steps.

Step 2: Submit the delete transaction

Submit the on-chain transaction to deregister the node from the network:

  solo consensus dev-node-delete submit-transaction \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

Step 3: Execute the deletion

Remove the node and clean up its associated resources:

  solo consensus dev-node-delete execute \
  --deployment "${SOLO_DEPLOYMENT}" \
  --input-dir context

Note: For a complete walkthrough with expected outputs, see the Node Delete Transaction example.