Architecture Overview
This section describes the core components deployed as part of a HashSphere managed service engagement, along with the standard connectivity model for customer applications.
A typical solution has two distinct parts:
The HashSphere: A private, managed Hedera ledger.
The Customer Application: The business system that interacts with the HashSphere.
A typical deployment is illustrated below.

HashSphere (managed platform)
The HashSphere is a private Hedera ledger. It runs on the same Hiero technology stack as Hedera mainnet.
HashSphere is deployed and operated by the HashSphere SRE team. The team configures the environment to match each customer’s requirements.
Key features
Managed Service: The HashSphere is managed exclusively by the HashSphere team. Customers do not have direct access to the underlying infrastructure's configuration or management.
Customer Isolation: Each HashSphere is deployed within a dedicated cloud account (AWS) or project (GCP). This isolates customers from one another, preventing resource competition and ensuring security.
High Availability: The HashSphere's compute components are managed within a Kubernetes cluster and are replicated across multiple Availability Zones (AZs) to ensure resilience against data center outages.
Managed Storage: Storage is provided by highly reliable managed services from the cloud provider, such as AWS S3/RDS or GCP Cloud Storage/Cloud SQL.
Load Balancing: A load balancer distributes API requests for the Mirror Node, Admin Dashboard, and HashScan across the replicated instances to ensure optimal performance and availability.
The following components are deployed within the HashSphere.
Consensus nodes
These nodes work together to achieve consensus on the ledger's state. A standard HashSphere deployment includes four consensus nodes, which allows the network to maintain consensus even if one node goes offline (tolerating f = 1 failures in a network of 3f + 1 nodes). For enhanced high availability, more nodes can be configured. To protect against availability zone outages, nodes are distributed across the available AZs. For example, in a region with three AZs, the four nodes are deployed in a 2-1-1 configuration.
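The 3f + 1 relationship above determines how many node failures a given deployment can tolerate. As an illustrative sketch (the helper name is ours, not part of any HashSphere tooling):

```python
def max_tolerated_failures(num_nodes: int) -> int:
    """Byzantine fault-tolerant consensus requires n >= 3f + 1 nodes,
    so a network of n nodes tolerates f = floor((n - 1) / 3) failures."""
    return (num_nodes - 1) // 3

# A standard four-node HashSphere tolerates one offline node;
# seven nodes would tolerate two.
print(max_tolerated_failures(4))  # 1
print(max_tolerated_failures(7))  # 2
```

This is why four is the minimum node count for a deployment that must survive a single node outage: three nodes tolerate zero failures.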
For more information on consensus nodes, please refer to the Hedera documentation.
Mirror nodes
Mirror nodes provide an optimized, read-only view of the ledger. They store the complete and finalized state of the ledger, as well as all historical transactions. These nodes are replicated across Availability Zones for high availability. Each mirror node is backed by a managed Postgres database. Mirror nodes expose a REST API for querying ledger state and history, a gRPC endpoint for subscribing to Hedera Consensus Service (HCS) events, and a JSON-RPC relay for interacting with Hedera's EVM capabilities.
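As a sketch of how an application might address the mirror node's REST API, the snippet below builds a query URL for an account's recent transactions. The in-VPC hostname is a placeholder assumption; the `/api/v1/transactions` path and `account.id`, `limit`, and `order` parameters follow the public Hedera mirror node REST API, but confirm the exact endpoints exposed in your deployment with the HashSphere team:

```python
from urllib.parse import urlencode

# Placeholder for the private endpoint exposed inside the customer VPC.
MIRROR_BASE = "https://mirror.example.internal/api/v1"

def transactions_url(account_id: str, limit: int = 25, order: str = "desc") -> str:
    """Build a mirror node REST query for an account's recent transactions."""
    params = urlencode({"account.id": account_id, "limit": limit, "order": order})
    return f"{MIRROR_BASE}/transactions?{params}"

print(transactions_url("0.0.1001"))
# https://mirror.example.internal/api/v1/transactions?account.id=0.0.1001&limit=25&order=desc
```

The gRPC (HCS subscription) and JSON-RPC relay (EVM) endpoints are reached the same way, via private hostnames provided during onboarding.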
For more information on mirror nodes, please refer to the Hedera documentation.
HashSphere Console
The HashSphere Console provides visibility into the health and performance of the managed platform. It is intended for customer DevOps and support teams.
For setup and usage, see HashSphere Console.
Network explorer
This service provides a private block explorer for the HashSphere ledger, offering the same functionality as the public HashScan for the Hedera mainnet and testnet.
Customer application
The customer is responsible for managing their own cloud account/project and the business application that runs within their Virtual Private Cloud (VPC). The HashSphere team facilitates secure interaction by making the HashSphere APIs available within the customer's VPC.
Connectivity and responsibilities
Secure Connectivity: The connection between the customer's application and the HashSphere is established using private, secure endpoints, such as AWS PrivateLink or GCP Private Service Connect. The HashSphere team ensures that only the necessary IP addresses and ports for API access are opened, with no other access permitted.
Boundary of Responsibility: The API endpoints define a clear boundary of responsibility. The HashSphere team is responsible for the operational integrity and availability of the HashSphere and its APIs. The customer team is responsible for the development, deployment, and maintenance of the business application that consumes these APIs.
Access Management: The HashSphere APIs do not have built-in access management. Customers are encouraged to implement a proxy or API gateway in front of the endpoints to enforce their own access control policies.
Key Management: The customer is solely responsible for managing the private key material required to control their ledger accounts. This can be achieved through a third-party custody provider or by integrating a Hardware Security Module (HSM) into their application. The HashSphere team will securely transfer the necessary network account keys during the initial setup.
Ledger Interaction: While the HashSphere team maintains the private ledger infrastructure, the customer is responsible for all on-ledger activities, such as deploying smart contracts, managing digital assets, and submitting transactions.
Multi-VPC Connectivity: For use cases requiring multiple, distributed application instances, the HashSphere team can make the ledger endpoints available in multiple customer VPCs.
Supported cloud platforms
HashSphere can be deployed on both Amazon Web Services (AWS) and Google Cloud Platform (GCP). The customer's business application must be hosted on the same cloud platform as their HashSphere instance.
If you require a different cloud platform, contact the team via the Hashgraph contact page.