Scalaris | An upgraded version of Tendermint

Narwhal: Achieving Scalability and Throughput

The key insight from the Narwhal & Tusk paper is that a scalable, high-throughput consensus system should separate reliable data dissemination from the ordering mechanism.

Narwhal employs a highly scalable and efficient Directed Acyclic Graph (DAG) structure, which, in conjunction with Tusk or Bullshark, processes over 100,000 transactions per second (tps) in a geo-replicated environment while maintaining latency below three seconds. In contrast, HotStuff handles fewer than 2,000 tps under similar conditions.

Narwhal's symmetric data dissemination among validators ensures fault resilience, as the system's performance is only impacted when faulty validators fail to disseminate data.

Each validator in Narwhal consists of a primary node and multiple workers. Workers continuously exchange data batches and forward batch digests to their primaries. The primaries construct a round-based DAG from these digests. Crucially, data dissemination by workers occurs at network speed, independent of the pace at which the primaries construct the DAG. Validators use a broadcast protocol that ensures reliable communication with a linear message count on the critical path.
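
As a rough illustration of this division of labor, the sketch below models the pieces a primary works with: worker batches, their digests, the per-round vertex metadata, and the resulting certificate. All type and field names are illustrative assumptions, not Scalaris' actual Rust types.

```rust
// Hypothetical types sketching the worker/primary split described above.
use std::collections::BTreeSet;

type BatchDigest = [u8; 32];  // hash of a worker batch
type HeaderDigest = [u8; 32]; // hash of a primary's vertex metadata
type ValidatorId = u16;

/// Raw transactions, exchanged worker-to-worker at network speed.
struct Batch {
    transactions: Vec<Vec<u8>>,
}

/// The vertex metadata a primary builds for one round of the DAG.
struct Header {
    author: ValidatorId,
    round: u64,
    /// Digests of batches this validator's workers have already disseminated.
    batch_digests: Vec<BatchDigest>,
    /// References to at least n - f certified vertices of the previous round.
    parents: BTreeSet<HeaderDigest>,
}

/// A header plus n - f validator signatures over it (a quorum certificate).
struct Certificate {
    header: Header,
    signatures: Vec<(ValidatorId, Vec<u8>)>,
}
```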

One of the key advantages of DAG-based consensus is that it introduces zero additional communication overhead. Each validator independently examines its local view of the DAG and fully orders all vertices without sending any extra messages. The structure of the DAG itself is interpreted as the consensus protocol, where a vertex represents a proposal and an edge signifies a vote.
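
To make the "edge as vote" reading concrete, the following sketch (again with hypothetical types) counts, purely from a validator's local DAG view, how many round r+1 vertices reference a given round-r vertex. No messages beyond the DAG itself are exchanged.

```rust
use std::collections::HashMap;

type VertexId = [u8; 32];

/// A vertex in the local DAG view: its round and the previous-round
/// vertices it references.
struct Vertex {
    round: u64,
    parents: Vec<VertexId>,
}

/// Number of "votes" for `target`: round r+1 vertices that reference it,
/// computed locally with no additional communication.
fn votes_for(dag: &HashMap<VertexId, Vertex>, target: &VertexId, round: u64) -> usize {
    dag.values()
        .filter(|v| v.round == round + 1 && v.parents.contains(target))
        .count()
}
```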

A significant challenge in this context arises from the asynchronous nature of the network, meaning that different validators may see slightly different DAGs at any given time. The primary difficulty lies in ensuring that all validators ultimately agree on the same total order of transactions.

During each round (refer to Figure 2 in the Narwhal & Tusk paper for a visual guide; a code sketch after this list condenses the steps):

  1. Every validator constructs and dispatches a message to all other validators containing the metadata for its DAG vertex, which includes batch digests and n-f references to vertices from the previous round.

  2. Upon receiving this message, a validator responds with a signature if:

  • Its workers have stored the data corresponding to the digests in the vertex (ensuring data availability).

  • It has not yet responded to this validator in the current round (ensuring non-equivocation).

  3. The sender aggregates these signatures to form a quorum certificate from n-f responses and includes this certificate in its vertex for the round.

  4. A validator progresses to the next round once it has received n-f vertices accompanied by valid certificates.
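
The sketch below condenses these four steps into the two checks a validator applies: when to sign another validator's vertex, and when a quorum of n-f has been reached. The constants and function names are assumptions for illustration, not the actual implementation.

```rust
use std::collections::HashSet;

const N: usize = 4; // total validators (n), tolerating F Byzantine faults
const F: usize = 1; // so a quorum is n - f = 3

/// Step 2: sign a header only if its batches are stored locally (data
/// availability) and this author has not yet been signed for in this
/// round (non-equivocation).
fn should_sign(batches_stored: bool, signed_authors: &mut HashSet<u16>, author: u16) -> bool {
    batches_stored && signed_authors.insert(author)
}

/// Steps 3 and 4: a header with n - f signatures becomes a certificate,
/// and a validator enters the next round once it holds n - f certificates.
fn quorum_reached(count: usize) -> bool {
    count >= N - F
}
```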

Quorum certificates serve three main purposes:

  1. Non-equivocation: Validators sign only one vertex per round, ensuring consistency in the DAG. Byzantine validators cannot generate conflicting quorum certificates for the same round.

  2. Data availability: Validators sign only if they have stored the corresponding data, guaranteeing that data can be retrieved later by any validator.

  3. History availability: Certificates of blocks from the previous round ensure the availability of the entire causal history. A new validator can join by learning the certificates from the prior round, simplifying data management.
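
A short sketch of why the first guarantee holds, assuming the usual BFT bound n ≥ 3f + 1 (not stated on this page): two conflicting certificates for the same round would each need n − f signers, and any two such quorums intersect in at least f + 1 validators, so at least one honest validator would have had to sign twice, which the signing rule forbids. The check itself only requires n − f distinct signatures:

```rust
use std::collections::HashSet;

/// A certificate is valid only if it carries at least n - f signatures from
/// distinct validators; duplicates from an equivocating signer don't count.
fn certificate_valid(signers: &[u16], n: usize, f: usize) -> bool {
    let distinct: HashSet<&u16> = signers.iter().collect();
    distinct.len() >= n - f
}
```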

By decoupling data dissemination from metadata ordering, Narwhal ensures network-speed throughput, regardless of the DAG construction or consensus latency. Data dissemination happens at network speed, while the DAG references all disseminated data. Once a DAG vertex is committed, its entire causal history (including most disseminated data) is ordered.
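
A minimal sketch of that last step, reusing the hypothetical vertex type from earlier: collect the committed vertex's causal history by walking parent references, then emit it in a deterministic order (by round, ties broken by vertex id) so every validator derives the same sequence from the same history.

```rust
use std::collections::{HashMap, HashSet};

type VertexId = [u8; 32];

struct Vertex {
    round: u64,
    parents: Vec<VertexId>,
}

/// Order the causal history of `committed` (itself included): walk parent
/// references, then sort deterministically so all validators agree.
fn order_causal_history(dag: &HashMap<VertexId, Vertex>, committed: VertexId) -> Vec<VertexId> {
    let mut seen: HashSet<VertexId> = HashSet::new();
    let mut stack = vec![committed];
    while let Some(id) = stack.pop() {
        if seen.insert(id) {
            if let Some(v) = dag.get(&id) {
                stack.extend(v.parents.iter().copied());
            }
        }
    }
    let mut ordered: Vec<VertexId> = seen.into_iter().collect();
    ordered.sort_by_key(|id| (dag.get(id).map_or(0, |v| v.round), *id));
    ordered
}
```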
