Title: Communication-Efficient Byzantine Agreement without Erasures.
Abstract: Byzantine agreement (BA) is one of the most fundamental building blocks in distributed systems. A long-standing open problem is whether BA protocols with subquadratic communication complexity are possible in the presence of an adaptive attacker. A breakthrough result by King and Saia (JACM'11) showed how to overcome the quadratic barrier, assuming \emph{erasures}---that is, honest players can untraceably erase memory contents---achieving a communication complexity of $n^{1.5}$ while tolerating a constant fraction of corrupt nodes. Another very recent result by Micali (ITCS'17) also considers the erasure model and shows how to achieve close-to-optimal communication complexity $O(n \cdot \mathrm{polylog}(n))$ while handling $f<(1-\epsilon)n/3$ corrupted players, but this time using cryptographic assumptions (the existence of unique signatures and a random oracle) and relying on a public key infrastructure (PKI). Both of these results are in the synchronous model of computation.
In this work, we present the first subquadratic protocols in the presence of a fully adaptive attacker \emph{without erasures}. We rely on the existence of an honestly generated PKI. We have: (1) an expected $O(1)$-round BA protocol with $O(n \cdot \mathrm{polylog}(n))$ communication complexity in the \emph{synchronous} model of communication, tolerating $f<(1-\epsilon) n/2$ adaptive corruptions; (2) a BA protocol with $O(n \cdot \mathrm{polylog}(n))$ communication complexity in the \emph{partially-synchronous} model of communication, tolerating $f<(1-\epsilon) n/3$ adaptive corruptions.
In the partially-synchronous model, the corruption bound $f<n/3$ is known to be tight, whereas in the synchronous model, the $f<n/2$ bound is the best known among expected $O(1)$-round protocols, even allowing large communication complexity.
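To give a rough sense of the gap between quadratic and the communication complexities quoted above, the following small sketch compares $n^2$, $n^{1.5}$ (King-Saia), and $n \cdot \mathrm{polylog}(n)$ for a few network sizes. The polylog exponent of 3 is an arbitrary choice for illustration only; the abstract states $O(n \cdot \mathrm{polylog}(n))$ without fixing the hidden constants or exponent.

```python
import math

# Illustrative comparison of communication-complexity regimes mentioned
# in the abstract. Constants and the polylog exponent (c = 3) are
# assumptions for illustration, not values taken from the paper.

def quadratic(n: int) -> float:
    return float(n) ** 2                 # classic n^2 barrier

def king_saia(n: int) -> float:
    return float(n) ** 1.5               # ~n^{1.5} (King-Saia, JACM'11)

def near_linear(n: int, c: int = 3) -> float:
    return n * math.log2(n) ** c         # ~n * polylog(n)

for n in (10_000, 100_000, 1_000_000):
    print(f"n={n:>9,}: n^2={quadratic(n):.2e}  "
          f"n^1.5={king_saia(n):.2e}  n*log^3 n={near_linear(n):.2e}")
```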
Publication Year: 2018
Publication Date: 2018-05-09
Language: en
Type: article
Cited By Count: 3