Challenges & Tradeoffs
To weigh the design tradeoffs between centralized and decentralized trading venues, it's worth understanding their strengths and, more importantly, their weaknesses.
Over the past decade, there's been significant innovation in decentralized trading venues that rely on blockchain networks to facilitate trading activity across a range of on-chain assets and derivatives. Exchanges built on this infrastructure hold a great deal of promise in making a wide range of assets more accessible, but they face significant performance bottlenecks. These gaps have made it impossible to bring trading activity from traditional sectors like commodities, equities and other real-world assets into a decentralized trading environment.
Latency
On-chain order books and AMMs suffer from substantially higher latency than centralized exchanges. This is ultimately because they are subject to the finality times of their underlying L1s - even the most performant chains with custom consensus engines and storage solutions massively underperform traditional venues.
| Exchange | Infrastructure | Type | Round trip latency |
| --- | --- | --- | --- |
| Helix | Injective | On-chain order book | 0.6s - 1s |
| DYDX | App-specific | On-chain order book | 1s - 2s |
| Astroport | Sei | On-chain order book | 0.41s - 1s |
| Jupiter | Solana | On-chain AMM | 1s - 2s |
| Uniswap | Ethereum | On-chain AMM | 5s - 10m |
| CME | App-specific | Centralized order book | 500µs - 10ms |
| NYSE | App-specific | Centralized order book | 100µs - 10ms |
| ICE | App-specific | Centralized order book | <100ms |
From this vantage point, it's easy to see why decentralized exchanges struggle to bring institutional traders into their ecosystems. Round trip latency for even the fastest decentralized protocols is at least an order of magnitude worse than what a traditional venue can accomplish.
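As a rough illustration of why finality dominates, the sketch below estimates a decentralized venue's order round trip from block time and confirmation depth; the block times, confirmation counts and the 50ms network overhead are assumptions for illustration, not measurements.

```python
# Rough model of order round-trip latency on a decentralized venue:
# the chain's block time and finality depth dominate, not network transit.
# All figures here are illustrative assumptions, not measurements.

def dex_round_trip_seconds(block_time_s: float,
                           confirmations: int,
                           network_rtt_s: float = 0.05) -> float:
    """Submit -> inclusion in a block -> enough confirmations for finality."""
    inclusion_wait = block_time_s / 2          # expected wait for the next block
    finality_wait = block_time_s * confirmations
    return network_rtt_s + inclusion_wait + finality_wait

# A 400ms-block chain with single-slot finality vs. a 12s-block chain
# that needs a couple of confirmations.
print(dex_round_trip_seconds(0.4, 1))    # ~0.65s
print(dex_round_trip_seconds(12.0, 2))   # ~30s
```

Even under generous assumptions, the chain's consensus cadence sets a floor that no amount of network engineering at the exchange layer can remove.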
Interfaces
Sophisticated traders at both the retail and institutional levels rarely route orders through native exchange user interfaces. Instead, they use what are known as Order Execution and Management Systems (OEMS) to manage the lifecycle of their trades. OEMS software consolidates trading activity; optimizes order routing, execution, margins, spreads and positions; and provides a host of sophisticated reporting features. Integration with this type of software typically requires a standard API interface called the Financial Information Exchange (FIX) protocol.
Traders who cannot connect through these standardized financial interfaces are unlikely to integrate with a venue at all.
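For context, FIX is a simple tag=value protocol delimited by SOH bytes. The sketch below assembles a minimal FIX 4.4 NewOrderSingle by hand just to show the shape of the interface; the session IDs and symbol are placeholders, and a real integration would go through a FIX engine (session management, timestamps, resend logic) rather than raw strings.

```python
# Minimal sketch of a FIX 4.4 NewOrderSingle (35=D) message.
# Sender/target IDs and the symbol are placeholders; required timestamp fields
# (52, 60) are omitted for brevity. Production systems use a FIX engine.
SOH = "\x01"

def fix_checksum(msg: str) -> str:
    # FIX checksum: sum of all bytes up to the SOH before tag 10, mod 256.
    return f"{sum(msg.encode('ascii')) % 256:03d}"

def new_order_single(sender: str, target: str, seq: int,
                     symbol: str, side: str, qty: int, price: float) -> str:
    body_fields = [
        ("35", "D"),             # MsgType = NewOrderSingle
        ("49", sender),          # SenderCompID
        ("56", target),          # TargetCompID
        ("34", str(seq)),        # MsgSeqNum
        ("11", f"ord-{seq}"),    # ClOrdID
        ("55", symbol),          # Symbol
        ("54", side),            # Side: 1 = Buy, 2 = Sell
        ("38", str(qty)),        # OrderQty
        ("44", f"{price:.2f}"),  # Price
        ("40", "2"),             # OrdType = Limit
    ]
    body = SOH.join(f"{t}={v}" for t, v in body_fields) + SOH
    header = f"8=FIX.4.4{SOH}9={len(body)}{SOH}"
    msg = header + body
    return msg + f"10={fix_checksum(msg)}{SOH}"

print(new_order_single("BUYSIDE", "VENUE", 1, "ESZ4", "1", 10, 5000.25).replace(SOH, "|"))
```

A venue that cannot speak this dialect is effectively invisible to the OEMS workflows institutional desks already run.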
Scale
Scaling existing L1 network designs to accommodate the throughput of a centralized venue runs up against a number of blockers:

- Memory growth is unbounded and state-access patterns are inefficient, so performance degrades without bound over time.
- Well-known optimizations like optimistic execution and transaction pipelining still carry expensive I/O costs (see the sketch after this list).
- L2 networks are loosely coupled to the underlying base network and suffer from data fragmentation, memory growth and slow communication times.
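To make the optimistic-execution point concrete, here is a minimal sketch that speculatively runs a batch of transactions against one snapshot, tracks read/write sets, and re-executes any transaction whose reads conflict with an earlier writer. The flat key/value state and transaction shape are simplifying assumptions; a real system also pays the I/O cost of persisting and re-reading state, which this in-memory model hides.

```python
# Simplified sketch of optimistic (speculative) execution with
# read/write-set conflict detection over a flat key/value state.
from dataclasses import dataclass, field

@dataclass
class Result:
    reads: set = field(default_factory=set)
    writes: dict = field(default_factory=dict)

def execute(tx, state) -> Result:
    """tx is a callable that accesses state only via the tracked read/write."""
    res = Result()
    def read(k):
        res.reads.add(k)
        return state.get(k, 0)
    def write(k, v):
        res.writes[k] = v
    tx(read, write)
    return res

def optimistic_batch(txs, state):
    # 1) Speculatively execute every tx against the same snapshot.
    results = [execute(tx, state) for tx in txs]
    committed_writes = {}
    for tx, res in zip(txs, results):
        # 2) Re-execute any tx that read a key an earlier tx wrote.
        if res.reads & committed_writes.keys():
            res = execute(tx, {**state, **committed_writes})
        committed_writes.update(res.writes)
    state.update(committed_writes)

# Two transfers touching the same balance: the second is re-executed serially.
state = {"alice": 100, "bob": 0, "carol": 0}
optimistic_batch([
    lambda r, w: w("bob", r("bob") + 10) or w("alice", r("alice") - 10),
    lambda r, w: w("carol", r("carol") + 5) or w("alice", r("alice") - 5),
], state)
print(state)  # {'alice': 85, 'bob': 10, 'carol': 5}
```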
Conversely, centralized exchanges still hold the heavyweight titles on intrinsic metrics like latency and throughput. Venues like the CME, NASDAQ and the NYSE process many times more transactions than their decentralized counterparts, servicing 100+ billion transactions per day.
Market Access
Exchanges do not operate around the clock and are often difficult to access both domestically and abroad. This restricts flexibility for global participants in different time zones and can impair their ability to execute timely trades. Uneven market data distribution frequently gives an edge to participants with lower latency or better geographic placement.
Clearing and Settlement
Clearing and settlement inefficiencies are prominent issues in centralized trading venues, where T+1 and T+2 are the standard. Clearing firms have to reconcile with banking rails that do not provide instant transfer - discrepancies in data across systems and regulatory compliance add further complexity, sometimes extending settlement cycles to several days, especially in cross-border trades. These inefficiencies impact liquidity, increase the potential for errors, and reduce market participants' ability to manage risk effectively in fast-paced trading environments.
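For a sense of what those settlement cycles mean in practice, the sketch below rolls a trade date forward by N business days; it skips weekends only, whereas real settlement calendars also skip exchange and banking holidays.

```python
# What "T+N" settlement means in calendar terms: the trade settles N business
# days after the trade date. This sketch skips weekends only; real settlement
# calendars also account for exchange and banking holidays.
from datetime import date, timedelta

def settlement_date(trade_date: date, days: int) -> date:
    d = trade_date
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:       # Monday=0 ... Friday=4
            days -= 1
    return d

# A trade executed on a Friday under T+2 does not settle until Tuesday.
print(settlement_date(date(2024, 3, 1), 2))  # 2024-03-05
```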
Risk Management
Trading on margin introduces heightened risk due to its potential for magnified gains and losses. Margin calls become a crucial concern, as rapid market movements can trigger these calls, demanding additional funds from traders to cover potential losses or maintain required margins. The speed at which margin calls are executed becomes a critical factor, as delays can amplify losses or lead to forced liquidation of positions, adversely impacting traders’ financial stability. Additionally, the risk of margin defaults arises when traders fail to meet margin requirements, potentially disrupting market stability and necessitating swift intervention by exchanges or clearinghouses to mitigate systemic risks.
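A minimal sketch of the maintenance-margin check that drives these calls is below; the 10% maintenance ratio and position sizes are illustrative assumptions, not any venue's actual parameters.

```python
# Sketch of a maintenance-margin check on a single leveraged position.
# The 10% maintenance requirement and the figures below are illustrative.
def check_margin(equity: float, position_notional: float,
                 maintenance_ratio: float = 0.10) -> str:
    required = position_notional * maintenance_ratio
    if equity < required:
        # In practice the venue issues a margin call and, if the trader does
        # not top up in time, force-liquidates enough of the position.
        shortfall = required - equity
        return f"margin call: post at least {shortfall:.2f} or face liquidation"
    return "margin ok"

# A 10x-leveraged position that has moved against the trader.
print(check_margin(equity=800.0, position_notional=10_000.0))   # margin call
print(check_margin(equity=1_500.0, position_notional=10_000.0)) # margin ok
```

How quickly a venue can run this check and act on it is precisely the speed concern raised above.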
Centralization
Market centralization poses the most significant challenge, with a few dominant exchanges holding entrenched, monopoly-like positions over the long term. This has led to lackluster innovation in trading instruments and to fee structures that harm consumers. Exchanges can halt trading operations at their discretion, causing disruptions and hindering access during crucial market periods, with no viable alternative markets to offload risk.
When considering how to design infrastructure that meets our Mission requirements, there are critical tradeoffs and constraints worth examining that affect our ability to deliver on them.
Practical Limitations of Parallel Execution
There's a great deal of work going on in the blockchain community around parallelization, which involves leveraging multiple processors to speed up computation. Dividing tasks into smaller subtasks, processing them in parallel, and then combining the results provides some system-level gain. While the strategy holds promise, there are practical limitations when it comes to the design of high-throughput systems.
Amdahl's Law Resistance: Amdahl's Law states that the theoretical maximum speedup of a program using multiple processors is limited by the fraction of the program that cannot be parallelized. The larger the inherently sequential portion of the algorithm, the less effective parallelization becomes (see the sketch following these points).
Communication Overhead: Parallel computing introduces the challenge of managing communication between processors. As the number of processors increases, the coordination and data exchange overhead grows. Distributed systems suffer from network latency and bandwidth limitations, especially when dealing with large datasets or complex inter-processor communication patterns. This overhead can significantly impact the overall performance and efficiency of parallelization.
Algorithmic Complexity: Not all algorithms benefit equally from parallelization. Some algorithms exhibit inherently higher complexity or have limited parallelism potential. Monad's approach might struggle with algorithms that have complex interdependencies or require frequent global synchronization, making it challenging to achieve linear speedup with more processors. For instance, Dijkstra's shortest path algorithm does not parallelize well, as only certain of its operations can run concurrently.
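To quantify the first two points, the sketch below computes Amdahl's ideal speedup, 1 / ((1 - p) + p/n), alongside a variant that adds a toy per-processor coordination cost; the overhead constant is an illustrative assumption, not a measured value.

```python
# Amdahl's Law: with a fraction p of the work parallelizable, the maximum
# speedup on n processors is 1 / ((1 - p) + p / n). The second function adds
# a toy coordination cost that grows with the processor count.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

def speedup_with_overhead(p: float, n: int, c: float = 0.01) -> float:
    # c * (n - 1) models communication/coordination cost per extra processor.
    return 1.0 / ((1.0 - p) + p / n + c * (n - 1))

for n in (2, 8, 32, 128):
    print(n, round(amdahl_speedup(0.9, n), 2), round(speedup_with_overhead(0.9, n), 2))
# Even with 90% of the work parallelizable, the ideal speedup caps near 10x,
# and the overhead-adjusted curve peaks and then declines as processors are added.
```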
Layer 2 and Sequencers
L2 solutions aim to enhance blockchain scalability by handling transactions off the main chain (L1). However, this often comes at the cost of decentralization. Many L2 protocols rely on a network of side chains, state channels, or other off-chain mechanisms, which can introduce centralization risks. The more complex the L2 architecture, the harder it becomes to maintain the core principles of blockchain decentralization.
L2s typically assume the security of the underlying L1 blockchain. They offload the burden of consensus and finality to the main chain, which can create a single point of failure. If the L1 is compromised or experiences delays, the security and integrity of L2 transactions may be at risk. This tradeoff between security and scalability is a delicate balance, and centralized sequencers might offer a temporary solution.
Centralized sequencers act as intermediaries, ordering and sequencing transactions before they are batched and submitted to the L1. This process can significantly enhance transaction throughput and reduce latency. By removing the need for immediate L1 interaction, centralized sequencers provide a performance boost, making blockchain interactions feel more akin to traditional centralized systems.
Users and algorithms in HFT environments crave immediate confirmation of their transactions. Centralized sequencers offer a sense of temporary finality, allowing users to consider their transactions finalized before they are fully settled on L1. This illusion of finality can improve the user experience, especially in time-sensitive applications, but it comes with trust assumptions and some degree of centralization.
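The sketch below is a toy sequencer illustrating this flow: it assigns a total order and returns a soft confirmation immediately, then posts batches to the L1 later. The batch size and the L1-posting stub are assumptions; the gap between the soft confirmation and the eventual on-chain post is exactly the trust assumption described above.

```python
# Toy centralized sequencer: assigns each transaction a sequence number and
# returns an immediate "soft" confirmation, then posts batches to the L1.
# Batch size and the L1-posting stub are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SoftConfirmation:
    seq: int
    tx: str
    finalized_on_l1: bool = False

class Sequencer:
    def __init__(self, batch_size: int = 3):
        self.batch_size = batch_size
        self.pending: list[SoftConfirmation] = []
        self.next_seq = 0

    def submit(self, tx: str) -> SoftConfirmation:
        conf = SoftConfirmation(self.next_seq, tx)
        self.next_seq += 1
        self.pending.append(conf)
        if len(self.pending) >= self.batch_size:
            self._post_batch_to_l1()
        return conf  # returned immediately: "soft" finality

    def _post_batch_to_l1(self):
        batch, self.pending = self.pending, []
        # Stub for compressing the batch and submitting it to the L1.
        for conf in batch:
            conf.finalized_on_l1 = True

seq = Sequencer()
confs = [seq.submit(tx) for tx in ("swap A->B", "cancel #17", "limit buy 10")]
print([(c.seq, c.finalized_on_l1) for c in confs])  # all posted once the batch fills
```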
Speed versus Decentralization
Every blockchain is engaged in a perpetual battle between the pursuit of speed and the preservation of decentralization. This dichotomy presents a complex tradeoff, where each step towards faster transaction processing might sacrifice the core principles of blockchain's distributed nature. Centralized solutions, such as sequencers or federated networks, offer impressive throughput and low latency by relying on trusted entities, but they introduce single points of failure and potential security risks. These systems prioritize performance, providing a seamless user experience akin to traditional centralized platforms.
On the other hand, decentralization ensures the blockchain's resilience, security, and censorship resistance. Truly decentralized networks, employing robust consensus mechanisms, protect against malicious attacks and control by any single entity. However, this security comes at the cost of speed, as reaching global consensus and maintaining network integrity takes time.
This is apparent when looking at how a blockchain like JPMorgan's Quorum scales as the network grows. Parlor tricks like colocating validators near one another come at the cost of network decentralization and, ultimately, resilience.