https://www.usv.com/writing/2016/08/fat-protocols/
Joel Monegro, Aug 2016
Here’s one way to think about the differences between the Internet and the Blockchain. The previous generation of shared protocols (TCP/IP, HTTP, SMTP, etc.) produced immeasurable amounts of value, but most of it got captured and re-aggregated on top at the applications layer, largely in the form of data (think Google, Facebook and so on). The Internet stack, in terms of how value is distributed, is composed of “thin” protocols and “fat” applications. As the market developed, we learned that investing in applications produced high returns whereas investing directly in protocol technologies generally produced low returns.
This relationship between protocols and applications is reversed in the blockchain application stack. Value concentrates at the shared protocol layer and only a fraction of that value is distributed along the applications layer. It’s a stack with “fat” protocols and “thin” applications.
We see this very clearly in the two dominant blockchain networks, Bitcoin and Ethereum. The Bitcoin network has a $10B market cap yet the largest companies built on top are worth a few hundred million at best, and most are probably overvalued by “business fundamentals” standards. Similarly, Ethereum has a $1B market cap even before the emergence of a real breakout application on top and only a year after its public release.
There are two things about most blockchain-based protocols that cause this to happen: the first is the shared data layer, and the second is the introduction of a cryptographic “access” token with some speculative value.
But an open network and a shared data layer alone are not enough of an incentive to promote adoption. The second component, the protocol token[1] which is used to access the service provided by the network (transactions in the case of Bitcoin, computing power in the case of Ethereum, file storage in the case of Sia and Storj, and so on), fills that gap.
Albert’s post will help you understand how tokens incentivize protocol development. Here, I’m going to focus on how tokens incentivize protocol adoption and how they affect value distribution via what I will call the token feedback loop.
When a token appreciates in value, it draws the attention of early speculators, developers and entrepreneurs. They become stakeholders in the protocol itself and are financially invested in its success. Then some of these early adopters, perhaps financed in part by the profits of getting in at the start, build products and services around the protocol, recognizing that its success would further increase the value of their tokens. Then some of these become successful and bring in new users to the network and perhaps VCs and other kinds of investors. This further increases the value of the tokens, which draws more attention from more entrepreneurs, which leads to more applications, and so on.
There are two things I want to point out about this feedback loop. First is how much of the initial growth is driven by speculation. Because most tokens are programmed to be scarce, as interest in the protocol grows so does the price per token and thus the market cap of the network. Sometimes interest grows a lot faster than the supply of tokens and it leads to bubble-style appreciation.
With the exception of deliberately fraudulent schemes, this is a good thing. Speculation is often the engine of technological adoption [2]. Both aspects of irrational speculation — the boom and the bust — can be very beneficial to technological innovation. The boom attracts financial capital through early profits, some of which are reinvested in innovation (how many of Ethereum’s investors were re-investing their Bitcoin profits, or DAO investors their Ethereum profits?), and the bust can actually support the long-term adoption of the new technology as prices depress and out-of-the-money stakeholders look to be made whole by promoting and creating value around it (just look at how many of today’s Bitcoin companies were started by early adopters after the crash of 2013).
The second aspect worth pointing out is what happens towards the end of the loop. When applications begin to emerge and show early signs of success (whether measured by increased usage or by the attention (or capital) paid by financial investors), two things happen in the market for a protocol’s token: new users are drawn to the protocol, increasing demand for tokens (since you need them to access the service — see Albert’s analogy of tickets in a fair), and existing investors hold onto their tokens anticipating future price increases, further constraining supply. The combination forces up the price (assuming sufficient scarcity in new token creation), the newly-increased market cap of the protocol attracts new entrepreneurs and new investors, and the loop repeats itself.
What’s significant about this dynamic is the effect it has on how value is distributed along the stack: the market cap of the protocol always grows faster than the combined value of the applications built on top, since the success of the application layer drives further speculation at the protocol layer. And again, increasing value at the protocol layer attracts and incentivises competition at the application layer. Together with a shared data layer, which dramatically lowers the barriers to entry, the end result is a vibrant and competitive ecosystem of applications and the bulk of the value distributed to a widespread pool of shareholders. This is how tokenized protocols become “fat” and their applications “thin”.
This is a big shift. The combination of shared open data with an incentive system that prevents “winner-take-all” markets changes the game at the application layer and creates an entirely new category of companies with fundamentally different business models at the protocol layer. Many of the established rules about building businesses and investing in innovation don’t apply to this new model and today we probably have more questions than answers. But we’re quickly learning the ins and outs of this market through our blockchain portfolio and in typical USV fashion we’re going to share that knowledge as we go along.
Joel Monegro, Jan 2020
I predicted this architecture would dominate new online services within a decade as crypto took over the web. So it’s been fun to review how things have developed five years into that idea. The most obvious flaw was thinking we’d build everything on top of Bitcoin (Ethereum hadn’t launched). Now we have a multitude of blockchains to choose from, which is much better. We also refer to “overlay networks” as layer 2. And today, the “Web3 Application Stack” would be a better name. But the overall framework appears to stand.
A Cryptoservices Architecture
A cryptoservices architecture is great for startups. Entrepreneurs can launch new applications quickly and cheaply by outsourcing a lot of the functionality to various networks. And every app is on equal footing when it comes to protocol costs and resources (unlike web infrastructure like AWS where the smaller you are, the more expensive it is). The companies above stand out because they brought fully-featured products to the market before their first real rounds of funding. They’re a first look at the level of capital efficiency that’s possible for “thin” applications using this new model, compared to the increasing amount of capital web companies need to raise to compete with incumbents.
As a crypto user, you bring your own data. Nobody has monopoly control. When you log into a crypto app by connecting your wallet, you’re sharing the “keys” it needs to find your information across the relevant networks. You can share them with any app, so your data comes with you as you move from interface to interface. And you keep control of the “private key” (basically a password) needed to operate on it, like signing messages or authorizing transactions. So you have effective custody over your data and nobody can manipulate it without consent (unless you delegated your keys to a custodian).
For example, buying $10 million worth of ETH at ~$15 billion network value will get you just about 0.06% of the network (based on current supply), and to provide a 5x return (to $50 million) Ethereum needs to add $60 billion dollars to its market cap. Meanwhile, a $1 million seed investment in a successful application business for 10% of the company will yield the same $50 million with “just” another $490 million in value – and probably for less than $10 million when you account for follow-ons.
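To make the arithmetic concrete, here is a small sketch that reproduces the back-of-the-envelope numbers above (the figures are the illustrative ones from the paragraph, not live market data):

```python
# Back-of-the-envelope comparison of protocol vs. application returns,
# using the illustrative numbers from the text above.

def network_stake(investment, network_value):
    """Fraction of a network acquired by buying tokens at a given network value."""
    return investment / network_value

# Buying $10M of ETH at a ~$15B network value:
stake = network_stake(10e6, 15e9)                 # roughly 0.067% of the network
target = 5 * 10e6                                 # a 5x return = a $50M position
required_network_value = target / stake           # network value needed for that 5x
added_network_value = required_network_value - 15e9

# A $1M seed investment for 10% of an application business:
ownership = 0.10
company_value_needed = target / ownership         # a $500M company yields the same $50M
added_company_value = company_value_needed - 10e6

print(f"Protocol stake: {stake:.4%}")
print(f"Network value needed for 5x: ${required_network_value/1e9:.0f}B "
      f"(+${added_network_value/1e9:.0f}B of growth)")
print(f"Company value needed for the same $50M: ${company_value_needed/1e6:.0f}M "
      f"(+${added_company_value/1e6:.0f}M of growth)")
```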
But networks and companies process value in very different ways. The forces that take a public network from $20 to $90 billion in value are very different from those which take a business from $10 to $500 million. Token prices are chaotically determined every time there is a trade in the public markets, where networks gain and lose more value per investment dollar faster than private companies can. The complexity adds a lot of leverage to every investment dollar flowing in and out. Meanwhile, business value may be a well-known function, but private, early-stage investments can be riskier in unpredictable ways.
Finally, the idea of user-staking is about leveraging tokens to distribute value and upside to users. In general, it works by having users stake an amount of the application’s own token (not a protocol’s) to unlock benefits like discounts or rewards – but there are a lot of variations. At first glance, they look similar to the loyalty/reward systems used to retain customers in other hyper-commoditized markets like airlines and credit cards. Except those programs provide no upside at all. The innovation is in designing token models that allow users to profit from the application’s growth. It goes beyond marginal benefits like discounts by including users in the upside of the business.
I’m most fascinated by the user-staking models because they represent a genuine business model innovation. The examples above are built more like traditional web applications. They are more centralized and custodial than thinner apps like Zerion. But what I love about their staking models is how they change the user-service relationship. Web users are locked-in by force through the centralization of data. Crypto applications, even if they’re built more traditionally, don’t have that same ability to lock you in. But user-staking creates a kind of “opt-in” economic lock-in that benefits the user by turning them into stakeholders in the success of the service. It creates defensibility through user-ownership instead of user lock-in. This presents a universe of fascinating consequences, to be explored in future work.
Though the post has gathered some dust since, the main point remains: by replicating and storing user data across an open and decentralized network rather than individual applications controlling access to disparate silos of information, we reduce the barriers to entry for new players and create a more vibrant and competitive ecosystem of products and services on top. As a concrete example, consider how easy it is to switch between any of the dozens of cryptocurrency exchanges out there, in large part because they all have equal and free access to the underlying data: blockchain transactions. Here you have several competing, non-cooperating services which are interoperable with each other by virtue of building their services on top of the same open protocols. This forces the market to find ways to reduce costs and build better products in order to succeed.
Colleagues at USV wrote about this last week after a number of discussions about investing in blockchain-based networks, and about protocol tokens as a way of funding research and development (via crowdsales), creating value for shareholders (via token value appreciation), or both.
[1] Also known as App Coins.
[2] There is a thorough and entertaining history of financial speculation and its place in society (you’ll be in awe at how similar cryptocurrency speculation today is to prior bursts of financial exuberance), along with work describing the important role of bubbles in the development of new technologies by attracting financial capital to research and development.
When I started thinking about the blockchain application stack back in 2014, I described it as a hierarchical “stack” of functionality. That first iteration described a blockchain at the base with “overlay networks” that provide specific decentralized services on top, forming “shared” protocol and data layers. Above them, independent applications would consume those protocols and redistribute their services to users:
(Diagram: the blockchain application stack, 2014)
Two years into it, this model took me to the idea of “fat protocols”. I suggested most of the market value in crypto would be captured at the “protocol layer” whereas on the web it’s captured at the “application layer”.
(Diagram: fat protocols, 2016)
This observation evolved from the application stack. Most of the “work” and data exists at the protocol layer, while applications tend to provide more limited interface services. In 2014, “business models” at the protocol layer were not obvious. But as we invested in early crypto at USV, the potential of tokens became clearer.
In 2014 and 2016, there weren’t a lot of real-world examples to observe. And it’s still “early days” in the grand scheme of things. But now we can observe hundreds of crypto protocols and applications across many markets. Going into 2020, and after testing different aspects of these ideas at Placeholder, it’s a good time to refine old ideas and consolidate what we’ve learned so far.
Big Web companies tend to grow and defend their position by locking users into proprietary interfaces. Cryptonetworks, on the other hand, tend to provide single services, and can’t “own” the interface because the underlying data and protocols are open. Specialization helps because the more decentralized a network, the harder it is to coordinate a complete suite of services under a single interface like Google, Facebook, or Amazon do. So instead, consumer applications in crypto/Web3 are independently built on top of multiple “composable” protocols using what we could call a cryptoservices architecture (like microservices, but with decentralized components).
In Decentralized Finance (DeFi) people call this “money legos”. Consider applications like Zerion and Instadapp. They are building similar crypto-finance apps using many of the same protocols, like Ethereum, Maker, Compound, and Uniswap. This allows them to deliver a complete suite of financial services (transactions, borrowing, lending, trading, investing, etc.) without building all that functionality, infrastructure and liquidity in-house. The protocols provide specific services across many interfaces and the apps on top share resources and data with no centralized platform risk. Sharing the infrastructure lowers the costs across the board. These same dynamics are showing up in corners of crypto like DAOs and games.
Non-custody is another way crypto cuts costs for applications. The Big Web business model relies on creating data monopolies because locking users into proprietary interfaces is the most profitable way to extract value from their information. They also compete more than they collaborate, so as a user you have to visit different platforms for slices of your information. And the increasing costs of security and new regulation turn data into a liability. This works to the advantage of companies that can absorb these costs. But resource-constrained startups have to look for other models to compete.
For example, in the crypto-art world, an artist can tokenize their work on a platform like Rare Art and later sell it on OpenSea, where a collector can buy it and display it in their virtual gallery. Similarly in DeFi, if you use Maker through Zerion and later log into Instadapp with the same keys, you can immediately interact with your Maker loan there as well. By building on the same protocols and networks, these apps are interoperable by default and users can move freely between interfaces without losing information or functionality.
A cryptoservices architecture combined with a non-custodial data model allows startups to compete more effectively against centralized incumbents. Giving ownership and control to the users offloads a lot of costs while fulfilling many of today’s consumer demands. It does require companies to give up a lot of what makes traditional online services defensible. But what you lose in control, you gain in potential efficiency and scale. Adopting these models allows businesses to run at very low costs, and applications benefit from each other’s success because they contribute to a shared pool of resources at the protocol level. As a network, thin applications can scale more effectively across markets. Every piece of digital art minted on Rare Art indirectly increases the utility value of OpenSea, activity on Instadapp benefits Zerion (and vice versa), etc. But what seems unclear to many is how exactly they can create long-term business value and defensibility when everything is open.
The fat protocols thesis suggests crypto protocols will “capture” more value than the application interfaces. A common mistake is to conflate the idea of value capture with investment returns (it’s also a mistake to think that a token on Ethereum or another smart contract blockchain is itself an “application”; more often than not, such tokens represent the value of other protocols with their own application layers, not Ethereum’s application interfaces). Many concluded there are no returns in investing at the application layer of crypto, despite the original text qualifying application-layer success as a requirement for protocol value growth. To be clear, that less overall value ends up at the application layer does not mean there are fewer outsized return opportunities available to application businesses. Nor does it mean there are always returns in protocols. Value capture is more about cost structures and other macro elements, while returns vary by things like cost basis, growth rates, and ownership concentration. What’s different between protocols and applications is how these elements combine.
Looking at cost structures is a more precise way to think about value distribution. The basic principle is that, in markets, costs are a strong determinant of future value. So we can estimate a market’s value structure by studying its cost structure. In crypto, the networks at the protocol layer bear most of the costs of production so they require more investment, which means more of the value has to accrue to that layer to maintain equilibrium (or the investment doesn’t happen). Applications cost less to operate and require less investment, so they naturally demand less of the market’s value. However, the ownership of networks is far more distributed than that of private companies. Investing in tokens will generally get you a smaller piece of what has to be a much bigger pie to cover your cost of capital.
Finally, we must also consider the combined value of all protocols underneath an application to assess relative values. For example, Zerion relies on Ethereum, Maker, Compound, Uniswap and others to operate. The combined value of these networks is far greater than the individual value of Zerion or its peers. But again, that has little to do with investment returns in the apps and companies that use these protocols. Cryptonetworks may scale to store trillions in value but, eventually, flatten in growth. Then, most of the market’s value may be stored in the protocol layer, while outsized returns on investment move to wherever there is more growth. But today we’re far from that state of equilibrium and we’re finding high-return opportunities in both layers.
Thin applications are cheaper to run because they push many of the costs to protocols and users. But competitors can access the same production and data resources, so they can substitute each other in ways that are impossible on the traditional web. In a way, it’s similar to the retail model where storefronts act as “interfaces” to various commodity products and differentiate themselves by brand, curation and customer experience. But instead of “B2B2C”, think of crypto as P2B2C: Protocol to Business to Consumer.
Protocols provide specific services, which are bundled at the application layer for distribution to consumers. Like in retail, prices are determined by the cryptonetworks which produce the services, and fair competition at the application layer makes it difficult for anyone to mark them up unfairly. This setup is great for users and addresses many of our gripes with the web. But it raises new questions about defensibility at the application layer. How do you create long-term business value and defensibility when everything is open and competitors can so easily substitute each other?
In crypto, application businesses have to create value outside the protocols’ functions. In many cases, known business models like subscriptions or transaction fees make sense. But as the infrastructure matures and applications become thinner, we need new business models. There are a lot of interesting experiments. One protocol, for example, innovates with “app mining” and includes a royalty fee structure for its developers (both evoking Amy James’ idea). I’m curious to see how they develop, though I am not yet convinced that protocols should dictate the economics of their applications. Covering the full range of experiments would take many more paragraphs so, here, I’ll focus on three general strategies: building cost moats, vertical integration, and user-staking models.
Building a cost moat means centralizing costs and externalities unaccounted for by the protocols. The scale of these costs is a kind of defensibility because it makes it very expensive for competitors to catch up. Coinbase, for example, created a lot of business value by capturing two very expensive externalities of crypto that users are willing to outsource, fiat exchange and custody, and profiting via classic fee-based models. Nothing too new. The market won’t let Coinbase mark up crypto transaction fees, but they’ll get you with the exchange fees to cover the substantial investment they’ve had to make to provide these services. Thinner applications like Zerion, by contrast, don’t internalize those costs so they charge no extra fees, but as a result they can’t use Coinbase’s business models or justify the same fees. It works, but it’s expensive.
Vertical integration in crypto explores the possibility that successful applications may amass enough users to “become their own supply”. They could do this by turning themselves into “supply-siders” (e.g. miners) in the protocols they integrate and servicing their users directly. We saw this in the old retail model with store brands, and it’s happening again as Amazon promotes its own brands over the competition’s. Amazon grew by taking virtually no margins on the items sold on its storefront, to then leverage that platform and its perfect demand data to create its own supply with unprecedented efficiency. Could crypto applications pull off a similar move? What if the application later forks the network? Will the market allow that? It is undesirable for the application layer to capture too much control of the protocols; that’s what happened to the Web. But it’s a possible outcome.
For example, Nexo and Celsius, which offer crypto-collateralized loans, use tokens this way. Nexo offers discounted interest rates when you pay your loans back with NEXO tokens. At Celsius you unlock better rates the more CEL you stake, and you can earn better interest rates on your deposits if you opt to earn in CEL. Because NEXO and CEL tokens have a limited supply, they may appreciate in value as the usage of these applications grows and more people buy and use the tokens. So there’s value in the upside beyond a simple discount. We’re even seeing this model with SaaS companies that give you a discount on monthly fees if you stake their token. How can we take this concept further into the mainstream?
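As a rough illustration of how such a staking schedule might be parameterized, here is a minimal sketch with hypothetical tiers and rates (these are not Nexo’s or Celsius’s actual terms; the thresholds, discounts, and names are made up for the example):

```python
# Hypothetical user-staking discount schedule: the more of its own token a user
# stakes, the better the rate they unlock. Tiers and numbers are illustrative
# only, not any real application's terms.

TIERS = [            # (minimum fraction of balance staked, interest-rate discount)
    (0.00, 0.00),
    (0.05, 0.01),
    (0.10, 0.02),
    (0.25, 0.04),
]

def discounted_rate(base_rate: float, staked_fraction: float) -> float:
    """Apply the best discount tier the user's staked fraction qualifies for."""
    discount = max(d for threshold, d in TIERS if staked_fraction >= threshold)
    return max(base_rate - discount, 0.0)

# An 8% base borrow rate drops to 6% for a user staking 12% of their balance.
print(discounted_rate(0.08, 0.12))
```

The upside the text describes comes from the token itself: because the staked token is scarce, growing usage can raise its price, so the user holds an appreciating asset rather than just a coupon.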
https://www.preethikasireddy.com/post/what-do-we-mean-by-blockchains-are-trustless
Preethi Kasireddy, Feb 2019
Many of us are guilty of describing blockchains as “trustless” systems. However, I’ve come to realize that the term “trustless” is ambiguous, confusing, and most importantly, inaccurate.
Blockchains don’t actually eliminate trust. What they do is minimize the amount of trust required from any single actor in the system. They do this by distributing trust among different actors in the system via an economic game that incentivizes actors to cooperate with the rules defined by the protocol.
Let me explain in more detail.
A truly trustless transactional system would look something like this:
Two people who are interested in transacting with one another exchange money directly, hand to hand. They are physically present, and therefore can easily verify:
Authenticity: the actual sender is handing over the money, and
No double spending: the money is not fake, it’s a real $10 bill
While theoretically flawless, this transactional system is limited. Consider: two individuals may trade with one another only when they are in close physical proximity. For economies to function at scale, a transactional system should enable transfers with anyone in the world, regardless of distance.
So, what we really want is this:
As you can see from the diagram above, the way we achieve this aim is by having an intermediary who can facilitate the transfer of value to make sure that the actual sender is sending the money and the money is real.
This begs the question: who serves as the wholly trustworthy intermediary?
In modern day transactional systems, the intermediary can be a bank (e.g. Chase Bank); a payment provider (e.g. Paypal); a remittance company (e.g. Western Union); a credit card (e.g. Visa), and so on.
In this centralized model, the bank authenticates you, and guarantees the recipient that they are getting real money.
In other words, unless there is a direct physical transfer of value from one individual to the other, there must be some intermediary that exists that we “trust”.
Blockchains are no different.
Blockchains define a protocol that allows two individuals to transact with one another in a “peer-to-peer” manner over the Internet. When you digitally transfer value from one account to another on the blockchain, you’re trusting the underlying blockchain system to both enable that transfer and ensure sender authenticity and currency validity.
In a “centralized” system, we trust a single third party (e.g. Chase Bank) to act as the intermediary who guarantees those two properties; in a “decentralized” system, our trust is placed elsewhere, namely in public-key cryptography and a “consensus mechanism” that allows us to determine the truth.
Public key cryptography (or asymmetrical cryptography) uses:
a set of public keys visible to anyone, and
a set of private keys visible only to the owner
The private key generates a “digital signature” for each blockchain transaction that a user sends out. The signature ensures authenticity by:
confirming that the transaction is coming from the user, and
preventing the transaction from being altered by anyone once it has been issued
Changing the transaction message in any way will cause verification to fail.
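A minimal sketch of that sign-and-verify flow, using the third-party `ecdsa` Python package; the curve choice and the message are just examples, and real blockchains layer address derivation, serialization, and hashing rules on top:

```python
# pip install ecdsa -- a sketch of signing and verification, not a real wallet.
from ecdsa import SigningKey, SECP256k1, BadSignatureError

private_key = SigningKey.generate(curve=SECP256k1)   # known only to the owner
public_key = private_key.get_verifying_key()         # safe to share with anyone

transaction = b"send 10 coins from alice to bob"
signature = private_key.sign(transaction)            # proves the owner authorized it

# Anyone holding the public key can check authenticity:
assert public_key.verify(signature, transaction)

# Any change to the message breaks verification:
try:
    public_key.verify(signature, b"send 99 coins from alice to bob")
except BadSignatureError:
    print("tampered transaction rejected")
```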
Okay, so we’ve figured out that public-key cryptography helps us authenticate users in a peer-to-peer system. But to ensure no double spending, we need to keep track of who has what so that we can know whether someone is sending real digital money or fake digital money.
This is where the “consensus system” — which allows us to preserve a digitally shared truth — must come into play.
Blockchains have a shared ledger that gives us the absolute truth of the state of the system. They use mathematics, economics, and game theory to incentivize all parties in the system to reach a “consensus”, that is, to come to an agreement on a single state of this ledger.
Let’s take Bitcoin, for example. The Bitcoin protocol has a consensus algorithm called “Proof of Work” that holds the system together. For a transaction to be settled between two consumers, the algorithm requires that a set of nodes (called “miners”) compete to validate transactions by solving a complex algorithmic problem. In other words, Bitcoin “economically incentivizes” miners to purchase and use compute power to solve complex problems. These economic incentives include:
miners earning a transaction fee that users pay for carrying out a transaction, and
miners earning new Bitcoins for successfully solving the puzzle
Because of these economic incentives, miners are constantly watching the network so that they can gather a new set of transactions to fit into a new “block.” Then they use their computing resources to solve the complex algorithm in order to “prove” that they did some work.
The first miner to solve the algorithm adds the proof and the new block (and all the transactions in it) to the blockchain and broadcasts it to the network. At that point, everyone else in the network syncs the latest blockchain because it’s a “truth” everyone believes in.
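Here is a toy sketch of that mining loop (the block structure and difficulty are greatly simplified; Bitcoin actually double-SHA-256 hashes an 80-byte block header against a far harder target):

```python
import hashlib
import json

def mine(block: dict, difficulty_bits: int = 18) -> dict:
    """Search for a nonce whose SHA-256 hash of (block || nonce) falls below a target."""
    target = 2 ** (256 - difficulty_bits)             # smaller target = harder puzzle
    header = json.dumps(block, sort_keys=True).encode()
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:    # found a valid proof of work
            return {**block, "nonce": nonce, "hash": digest.hex()}
        nonce += 1

block = {"prev_hash": "00" * 32, "transactions": ["alice -> bob: 10"]}
mined = mine(block)
# Any node can re-check the proof with a single hash, which is what makes it cheap to verify.
print(mined["nonce"], mined["hash"][:16])
```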
Since miners are competing to run computations, there are times when multiple blocks get solved at the same time. This then creates a “fork” of multiple chains:
When there are forks like this, the network’s “canonical” chain is the one which is the “longest”, the one that the most miners trusted and continued to work on.
Every new block that’s added to the blockchain in this manner adds more security to the system, because an attacker who wants to create new blocks that overwrite a part of history would need to consistently solve the puzzle faster than anyone else in the network. This is practically impossible, which makes it infeasible to reverse or alter the data inside these blocks. This is why users continue to trust the system.
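A toy sketch of that fork-choice rule (real nodes compare cumulative proof-of-work, which at constant difficulty usually coincides with chain length):

```python
# Toy fork-choice rule: among the competing chains a node sees, follow the longest.
chain_a = ["genesis", "block1", "block2", "block3"]
chain_b = ["genesis", "block1", "block2b"]            # a shorter, competing fork

def canonical_chain(*chains):
    """Pick the chain every node converges on under the longest-chain rule."""
    return max(chains, key=len)

print(canonical_chain(chain_a, chain_b))              # nodes adopt chain_a
```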
So when we transact with one another on the blockchain, we are anchoring our trust in the miners who are giving up their resources to do some work to ensure no double spending.
Of course, even if the machine consensus works perfectly, we can never guarantee a 100% probability of reaching consensus on other important aspects required to maintain trust in the network. For example, when the underlying network needs to be upgraded, improved, or repaired, we need some way to trust that the network and all its constituents can appropriately handle the changes. In such cases, it’s very much a coordination effort amongst constituents, or what I would call a “social consensus” (e.g. governance).
For example, if the blockchain requires an improvement (e.g. better transaction logs), we need a governance mechanism that coordinates the interests of all parties involved (users, developers, investors, etc.) in coming up with the best solution. Or if there’s a controversy on the best path forward (e.g. a contentious fork), then a community needs to form a consensus on what to do next. If an agreement can’t be reached, the network forks, and people are forced to choose one side over another instead of everyone believing in a shared truth. Users would lose trust in the system because they would be unable to reasonably determine which chain was the “valid” chain.
When we say blockchains are “trustless,” what we mean is that there are mechanisms in place by which all parties in the system can reach a consensus on what the canonical truth is. Power and trust are distributed (or shared) among the network’s stakeholders (e.g. developers, miners, and consumers), rather than concentrated in a single individual or entity (e.g. banks, governments, and financial institutions).
Perhaps a more accurate way to describe blockchains is not as “trustless,” but as built on the basis of distributed trust: We are trusting everyone in aggregate.
Of course, this assumes that we trust that a majority of the power held in the system belongs to stakeholders who share similar values. Unfortunately, I don’t think we can claim — at least, not yet — to have figured out exactly what those shared values consist of. Hence the proliferation of blockchains and contentious forks in the past year … but that’s a long-winded topic for another day! 😊
https://news.earn.com/quantifying-decentralization-e39db233c28e
Balaji Srinivasan, Jul 2017
Measure the extent of a given system’s decentralization
Determine how much a given system modification improves or reduces decentralization
Design optimization algorithms and architectures to maximize decentralization
The basic idea is to (a) enumerate the essential subsystems of a decentralized system, (b) determine how many entities would need to be compromised to control each subsystem, and (c) use the minimum of these as a measure of the effective decentralization of the system. The higher the value of this minimum Nakamoto coefficient, the more decentralized the system is.
To motivate this definition, we begin by giving some background on the related concepts of the Gini coefficient and Lorenz curve, and then display some graphs and calculations to look at the current state of centralization in the cryptocurrency ecosystem as a whole according to these measures. We then discuss the concept of measuring decentralization as an aggregate measure over the essential subsystems of Bitcoin and Ethereum. We conclude by defining the minimum Nakamoto coefficient as a proposed measure of system-wide decentralization, and discuss ways to improve this coefficient.
Even though they are typically concerns of different political factions, there are striking similarities between the concepts of “too much inequality” and “too much centralization”. Specifically, we can think of a non-uniform distribution of wealth as highly unequal and a non-uniform distribution of power as highly centralized.
The Gini coefficient can be calculated from the areas between the Lorenz curve and the so-called “Line of Equality”: if A is the area between the line of equality and the Lorenz curve, and B is the area under the Lorenz curve, then G = A / (A + B).
Intuitively, the more uniform the distribution of resources, the closer the Gini coefficient is to zero. Conversely, the more skewed to one party the distribution of resources, the closer the Gini coefficient is to one.
This captures our intuitive notions of centralization: in a highly centralized system with G=1, there is one decision maker and/or one entity to capture in order to compromise the system. Conversely, in a highly decentralized system with G=0, there are a multiplicity of decision makers who need to be captured in order to compromise the system. Hence, a low Gini coefficient means a high degree of decentralization.
To build intuition, let’s look at the Lorenz curve and Gini coefficient for a simple example: the distribution of wealth across cryptocurrency market capitalizations. To do this, we took a snapshot of market capitalizations on July 15 2017 for the top 100 digital currencies, calculated the percentage of market share for each, and graphed it as a Lorenz curve with associated Gini coefficient:
If we measure the centralization of market capitalization across the top 100 cryptocurrencies, the Gini coefficient is 0.91. This fits our intuition as about 70% of the market capitalization as of July 2017 is held by the top two cryptocurrencies, namely Bitcoin and Ethereum.
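For readers who want to reproduce this kind of number, here is a small sketch of a Gini calculation from a list of shares, using the standard formula over sorted values; the market caps below are made up for illustration, not the July 2017 snapshot:

```python
def gini(values):
    """Gini coefficient of a discrete distribution, via the sorted-shares formula."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = (2 * sum_i i * x_(i)) / (n * sum_i x_i) - (n + 1) / n, with i = 1..n ascending
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical market caps (in $B) for a handful of coins: heavily skewed to the top two.
print(round(gini([40, 20, 3, 2, 1, 1, 0.5, 0.5]), 2))
```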
To apply this concept to the space of public blockchains, we need to make a distinction between a decentralized system and a decentralized subsystem. Specifically, a decentralized system (like Bitcoin) is composed of a set of decentralized subsystems (like mining, exchanges, nodes, developers, clients, and so on). Here are six of the subsystems that compose Bitcoin:
We will use these six subsystems to illustrate how to measure the decentralization of Bitcoin or Ethereum. Please note: you may decide to use different subsystems based on which ones you consider essential to decentralization of the system as a whole.
Now, one can argue that some of these decentralized subsystems may be more essential than others; for example, mining is absolutely required for Bitcoin to function, whereas exchanges (as important as they are) are not actually part of the Bitcoin protocol.
Let’s assume however that a given individual can draw a line that identifies the essential decentralized subsystems of a decentralized system. We can then stipulate that one can compromise a decentralized system if one can compromise any of its essential decentralized subsystems.
Given these definitions, let’s now calculate Lorenz curves and Gini coefficients for the Bitcoin and Ethereum mining, client, developer, exchange, node, and owner subsystems. We can see how centralized each of them is according to the Gini coefficient and Lorenz curve measures.
Here are the curves for Bitcoin:
And here they are for Ethereum:
Let’s discuss each of these subsystems in turn by reference to the six panels in each of the figures above.
With that said, two points. First, while one would not want a Gini coefficient of exactly 1.0 for BTC or ETH (as then only one person would have all of the digital currency, and no one would have an incentive to help boost the network), in practice it appears that a very high level of wealth centralization is still compatible with the operation of a decentralized protocol. Second, as we show below, we think the Nakamoto coefficient is a better metric than the Gini coefficient for measuring holder concentration in particular as it obviates the issue of arbitrarily choosing a threshold.
Can we combine these sample measures of subsystem decentralization into a measure of system decentralization? A first approach would simply be to take the maximum Gini coefficient over all essential subsystems, as shown below:
So, by this measure, both Bitcoin and Ethereum have a maximum Gini Coefficient of ~0.92, because both of them have nodes with clients that are very highly concentrated in one codebase (Bitcoin Core for Bitcoin, and geth for Ethereum).
Crucially, a different choice of essential subsystems will change these values. For example, one may believe that the presence of a single codebase is not an impediment toward practical decentralization. If so, then Bitcoin’s maximum Gini coefficient would improve to 0.84, and the new decentralization bottleneck would be the node distribution across countries.
We certainly don’t argue that the particular choice of six subsystems here is the perfect one for measuring decentralization; we just wanted to gather some data to show what this kind of calculation would look like. We do argue that the maximum Gini coefficient metric starts to point in the right direction of identifying possible decentralization bottlenecks.
However, the maximum Gini coefficient has one obvious issue: while a high value tracks with our intuitive notion of a “more centralized” system, the fact that each Gini coefficient is restricted to a 0–1 scale means that it does not directly measure the number of individuals or entities required to compromise a system.
Specifically, for a given blockchain suppose you have a subsystem of exchanges with 1000 actors with a Gini coefficient of 0.8, and another subsystem of 10 miners with a Gini coefficient of 0.7. It may turn out that compromising only 3 miners rather than 57 exchanges may be sufficient to compromise this system, which would mean the maximum Gini coefficient would have pointed to exchanges rather than miners as the decentralization bottleneck.
There are various ways to surmount this difficulty. For example, we might be able to come up with principled weights for the Gini coefficients of different subsystems prior to combining them.
An alternative approach is to define a spiritually similar metric based on the Lorenz curve from which the Gini coefficient is calculated, which we dub the “Nakamoto coefficient”. A visual is below. In this example, the Nakamoto coefficient of the given subsystem is 8, as it would require 8 entities to get to 51% control.
That is, we define the Nakamoto coefficient as the minimum number of entities in a given subsystem required to get to 51% of the total capacity. Aggregating this measure by taking the minimum of the minimum across subsystems then gives us the “minimum Nakamoto coefficient”, which is the number of entities we need to compromise in order to compromise the system as a whole.
The Nakamoto coefficient represents the minimum number of entities to compromise a given subsystem. The minimum Nakamoto coefficient is the minimum of the Nakamoto coefficients over all subsystems.
We can also define a “modified Nakamoto coefficient” if 51% is not the operative threshold across each subsystem. For example, perhaps one might require 75% of exchanges to be compromised in order to seriously degrade the system, but only 51% of miners.
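A minimal sketch of the calculation, including the modified-threshold variant; the subsystem shares below are invented for illustration:

```python
def nakamoto_coefficient(shares, threshold=0.51):
    """Minimum number of entities whose combined share reaches the threshold."""
    total = sum(shares)
    running, count = 0.0, 0
    for share in sorted(shares, reverse=True):    # take the largest entities first
        running += share
        count += 1
        if running / total >= threshold:
            return count
    return len(shares)

# Hypothetical subsystems: {name: list of entity shares}
subsystems = {
    "mining pools":     [22, 18, 12, 10, 8, 8, 6, 16],
    "client codebases": [84, 10, 6],
    "exchanges":        [30, 20, 15, 10, 10, 15],
}

coefficients = {name: nakamoto_coefficient(s) for name, s in subsystems.items()}
minimum_nakamoto = min(coefficients.values())     # the system-wide bottleneck
print(coefficients, "-> minimum Nakamoto coefficient:", minimum_nakamoto)

# Modified variant: e.g. 75% of exchanges needed to degrade the system, 51% elsewhere.
print(nakamoto_coefficient(subsystems["exchanges"], threshold=0.75))
```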
That illustrates the concept. Here are graphs for all of the subsystems for Bitcoin and Ethereum again, this time with the Nakamoto coefficients calculated:
And here’s a table where we’ve assembled the Nakamoto coefficients of each subsystem:
As we can see, given these essential subsystems, the minimum Nakamoto coefficient is 1 for both Bitcoin and Ethereum. Specifically, the compromise of the Bitcoin Core or geth codebases would compromise more than 51% of clients, which would result in compromise of their respective networks.
Increasing this for Ethereum would mean achieving a much higher market share for non-geth clients like Parity, after which point developer or mining centralization would become the next bottleneck. Increasing this for Bitcoin would similarly require widespread adoption of clients like btcd, bcoin, and the like.
For example, if one considers “founder and spokesman” an essential subsystem, then the minimum Nakamoto coefficient for Ethereum would trivially be 1, as the compromise of Vitalik Buterin would compromise Ethereum.
Conversely, if one considers “number of distinct countries with substantial mining capacity” an essential subsystem, then the minimum Nakamoto coefficient for Bitcoin would again be 1, as the compromise of China (in the sense of a Chinese government crackdown on mining) would result in >51% of mining being compromised.
The selection of which essential subsystems best represent a particular decentralized system will be a topic of some debate that we consider outside the scope of this post. However, it is worth observing that the “founder and spokesman” and “China miner” compromises are two different kinds of attacks for two different chains. As such, if one thinks about comparing the minimum Nakamoto coefficient across coins, some degree of ecosystem diversity may quantitatively improve decentralization.
Many have said that decentralization is the most important property of systems like Bitcoin and Ethereum. If this is true, then it is critical to be able to quantify decentralization. The minimum Nakamoto coefficient is one such measure; as it increases, the minimum number of entities required to compromise the system increases. We believe this corresponds to the intuitive notion of decentralization.
The reason an explicit measure for quantifying decentralization is important is three-fold.
Measurement. First, quantitative measures like this can be computed unambiguously, recorded over time, and displayed in dashboards. This gives us the ability to track historical trends toward decentralization within subsystems and at the system level.
Improvement. Second, just like we measure performance, a measure like the Nakamoto coefficient allows us to begin measuring improvements and/or reductions of decentralization. This then allows us to begin attributing changes in decentralization to individual deployments of code or other kinds of network activities. For example, given scarce resources, we can measure whether deploying 1000 nodes or hiring two new client developers would provide a greater improvement in decentralization.
We recognize that there is plenty of room for debate over which subsystems of a decentralized system are essential. However, given a proposed essential subsystem we can now generate a Lorenz curve and a Nakamoto coefficient, and see whether this is plausibly a decentralization bottleneck for the system as a whole.
As such, we think the minimum Nakamoto coefficient is a useful first step towards quantifying decentralization.
As I described in a previous post (bullet #6), there are many different models for blockchain governance and it remains an area of active research in the community. Blockchain governance is an incredibly tricky problem and finding a balance between centralized and distributed control will be essential to maintaining everyone’s trust in the system.
The primary advantage of Bitcoin and Ethereum over their legacy alternatives is decentralization. However, despite the widely acknowledged importance of this property, most discussion on the topic lacks quantification. If we could agree upon a quantitative measure, it would allow us to:
In this post we propose the minimum Nakamoto coefficient as a simple, quantitative measure of a system’s decentralization, motivated by the well-known Gini coefficient and Lorenz curve.
Economists have long employed two tools for measuring non-uniformity within a population: the Lorenz curve and the Gini coefficient. The basic concept of the Lorenz curve is illustrated in the figure below:
The Lorenz curve is shown in red above. As the cumulative distribution diverges from a straight line, the Gini coefficient (G) increases from 0 to 1.
The Lorenz Curve and the Gini Coefficient
The coefficient can also be calculated directly from the individual shares of entities, for both continuous and discrete distributions.
As shown in the top left panel of the figures, Bitcoin mining is surprisingly decentralized as measured by block reward over the past 24 hours. Ethereum mining is somewhat more centralized. There is fairly high variance in these values, so we could track them over time or smooth the result by taking a 7- or 30-day average.
As seen in the top middle panel of each figure, most Bitcoin users use Bitcoin Core, with Bitcoin Unlimited being the second most popular client. This means a fairly high degree of centralization (Gini = 0.92) as measured by the number of different client codebases. For Ethereum clients by codebase, most clients (76%) run geth and another 16% run Parity, which also gives a Gini coefficient of 0.92, as two codebases account for most of the ecosystem.
In the top right panel, we can see that the Bitcoin Core reference client has a number of engineers who have made commits. Though raw commits are certainly an imprecise measure of contribution, directionally it appears that a relatively small number of engineers have done most of the work on Bitcoin Core. For the geth reference client of Ethereum, development is even more concentrated, with two developers doing the lion’s share of commits.
The volume of BTC and ETH traded across exchanges varies a great deal, as do the corresponding Gini coefficients. But we calculated snapshots of the Gini coefficient over the past 24 hours for illustrative purposes in the bottom left panels.
Another measure of decentralization (bottom middle panels) is the node distribution across countries for Bitcoin and Ethereum.
In the last panels in the lower right, we look at how decentralized BTC and ETH ownership is, as measured by addresses. One important point: if we actually include all 7 billion people on earth, most of whom have zero BTC or ETH, the Gini coefficient is essentially 0.99+. And if we just include all balances, we include many dust balances, which would again put the Gini coefficient at 0.99+. Thus, we need some kind of threshold here. The imperfect threshold we picked was the Gini coefficient among accounts with ≥185 BTC per address, and ≥2477 ETH per address. So this is the distribution of ownership among Bitcoin and Ethereum holders with >$500k as of July 2017.
In what kind of situation would a thresholded metric like this be interesting? Perhaps in a scenario like the ongoing case in which the IRS is seeking information on all holders with balances >$20,000. Conceptualized in terms of an attack, a high Gini coefficient would mean that a government would only need to round up a few large holders in order to acquire a large percentage of outstanding cryptocurrency, and with it the ability to tank the price.
We can now use the Lorenz curves from the preceding section to calculate the Nakamoto coefficients for both Ethereum and Bitcoin. Here’s an example of the calculation for Ethereum’s reference client, geth. As we can see, with 2 developers we get to 51% of the commits to geth, so the Nakamoto coefficient is 2.
We recognize that some may contend that a high level of concentration in a single reference client for Bitcoin does not impinge upon its decentralization. We take no position on this issue, because with alternate essential subsystem definitions one can arrive at different measures of decentralization.
Optimization. Finally, and most importantly, a quantifiable objective function (in the mathematical sense) determines the outcome of any optimization procedure. Superficially similar objective functions can produce very different solutions. If our goal is to optimize the level of decentralization both across and within decentralized systems, we are going to need quantitative metrics like the Lorenz curve, the Gini coefficient, and the Nakamoto coefficient.
https://medium.com/loom-network/understanding-blockchain-fundamentals-part-1-byzantine-fault-tolerance-245f46fe8419
Georgios Konstantopoulos, Dec 2017
Blockchains are inherently decentralized systems which consist of different actors who act depending on their incentives and on the information that is available to them.
Whenever a new transaction gets broadcast to the network, nodes have the option to include that transaction in their copy of the ledger or to ignore it. When the majority of the actors which comprise the network decide on a single state, consensus is achieved.
This process of agreeing on a single state is what we describe as consensus.
What happens when an actor decides to not follow the rules and to tamper with the state of his ledger?
What happens when these actors are a large part of the network, but not the majority?
In order for a consensus protocol to be secure, it must be fault tolerant.
Firstly, we will talk briefly about the unsolvable Two Generals Problem. Then we will extend that to the Byzantine Generals’ Problem and discuss Byzantine Fault Tolerance in distributed and decentralized systems. Finally, we will discuss how all this relates to the blockchain space.
In order for them to communicate and decide on a time, General 1 has to send a messenger across the enemy’s camp that will deliver the time of the attack to General 2. However, there is a possibility that the messenger will get captured by the enemies and thus the message won’t be delivered. That will result in General 1 attacking while General 2 and his army hold their grounds.
There is no way to guarantee the second requirement that each general be sure the other has agreed to the attack plan. Both generals will always be left wondering whether their last messenger got through.
Since the possibility of the message not getting through is always greater than zero, the generals can never reach an agreement with 100% confidence.
The leader-follower paradigm described in the Two Generals Problem is transformed to a commander-lieutenant setup. In order to achieve consensus here, the commander and every lieutenant must agree on the same decision (for simplicity attack or retreat).
Interestingly, even if the commander is a traitor, consensus must still be achieved. As a result, all lieutenants take the majority vote.
The algorithm to reach consensus in this case is based on the value of majority of the decisions a lieutenant observes.
Theorem: For any m, Algorithm OM(m) reaches consensus if there are more than 3m generals and at most m traitors.
This implies that the algorithm can reach consensus as long as 2/3 of the actors are honest. If the traitors are more than 1/3, consensus is not reached, the armies do not coordinate their attack and the enemy wins.
m = 0 → no traitors, each lieutenant obeys | m > 0 → each lieutenant’s final choice comes from the majority of all lieutenants’ choices
This should be clearer with a visual example from Lieutenant 2’s point of view. Let C be the Commander and L{i} be Lieutenant i:
OM(1): Lieutenant 3 is a traitor — L2 point of view
Steps:
Commander sends v to all Lieutenants
L1 sends v to L2 | L3 sends x to L2
L2 ← majority(v,v,x) == v
The final decision is the majority vote from L1, L2, L3 and as a result consensus has been achieved
The important thing to remember is that the goal is for the majority of the lieutenants to choose the same decision, not a specific one.
Let’s examine the case of the commander being a traitor:
OM(1): Commander is a traitor
Steps:
Commander sends x, y, z to L1, L2, L3 respectively
L1 sends x to L2, L3 | L2 sends y to L1, L3 | L3 sends z to L1, L2
L1 ← majority(x,y,z) | L2 ← majority(x,y,z) | L3 ← majority(x,y,z)
They all have the same value and thus consensus is reached. Take a moment here to reflect that even if x, y, z are all different, the value of majority(x, y, z) is the same for all 3 lieutenants. In the case where x, y, z are totally different commands, we can assume that they act on the default option, retreat.
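A compact simulation of the single-round scenario above may help: honest lieutenants relay the commander's order faithfully, traitors send arbitrary values, and each lieutenant decides by majority. This sketches only the m = 1 case with 3 lieutenants, not the full recursive OM(m) algorithm:

```python
import random
from collections import Counter

def om1(commander_order, traitors, n_lieutenants=3):
    """One OM(1) round: each lieutenant decides by majority over all relayed orders."""
    orders = ["attack", "retreat"]
    # Step 1: the commander sends an order to each lieutenant (a traitorous commander lies).
    received = [commander_order if "commander" not in traitors else random.choice(orders)
                for _ in range(n_lieutenants)]
    decisions = []
    for i in range(n_lieutenants):
        votes = [received[i]]                          # what lieutenant i got directly
        for j in range(n_lieutenants):
            if j == i:
                continue
            # Step 2: lieutenant j relays what it received; traitors relay arbitrary values.
            relayed = random.choice(orders) if f"L{j}" in traitors else received[j]
            votes.append(relayed)
        # Step 3: majority vote, defaulting to "retreat" when there is no clear majority.
        counts = Counter(votes)
        decisions.append("attack" if counts["attack"] > counts["retreat"] else "retreat")
    return decisions

# Traitorous lieutenant: the loyal lieutenants (indices 0 and 1) still output "attack".
print(om1("attack", traitors={"L2"}))
# Traitorous commander: the (all loyal) lieutenants still agree with one another.
print(om1("attack", traitors={"commander"}))
```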
The algorithm mentioned in the previous section is Byzantine Fault Tolerant as long as the number of traitors does not exceed one third of the generals. Other variations exist which make solving the problem easier, including the use of digital signatures or the imposition of communication restrictions between the peers in the network.
Blockchains are decentralized ledgers which, by definition, are not controlled by a central authority. Due to the value stored in these ledgers, bad actors have huge economic incentives to try to cause faults. As a result, Byzantine Fault Tolerance, and thus a solution to the Byzantine Generals’ Problem, is much needed for blockchains.
In the absence of BFT, a peer is able to transmit and post false transactions effectively nullifying the blockchain’s reliability. To make things worse, there is no central authority to take over and repair the damage.
In this article, we discussed some basic concepts of consensus in distributed systems.
In the next article, we will discuss and compare some of the algorithms that are used in blockchains to achieve Byzantine Fault Tolerance.
A fundamental problem in distributed computing and multi-agent systems is to achieve overall system reliability in the presence of a number of faulty processes. This often requires processes to agree on some data value that is needed during computation.
This problem (first published in 1975 and given its name in 1978) describes a scenario where two generals are attacking a common enemy. General 1 is considered the leader and the other is considered the follower. Each general’s army on its own is not enough to defeat the enemy army, so they need to cooperate and attack at the same time. This seems like a simple scenario, but there is one caveat:
Even if the first message goes through, General 2 has to acknowledge (ACK, notice the similarity to the 3-way handshake of TCP) that he received the message, so he sends a messenger back, thus repeating the previous scenario where the messenger can get caught. This extends to infinite ACKs, and thus the generals are unable to reach an agreement.
The Two Generals Problem has been proven to be unsolvable.
Famously described by Lamport, Shostak, and Pease in 1982, it is a generalized version of the Two Generals Problem with a twist. It describes the same scenario, but now more than two generals need to agree on a time to attack their common enemy. The added complication here is that one or more of the generals can be a traitor, meaning that they can lie about their choice (e.g. they say that they agree to attack at 0900 but instead they do not).
For a more hands-on approach and a more complex example with 7 generals and 2 traitors, I suggest you read the original paper by Lamport, Shostak, and Pease.
Byzantine Fault Tolerance is the characteristic which defines a system that tolerates the class of failures that belong to the Byzantine Generals’ Problem. Byzantine Failure is the most difficult class of failures. It implies no restrictions, and makes no assumptions about the kind of behavior a node can have (e.g. a node can generate any kind of arbitrary data while posing as an honest actor).
Byzantine Faults are the most severe and difficult to deal with. Byzantine Fault Tolerance has been needed in airplane engine systems, nuclear power plants and pretty much any system whose actions depend on the results of a large number of sensors. Even SpaceX has considered it for its systems.
The big breakthrough when Bitcoin was invented was the use of Proof of Work as a probabilistic solution to the Byzantine Generals Problem, as described in depth by Satoshi Nakamoto in an email to the Cryptography Mailing List.
DLT consensus
PoS vs. PoW
https://medium.com/@VitalikButerin/the-meaning-of-decentralization-a0c92b76a274
Vitalik Buterin, Feb 2017
“Decentralization” is one of the words that is used in the cryptoeconomics space the most frequently, and is often even viewed as a blockchain’s entire raison d’être, but it is also one of the words that is perhaps defined the most poorly. Thousands of hours of research, and billions of dollars of hashpower, have been spent for the sole purpose of attempting to achieve decentralization, and to protect and improve it, and when discussions get rivalrous it is extremely common for proponents of one protocol (or protocol extension) to claim that the opposing proposals are “centralized” as the ultimate knockdown argument.
But there is often a lot of confusion as to what this word actually means. Consider, for example, the following completely unhelpful, but unfortunately all too common, diagram:
When people talk about software decentralization, there are actually three separate axes of centralization/decentralization that they may be talking about. While in some cases it is difficult to see how you can have one without the other, in general they are quite independent of each other. The axes are as follows:
Architectural (de)centralization — how many physical computers is a system made up of? How many of those computers can it tolerate breaking down at any single time?
Political (de)centralization — how many individuals or organizations ultimately control the computers that the system is made up of?
Logical (de)centralization — do the interface and data structures that the system presents and maintains look more like a single monolithic object, or an amorphous swarm? One simple heuristic is: if you cut the system in half, including both providers and users, will both halves continue to fully operate as independent units?
We can try to put these three dimensions into a chart:
Note that a lot of these placements are very rough and highly debatable. But let’s try going through some of them:
Traditional corporations are politically centralized (one CEO), architecturally centralized (one head office) and logically centralized (can’t really split them in half)
Civil law relies on a centralized law-making body, whereas common law is built up of precedent made by many individual judges. Civil law still has some architectural decentralization as there are many courts that nevertheless have large discretion, but common law has more of it. Both are logically centralized (“the law is the law”).
BitTorrent is logically decentralized similarly to how English is. Content delivery networks are similar, but are controlled by one single company.
Blockchains are politically decentralized (no one controls them) and architecturally decentralized (no infrastructural central point of failure) but they are logically centralized (there is one commonly agreed state and the system behaves like a single computer)
Architectural centralization often leads to political centralization, though not necessarily — in a formal democracy, politicians meet and hold votes in some physical governance chamber, but the maintainers of this chamber do not end up deriving any substantial amount of power over decision-making as a result. In computerized systems, architectural but not political decentralization might happen if there is an online community which uses a centralized forum for convenience, but where there is a widely agreed social contract that if the owners of the forum act maliciously then everyone will move to a different forum (communities that are formed around rebellion against what they see as censorship in another forum likely have this property in practice).
The next question is, why is decentralization useful in the first place? There are generally several arguments raised:
Fault tolerance — decentralized systems are less likely to fail accidentally because they rely on many separate components that are not likely to fail all at once.
Collusion resistance — it is much harder for participants in decentralized systems to collude to act in ways that benefit them at the expense of other participants, whereas the leaderships of corporations and governments collude in ways that benefit themselves but harm less well-coordinated citizens, customers, employees and the general public all the time.
All three arguments are important and valid, but all three arguments lead to some interesting and different conclusions once you start thinking about protocol decisions with the three individual perspectives in mind. Let us try to expand out each of these arguments one by one.
Do blockchains as they are today manage to protect against common mode failure? Not necessarily. Consider the following scenarios:
All nodes in a blockchain run the same client software, and this client software turns out to have a bug.
All nodes in a blockchain run the same client software, and the development team of this software turns out to be socially corrupted.
The research team that is proposing protocol upgrades turns out to be socially corrupted.
In a proof of work blockchain, 70% of miners are in the same country, and the government of this country decides to seize all mining farms for national security purposes.
The majority of mining hardware is built by the same company, and this company gets bribed or coerced into implementing a backdoor that allows this hardware to be shut down at will.
In a proof of stake blockchain, 70% of the coins at stake are held at one exchange.
A holistic view of fault tolerance decentralization would look at all of these aspects, and see how they can be minimized. Some natural conclusions that arise are fairly obvious:
Note that the fault tolerance requirement in its naive form focuses on architectural decentralization, but once you start thinking about fault tolerance of the community that governs the protocol’s ongoing development, then political decentralization is important too.
However, once you adopt a richer economic model, and particularly one that admits the possibility of coercion (or much milder things like targeted DoS attacks against nodes), decentralization becomes more important. If you threaten one person with death, suddenly $50 million will not matter to them as much anymore. But if the $50 million is spread between ten people, then you have to threaten ten times as many people, and do it all at the same time. In general, the modern world is in many cases characterized by an attack/defense asymmetry in favor of the attacker — a building that costs $10 million to build may cost less than $100,000 to destroy, but the attacker’s leverage is often sublinear: if a building that costs $10 million to build costs $100,000 to destroy, a building that costs $1 million to build may realistically cost perhaps $30,000 to destroy. Smaller gives better ratios.
Finally, we can get to perhaps the most intricate argument of the three, collusion resistance. Collusion is difficult to define; perhaps the only truly valid way to put it is to simply say that collusion is “coordination that we don’t like”. There are many situations in real life where even though having perfect coordination between everyone would be ideal, one sub-group being able to coordinate while the others cannot is dangerous.
Blockchain advocates also make the point that blockchains are more secure to build on because they can’t just change their rules arbitrarily on a whim whenever they want to, but this case would be difficult to defend if the developers of the software and protocol were all working for one company, were part of one family and sat in one room. The whole point is that these systems should not act like self-interested unitary monopolies. Hence, you can certainly make a case that blockchains would be more secure if they were more discoordinated.
There are three ways to answer this:
Don’t bother mitigating undesired coordination; instead, try to build protocols that can resist it.
Try to find a happy medium that allows enough coordination for a protocol to evolve and move forward, but not enough to enable attacks.
Try to make a distinction between beneficial coordination and harmful coordination, and make the former easier and the latter harder.
The third is a social challenge more than anything else; solutions in this regard may include:
Social interventions that try to increase participants’ loyalty to the community around the blockchain as a whole and substitute or discourage the possibility of the players on one side of a market becoming directly loyal to each other.
Promoting communication between different “sides of the market” in the same context, so as to reduce the possibility that either validators or developers or miners begin to see themselves as a “class” that must coordinate to defend their interests against other classes.
Designing the protocol in such a way as to reduce the incentive for validators/miners to engage in one-to-one “special relationships”, centralized relay networks and other similar super-protocol mechanisms.
Clear norms about what fundamental properties the protocol is supposed to have, and what kinds of things should not be done, or at least should be done only under very extreme circumstances.
This third kind of decentralization, decentralization as undesired-coordination-avoidance, is thus perhaps the most difficult to achieve, and tradeoffs are unavoidable. Perhaps the best solution may be to rely heavily on the one group that is guaranteed to be fairly decentralized: the protocol’s users.
Now, consider the two answers on Quora for “what is the difference between a distributed and a decentralized system”. The first essentially parrots the above diagram, whereas the second makes the entirely different claim that “distributed means not all the processing of the transactions is done in the same place”, whereas “decentralized means that not one single entity has control over all the processing”. Meanwhile, the top answer on the Ethereum stack exchange gives a very similar diagram, but with the words “decentralized” and “distributed” switched places! Clearly, a clarification is in order.
Languages are logically decentralized; the English spoken between Alice and Bob and the English spoken between Charlie and David do not need to agree at all. There is no centralized infrastructure required for a language to exist, and the rules of English grammar are not created or controlled by any one single person (whereas Esperanto was originally invented by L. L. Zamenhof, though now it functions more like a living language that evolves incrementally with no authority)
Many times when people talk about the virtues of a blockchain, they describe the convenience benefits of having “one central database”; that centralization is logical centralization, and it’s a kind of centralization that is arguably in many cases good (though Juan Benet from IPFS would also push for logical decentralization wherever possible, because logically decentralized systems tend to be good at surviving network partitions, work well in regions of the world that have poor connectivity, etc; see also arguments explicitly advocating logical decentralization).
Logical centralization makes architectural decentralization harder, but not impossible — see how decentralized consensus networks have already been proven to work, but are more difficult to maintain than BitTorrent. And logical centralization makes political decentralization harder — in logically centralized systems, it’s harder to resolve contention by simply agreeing to “live and let live”.
Attack resistance — decentralized systems are more expensive to attack and destroy or manipulate because they lack sensitive central points that can be attacked at much lower cost than the economic size of the surrounding system.
Regarding fault tolerance, the core argument is simple. What’s less likely to happen: one single computer failing, or five out of ten computers all failing at the same time? The principle is uncontroversial, and is used in real life in many situations, including jet engines, backup power generators (particularly in places like hospitals), military infrastructure, financial portfolio diversification, and yes, computer networks.
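As a rough back-of-the-envelope illustration (the 5% failure probability and the independence assumption are mine, not the author's), the sketch below compares the chance that one machine fails with the chance that five out of ten fail at the same time:

from math import comb

# Hypothetical numbers, purely for illustration: each machine fails
# independently with probability p.
p = 0.05
n = 10

def prob_at_least_k_fail(n, k, p):
    # Binomial tail: P(at least k of n independent machines fail).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"P(a single machine fails)         = {p:.4f}")
print(f"P(at least 5 of {n} fail at once)  = {prob_at_least_k_fail(n, 5, p):.8f}")
# With p = 5% the second number comes out below one in ten thousand, but only
# because we assumed the failures are independent, which is exactly the
# assumption that common mode failure (discussed next) breaks.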
However, this kind of decentralization, while still effective and highly important, often turns out to be far less of a panacea than a naive mathematical model would sometimes predict. The reason is common mode failure. Sure, four jet engines are less likely to fail than one jet engine, but what if all four engines were made in the same factory, and a fault was introduced in all four by the same rogue employee?
It is crucially important to have multiple competing implementations.
The technical knowledge behind protocol upgrades must be democratized, so that more people can feel comfortable participating in research discussions and criticizing protocol changes that are clearly bad.
Core developers and researchers should be employed by multiple companies or organizations (or, alternatively, many of them can be volunteers).
Mining algorithms should be designed in a way that minimizes the risk of centralization.
Ideally we use proof of stake to move away from hardware centralization risk entirely (though we should also be cautious of new risks that pop up due to proof of stake).
Now, let’s look at attack resistance. In some pure economic models, you sometimes get the result that decentralization does not even matter. If you create a protocol where the validators are guaranteed to lose $50 million if a 51% attack (ie. finality reversion) happens, then it doesn’t really matter if the validators are controlled by one company or 100 companies — $50 million economic security margin is $50 million economic security margin. In fact, there are reasons why centralization may even maximize this notion of economic security (the transaction selection model of existing blockchains reflects this insight, as transaction inclusion into blocks through miners/block proposers is actually a very rapidly rotating dictatorship).
What does this reasoning lead to? First of all, it pushes strongly in favor of proof of stake over proof of work, as computer hardware is easy to detect, regulate, or attack, whereas coins can be much more easily hidden (proof of stake also has strong attack resistance). Second, it is a point in favor of having widely distributed development teams, including geographic distribution. Third, it implies that both the economic model and the fault-tolerance model need to be looked at when designing consensus protocols.
One simple example is antitrust law — deliberate regulatory barriers that get placed in order to make it more difficult for participants on one side of the marketplace to come together and act like a monopolist and get outsized profits at the expense of both the other side of the marketplace and general social welfare. Another example is rules against coordination in campaign finance, though those have proven difficult to enforce in practice. A much smaller example is a rule in some chess tournaments preventing two players from playing many games against each other to try to raise one player’s score. No matter where you look, attempts to prevent undesired coordination in sophisticated institutions are everywhere.
In the case of blockchain protocols, the mathematical and economic reasoning behind the safety of the consensus often relies crucially on the uncoordinated choice model, or the assumption that the game consists of many small actors that make decisions independently. If any one actor gets more than 1/3 of the mining power in a proof of work system, they can gain outsized profits by selfish-mining. However, can we really say that the uncoordinated choice model is realistic when 90% of the Bitcoin network’s mining power is well-coordinated enough to show up together at the same conference?
However, this presents a fundamental paradox. Many communities, including Ethereum’s, are often praised for having a strong community spirit and being able to coordinate quickly on implementing, releasing and activating a hard fork to fix denial-of-service issues in the protocol within six days. But how can we foster and improve this good kind of coordination, but at the same time prevent “bad coordination” that consists of miners trying to screw everyone else over by repeatedly coordinating 51% attacks?
The first approach makes up a large part of the Casper design philosophy. However, it by itself is insufficient, as relying on economics alone fails to deal with the other two categories of concerns about decentralization. The second is difficult to engineer explicitly, especially for the long term, but it does often happen accidentally. For example, the fact that bitcoin’s core developers generally speak English but miners generally speak Chinese can be viewed as a happy accident, as it creates a kind of “bicameral” governance that makes coordination more difficult, with the side benefit of reducing the risk of common mode failure, as the English and Chinese communities will reason at least somewhat separately due to distance and communication difficulties and are therefore less likely to both make the same mistake.
Welcome to the Haruko digital assets knowledge portal.
We are excited to share a curated database of the best crypto thought pieces and resources, which we hope will be helpful in your digital asset journey.
https://cdixon.org/2018/02/18/why-decentralization-matters
Chris Dixon, Feb 2018
During the first era of the internet — from the 1980s through the early 2000s — internet services were built on open protocols that were controlled by the internet community. This meant that people or organizations could grow their internet presence knowing the rules of the game wouldn’t change later on. Huge web properties were started during this era including Yahoo, Google, Amazon, Facebook, LinkedIn, and YouTube. In the process, the importance of centralized platforms like AOL greatly diminished.
During the second era of the internet, from the mid 2000s to the present, for-profit tech companies — most notably Google, Apple, Facebook, and Amazon (GAFA) — built software and services that rapidly outpaced the capabilities of open protocols. The explosive growth of smartphones accelerated this trend as mobile apps became the majority of internet use. Eventually users migrated from open services to these more sophisticated, centralized services. Even when users still accessed open protocols like the web, they would typically do so mediated by GAFA software and services.
The good news is that billions of people got access to amazing technologies, many of which were free to use. The bad news is that it became much harder for startups, creators, and other groups to grow their internet presence without worrying about centralized platforms changing the rules on them, taking away their audiences and profits. This in turn stifled innovation, making the internet less interesting and dynamic. Centralization has also created broader societal tensions, which we see in the debates over subjects like fake news, state sponsored bots, “no platforming” of users, EU privacy laws, and algorithmic biases. These debates will only intensify in the coming years.
One response to this centralization is to impose government regulation on large internet companies. This response assumes that the internet is similar to past communication networks like the phone, radio, and TV networks. But the hardware-based networks of the past are fundamentally different than the internet, a software-based network. Once hardware-based networks are built, they are nearly impossible to rearchitect. Software-based networks can be rearchitected through entrepreneurial innovation and market forces.
Decentralization is a commonly misunderstood concept. For example, it is sometimes said that the reason cryptonetwork advocates favor decentralization is to resist government censorship, or because of libertarian political views. These are not the main reasons decentralization is important.
Let’s look at the problems with centralized platforms. Centralized platforms follow a predictable life cycle. When they start out, they do everything they can to recruit users and 3rd-party complements like developers, businesses, and media organizations. They do this to make their services more valuable, as platforms (by definition) are systems with multi-sided network effects. As platforms move up the adoption S-curve, their power over users and 3rd parties steadily grows.
When they hit the top of the S-curve, their relationships with network participants change from positive-sum to zero-sum. The easiest way to continue growing lies in extracting data from users and competing with complements over audiences and profits. Historical examples of this are Microsoft vs. Netscape, Google vs. Yelp, Facebook vs. Zynga, and Twitter vs. its 3rd-party clients. Operating systems like iOS and Android have behaved better, although still take a healthy 30% tax, reject apps for seemingly arbitrary reasons, and subsume the functionality of 3rd-party apps at will.
For 3rd parties, this transition from cooperation to competition feels like a bait-and-switch. Over time, the best entrepreneurs, developers, and investors have become wary of building on top of centralized platforms. We now have decades of evidence that doing so will end in disappointment. In addition, users give up privacy, control of their data, and become vulnerable to security breaches. These problems with centralized platforms will likely become even more pronounced in the future.
In short, cryptonetworks align network participants to work together toward a common goal — the growth of the network and the appreciation of the token. This alignment is one of the main reasons Bitcoin continues to defy skeptics and flourish, even while new cryptonetworks like Ethereum have grown alongside it.
Today’s cryptonetworks suffer from limitations that keep them from seriously challenging centralized incumbents. The most severe limitations are around performance and scalability. The next few years will be about fixing these limitations and building networks that form the infrastructure layer of the crypto stack. After that, most of the energy will turn to building applications on top of that infrastructure.
It’s one thing to say decentralized networks should win, and another thing to say they will win. Let’s look at specific reasons to be optimistic about this.
Software and web services are built by developers. There are millions of highly skilled developers in the world. Only a small fraction work at large technology companies, and only a small fraction of those work on new product development. Many of the most important software projects in history were created by startups or by communities of independent developers.
Decentralized networks can win the third era of the internet for the same reason they won the first era: by winning the hearts and minds of entrepreneurs and developers.
The lesson is that when you compare centralized and decentralized systems you need to consider them dynamically, as processes, instead of statically, as rigid products. Centralized systems often start out fully baked, but only get better at the rate at which employees at the sponsoring company improve them. Decentralized systems start out half-baked but, under the right conditions, grow exponentially as they attract new contributors.
In the case of cryptonetworks, there are multiple, compounding feedback loops involving developers of the core protocol, developers of complementary cryptonetworks, developers of 3rd party applications, and service providers who operate the network. These feedback loops are further amplified by the incentives of the associated token, which — as we’ve seen with Bitcoin and Ethereum — can supercharge the rate at which crypto communities develop (and sometimes lead to negative outcomes, as with the excessive electricity consumed by Bitcoin mining).
The question of whether decentralized or centralized systems will win the next era of the internet reduces to who will build the most compelling products, which in turn reduces to who will get more high quality developers and entrepreneurs on their side. GAFA has many advantages, including cash reserves, large user bases, and operational infrastructure. Cryptonetworks have a significantly more attractive value proposition to developers and entrepreneurs. If they can win their hearts and minds, they can mobilize far more resources than GAFA, and rapidly outpace their product development.
Centralized platforms often come bundled at launch with compelling apps: Facebook had its core socializing features and the iPhone had a number of key apps. Decentralized platforms, by contrast, often launch half-baked and without clear use cases. As a result, they need to go through two phases of product-market fit: 1) product-market fit between the platform and the developers/entrepreneurs who will finish the platform and build out the ecosystem, and 2) product-market fit between the platform/ecosystem and end users. This two-stage process is what causes many people — including sophisticated technologists — to consistently underestimate the potential of decentralized platforms.
Decentralized networks aren’t a silver bullet that will fix all the problems on the internet. But they offer a much better approach than centralized systems.
Or consider the problem of network governance. Today, unaccountable groups of employees at large platforms decide how information gets ranked and filtered, which users get promoted and which get banned, and other important governance decisions. In cryptonetworks, these decisions are made by the community, using open and transparent mechanisms. As we know from the offline world, democratic systems aren’t perfect, but they are a lot better than the alternatives.
Centralized platforms have been dominant for so long that many people have forgotten there is a better way to build internet services. Cryptonetworks are a powerful way to develop community-owned networks and provide a level playing field for 3rd-party developers, creators, and businesses. We saw the value of decentralized systems in the first era of the internet. Hopefully we’ll get to see it again in the next.
Let's jump in
The internet is the ultimate software-based network, consisting of a relatively simple core layer connecting billions of fully programmable computers at the edge. Software is simply the encoding of human thought, and as such has an almost unbounded design space. Computers connected to the internet are, by and large, free to run whatever software their owners choose. Whatever can be dreamt up, with the right set of incentives, can quickly propagate across the internet. Internet architecture is where technical creativity and incentive design intersect.
The internet is still early in its evolution: the core internet services will likely be almost entirely rearchitected in the coming decades. This will be enabled by crypto-economic networks, a generalization of the ideas first introduced in Bitcoin and further developed in Ethereum. Cryptonetworks combine the best features of the first two internet eras: community-governed, decentralized networks with capabilities that will eventually exceed those of the most advanced centralized services.
Cryptonetworks are networks built on top of the internet that 1) use consensus mechanisms such as blockchains to maintain and update state, 2) use cryptocurrencies (coins/tokens) to incentivize consensus participants (miners/validators) and other network participants. Some cryptonetworks, such as Ethereum, are general programming platforms that can be used for almost any purpose. Other cryptonetworks are special purpose, for example Bitcoin is intended primarily for storing value, Golem for performing computations, and Filecoin for decentralized file storage.
Early internet protocols were technical specifications created by working groups or non-profit organizations that relied on the alignment of interests in the internet community to gain adoption. This method worked well during the very early stages of the internet but since the early 1990s very few new protocols have gained widespread adoption. Cryptonetworks fix these problems by providing economic incentives to developers, maintainers, and other network participants in the form of tokens. They are also much more technically robust. For example, they are able to keep state and do arbitrary transformations on that state, something past protocols could never do.
Cryptonetworks use multiple mechanisms to ensure that they stay neutral as they grow, preventing the bait-and-switch of centralized platforms. First, the contract between cryptonetworks and their participants is enforced in open source code. Second, they are kept in check through mechanisms for “voice” and “exit.” Participants are given voice through community governance, both “on chain” (via the protocol) and “off chain” (via the social structures around the protocol). Participants can exit either by leaving the network and selling their coins, or in the extreme case by forking the protocol.
“No matter who you are, most of the smartest people work for someone else.” — Bill Joy
An illustrative analogy is the rivalry in the 2000s between Wikipedia and its centralized competitors like Encarta. If you compared the two products in the early 2000s, Encarta was a far better product, with better topic coverage and higher accuracy. But Wikipedia improved at a much faster rate, because it had an active community of volunteer contributors who were attracted to its decentralized, community-governed ethos. By 2005, Wikipedia was the most popular reference site on the internet. Encarta was shut down in 2009.
“If you asked people in 1989 what they needed to make their life better, it was unlikely that they would have said a decentralized network of information nodes that are linked using hypertext.” —
Compare the problem of Twitter spam to the problem of email spam. Since Twitter closed off their network to 3rd-party developers, the only company working on Twitter spam has been Twitter itself. By contrast, there were hundreds of companies that tried to fight email spam, financed by billions of dollars in venture capital and corporate funding. Email spam isn’t solved, but it’s a lot better now, because 3rd parties knew that the email protocol was decentralized, so they could build businesses on top of it without worrying about the rules of the game changing later on.
Originally published on Medium.
https://www.gemini.com/cryptopedia/trustless-meaning-blockchain-non-custodial-smart-contracts
Cryptopedia, Aug 2021
Expecting no one to eat the cake you stored in the fridge isn’t an exercise in trustlessness. “Trust” and “trustless” are related — yet different — concepts.
While trustless is defined by Merriam Webster as “not deserving of trust,” in the blockchain space it means something entirely different. Trustlessness in the blockchain industry simply means you do not need to place your sole trust in any one stranger, institution, or other third party in order for a network or payment system to function. Trustless systems work and achieve consensus mainly through the code, asymmetric cryptography, and protocols of the blockchain network itself. The trustless environments that blockchains have created enable the peer-to-peer (P2P) sending and receiving of transactions, smart contract agreements, and more.
The implications for global commerce are significant. Furthermore, the trustless component of blockchains goes far beyond payments and includes implications for crypto self-custody, smart contracts, and asset trading solutions.
While such an exchange transaction is not trustless in the crypto sense, you may feel more comfortable using a crypto custodian in the same way that you feel more comfortable using a bank to store large sums of money. Holding your own wallet and keys comes with its own set of challenges and requirements to ensure security and access. However, after purchasing on a custodied exchange, you could also choose to withdraw your funds to a trustless non-custodial wallet where you have sole control of your cryptocurrency.
In crypto, you don’t necessarily have to trust any other person (or institution), but there is someone you must trust: yourself. While the self-custody of crypto is referred to as trustless, you must decide if you can trust yourself with this responsibility. This entails safely storing any passwords, having a recovery phrase, and following other best practices. If your passwords are lost or stolen, you might not be able to recover your funds.
One important area of your life where trust plays a crucial role is your personal finances. Most people in the U.S. feel comfortable trusting a third party to store their savings. Likewise, while some worry about the fluctuations in their stock portfolio, they usually aren’t worried about their assets disappearing from their account. This is because there’s a widespread baseline of trust in the banking and financial services sector.
However, the advent of blockchain tech and cryptocurrencies has brought about a new understanding of trust. Blockchain-enabled trustless systems allow you to trust in a process or transaction without having to trust the entity with which you are transacting. This simple, yet revolutionary concept has major implications for our relationship with our personal finances, and far beyond into our everyday lives.
The concept of trustlessness is a core element of blockchain, crypto payments, and decentralized finance (DeFi). “Trustless” means that you don’t have to trust a third party: a bank, a person, or any intermediary that could operate between you and your cryptocurrency transactions or holdings. Depending on how you choose to store, move, and trade your assets, you may have a trustless set-up or a set-up that requires the trust of a third party.
“Don’t trust; verify!” and “In Bitcoin we trust” are trustlessness-related phrases you may come across as you explore the cryptocurrency space. If you are new to crypto, you may ask yourself: “How can I trust code?” If you’re familiar with how blockchain networks work, you likely already know the answer. Essentially, these networks are censorship-resistant and immutable, featuring enhanced security protocols. When you send a crypto transaction, it’s permanent, and the sender can’t reverse the payment. If you’ve ever been on the receiving end of a bounced check, reverted credit card payment, or reversed Paypal transaction, you can likely see how revolutionary this is.
A trustless crypto wallet is a non-custodial wallet. This means your crypto wallet contains the private keys that control the crypto funds associated with it. Since only you control these funds, it’s generally considered trustless.
On the other hand, a custodial wallet isn’t generally considered trustless. You are trusting the “custodian” to hold your assets on your behalf. When you buy crypto on a centralized exchange like Gemini, Huobi, or Kraken, your purchases are automatically stored and secured in your exchange-hosted custodial wallet.
One trustless option that has been evolving in the crypto trading sphere is the decentralized exchange (DEX). When a centralized exchange facilitates your trades, it must be trusted as an intermediary to oversee and transact the trade. Considering it already controls your funds via your centralized exchange wallet, most traders aren’t too concerned about this.
For those who have withdrawn their funds to a non-custodial wallet, how do they trade while maintaining trustlessness? This is possible via trading on a DEX where the trades are executed via smart contracts. One popular example of this is swapping tokens directly on a DEX while maintaining control of your private keys.
These trustless trades can be executed through the use of smart contracts that rely almost exclusively on decentralized code for enforcement rather than on a third party. In short, both sets of crypto assets must be submitted into a smart contract for it to execute the transaction. This structure facilitates trustless trades between strangers. If one party inputs funds and the other does not, generally the funds are just returned to the sender automatically.
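As a loose sketch of that flow (my own toy model in Python, not an actual smart contract or any specific DEX's implementation; real swaps run as on-chain contracts written in languages such as Solidity), the "contract" below only executes once both parties have deposited and otherwise refunds whoever did deposit:

import time

class ToySwapContract:
    # A deliberately simplified, off-chain model of an escrow-style swap:
    # execute when both sides have deposited, refund automatically otherwise.

    def __init__(self, party_a, party_b, deadline_seconds=3600):
        self.expected = {party_a: None, party_b: None}  # deposits by party
        self.deadline = time.time() + deadline_seconds
        self.settled = False

    def deposit(self, party, asset):
        if self.settled or time.time() > self.deadline:
            raise RuntimeError("contract is closed")
        self.expected[party] = asset
        if all(v is not None for v in self.expected.values()):
            self._execute()

    def _execute(self):
        # Both sides funded: hand each party the other's asset.
        (a, asset_a), (b, asset_b) = self.expected.items()
        print(f"swap executed: {a} receives {asset_b}, {b} receives {asset_a}")
        self.settled = True

    def refund_if_expired(self):
        # If the deadline passed with only one deposit, return it to the sender.
        if not self.settled and time.time() > self.deadline:
            for party, asset in self.expected.items():
                if asset is not None:
                    print(f"refunded {asset} to {party}")
            self.settled = True

contract = ToySwapContract("alice", "bob", deadline_seconds=60)
contract.deposit("alice", "1.0 ETH")
contract.deposit("bob", "2000 USDC")   # the second deposit triggers the swap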
For some users, there are potential downsides to relying on such types of trustless systems as well. For example, in a DEX all trades must generally be executed on-chain. This means trades may in some cases be subject to higher transaction fees than would be the case with a CEX. Even failed orders must be validated on-chain, triggering further fees.
While many expound upon how blockchains have eliminated the need for trust, others argue that this trust has simply been transferred — from one set of people and systems to another set of code and consensus mechanisms that run these networks and everything built on them. For example, you have to trust the code to be bug-free. At various points in the development of the burgeoning space, users of trustless systems have experienced malfunctions in trustless platforms, and have even lost their funds due to malicious hackers exploiting vulnerabilities in the code of trustless systems.
Proponents of trustless systems would counter that these blockchains have become increasingly safer and more robust as the DeFi space has evolved. Since blockchain networks generally do not have a central point of failure, trustless systems are practically impossible to shut down. Code audits, bug bounties, and better coding procedures in the industry are making these networks increasingly resilient to hacking and malicious actors. Indeed, with billions of U.S. dollars locked in DeFi protocols, it seems that an increasing number of people feel comfortable putting trust in these trustless systems. You can also purchase smart contract insurance to help protect yourself against potential losses.
https://hackernoon.com/wtf-is-the-blockchain-1da89ba19348
Mohit Mamoria, Jun 2017
Unless you’re hiding under a rock, I am sure you’ve heard of Bitcoin and the Blockchain. After all, they are the trending topics and the media’s favorites these days — the buzzwords of the year. Even people who’ve never mined a cryptocurrency or don’t understand how it works are talking about it. I have more non-technical friends than technical ones. They have been bugging me for weeks to explain this new buzzword to them. I guess there are thousands out there who feel the same. And when that happens, there comes a time to write something to which everyone can point the other lost souls — that’s the purpose of this post, written in plain English that any regular internet user can understand.
“For every complex problem there is an answer that is clear, simple, and wrong.” — H. L. Mencken
Unlike every other post on the internet, instead of first defining the Blockchain, we’ll understand the problem it solves.
Imagine, Joe is your best friend. He is traveling overseas, and on the fifth day of his vacation, he calls you and says, “Dude, I need some money. I have run out of it.”
You reply, “Sending some right away,” and hang up.
You then call your account manager at your bank and tell him, “Please transfer $1000 from my account to Joe’s account.”
Your account manager replies, “Yes, sir.”
He opens up the register, checks your account balance to see if you have enough balance to transfer $1000 to Joe. Because you’re a rich man, you have plenty; thus, he makes an entry in the register like the following:
The Transaction Register
Note: We’re leaving computers out of the picture only to keep things simple.
You call Joe and tell him, “I’ve transferred the money. Next time you go to your bank, you can withdraw the $1000 that I have just transferred.”
What just happened? You and Joe both trusted the bank to manage your money. There was no real movement of physical bills to transfer the money. All that was needed was an entry in the register. Or more precisely, an entry in the register that neither you nor Joe controls or owns.
And that is the problem of the current systems.
To establish trust between ourselves, we depend on individual third-parties.
For years, we’ve depended on these middlemen to trust each other. You might ask, “what is the problem depending on them?”
The problem is that each of them is a single point of failure. To inject chaos into society, all it takes is one person or organization going corrupt, intentionally or unintentionally.
What if that register in which the transaction was logged gets burnt in a fire?
What if, by mistake, your account manager had written $1500 instead of $1000?
What if he did that on purpose?
For years, we have been putting all our eggs in one basket and that too in someone else’s.
Could there be a system where we can still transfer money without needing the bank?
To answer this question, we’ll need to drill down further and ask ourselves a better question (after all, only better questions lead to better answers).
Think about it for a second: what does transferring money mean? Just an entry in the register. The better question would then be —
Is there a way to maintain the register among ourselves instead of someone else doing it for us?
Now, that is a question worth exploring. And the answer is what you might have already guessed. The blockchain is the answer to the profound question.
It is a method to maintain that register among ourselves instead of depending on someone else to do it for us.
Are you still with me? Good. Because now, when several questions have started popping in your mind, we will learn how this distributed register works.
The requirement of this method is that there must be enough people who would like not to depend on a third-party. Only then can this group maintain the register on its own.
“It might make sense just to get some Bitcoin in case it catches on. If enough people think the same way, that becomes a self-fulfilling prophecy.” — Satoshi Nakamoto in 2009
How many are enough? At least three. For our example, we will assume ten individuals want to give up on banks or any third-party. Upon mutual agreement, they have details of each other’s accounts at all times — without knowing each other’s identity.
1. An Empty Folder
Everyone starts with an empty folder. As we progress, these ten individuals will keep adding pages to their currently empty folders. And this collection of pages will form the register that tracks the transactions.
2. When A Transaction Happens
Next, everyone in the network sits with a blank page and a pen in their hands. Everyone is ready to write any transaction that occurs within the system.
Now, suppose #2 wants to send $10 to #9.
To make the transaction, #2 shouts and tells everyone, “I want to transfer $10 to #9. So, everyone, please make a note of it on your pages.”
Everyone checks whether #2 has enough balance to transfer $10 to #9. If she has enough balance, everyone then makes a note of the transaction on their blank pages.
First transaction on the page
The transaction is then considered to be complete.
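Here is a minimal sketch of that step in Python (my own illustration with made-up starting balances): every individual hears the announced transaction, independently checks the sender's balance against their own copy of the register, and only then writes it on their current page.

class Node:
    # One of the ten individuals: keeps its own copy of balances and pages.

    def __init__(self, name, balances):
        self.name = name
        self.balances = dict(balances)  # each node holds its own copy
        self.current_page = []

    def hear_transaction(self, sender, receiver, amount):
        # Everyone checks the sender's balance before writing anything down.
        if self.balances.get(sender, 0) < amount:
            print(f"{self.name}: rejected, {sender} lacks funds")
            return
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.current_page.append((sender, receiver, amount))

# Hypothetical starting balances, just to run the example.
start = {"#2": 50, "#9": 0}
network = [Node(f"node{i}", start) for i in range(10)]

# "#2 shouts": the announcement reaches every node, and each records it.
for node in network:
    node.hear_transaction("#2", "#9", 10)

print(network[0].current_page)   # [('#2', '#9', 10)] on every node's page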
3. Transactions Continue Happening
As time passes, more people in the network feel the need to transfer money to others. Whenever they want to make a transaction, they announce it to everyone else. As soon as a person hears the announcement, (s)he writes it on his/her page.
This exercise continues until everyone runs out of space on the current page. Assuming a page has space to record ten transactions, as soon as the tenth transaction is made, everybody runs out of space.
When page gets filled
It’s time to put the page away in the folder, bring out a new page, and repeat the process from step 2 above.
4. Putting Away The Page
Before we put away the page in our folders, we need to seal it with a unique key that everyone in the network agrees upon. By sealing it, we will make sure that no one can make any changes to it once its copies have been put away in everyone’s folder — not today, not tomorrow and not even after a year. Once in the folder, it will always stay in the folder — sealed. Moreover, if everyone trusts the seal, everyone trusts the contents of the page. And this sealing of the page is the crux of this method.
[Jargon Box] Working on the page to secure it is called ‘mining,’ but for simplicity, we’ll keep calling it ‘sealing.’
Earlier the third-party/middleman gave us the trust that whatever they have written in the register will never be altered. In a distributed and decentralized system like ours, this seal will provide the trust instead.
Before we learn how to seal the page, we’ll see how the seal works in general. As a prerequisite, we need to learn about something that I like to call…
The Magic Machine
Imagine a machine surrounded by thick walls. If you send a box with something inside it from the left, it will spit out a box containing something else.
[Jargon Box] This machine is called ‘Hash Function,’ but we aren’t in a mood to be too technical. So, for today, these are ‘The Magic Machines.’
The Magic Machine (aka Hashing Function)
Suppose, you send the number 4 inside it from the left, we’d find that it spat out the following word on its right: ‘dcbea.’
How did it convert the number 4 to this word? No one knows. Moreover, it is an irreversible process. Given the word, ‘dcbea,’ it is impossible to tell what the machine was fed on the left. But every time you’d feed the number 4 to the machine, it will always spit out the same word, ‘dcbea.’
hash(4) == dcbea
Let’s try sending in a different number. How about 26?
hash(26) == 94c8e
We got ‘94c8e’ this time. Interesting! So, the words can contain the numbers too.
What if I ask you the following question now:
“Can you tell me what should I send from the left side of the machine such that I get a word that starts with three leading zeroes from the right side of it? For example, 000ab or 00098 or 000fa or anything among the others.”
Predicting the input
Think about the question for a moment.
I’ve told you the machine has a property that we cannot calculate what we must send from the left after we’re given the expected output on the right. With such a machine given to us, how can we answer the question I asked?
I can think of one method. Why not try every number in the universe one by one until we get a word that starts with three leading zeroes?
Try everything to calculate the input
Being optimistic, after several thousand attempts, we’ll end up with a number that will yield the required output on the right.
It is extremely difficult to calculate the input given the output, but it is always incredibly easy to verify whether a predicted input yields the required output. Remember that the machine spits out the same word for a given number every time.
How difficult do you think the answer is if I give you a number, say 72533, and ask you the question, “Does this number, when fed into the machine, yield a word that starts with three leading zeroes?”
All you need to do is throw the number into the machine and see what you get on the right side of it. That’s it.
The most important property of such machines is that — “Given an output, it is extremely difficult to calculate the input, but given the input and the output, it is pretty easy to verify if the input leads to the output.”
We’ll remember this one property of the Magic Machines (or Hash Functions) through the rest of the post:
Given an output, it is extremely difficult to calculate the input, but given an input and output, it is pretty easy to verify if the input leads to the output.
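In real blockchains the magic machine is a cryptographic hash function such as SHA-256. Here is a small Python sketch of the property above (the three-leading-zeroes target is kept only as a toy difficulty, and real outputs are 64 hex characters rather than five-letter words like ‘dcbea’):

import hashlib

def magic_machine(data: str) -> str:
    # The stand-in "magic machine": SHA-256, returned as a hex string.
    return hashlib.sha256(data.encode()).hexdigest()

# Easy direction: the same input always yields the same word.
print(magic_machine("4"))   # deterministic 64-character word

# Hard direction: to find an input whose word starts with "000",
# the only known strategy is to try candidates one by one.
candidate = 0
while not magic_machine(str(candidate)).startswith("000"):
    candidate += 1
print(candidate, magic_machine(str(candidate)))

# Verification is cheap: a single run of the machine checks the claim.
assert magic_machine(str(candidate)).startswith("000")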
How to use these machines to seal a page?
We’ll use this magic machine to generate a seal for our page. Like always, we’ll start with an imaginary situation.
Imagine I give you two boxes. The first box contains the number 20893. I, then, ask you, “Can you figure out a number that when added to the number in the first box and fed to the machine will give us a word that starts with three leading zeroes?”
This is a similar situation as we saw previously and we have learned that the only way to calculate such a number is by trying every number available in the entire universe.
After several thousand attempts, we’ll stumble upon a number, say 21191, which when added to 20893 (i.e. 21191 + 20893 = 42084) and fed to the machine, will yield a word that satisfies our requirements.
In such a case, this number, 21191, becomes the seal for the number 20893. Assume there is a page that bears the number 20893 written on it. To seal that page (i.e. so that no one can change its contents), we will put a badge labeled ‘21191’ on top of it. As soon as the sealing number (i.e. 21191) is stuck on the page, the page is sealed.
The sealed number
[Jargon Box] The sealing number is called ‘Proof Of Work,’ meaning that this number is the proof that efforts had been made to calculate it. We are good with calling it ‘sealing number’ for our purposes.
If anyone wants to verify whether the page was altered, all he would have to do is add the contents of the page to the sealing number and feed the result to the magic machine. If the machine gives out a word with three leading zeroes, the contents were untouched. If the word that comes out doesn’t meet our requirements, we can throw away the page because its contents were compromised and are of no use.
We’ll use a similar sealing mechanism to seal all our pages and eventually arrange them in our respective folders.
Finally, sealing our page…
To seal our page that contains the transactions of the network, we’ll need to figure out a number that, when appended to the list of transactions and fed to the machine, gives us a word that starts with three leading zeroes on the right.
Note: I have been using the phrase ‘word starting with three leading zeroes’ only as an example. It illustrates how Hashing Functions work. The real challenges are much more complicated than this.
Once that number is calculated after spending time and electricity on the machine, the page is sealed with that number. If someone ever tries to change the contents of the page, the sealing number will allow anyone to verify the integrity of the page.
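Here is the sealing step as a small Python sketch, again using SHA-256 as a stand-in for the magic machine and ‘000’ as a toy difficulty (both are illustrative assumptions, not the real Bitcoin parameters):

import hashlib

def machine(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def find_seal(page_contents: str, difficulty: str = "000") -> int:
    # Brute-force the sealing number (the 'proof of work') for a page.
    seal = 0
    while not machine(page_contents + str(seal)).startswith(difficulty):
        seal += 1
    return seal

def verify_seal(page_contents: str, seal: int, difficulty: str = "000") -> bool:
    # Anyone can check a sealed page cheaply with a single run of the machine.
    return machine(page_contents + str(seal)).startswith(difficulty)

page = "#2 pays #9 $10; #5 pays #3 $20"   # stand-in for the ten transactions
seal = find_seal(page)
print("sealing number:", seal)
print("page intact?   ", verify_seal(page, seal))                 # True
print("tampered page? ", verify_seal(page + " (edited)", seal))   # almost certainly False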
Now that we know about sealing the page, we will go back to the time when we had finished writing the tenth transaction on the page, and we ran out of space to write more.
As soon as everyone runs out of space to write further transactions, they set about calculating the sealing number for the page so that it can be tucked away in the folder. Everyone in the network does the calculation. The first one in the network to figure out the sealing number announces it to everyone else.
Immediately on hearing the sealing number, everyone verifies whether it yields the required output. If it does, everyone labels their pages with this number and puts them away in their folders.
But what if for someone, say #7, the sealing number that was announced doesn’t yield the required output? Such cases are not unusual. The possible reasons for this could be:
He might have misheard the transactions that were announced in the network
He might have miswritten the transactions that were announced in the network
He might have tried to cheat or be dishonest when writing transactions, either to favor himself or someone else in the network
No matter what the reason is, #7 has only one choice — to discard his page and copy it from someone else so that he too can put it in the folder. Unless he puts a valid page in the folder, he cannot continue writing further transactions, which effectively bars him from being part of the network.
Whatever sealing number the majority agrees upon, becomes the honest sealing number.
Then why does everyone spend resources doing the calculation when they know that someone else will calculate and announce it to them? Why not sit idle and wait for the announcement?
Great question. This is where the incentives come into the picture. Everyone who is part of the Blockchain is eligible for rewards. The first one to calculate the sealing number gets rewarded with free money for his efforts (i.e. expended CPU power and electricity).
Simply imagine, if #5 calculates the sealing number of a page, he gets rewarded with some free money, say $1, that gets minted out of thin air. In other words, the account balance of #5 gets incremented with $1 without decreasing anyone else’s account balance.
That’s how Bitcoin came into existence. It was the first currency to be transacted on a Blockchain (i.e. distributed registers). And in return, to keep the efforts going on in the network, people were awarded Bitcoins.
When enough people possess Bitcoins, they grow in value, making other people want Bitcoins; making Bitcoins grow in value even further; making even more people want Bitcoins; making them grow in value even further; and so on.
The rewards make everyone keep working in the network.
And once everyone tucks away the page in their folders, they bring out a new blank page and repeat the whole process all over again — doing it forever.
[Jargon Box] Think of a single page as a Block of transactions and the folder as the Chain of pages (Blocks), therefore, turning it into a Blockchain.
And that, my friends, is how Blockchain works.
Except that there’s one tiny thing I didn’t tell you. Yet.
Imagine there are five pages in the folder already — all sealed with a sealing number. What if I go back to the second page and modify a transaction to favor myself? The sealing number will let anyone detect the inconsistency in the transactions, right? What if I go ahead and calculate a new sealing number too for the modified transactions and label the page with that instead?
To prevent this problem of someone going back and modifying a page (Block) as well as the sealing number, there’s a little twist to how a sealing number is calculated.
Remember how I told you that I had given you two boxes — one containing the number 20893 and another empty for you to calculate? In reality, to calculate the sealing number in a Blockchain, instead of two boxes, there are three — two pre-filled and one to be calculated.
And when the contents of all those three boxes are added and fed to the machine, the answer that comes out from the right side must satisfy the required conditions.
We already know that one box contains the list of transactions and one box will contain the sealing number. The third box contains the output of the magic machine for the previous page.
With this neat little trick, we have made sure that every page depends on its previous page. Therefore, if someone has to modify a historical page, he would also have to change the contents and the sealing number of all the pages after that, to keep the chain consistent.
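To make the “every page depends on its previous page” trick concrete, here is a small sketch that extends the sealing code above. Again I am assuming SHA-256 and a toy difficulty; the point is only that each page’s seal now also covers the previous page’s output, so touching an old page breaks every seal that comes after it.

import hashlib

def hash_page(txs: str, prev_hash: str, number: int) -> str:
    return hashlib.sha256(f"{txs}{prev_hash}{number}".encode()).hexdigest()

def seal_page(txs: str, prev_hash: str, difficulty: int = 3):
    number = 0
    while True:
        h = hash_page(txs, prev_hash, number)
        if h.startswith("0" * difficulty):
            return number, h
        number += 1

# Build a small folder (chain) of three pages (blocks).
folder, prev = [], "0" * 64          # a fixed value stands in for "no previous page"
for txs in ["page 1 txs", "page 2 txs", "page 3 txs"]:
    number, h = seal_page(txs, prev)
    folder.append({"txs": txs, "prev": prev, "number": number, "hash": h})
    prev = h                         # the next page commits to this page's output

# Tampering with the first page breaks its seal, and with it the whole chain.
folder[0]["txs"] = "page 1 txs (modified)"
print(all(
    hash_page(p["txs"], p["prev"], p["number"]).startswith("000") and
    (i == 0 or p["prev"] == folder[i - 1]["hash"])
    for i, p in enumerate(folder)
))   # False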
If one individual, out of the ten we imagined in the beginning, tries to cheat and modify the contents of the Blockchain (the folder containing the pages with the list of transactions), he would have to adjust several pages and also calculate the new sealing numbers for all those pages. We know how difficult it is to calculate the sealing numbers. Therefore, one dishonest guy in the network cannot beat the nine honest guys.
What will happen is that, from the page where the dishonest guy tries to cheat, he will be creating another chain in the network, but that chain can never catch up with the honest chain, simply because one guy’s efforts and speed cannot beat the cumulative efforts and speed of nine. Hence the guarantee that the longest chain in the network is the honest chain.
Longest chain is the honest chain.
When I told you that one dishonest guy cannot beat nine honest guys, did it ring any bell in your head? What if, instead of one, a majority of the individuals turn dishonest?
In that case, the protocol will fall flat on its face. This is known as the “51% Attack”: if the majority of the individuals in the network decide to turn dishonest and cheat the rest of the network, the protocol fails its purpose.
And that is the one vulnerability through which Blockchains could collapse, if they ever do. Know that it is unlikely to happen, but we must all be aware of the weak points of the system: it is built on the assumption that the majority of a crowd is always honest.
And that, my friends, is all there is about Blockchains. If you ever find someone feeling left behind and wondering, “WTF is the Blockchain?” you know where you can point them to. Bookmark the link.
Can think of someone right now who should read this? The ‘Share’ button is all yours.
—
About the author
Mohit Mamoria is the curator of a weekly newsletter, , which delivers one idea from the future to your inboxes.
http://www.daviddfriedman.com/Academic/privacy_chapter/privacy.htm
David Friedman, 2005
An old science fiction novel features a device that surrounds its bearer with an impenetrable bubble of force. The inventor rapidly discovers that every government and political faction on the planet wants what he has and is prepared to use any means, from persuasion to brute force, to get it. Our hero spends most of the book alternately listening to arguments, trying to decide who are the good guys and using his invention to help him escape attempts to capture him.
After about a hundred and fifty pages he realizes that he has been asking the wrong question. The answer to "what faction can be trusted with a monopoly over the shield" is "no." The right question is how the shield will affect the world--how it will alter the balance between freedom and oppression, individual and state, small and big. The answer to that is easy. A world where the random individual is armored against anything short of an atomic explosion will be, on net, a better and freer world than the one he is currently living in. He writes out an explanation of how the shield works and spends two days distributing the information to people all over the world. By the time Military Security--the most formidable of his pursuers--catches up with him, it is too late. The cat is out of the bag.
Poul Anderson's shield is fiction. The nearest real world equivalent is privacy--my control over other people's access to information about me. Neither my government nor my neighbor can punish my thoughts, because neither can read my mind. That is why thoughts are free. However much other people are offended by what I write, they cannot retaliate unless they know who wrote it, what he looks like, where he lives. That is why Salman Rushdie is still alive despite the death sentence passed on the author of The Satanic Verses fifteen years ago by Iranian authorities.
Defensive weapons can be used for bad purposes; an impenetrable shield would be very useful for a bank robber. But it would be even more useful for the bank teller. Robbing banks would be harder in a world where everyone had the shield than in a world where nobody did.
The ability to control other people's access to information about you can be used for bad purposes too. That is the usual argument against privacy--"If you haven't done anything wrong, what do you have to hide?" The ability to conceal past crimes from the police and potential victims is useful to a robber. But the ability to conceal what I have that is worth stealing, where it is, how it is protected, is equally useful to the potential victim. Broadly stated, privacy gives each of us more control over his own life--which on average, if not in every case, is likely to lead to a freer world.
If I am a bad guy, the police are not the only people I might want to keep secrets from. When courting a wealthy widow, it helps if she does not know that my last three wives drowned in their bath tubs after taking out large life insurance policies. When borrowing money, it helps if the lender does not know that I have declared bankruptcy twice already.
But in a world of voluntary transactions--such as loans and marriages--my privacy does not require you to take me on faith. You have the option of not taking me. I have the power to keep my past defaults secret from a potential lender but he has the power to refuse to lend to me if I do. Privacy is my ability to control other people's access to information about me. That does not mean that they cannot get the information--only that they cannot get it without my permission. Someone who offers to take care of my children but refuses to allow me access to the records that would show whether or not he has ever been convicted of child abuse has already told me all I need to know.
In some contexts I am willing to let other people know things about me. In others I am eager to. If only lenders knew a little more about my finances I would not be interrupted at dinner by phone calls from people offering to refinance my nonexistent mortgage. If sellers were better informed about what sorts of things I was interested in buying, advertisements would be less of a nuisance and more of a service. Even in a world where I could keep information secret, I often would choose not to. Privacy provides me protection when I want it and only when I want it.
Privacy and Government
"Government is not reason. It is not eloquence. It is a force, like fire:
a dangerous servant and a terrible master."
George Washington
Privacy includes the ability to keep things secret from the government. The better I can do that, the less able government is to help me--I might be keeping secret my weakness for alcohol, or heroin, or gambling or pornography and so preventing the government from stepping in to protect me from myself. And the better other people can keep secrets from the government, the harder it is for the government to protect me from them. If you view government as a benevolent super being watching over you--a wise and kindly uncle with a long white beard--you will and should reject much of what I am saying.
But government is not Uncle Sam or a philosopher king. Government is a set of institutions through which human beings act for human purposes. Its special feature--what differentiates political action from the other ways in which we try to get what we want--is that government is permitted to use force to make people do things. A firm can try to fool me into giving it my money. A tax collector uses more direct methods. A preacher can try to persuade me to renounce my sins. The Drug Enforcement Administration, with the help of the local police, can arrange to have me locked up until I do.
Part of the genius of American political culture is the recognition that making it hard for governments to control people is not always a bad thing. Political mechanisms, even in a democracy, give us only very limited control over what government can do to us. Reducing government's ability to do bad things to us, at the cost of limiting its ability to protect us from bad things done to us by ourselves or by other people, may not be such a bad deal. And since government, unlike a private criminal, has overwhelming superiority of physical force, control over what information it can get about me is one of the few ways in which I can limit its ability to control me.
I have defined privacy and sketched the reasons why I think it is, on the whole, a good thing. The obvious next questions are where privacy comes from--what determines how much of it we have--and what we can and should do to get more of it.
Where Does Privacy Come From?
One of the things that determines how much control I have over other people's access to information about me is technology. If someone invents a mind reading machine or a reliable truth drug, my thoughts will no longer be as private as they now are. Or as free.
Another is custom--systems of social norms. The more willing my friends and neighbors are to gossip about something, the easier it is for information about that something to get from those who have it to those who want it. That is one reason why Israelis are better informed about how much money their friends and relations make than Americans are and modern Americans better informed about other people's sex lives than nineteenth century Britons were.
A final factor is law. In the U.S., the Fourth Amendment to the Constitution prohibits "unreasonable searches and seizures" and requires that search warrants shall only be issued with probable cause. The more narrowly courts interpret that restriction, the easier it is to keep secrets from the police. One important example is the series of cases that applied the restriction to wiretaps as well as physical searches. Later cases have ruled on to what extent the use of high tech devices to figure out what people are doing inside their houses--infrared photographs to spot illegal greenhouses growing marijuana, for example--is a search and so requires a warrant.
Law and technology interact in complicated ways. For your neighbor's nosy fifteen year old to use a scanner to listen to the phone calls you make on your wireless phone and tell his friends about them is illegal. It is also easy, making that particular legal protection of privacy in practice unenforceable. The substitute is technology--encryption of the signal from the handset to the base station. Similarly with cell phones.
As these examples suggest, technological developments can both decrease and increase privacy. So can law. Legal rules that ban or limit technologies for learning things about other people, such as laws against wiretaps, increase privacy. Legal rules that ban or limit technologies for preventing other people from learning things about us, such as restrictions on the use of encryption, decrease it.
Privacy and Technology: The Dark Side of the Force
It used to be that one reason to move from a village to the big city was to get more privacy. Walls were no higher in the city, windows no less transparent. But there were so many more people. In the village, interested neighbors could keep track of who was doing what with whom. In the city, nobody could keep track of everyone.
That form of privacy--privacy through obscurity--is doomed. I cannot keep track of the million people who share the city I live in. But the computer on my desk has enough space on its hard drive to hold a hundred pages of information on every man, woman and child in San Jose. With a few hundred dollars worth of additional storage, I could do it for everyone in California, for a few thousand, everyone in the country. And I can do more than store the information. If I had it I could search it--produce, in a matter of seconds, a list of those of my fellow citizens who are left handed gun owners with more than six children. Privacy through obscurity cannot survive modern data processing.
As it happens, I do not have a hundred pages worth of information on each of my fellow citizens. But with a little time and effort--too much for a single individual, but not too much for a government, a police department, or a large firm--I could. It is hard to pass through the world without leaving tracks. Somewhere there is a record of every car I have registered, every tax form I have filed, two marriages, one divorce, the birth of three children, thousands of posts to online forums on a wide variety of subjects, four published books, medical records and a great deal more.
Much such information, although not all of it, was publicly available in the past. But actually digging it up was a lot of work. The result was that most of us went through life reasonably sure that most of the people we met did not know much about us beyond what we chose to tell them. That will not be true in the future.
Data processing is one technology with the potential to sharply reduce privacy. Another is surveillance. One form--already common in England--is a video camera on a pole.
A video camera in a park connected to a screen with a police officer watching it is, at first glance, no more a violation of privacy than the same police officer standing in the park watching what is going on. It merely lets the officer do his watching somewhere warm and out of the wet. Add a video recorder and it is arguably an improvement, since the evidence it produces is less subject to mistake or misrepresentation than the memory of the policeman. And, judging by British experience, such surveillance cameras are an effective way of reducing crime. What's the problem?
To see the answer, add one more technology--face recognition software. Combine that with a database, put up enough cameras, and we have a record of where everyone was any time of the day and--with suitable cameras--night. The arresting officer, or the prosecuting attorney, no longer has to ask the defendant where he was at eight P.M. of July ninth. All he has to do is enter the defendant's social security number and the date and the computer will tell him. And, if the defendant was in a public place at the time, show him.
For a slightly lower tech version of the same issue, consider the humble phone tap. In the past, the main limit on how many phones got tapped by police was not the difficulty of getting a court order but the cost of implementing it. Phone taps are labor intensive--someone has to listen to a lot of phone calls in order to find the ones that matter.
That problem has now been solved. Voice recognition software originated by companies such as Dragon Systems and IBM lets computers convert speech into text--a boon for computer users who are slow typists. The same technology means that the police officer listening to someone else's phone calls can now be replaced by a computer. Only when it gets a hit, spots the words or phrases it has been programmed to listen for, does it need to call in a human being. Computers work cheap.
In an old comedy thriller (The President's Analyst, starring James Coburn) the hero, having temporarily escaped his pursuers and made it to a phone booth, calls a friendly CIA agent to come rescue him. When he tries to leave the booth, the door won't open. Down the road comes a phone company truck loaded with booths. The truck's crane picks up the one containing the analyst, deposits it in the back, replaces it with an empty booth and drives off.
A minute later a helicopter descends containing the CIA agent and a KGB agent who is his temporary ally. They look in astonishment at the empty phone booth. The American speaks first:
"It can't be. Every phone in America tapped?"
The response (you will have to imagine the Russian accent)
"Where do you think you are≠Russia?"
A great scene in a very funny movie--but it may not be a joke much longer. The digital wiretap bill, pushed through Congress by the FBI a few years ago, already requires phone companies to provide law enforcement with the ability to simultaneously tap one percent of all phones in a selected area. There is no obvious reason why that cannot be expanded in the future. My current estimate is that the dedicated hardware to do the listening part of the job--for every phone call in the U.S.--would cost less than a billion dollars. And it is getting cheaper.
So far I have been discussing technologies that already exist. Fast forward a little further and surveillance need no longer be limited to public places. Video cameras are getting smaller. It should not be all that long before we can build one with the size--and the aerodynamic characteristics--of a mosquito.
Here again, if we regard government law enforcement agents as unambiguously good guys, there is no problem. The better our record of where everyone was when, the easier it will be to catch and convict criminals.
The same technology would make keeping track of dissidents, or political opponents, or members of an unpopular religion, or people with the wrong sexual tastes, or people who read the wrong books, or anyone else, a great deal easier than it now is. It is true that the random government is rather less likely to have bad intentions than the random criminal. But if it does have bad intentions it can do a great deal more damage.
The technologies I have been discussing so far--database and face recognition software, surveillance hardware--have the potential to make this a much less private world. So do other technologies that I have not covered: improvements in lie detectors and interrogation drugs to learn what we think, biometric identification by fingerprints, retinal patterns, DNA to learn who we are, with or without our permission. The future implications of such developments are sufficiently strong to have convinced at least one thoughtful observer that the best we can hope for in the future is a transparent society, a world without privacy where the police can watch us but we can also watch them (Brin 1999). I would find the symmetry of that future more appealing if it did not conceal an important asymmetry: They can arrest us and we cannot arrest them.
But there are other technologies.
Encryption: A World of Strong Privacy
We start with an old problem: How to communicate with someone without letting other people know what you are saying. There are a number of familiar solutions. If worried about eavesdroppers, check under the eaves. To be safer still, hold your private conversation in the middle of a large, open field, or a boat in the middle of a lake. The fish are not interested and nobody else can hear.
That no longer works. The middle of a lake is still within range of a shotgun mike. Eaves do not have to contain eavesdroppers--just a microphone and a transmitter. Phone lines can be tapped, cordless or cell phone messages intercepted. An email bounces through multiple computers on its way to its destination--anyone controlling one of those computers can save a copy for himself.
The solution is encryption. Scramble the message. Provide the intended recipient with the formula for unscrambling it. Now it does not matter if someone intercepts your mail. He can't read it.
There is still a problem. In order to read my scrambled message you need the key--the formula describing how to unscramble it. If I do not have a safe way of sending you messages, I may not have a safe way of sending you the key either. If I sent it by a trusted messenger but made a small mistake as to who he was really working for, someone else now has a copy and can use it decrypt my future messages to you.
About twenty-five years ago, this problem was solved. The solution is public key encryption. It works by using two keys, each of which decrypts what the other encrypts. One of the two--my public key--I make available to anyone who might want to send me a message. The other never leaves my hands. Someone who wants to communicate with me encrypts his messages with my public key. I use my private key to decrypt them.
Public key encryption provides a free bonus--digital signatures. In order to prove that a message was sent by me I can encrypt it using my private key. The recipient decrypts it using my public key. The fact that what comes out is text rather than gibberish proves it was encrypted with the matching private key--which only I have. Hence, unless I have been very careless, the message is from me.
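For readers who want to try this, here is a small sketch using Python's `cryptography` package. The choice of RSA with OAEP and PSS padding is mine, purely for illustration; note also that modern libraries expose the "encrypt with the private key" idea described above as a separate sign/verify operation rather than literal encryption.

# pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Key pair: the private key never leaves my hands; the public key is published.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Encryption: anyone can encrypt to me with my public key; only I can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"meet me at noon", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)

# Digital signature: I sign with my private key; anyone can verify with my public key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
message = b"this message is from me"
signature = private_key.sign(message, pss, hashes.SHA256())
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")        # only the holder of the private key could have produced it
except InvalidSignature:
    print("signature invalid")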
Imagine a world where public key encryption is in general use. Add in related technologies such as anonymous digital money, to permit payments that leave no paper trail, and anonymous remailers, to keep who I am talking to, as well as what I am saying, private--for details see (Friedman 1996). In that world I can email someone--anyone--with reasonable certainty that nobody else can read the message. I can have telephone conversations without worrying about who may be listening. In that world I can if I wish establish an online persona--an identity defined by my digital signature--while keeping control over the link between that and my realspace persona. However much my online persona offends someone--even the rulers of Iran--there is very little anyone can do about it. It is hard to murder someone when you don't know his name, what he looks like, or what continent he is on.
I have been describing things we already know how to do. Most can already be done--using free software that runs on the computers most of us have. I now take a small step forward to add one more element to the mix: Virtual reality. Using goggles and earphones--if we are willing to step further into science fiction, direct links between mind and computer--we create the illusion of seeing, hearing, perhaps tasting and touching. The world of strong privacy expands from text messages and phone conversations to something very much like the real world we currently live in. Just let your fingers do the walking.
Combine and Stir
I have described two clusters of technologies. One--database, voice and text recognition, surveillance--has the potential to reduce privacy to the point where those who control the technology know very nearly everything that everyone does. The other--encryption, online communication, virtual reality--has the potential to increase privacy to the point where individuals have nearly total control over other people's access to information about them. What if we get both?
It will be an interesting world. Everything you do in realspace will be known to the authorities, perhaps to everyone--David Brin's Transparent Society. But most of the important stuff--all transactions involving information, ideas, arguments, beliefs--will have been moved to cyberspace, protected by the strong privacy of encryption. Freedom of speech will no longer depend on how the Supreme Court interprets the First Amendment. It will be protected, instead, by the laws of mathematics--which so far, at least, heavily favor defense over offense, encryption over cracking.
There will be--already have been--attempts to use law to block both futures. Supporters of privacy will try to get laws restricting the ability of law enforcement--and other people--to use technology to learn our secrets. Opponents of privacy will try to get laws restricting the ability of private individuals to use encryption to protect their secrets.
Technology and Law
There are at least two legal approaches to preserving privacy in the face of technologies such as computer databases and surveillance. One is to use law to prevent other people from getting information--a data base is of no use if there is nothing in it. The other is to permit other people to get information but use law to limit what they can do with it.
An example of the first approach is regulation of wire tapping and other forms of surveillance--both laws against private surveillance and laws restricting surveillance by law enforcement agents. Such restrictions can keep some information about me from getting to other people. But they do nothing to protect the vast amount of information that I generate by going about my daily life in the public view--buying and selling, marrying and getting divorced, writing and talking.
An example of the second approach is the web of restrictions, legal, contractual, and customary, on the use of confidential information. I cannot keep my doctor from having access to the medical information he creates when he examines me and uses when he prescribes for me. But I can, to some limited degree, prevent him from sharing that information with other people. Credit bureaus are free to collect information on people in order to advise other people as to whether to lend them money but, under current Federal law, they are only permitted to release that information in response to requests from people who have a legitimate need for it.
As the example of credit bureaus suggests, there are practical difficulties with protecting privacy by letting other people have information and then controlling what they do with it. Credit agencies could not serve their intended purpose at any reasonable cost if they engaged in an extensive investigation of everyone who asked for information. And even if the agency limits itself to giving the information to people who can prove they are entitled to it, there is no way it can control who they then give it to. It is probably prudent to assume that what the credit agency knows about you anyone else can know if he really wants to. The forms you sign when you shift to a new doctor include an extensive list of people to whom and circumstances under which your medical information will be made available, so it might be equally prudent not to rely too much on your medical privacy.
As long as we limit our options to current technologies for protecting privacy, the outlook does not look good. We might succeed in restricting the use of surveillance, wiretapping, and similar technologies, although attempts to restrict their use by law enforcement face serious opposition by those concerned with the threat of crime and terrorism. But most information about us is public, and once information is out it is hard to control how other people use it or who they give it to.
The technologies of strong privacy offer at least a partial solution. If I make a purchase with a credit card, I create a paper trail--someone, somewhere, knows what I bought. Even if I use cash, a purchase in real space requires me to walk into a store where someone sees me--the information about what I bought is now his as well as mine. In a world where the relevant software is a little better than it now is--say ten years in the future--that someone is a store video camera linked to facial recognition software linked to a database. Stores, after all, like to know who their customers are.
If, however, I buy something over the phone or over the internet, using the digital equivalent of cash--anonymous digital currency--only I know that I bought it. If the something is not a physical object that must be delivered to me but information--music, data, software--I can collect my purchase online without ever revealing my identity or location.
Thus the technologies of encryption and computer networking can permit us, to a considerable extent, to move through the world without leaving footprints behind. If I want to receive advertising based on my past purchases--as it happens I often do--I can choose to make those purchases under my real name and provide my real address. If I want to receive the advertising without making my acts publicly observable--perhaps I am purchasing pornography--I can do it via an online identity. The link that ties my realspace body to my cyberspace persona is under my control. I have privacy--control over other people's access to information about me.
If we go a little further into science fiction I could even have privacy from my doctor. He knows the information that an examination--via remote controlled devices--revealed about me. He does not need to know what my name is, my face looks like, or where I live. It is not likely that I would want to carry my privacy that far--but I could.
So far I have been considering ways in which we might preserve privacy against the threat posed by technology. But there is another side to the story. For those who think that we already have too much privacy, what I view as the solution may look more like the problem. There have already been attempts to restrict the use of encryption to protect privacy. There will be more.
Suppose I concede, at least for the purposes of argument, that it is possible to have too much privacy as well as too little. Further, and less plausibly, suppose I believed that the strong privacy provided by encryption is a serious problem. How might one use law to solve it?
One difficulty is that encryption regulation poses the problem summed up in the slogan--"when guns are outlawed, only outlaws have guns." The mathematics of public key encryption have been public for decades. The software to do it already exists in a variety of forms, some of them freely available. Given the nature of software, once you have a program you can make an unlimited number of copies. Keeping encryption software out of the hands of spies, terrorists, and competent criminals is not a practical option. They probably have it already, and if they don't they can easily get it. The only people affected by a law against encryption software are the law abiding.
What about banning or restricting the use of encryption--at least encryption that cannot be broken by law enforcement agents? To enforce such a ban law enforcement agencies could randomly monitor all communication systems, looking for illegally encrypted messages. One practical problem is the enormous volume of information flowing over computer networks. A second and even more intractable problem is that while it is easy enough to tell whether a message consists of text written in English, it is very much harder--in practice impossible--to identify other sorts of content well enough to be sure that they do not contain encrypted messages.
Consider a three million pixel digital photo. To conceal a million character long encrypted message--an average sized novel--I replace the least significant bit of each of the numbers describing the color of a pixel with one bit of the message. The photo is now a marginally worse picture than it was--but there is no way an FBI agent, or a computer working for an FBI agent, can know precisely what the photo ought to look like.
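A toy version of that trick, operating on a plain list of pixel color values rather than a real photo format, might look like this; the helper names are my own.

def embed(pixels, message):
    """Hide the message in the least significant bit of each pixel value."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, length):
    """Read `length` bytes back out of the least significant bits."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        out.append(byte)
    return bytes(out)

image = list(range(256)) * 100          # stand-in for pixel color values
secret = b"an average sized novel, abridged"
stego = embed(image, secret)            # image barely changes: each value shifts by at most 1
print(extract(stego, len(secret)))      # b'an average sized novel, abridged'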
Short of banning communication over computer networks or at least restricting it to text messages, there is no way that law enforcement can keep sophisticated criminals, spies, or terrorists from using encryption. What can be done is to put limits on the encryption software used by the rest of us--to insist, for example, that if AOL or Microsoft builds encryption into their programs it must contain a back door permitting properly authorized persons to read the message without the key.
This still leaves the problem of how to give law enforcement what it wants without imposing unacceptably high costs on the rest of us. Consider the description of adequate regulation given by Louis Freeh, at the time the head of the FBI--the ability to crack any encrypted message in half an hour. The equivalent in realspace would be legal rules that let properly authorized law enforcement agents open any lock in the country in half an hour. That includes not only the lock on your front door but the locks protecting bank vaults, trade secrets, lawyers' records, lists of contributors to unpopular causes, and much else.
Encryption provides the locks for cyberspace. If all legal encryption comes with a mandatory back door accessible in half an hour to any police officer with a court order, everything in cyberspace is vulnerable to a private criminal with the right contacts. Those locks have billions of dollars worth of stuff behind them--money in banks, trade secrets in computers and in messages. If being a police officer gives you access to locks with billions of dollars behind them, in cash, diamonds, or information, some cops will become criminals and some criminals will become cops.
In one important way, the consequence for cyberspace is even worse than the equivalent in realspace. If a police officer opens a safe and pockets a stack of cash or a bag of diamonds, the owner can see that something is missing and demand it back. But when information is copied the original is still there. If the officer who has decrypted your communications or stored data assures you that he found nothing relevant to his investigation and so took nothing away, there is no way to prove he is lying.
For encryption regulation to be useful it must either prevent the routine use of encryption or make it easy for law enforcement agents to access encrypted data and messages. Not only would that seriously handicap routine transactions, it would make computer crime easier by restricting the technology best suited to defend against it. And what we get in exchange is protection not against the use of encryption by sophisticated criminals and terrorists--there is no way of providing that--but only against its use by ordinary people and unsophisticated criminals. It does not look like a very attractive deal.
Privacy, Freedom and Government
Some years ago Professor Etzioni, who has contributed a chapter to this volume, published a book arguing for some restrictions on privacy as ways of promoting the common good. In reading it, I was struck by two differences between our views that explain much of the difference in our conclusions.
The first was that I did, and he did not, define privacy in the context of freedom of association. Consider the question of airlines requiring their pilots to be tested for drugs and alcohol. Professor Etzioni regards that as a (desirable) restriction on the pilots' privacy. I agree that it is desirable but not that it restricts privacy.
In a society where privacy is protected you have a right not to be tested. You do not have a right to be hired to fly airplanes--and, if you choose to exercise your right not to be tested, you should not be surprised if the airline exercises its right not to hire you. The background legal principle is not that I have a right to be hired as a pilot or that United Airlines has a right to have me fly their planes. The background principle is that they can hire me to fly their planes if and only if both they and I agree. Given that principle of free association many--although not all--of the problems that Professor Etzioni sees with privacy vanish.
The second difference has to do with our different views of government. While Professor Etzioni makes occasional references to the risk of some future oppressive government misusing information, he does not take seriously similar concerns with regard to our current government. His implicit assumption is that government is to be viewed as a benevolent agent standing above the human struggle, not as a mechanism through which individuals seek to achieve their goals, often at the expense of other individuals. That is not a view that strikes me as realistic.
Conclusion
Privacy, like almost anything else, can be used for good or bad purposes. My thesis in this chapter is that, on net, more privacy makes the world a better place. It does so because it is an essentially defensive weapon, a way of reducing the ability of other people to control us.
Reducing the ability of other people to control us is not always a good thing--someone may, after all, want to control me for my own good or control you to keep you from hurting me. But we live in a world where too much control is more of a problem than too little. In the entire world over the past century, something on the order of ten million people have been killed by private murderers. Between a hundred and two hundred million have been killed by the governments that ruled them (Rummel (1999) estimates about 170 million from 1900 to 1987). Quite a lot of individual pain, suffering, injustice has been due to the acts of private individuals; some could have been prevented by better law enforcement. But mass pain, suffering and injustice has been very nearly a monopoly of governments. If governments were better able to control us, there would have been more of it. And at the individual level, while privacy can be used to protect criminals against police, it can also be used to protect victims against criminals.
It is tempting to try for the best of both worlds--to restrict the privacy of bad people while protecting that of good, permit governments to collect detailed information about us but only allow it to be used for good purposes. But somebody must decide who are the good people and the bad, what purposes are worthy or unworthy. Whoever that somebody is will have his own agenda, his own purposes. Angels are in short supply.
To put the matter differently, "cannot" is better protection than "may not." If we permit law enforcement agents to know everything about everybody but forbid them from using that information against individuals with unpopular views or political opponents of the party in power, we are protected only by a "may not." The same is true if private parties are able to collect information but restricted in what they may do with it. If the law keeps the information from being collected in the first place, we are protected by a cannot--however corrupt or dishonest they are, or however convinced that they are working for a greater good, people cannot use information they do not have.
"Cannot" at one level may depend on "may not" at another. You cannot use information that you do not have. You do not have it because you may not collect it. But even if the law forbids wiretaps or unauthorized surveillance, a sufficiently determined agency--or a sufficiently competent private criminal--can violate the law. That is where technologies that support privacy come into the picture. In a world where encryption is routine it does you no good to tap my phone because you cannot understand what I am saying. It does no good to intercept my email because you cannot read it. "Cannot" is better than "may not."
We can and should fight a delaying action against the use of technology to restrict privacy. But in the long run technology--useful technology--is hard to stop. In the long run, the real battle will be the one fought in defense of technologies that protect privacy. That one we might win.
Bibliography
References
Brin, David (1999), The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? Perseus (The first chapter is webbed at http://www.kithrup.com/brin/tschp1.html)
Etzioni, Amitai (1999), The Limits of Privacy, Basic Books.
Friedman, David (1996), "A World of Strong Privacy: Promises and Perils of Encryption," Social Philosophy and Policy, pp. 212-228. Webbed at http://www.daviddfriedman.com/Academic/Strong_Privacy/Strong_Privacy.html
Rummel, Rudolph J. (1999), Statistics of democide: Genocide and mass murder since 1900, Lit Verlag.
Further Reading
Anderson, Poul, Shield
Ellen Frankel Paul, Fred D. Miller, Jr., & Jeffrey Paul (Eds). (2000) The Right to Privacy, Cambridge University Press.
http://www.mega.nu:8080/ampp/rummel/20th.htm (detailed statistics on 20th century democide)
http://www.daviddfriedman.com/future_imperfect_draft/future_imperfect.html (Much more detailed account of encryption, surveillance, and much else.)
https://www.activism.net/cypherpunk/manifesto.html
Eric Hughes, Mar 1993
Privacy is necessary for an open society in the electronic age. Privacy is not secrecy. A private matter is something one doesn't want the whole world to know, but a secret matter is something one doesn't want anybody to know. Privacy is the power to selectively reveal oneself to the world.
If two parties have some sort of dealings, then each has a memory of their interaction. Each party can speak about their own memory of this; how could anyone prevent it? One could pass laws against it, but the freedom of speech, even more than privacy, is fundamental to an open society; we seek not to restrict any speech at all. If many parties speak together in the same forum, each can speak to all the others and aggregate together knowledge about individuals and other parties. The power of electronic communications has enabled such group speech, and it will not go away merely because we might want it to.
Since we desire privacy, we must ensure that each party to a transaction have knowledge only of that which is directly necessary for that transaction. Since any information can be spoken of, we must ensure that we reveal as little as possible. In most cases personal identity is not salient. When I purchase a magazine at a store and hand cash to the clerk, there is no need to know who I am. When I ask my electronic mail provider to send and receive messages, my provider need not know to whom I am speaking or what I am saying or what others are saying to me; my provider only need know how to get the message there and how much I owe them in fees. When my identity is revealed by the underlying mechanism of the transaction, I have no privacy. I cannot here selectively reveal myself; I must always reveal myself.
Therefore, privacy in an open society requires anonymous transaction systems. Until now, cash has been the primary such system. An anonymous transaction system is not a secret transaction system. An anonymous system empowers individuals to reveal their identity when desired and only when desired; this is the essence of privacy.
Privacy in an open society also requires cryptography. If I say something, I want it heard only by those for whom I intend it. If the content of my speech is available to the world, I have no privacy. To encrypt is to indicate the desire for privacy, and to encrypt with weak cryptography is to indicate not too much desire for privacy. Furthermore, to reveal one's identity with assurance when the default is anonymity requires the cryptographic signature.
We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy out of their beneficence. It is to their advantage to speak of us, and we should expect that they will speak. To try to prevent their speech is to fight against the realities of information. Information does not just want to be free, it longs to be free. Information expands to fill the available storage space. Information is Rumor's younger, stronger cousin; Information is fleeter of foot, has more eyes, knows more, and understands less than Rumor.
We must defend our own privacy if we expect to have any. We must come together and create systems which allow anonymous transactions to take place. People have been defending their own privacy for centuries with whispers, darkness, envelopes, closed doors, secret handshakes, and couriers. The technologies of the past did not allow for strong privacy, but electronic technologies do.
We the Cypherpunks are dedicated to building anonymous systems. We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money.
Cypherpunks write code. We know that someone has to write software to defend privacy, and since we can't get privacy unless we all do, we're going to write it. We publish our code so that our fellow Cypherpunks may practice and play with it. Our code is free for all to use, worldwide. We don't much care if you don't approve of the software we write. We know that software can't be destroyed and that a widely dispersed system can't be shut down.
Cypherpunks deplore regulations on cryptography, for encryption is fundamentally a private act. The act of encryption, in fact, removes information from the public realm. Even laws against cryptography reach only so far as a nation's border and the arm of its violence. Cryptography will ineluctably spread over the whole globe, and with it the anonymous transactions systems that it makes possible.
For privacy to be widespread it must be part of a social contract. People must come and together deploy these systems for the common good. Privacy only extends so far as the cooperation of one's fellows in society. We the Cypherpunks seek your questions and your concerns and hope we may engage you so that we do not deceive ourselves. We will not, however, be moved out of our course because some may disagree with our goals.
The Cypherpunks are actively engaged in making the networks safer for privacy. Let us proceed together apace.
Onward.
9 March 1993
Juan Ibanez
It is often claimed that Bitcoin is revolutionary because it is the first triple-entry accounting system in history. But is this really so?
To answer this, we must first ask what triple-entry accounting is. A quick Google search will provide an answer like the following: triple-entry accounting is using blockchain to build a common ledger between two parties that would typically maintain two separate, redundant, double-entry ledgers.
This is not exactly right. Triple-entry accounting is a design for a common ledger, yes, but not all common ledgers are triple-entry. Triple-entry accounting is a particular model to build a common ledger through signed messages.
Blockchain technology, introduced by Bitcoin, presents a way to replace Ivan (the trusted third party who holds the shared ledger in Grigg’s model) with a decentralized community of nodes, making the entire system more “trustless” (one does not need to personally trust a third party but only the architecture of the system itself). But is Bitcoin itself triple-entry?
February’s model shows the receipt as signed by Alice only. While February argues that this is called triple-entry accounting because there is a signed receipt held by three parties, this could be seen as deviating from triple-entry accounting as we described before. Signed receipts are held by three parties, yes, but shouldn’t they also be signed thrice?
Bookkeeping is keeping records of transactions, which are a bilateral event. In a sale, for example, somebody pays and somebody else delivers something in return. Bitcoin, however, is just a payment system. This means it only records (typically) one-half of the transaction, making the second signature almost unnecessary.
Todd Boyle’s triple-entry accounting looked at wide patterns of transactions as bilateral events in trade cycles (offer, acceptance, delivery, invoice, payment, etcetera). This made Bob’s acceptance necessary, e.g. for draft invoices, as there were two parties which were both giving something up. A signature was thus needed to show that both parties were consenting to give this up.
However, for a payment system in which payment is not linked to delivery, it makes little sense to attest Bob’s consent. It could be seen as a waste of time. This was the case of Ian Grigg’s original triple-entry accounting model: the Ricardo Payment System.
One can therefore argue that Bitcoin is triple-entry accounting even in the absence of a second signature, because this is what triple-entry accounting looks like for a payment-only system.
Nevertheless, a second signature is found in payment systems: in the event of the receiver deciding to spend the value by signing another payment over it. In the act of signing a spend, the receiver confirms the receipt of previous funds, and in effect creates an asynchronous signature. This is broadly true of payment systems, whether they be Bitcoin, Ricardo or high street banks.
In summary, in the act of later spending the amount received, the receiver of a payment sends a signed message confirming acceptance of that payment. This also applies to Bitcoin’s UTXO (Unspent Transaction Output) model.
In UTXO, the transaction input (the amount sent by the payer) or inputs must equal the transaction outputs (the amount the payee receives). If the input exceeds that amount, the excess is spent anyway and returned to the payer as a new output: change. As explained in the aforementioned paper:
“The UTXO model necessarily requires the new payer (and former payee) to sign off on a message that confirms receipt of a prior payment output in its totality (or of a mining reward which resulted in the creation of new bitcoins through a coinbase transaction).”
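A stripped-down sketch of the structure being described may help; the field names and amounts below are illustrative, not Bitcoin’s actual serialization format. The spender’s signature on the input is the “second signature” discussed above, and the change comes back to the payer as a fresh output.

from dataclasses import dataclass
from typing import List

@dataclass
class TxOutput:
    amount: int          # in satoshis
    recipient: str       # stand-in for a locking script / address

@dataclass
class TxInput:
    prev_txid: str       # which earlier transaction created the output being spent
    prev_index: int      # which of that transaction's outputs
    signature: str       # the spender's signature, confirming receipt of the prior output

@dataclass
class Transaction:
    inputs: List[TxInput]
    outputs: List[TxOutput]

# Alice received a 50_000 satoshi output earlier; she now pays Bob 30_000.
# The whole prior output is consumed; the remainder returns to her as change.
payment = Transaction(
    inputs=[TxInput(prev_txid="abc123", prev_index=0, signature="sig(Alice)")],
    outputs=[TxOutput(30_000, "Bob"), TxOutput(19_000, "Alice")],  # 1_000 left as a fee
)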
So, second signatures can be found even in the absence of schemes to make Bitcoin transactions bilateral. What about the third ones?
Admittedly, we are using a broad concept of signature: a signature is any token that attests agreement at some point. Nonetheless, this goes in line with Ian Grigg’s design, not against it.
A triple-entry accounting system requires a signed receipt to be held by three parties in three places. If the transaction pattern requires this, all of the parties must have signed the receipt. Bitcoin does this and, in this sense, it is triple-entry.
https://www.preethikasireddy.com/post/lets-take-a-crack-at-understanding-distributed-consensus
Preethi Kasireddy, Nov 2018
Distributed systems can be difficult to understand, mainly because the knowledge surrounding them is distributed. But don’t worry, I’m well aware of the irony. While teaching myself distributed computing, I fell flat on my face many times. Now, after many trials and tribulations, I’m finally ready to explain the basics of distributed systems to you.
Blockchains have forced engineers and scientists to re-examine and question firmly entrenched paradigms in distributed computing.
I also want to discuss the profound effect that blockchain technology has had on the field. Blockchains have forced engineers and scientists to re-examine and question firmly entrenched paradigms in distributed computing. Perhaps no other technology has catalyzed progress faster in this area of study than blockchain.
Distributed systems are by no means new. Scientists and engineers have spent decades researching the subject. But what does blockchain have to do with them? Well, all the contributions that blockchain has made wouldn’t have been possible if distributed systems hadn’t existed first.
Unfortunately, much of the literature on distributed computing is either difficult to comprehend or dispersed across way too many academic papers. To make matters more complex, there are hundreds of architectures, all of which serve different needs. Boiling this down into a simple-to-understand framework is quite difficult.
Because the field is vast, I had to carefully choose what I could cover. I also had to make generalizations to mask some of the complexity. Please note, my goal is not to make you an expert in the field. Instead, I want to give you enough knowledge to jump-start your journey into distributed systems and consensus.
After reading this post, you’ll walk away with a stronger grasp of:
What a distributed system is,
The properties of a distributed system,
What it means to have consensus in a distributed system,
An understanding of foundational consensus algorithms (e.g. DLS and PBFT), and
Why Nakamoto Consensus is a big deal.
I hope you’re ready to learn, because class is now in session.
A distributed system involves a set of distinct processes (e.g., computers) passing messages to one another and coordinating to accomplish a common objective (i.e., solving a computational problem).
A distributed system is a group of computers working together to achieve a unified goal.
Simply put, a distributed system is a group of computers working together to achieve a unified goal. And although the processes are separate, the system appears as a single computer to end-user(s).
As I mentioned, there are hundreds of architectures for a distributed system. For example, a single computer can also be viewed as a distributed system: the central control unit, memory units, and input-output channels are separate processes collaborating to complete an objective.
In the case of an airplane, these discrete units work together to get you from Point A to Point B.
In this post, we’ll focus on distributed systems in which processes are spatially-separated computers.
Note: I may use the terms “node,” “peer,” “computer,” or “component” interchangeably with “process.” They all mean the same thing for the purposes of this post. Similarly, I may use the term “network” interchangeably with “system.”
Every distributed system has a specific set of characteristics. These include:
A) Concurrency
The processes in the system operate concurrently, meaning multiple events occur simultaneously. In other words, each computer in the network executes events independently at the same time as other computers in the network.
This requires coordination.
Lamport, L. (1978). Time, Clocks, and the Ordering of Events in a Distributed System.
B) Lack of a global clock
For a distributed system to work, we need a way to determine the order of events. However, in a set of computers operating concurrently, it is sometimes impossible to say that one of two events occurred first, as computers are spatially separated. In other words, there is no single global clock that determines the sequence of events happening across all computers in the network.
Lamport’s paper defines a partial ordering of events from two simple observations: messages are sent before they are received, and each computer has its own sequence of events.
However, if we base the order entirely upon events heard by each individual computer, we can run into situations where this order differs from what a user external to the system perceives. Thus, the paper shows that the algorithm can still allow for anomalous behavior.
Finally, Lamport discusses how such anomalies can be prevented by using properly synchronized physical clocks.
Essentially, Lamport’s paper demonstrates that time and order of events are fundamental obstacles in a system of distributed computers that are spatially separated.
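The logical clock Lamport proposes for building that order is simple enough to sketch in a few lines. This is a minimal illustration rather than the full algorithm from the paper: each process keeps a counter, ticks it on every event, and on receiving a message jumps its counter past the timestamp the message carries, so a send is always ordered before the corresponding receive.

class Process:
    """A minimal Lamport logical clock."""
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock              # the timestamp travels with the message

    def receive(self, msg_timestamp):
        # The receive must be ordered after the send, so the receiver's
        # clock jumps past the sender's timestamp.
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

a, b = Process("A"), Process("B")
a.local_event()             # A: 1
t = a.send()                # A: 2, message carries timestamp 2
b.local_event()             # B: 1
b.receive(t)                # B: max(1, 2) + 1 = 3
print(a.clock, b.clock)     # 2 3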
C) Independent failure of components
A critical aspect of understanding distributed systems is acknowledging that components in a distributed system can fail. This is why it’s called “fault-tolerant distributed computing.”
It’s impossible to have a system free of faults. Real systems are subject to a number of possible flaws or defects, whether that’s a process crashing; messages being lost, distorted, or duplicated; a network partition delaying or dropping messages; or even a process going completely haywire and sending messages according to some malevolent plan.
It’s impossible to have a system free of faults.
These failures can be broadly classified into three categories:
Crash-fail: The component stops working without warning (e.g., the computer crashes).
Omission: The component sends a message but it is not received by the other nodes (e.g., the message was dropped).
Byzantine: The component behaves arbitrarily. This type of fault is irrelevant in controlled environments (e.g., Google or Amazon data centers) where there is presumably no malicious behavior. Instead, these faults occur in what’s known as an “adversarial context.” Basically, when a decentralized set of independent actors serve as nodes in the network, these actors may choose to act in a “Byzantine” manner. This means they maliciously choose to alter, block, or not send messages at all.
With this in mind, the aim is to design protocols that allow a system with faulty components to still achieve the common goal and provide a useful service.
Given that every system has faults, a core consideration we must make when building a distributed system is whether it can survive even when its parts deviate from normal behavior, whether that’s due to non-malicious behaviors (i.e., crash-fail or omission faults) or malicious behavior (i.e., Byzantine faults).
Broadly speaking, there are two types of models to consider when making a distributed system:
1) Simple fault-tolerance
In a simple fault-tolerant system, we assume that all parts of the system do one of two things: they either follow the protocol exactly or they fail. This type of system should definitely be able to handle nodes going offline or failing. But it doesn’t have to worry about nodes exhibiting arbitrary or malicious behavior.
2A) Byzantine fault-tolerance
A simple fault-tolerant system is not very useful in an uncontrolled environment. In a decentralized system that has nodes controlled by independent actors communicating on the open, permissionless internet, we also need to design for nodes that choose to be malicious or “Byzantine.” Therefore, in a Byzantine fault-tolerant system, we assume nodes can fail or be malicious.
2B) BAR fault-tolerance
More formally, this is captured by the BAR model, which accounts for both Byzantine and rational failures. The BAR model assumes three types of actors:
Byzantine: Byzantine nodes are malicious and trying to screw you.
Altruistic: Honest nodes always follow the protocol.
Rational: Rational nodes only follow the protocol if it suits them.
D) Message passing
As I noted earlier, computers in a distributed system communicate and coordinate by “message passing” between one or more other computers. Messages can be passed using any messaging protocol, whether that’s HTTP, RPC, or a custom protocol built for the specific implementation. There are two types of message-passing environments:
1) Synchronous
In a synchronous system, it is assumed that messages will be delivered within some fixed, known amount of time.
Synchronous message passing is conceptually less complex because users have a guarantee: when they send a message, the receiving component will get it within a certain time frame. This allows users to model their protocol with a fixed upper bound of how long the message will take to reach its destination.
However, this type of environment is not very practical in a real-world distributed system where computers can crash or go offline and messages can be dropped, duplicated, delayed, or received out of order.
2) Asynchronous
In an asynchronous message-passing system, it is assumed that a network may delay messages infinitely, duplicate them, or deliver them out of order. In other words, there is no fixed upper bound on how long a message will take to be received.
So far, we’ve learned about the following properties of a distributed system:
Concurrency of processes
Lack of a global clock
Faulty processes
Message passing
Next, we’ll focus on understanding what it means to achieve “consensus” in a distributed system. But first, it’s important to reiterate what we alluded to earlier: there are hundreds of hardware and software architectures used for distributed computing.
The most common form is called a replicated state machine.
Replicated State Machine
A replicated state machine is a deterministic state machine that is replicated across many computers but functions as a single state machine. Any of these computers may be faulty, but the state machine will still function.
In a replicated state machine, if a transaction is valid, a set of inputs will cause the state of the system to transition to the next state. A transaction is an atomic operation on a database. This means the operations either complete in full or never complete at all. The set of transactions maintained in a replicated state machine is known as a “transaction log.”
The logic for transitioning from one valid state to the next is called the “state transition logic.”
In other words, a replicated state machine is a set of distributed computers that all start with the same initial value. For each state transition, each of the processes decides on the next value. Reaching “consensus” means that all the computers must collectively agree on the output of this value.
In turn, this maintains a consistent transaction log across every computer in the system (i.e., they “achieve a common goal”). The replicated state machine must continually accept new transactions into this log (i.e., “provide a useful service”). It must do so despite the fact that:
Some of the computers are faulty.
The network is not reliable and messages may fail to deliver, be delayed, or be out of order.
There is no global clock to help determine the order of events.
And this, my friends, is the fundamental goal of any consensus algorithm.
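As a rough illustration of why determinism plus an agreed-upon log is enough, here is a toy sketch; the key/value "state transition logic" is entirely made up, and the point is only that any replica replaying the same log from the same initial state ends up in the same state.

```python
# Toy replicated state machine: every replica starts from the same initial
# state and applies the same ordered transaction log with the same
# deterministic transition logic, so all replicas end in the same state.

def apply_transaction(state, tx):
    key, value = tx                     # made-up "state transition logic":
    new_state = dict(state)             # a key/value store with atomic writes
    new_state[key] = value
    return new_state

def replay(log, initial_state=None):
    state = dict(initial_state or {})
    for tx in log:
        state = apply_transaction(state, tx)
    return state

log = [("alice", 10), ("bob", 5), ("alice", 7)]   # the agreed-upon transaction log
print(replay(log) == replay(log))                 # True: same log => same final state
```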
The Consensus Problem, Defined
An algorithm achieves consensus if it satisfies the following conditions:
Agreement: All non-faulty nodes decide on the same output value.
Termination: All non-faulty nodes eventually decide on some output value.
Note: Different algorithms have different variations of the conditions above. For example, some divide the Agreement property into Consistency and Totality. Some have a concept of Validity or Integrity or Efficiency. However, such nuances are beyond the scope of this post.
Broadly speaking, consensus algorithms typically assume three types of actors in a system:
Proposers, often called leaders or coordinators.
Acceptors, processes that listen to requests from proposers and respond with values.
Learners, other processes in the system which learn the final values that are decided upon.
Generally, we can define a consensus algorithm by three steps:
Step 1: Elect
Processes elect a single process (i.e., a leader) to make decisions.
The leader proposes the next valid output value.
Step 2: Vote
The non-faulty processes listen to the value being proposed by the leader, validate it, and vote for it as the next valid value.
Step 3: Decide
The non-faulty processes must come to a consensus on a single correct output value. If a process receives a threshold number of identical votes that satisfy some criteria, then the processes decide on that value.
Otherwise, the steps start over.
It’s important to note that every consensus algorithm has different:
Terminology (e.g., rounds, phases),
Procedures for how votes are handled, and
Criteria for how a final value is decided (e.g., validity conditions).
Nonetheless, if we can use this generic process to build an algorithm that guarantees the general conditions defined above, then we have a distributed system which is able to achieve consensus.
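To make that skeleton concrete, here is a deliberately oversimplified sketch of one round, with no networking and no failures; the leader election, validity check, and threshold below are placeholders rather than any particular algorithm's rules.

```python
# Deliberately oversimplified sketch of the elect -> vote -> decide skeleton.
# Real algorithms differ in how leaders are chosen, how votes travel, and in
# their decision thresholds and validity rules.
import random

def validate(process, proposal):
    return True                                 # placeholder validity check

def run_round(processes, propose_value, threshold):
    leader = random.choice(processes)           # Step 1: elect a leader
    proposal = propose_value(leader)            # the leader proposes the next value
    votes = [p for p in processes               # Step 2: non-faulty processes
             if validate(p, proposal)]          # validate the proposal and vote
    if len(votes) >= threshold:                 # Step 3: decide if enough votes agree
        return proposal
    return None                                 # otherwise, start a new round

processes = ["p1", "p2", "p3", "p4"]
print(run_round(processes, propose_value=lambda leader: 42, threshold=3))
```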
Simple enough, right?
… Not really. But you probably saw that coming!
Recall how we described the difference between a synchronous system and asynchronous system:
In synchronous environments, messages are delivered within a fixed time frame
In asynchronous environments, there’s no guarantee of a message being delivered.
This distinction is important.
Reaching consensus in a synchronous environment is possible because we can make assumptions about the maximum time it takes for messages to get delivered. Thus, in this type of system, we can allow the different nodes in the system to take turns proposing new transactions, poll for a majority vote, and skip any node if it doesn’t offer a proposal within the maximum time limit.
But, as noted earlier, assuming we are operating in synchronous environments is not practical outside of controlled environments where message latency is predictable, such as data centers which have synchronized atomic clocks.
In reality, most environments don’t allow us to make the synchronous assumption. So we must design for asynchronous environments.
If we cannot assume a maximum message delivery time in an asynchronous environment, then achieving termination is much harder, if not impossible. Remember, one of the conditions that must be met to achieve consensus is “termination,” which means every non-faulty node must decide on some output value.
This is formally known as the “FLP impossibility result.” How did it get this name? Well, I’m glad you asked!
The result is named after its authors, Fischer, Lynch, and Paterson, whose 1985 paper proved that even a single faulty process makes it impossible to reach consensus among deterministic asynchronous processes.
This result was a huge bummer for the distributed computing space. Nonetheless, scientists continued to push forward to find ways to circumvent FLP impossibility.
At a high level, there are two ways to circumvent FLP impossibility:
Use synchrony assumptions.
Use non-determinism.
Let’s take a deep dive into each one right now.
I know what you’re thinking: What the heck does this even mean?
Let’s revisit our impossibility result. Here’s another way to think about it: the FLP impossibility result essentially shows that, if we cannot make progress in a system, then we cannot reach consensus. In other words, if messages are asynchronously delivered, termination cannot be guaranteed. Recall that termination is a required condition that means every non-faulty node must eventually decide on some output value.
But how can we guarantee every non-faulty process will decide on a value if we don’t know when a message will be delivered due to asynchronous networks?
To be clear, the finding does not state that consensus is unreachable. Rather, due to asynchrony, consensus cannot be reached in a fixed time. Saying that consensus is “impossible” simply means that consensus is “not always possible.” It’s a subtle but crucial detail.
One way to circumvent this is to use timeouts. If no progress is being made on deciding the next value, we wait until a timeout, then start the steps all over again. As we’re about to see, this is what consensus algorithms like Paxos and Raft essentially did.
Paxos
Paxos works like this:
Phase 1: Prepare request
The proposer chooses a new proposal version number (n) and sends a “prepare request” to the acceptors.
If acceptors receive a prepare request ("prepare," n) with n greater than that of any prepare request they have already responded to, they send out ("ack," n, n′, v′) or ("ack," n, ⊥, ⊥).
Acceptors respond with a promise not to accept any more proposals numbered less than n.
Acceptors suggest the value (v) of the highest-numbered proposal that they have accepted, if any. Otherwise, they respond with ⊥ (a null value).
Phase 2: Accept request
If the proposer receives responses from a majority of the acceptors, then it can issue an accept request (“accept,” n, v) with number n and value v.
n is the number that appeared in the prepare request.
v is the value of the highest-numbered proposal among the responses.
If the acceptor receives an accept request (“accept,” n, v), it accepts the proposal unless it has already responded to a prepare request with a number greater than n.
Phase 3: Learning phase
Whenever an acceptor accepts a proposal, it responds to all learners (“accept,” n, v).
Learners receive (“accept,” n, v) from a majority of acceptors, decide v, and send (“decide,” v) to all other learners.
Learners receive ("decide," v) and treat v as the decided value.
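Here is a minimal sketch of the acceptor side of a single Paxos decision, just to make the prepare/accept bookkeeping above concrete; it mirrors the message formats and majority check from the description but omits learners, retries, and failure handling.

```python
# Minimal single-decree Paxos acceptor (illustrative). Python's None plays the
# role of the null value (⊥) in the prepare responses.

class Acceptor:
    def __init__(self):
        self.promised_n = -1      # highest proposal number promised so far
        self.accepted_n = None    # number of the highest proposal accepted
        self.accepted_v = None    # value of the highest proposal accepted

    def on_prepare(self, n):
        if n > self.promised_n:
            self.promised_n = n
            # ("ack", n, n', v') or ("ack", n, None, None)
            return ("ack", n, self.accepted_n, self.accepted_v)
        return None               # ignore stale prepare requests

    def on_accept(self, n, v):
        if n >= self.promised_n:  # hasn't promised a higher-numbered proposal
            self.promised_n = n
            self.accepted_n, self.accepted_v = n, v
            return ("accepted", n, v)
        return None

acceptors = [Acceptor() for _ in range(3)]
acks = [a.on_prepare(1) for a in acceptors]                    # Phase 1
if sum(ack is not None for ack in acks) > len(acceptors) // 2:
    print([a.on_accept(1, "value-A") for a in acceptors])      # Phase 2
```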
Phew! Confused yet? I know that was quite a lot of information to digest.
But wait… There’s more!
As we now know, every distributed system has faults. In this algorithm, if a proposer failed (e.g., because there was an omission fault), then decisions could be delayed. Paxos dealt with this by starting with a new version number in Phase 1, even if previous attempts never ended.
I won’t go into details, but the process to get back to normal operations in such cases was quite complex since processes were expected to step in and drive the resolution process forward.
The main reason Paxos is so hard to understand is that many of its implementation details are left open to the reader’s interpretation: How do we know when a proposer is failing? Do we use synchronous clocks to set a timeout period for deciding when a proposer is failing and we need to move on to the next rank? 🤷
This design choice ended up becoming one of the biggest downsides of Paxos. It’s not only incredibly difficult to understand but difficult to implement as well. In turn, this made the field of distributed systems incredibly hard to navigate.
By now, you’re probably wondering where the synchrony assumption comes in.
In Paxos, although timeouts are not explicit in the algorithm, when it comes to the actual implementation, electing a new proposer after some timeout period is necessary to achieve termination. Otherwise, we couldn’t guarantee that acceptors would output the next value, and the system could come to a halt.
Raft
One important new thing we learned from Raft is the concept of using a shared timeout to deal with termination. In Raft, if you crash and restart, you wait at least one timeout period before trying to get yourself declared a leader, and you are guaranteed to make progress.
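A hedged sketch of that randomized election-timeout idea follows; the timeout range is only the ballpark the Raft paper suggests, and everything else is simplified. Whichever follower's timer fires first becomes the candidate, which is how Raft avoids endless split elections.

```python
# Sketch of Raft-style randomized election timeouts (illustrative only).
# Each node draws its timeout from a range, so when the leader goes silent
# one node usually times out first, requests votes, and wins the election
# before the others even become candidates.
import random

ELECTION_TIMEOUT_RANGE = (0.150, 0.300)    # seconds; ballpark from the Raft paper

class Node:
    def __init__(self, name):
        self.name = name
        self.reset_timer(now=0.0)

    def reset_timer(self, now):
        # restart the countdown whenever a heartbeat from the leader arrives
        self.deadline = now + random.uniform(*ELECTION_TIMEOUT_RANGE)

    def times_out_by(self, now):
        return now >= self.deadline        # no heartbeat in time -> become candidate

nodes = [Node(f"n{i}") for i in range(3)]
# Suppose the leader crashes at t = 0: by t = 0.4s every timer has expired,
# and the node whose deadline was earliest would have started the election.
first_candidate = min(nodes, key=lambda n: n.deadline)
print(first_candidate.name, [n.times_out_by(0.4) for n in nodes])
```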
But Wait… What about ‘Byzantine’ Environments?
While traditional consensus algorithms (such as Paxos and Raft) are able to thrive in asynchronous environments using some level of synchrony assumptions (i.e. timeouts), they are not Byzantine fault-tolerant. They are only crash fault-tolerant.
Crash-faults are easier to handle because we can model the process as either working or crashed — 0 or 1. The processes can't act maliciously and lie. Therefore, a crash fault-tolerant distributed system can be built where a simple majority is enough to reach consensus.
In an open and decentralized system (such as public blockchains), users have no control over the nodes in the network. Instead, each node makes decisions toward its individual goals, which may conflict with those of other nodes.
In a Byzantine system where nodes have different incentives and can lie, coordinate, or act arbitrarily, you cannot assume a simple majority is enough to reach consensus. Half or more of the supposedly honest nodes can coordinate with each other to lie.
For example, if an elected leader is Byzantine and maintains strong network connections to other nodes, it can compromise the system. Recall how we said we must model our system to either tolerate simple faults or Byzantine faults. Raft and Paxos are simple fault-tolerant but not Byzantine fault-tolerant. They are not designed to tolerate malicious behavior.
The 'Byzantine Generals Problem'
The classic result, from Lamport, Shostak, and Pease's paper on the problem, is that tolerating x Byzantine nodes requires a total of at least 3x + 1 nodes. Here's why:
If x nodes are faulty, then the system needs to operate correctly after coordinating with only n − x nodes (since x nodes might be faulty or Byzantine and not responding). However, we must prepare for the possibility that the x nodes that don't respond are not actually the faulty ones; the faulty x could be among the nodes that do respond. If we want the non-faulty responders to still outnumber the faulty nodes, we need n − x − x > x, i.e., n > 3x. Hence, n ≥ 3x + 1 nodes is the optimal (minimum) requirement.
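A quick, purely illustrative way to sanity-check that arithmetic:

```python
# Quick arithmetic check of the bound (illustrative): with x Byzantine nodes,
# waiting for n - x responses can still include x liars, so the remaining
# honest responders outnumber the faulty nodes only when n >= 3x + 1.

def min_total_nodes(x):
    return 3 * x + 1

def honest_outnumber_faulty(n, x):
    responders = n - x                  # the x silent nodes might not be the faulty ones
    worst_case_honest = responders - x  # so assume x of the responders are Byzantine
    return worst_case_honest > x

for x in range(1, 4):
    n = min_total_nodes(x)
    print(x, n, honest_outnumber_faulty(n, x), honest_outnumber_faulty(n - 1, x))
    # e.g. x = 1: n = 4 works, n = 3 does not
```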
However, the algorithms demonstrated in this paper are only designed to work in a synchronous environment. Bummer! It seems we can only get one or the other (Byzantine or Asynchronous) right. An environment that is both Byzantine and Asynchronous seems much harder to design for.
Why?
In short, building a consensus algorithm that can withstand both an asynchronous environment and a Byzantine one is… well, that would sort of be like making a miracle happen.
Algorithms like Paxos and Raft were well-known and widely used. But there was also a lot of academic work that focused more on solving the consensus problem in a Byzantine + asynchronous setting.
So buckle your seatbelts…
We’re going on a field trip…
To the land of…
Theoretical academic papers!
Okay, okay — I’m sorry for building that up. But you should be excited! Remember that whole “making a miracle” thing we discussed earlier? We’re going to take a look at two algorithms (DLS and PBFT) that brought us closer than ever before to breaking the Byzantine + asynchronous barrier.
The DLS Algorithm
As you may recall, in a synchronous system, there is a known fixed upper bound on the time required for a message to be sent from one processor to another. In an asynchronous system, no fixed upper bounds exist. Partial synchrony lies somewhere between those two extremes.
The paper, by Dwork, Lynch, and Stockmeyer (whose initials give the DLS algorithm its name), explained two versions of the partial synchrony assumption:
Assume that fixed bounds exist for how long messages take to get delivered. But they are not known a priori. The goal is to reach consensus regardless of the actual bounds.
Assume the upper bounds for message delivery are known, but they're only guaranteed to hold starting at some unknown time (also called the "Global Stabilization Time," or GST). The goal is to design a system that can reach consensus regardless of when this time occurs.
Here’s how the DLS algorithm works:
A series of rounds are divided into “trying” and “lock-release” phases.
Each round has a proposer and begins with each of the processes communicating the value they believe is correct.
The proposer “proposes” a value if at least N − x processes have communicated that value.
When a process receives the proposed value from the proposer, it must lock on the value and then broadcast that information.
If the proposer receives messages from x + 1 processes that they locked on some value, it commits that as the final value.
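As a toy rendering of one such round (heavily simplified; real DLS also has lock-release phases, proofs of locks, and per-process rounds), the N − x and x + 1 thresholds from the steps above look like this:

```python
# Toy sketch of one DLS-style "trying" phase (heavily simplified; real DLS
# also has lock-release phases and proof-of-lock messages). N is the total
# number of processes and x the number of tolerated faults.

def trying_phase(beliefs, N, x):
    # 1. every process communicates the value it believes is correct
    counts = {}
    for value in beliefs.values():
        counts[value] = counts.get(value, 0) + 1

    # 2. the proposer proposes a value only if at least N - x processes sent it
    proposable = [v for v, c in counts.items() if c >= N - x]
    if not proposable:
        return None
    proposal = proposable[0]

    # 3. processes that receive the proposal lock on it and broadcast the lock
    locks = [p for p, v in beliefs.items() if v == proposal]

    # 4. if the proposer hears x + 1 locks, it commits the value as final
    return proposal if len(locks) >= x + 1 else None

beliefs = {"p1": "A", "p2": "A", "p3": "A", "p4": "B"}
print(trying_phase(beliefs, N=4, x=1))   # commits "A"
```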
DLS was a major breakthrough because it created a new category of network assumptions to be made — namely, partial synchrony — and proved consensus was possible with this assumption. The other imperative takeaway from the DLS paper was separating the concerns for reaching consensus in a Byzantine and asynchronous setting into two buckets: safety and liveness.
Safety
This is another term for the “agreement” property we discussed earlier, where all non-faulty processes agree on the same output. If we can guarantee safety, we can guarantee that the system as a whole will stay in sync. We want all nodes to agree on the total order of the transaction log, despite failures and malicious actors. A violation of safety means that we end up with two or more valid transaction logs.
Liveness
This is another term for the “termination” property we discussed earlier, where every non-faulty node eventually decides on some output value. In a blockchain setting, “liveness” means the blockchain keeps growing by adding valid new blocks. Liveness is important because it’s the only way that the network can continue to be useful — otherwise, it will stall.
As we know from the FLP impossibility, consensus can’t be achieved in a completely asynchronous system. The DLS paper argued that making a partial synchrony assumption for achieving the liveness condition is enough to overcome FLP impossibility.
Thus, the paper proved that the algorithms don’t need to use any synchrony assumption to achieve the safety condition.
Pretty straightforward, right? Don’t worry if it’s not. Let’s dig a little deeper.
Remember that if nodes aren’t deciding on some output value, the system just halts. So, if we make some synchrony assumptions (i.e., timeouts) to guarantee termination and one of those fails, it makes sense that this would also bring the system to a stop.
But if we design an algorithm that relies on timeouts to guarantee safety, we run the risk of ending up with two valid transaction logs if the synchrony assumption fails.
This would be far more dangerous than the former option. There’s no point in having a useful service (i.e., liveness) if the service is corrupt (i.e., no safety). Basically, having two different blockchains is worse than having the entire blockchain come to a halt.
A distributed system is always about trade-offs. If you want to overcome a limitation (e.g., FLP impossibility), you must make a sacrifice somewhere else. In this case, separating the concerns into safety and liveness is brilliant. It lets us build a system that is safe in an asynchronous setting but still needs some form of timeouts to keep producing new values.
Despite everything that the DLS paper offered, DLS was never widely implemented or used in a real-world Byzantine setting. This is probably due to the fact that one of the core assumptions in the DLS algorithm was to use synchronous processor clocks in order to have a common notion of time. In reality, synchronous clocks are vulnerable to a number of attacks and wouldn’t fare well in a Byzantine fault-tolerant setting.
The PBFT Algorithm
The next major step was "Practical Byzantine Fault Tolerance" (PBFT), published by Miguel Castro and Barbara Liskov in 1999. "Practical" in this sense meant it worked in asynchronous environments like the internet and had some optimizations that made it faster than previous consensus algorithms. The paper argued that previous algorithms, while shown to be "theoretically possible," were either too slow to be used or assumed synchrony for safety.
And as we’ve explained, that can be quite dangerous in an asynchronous setting.
In a nutshell, the PBFT algorithm showed that it could provide both safety and liveness as long as at most ⌊(n − 1)/3⌋ nodes were faulty. Given the n ≥ 3x + 1 bound we discussed earlier, that is the largest number of Byzantine faults any algorithm can tolerate, so the resiliency of the algorithm was optimal.
The algorithm provided safety without relying on any synchrony assumption. It did, however, rely on synchrony for liveness: progress was guaranteed only as long as at most ⌊(n − 1)/3⌋ nodes were faulty and message delays did not grow faster than a certain time limit. Hence, PBFT circumvented FLP impossibility by using a synchrony assumption to guarantee liveness, but not safety.
The algorithm moved through a succession of “views,” where each view had one “primary” node (i.e., a leader) and the rest were “backups.” Here’s a step-by-step walkthrough of how it worked:
A new transaction happened on a client and was broadcast to the primary.
The primary multicasted it to all the backups.
The backups executed the transaction and sent a reply to the client.
The client waited for x + 1 replies from different backups with the same result. That result was accepted as final, and the state transition happened.
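The client-side rule in that last step can be sketched in a few lines (illustrative only; the real protocol's pre-prepare/prepare/commit exchanges among the replicas are omitted here):

```python
# Sketch of the client-side rule above (illustrative): wait until x + 1
# backups report the same result, since at most x replies can be Byzantine.
from collections import Counter

def decide_from_replies(replies, x):
    result, votes = Counter(replies).most_common(1)[0]
    return result if votes >= x + 1 else None   # need x + 1 matching replies

print(decide_from_replies(["ok", "ok", "bad", "ok"], x=1))   # "ok"
print(decide_from_replies(["ok", "bad"], x=1))               # None: keep waiting
```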
If the leader was non-faulty, the protocol worked just fine. However, the process for detecting a bad primary and reelecting a new primary (known as a “view change”) was grossly inefficient. For instance, in order to reach consensus, PBFT required a quadratic number of message exchanges, meaning every computer had to communicate with every other computer in the network.
Note: Explaining the PBFT algorithm in full is a blog post all on its own! We’ll save that for another day ;).
While PBFT was an improvement over previous algorithms, it wasn’t practical enough to scale for real-world use cases (such as public blockchains) where there are large numbers of participants. But hey, at least it was much more specific when it came to things like failure detection and leader election (unlike Paxos).
It’s important to acknowledge PBFT for its contributions. It incorporated important revolutionary ideas that newer consensus protocols (especially in a post-blockchain world) would learn from and use.
For instance, Tendermint rotates a new leader every round. If the current round’s leader doesn’t respond within a set period of time, the leader is skipped and the algorithm simply moves to the next round with a new leader. This actually makes a lot more sense than using point-to-point connections every time there needs to be a view-change and a new leader elected.
As we’ve learned, most Byzantine fault-tolerant consensus protocols end up using some form of synchrony assumption to overcome FLP impossibility. However, there is another way to overcome FLP impossibility: non-determinism.
Enter: Nakamoto Consensus
As we just learned, in traditional consensus, f(x) is defined such that a proposer and a bunch of acceptors must all coordinate and communicate to decide on the next value.
Traditional consensus doesn’t scale well.
This is too complex because it requires knowing every node in the network and every node communicating with every other node (i.e., quadratic communication overhead). Simply put, it doesn’t scale well and doesn’t work in open, permissionless systems where anyone can join and leave the network at any time.
The brilliance of Nakamoto Consensus is making the above probabilistic. Instead of every node agreeing on a value, f(x) works such that all of the nodes agree on the probability of the value being correct.
Wait, what does that even mean?
Byzantine-fault tolerant
Rather than electing a leader and then coordinating with all nodes, consensus is decided based on which node can solve the computational puzzle the fastest. Each new block in the Bitcoin blockchain is added by the node that solves this puzzle first. The network continues to build on this timestamped chain, and the canonical chain is the one with the most cumulative computational effort expended (i.e., cumulative difficulty).
The longest chain not only serves as proof of the sequence of blocks, but proof that it came from the largest pool of CPU power. Therefore, as long as a majority of CPU power is controlled by honest nodes, they’ll continue to generate the longest chain and outpace attackers.
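A toy proof-of-work loop makes the "solve the puzzle" and "cumulative work" ideas concrete; the header format and difficulty below are made up for illustration, and real Bitcoin hashes a structured 80-byte block header with double SHA-256.

```python
# Toy proof-of-work: keep trying nonces until the hash of the block header
# falls below a difficulty target. Lower targets mean more expected work,
# and the canonical chain is the one with the most cumulative work.
import hashlib

def mine(block_data, difficulty_bits=16):
    target = 2 ** (256 - difficulty_bits)        # smaller target = harder puzzle
    nonce = 0
    while True:
        header = f"{block_data}|{nonce}".encode()
        digest = int.from_bytes(hashlib.sha256(header).digest(), "big")
        if digest < target:
            return nonce
        nonce += 1

print(mine("prev_hash|merkle_root|timestamp"))   # takes ~2**16 attempts on average
```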
Block rewards
Nakamoto Consensus works by assuming that nodes will expend computational efforts for the chance of deciding the next block. The brilliance of the algorithm is economically incentivizing nodes to repeatedly perform such computationally expensive puzzles for the chance of randomly winning a large reward (i.e., a block reward).
Sybil resistance
The proof of work required to solve this puzzle makes the protocol inherently Sybil-resistant. No need for PKI or any other fancy authentication schemes.
Peer-to-peer gossip
Rather than requiring every node to communicate with every other node, Nakamoto Consensus only requires that nodes relay (gossip) new transactions and blocks to their peers on a best-effort basis, and the information propagates until the whole network has heard it.
Not “technically” safe in asynchronous environments
In Nakamoto Consensus, the safety guarantee is probabilistic: we keep extending the longest chain, and each new block makes it less and less likely that an alternative chain built by a malicious node will ever overtake it.
Therefore, Nakamoto Consensus does not technically guarantee safety in an asynchronous setting. Let’s take a second to understand why.
For a system to achieve the safety condition in an asynchronous setting, we should be able to maintain a consistent transaction log despite asynchronous network conditions. Another way to think about it is that a node can go offline at any time and then later come back online, and use the initial state of the blockchain to determine the latest correct state, regardless of network conditions. Any of the honest nodes can query for arbitrary states in the past, and a malicious node cannot provide fraudulent information that the honest nodes will think is truthful.
In the previous algorithms discussed in this post, this is possible because we deterministically finalize a value at every step. As long as we terminate on each value, we can know the past state. However, the reason Bitcoin is not "technically" asynchronously safe is that a network partition could allow an attacker with sufficiently large hash power on one side of the partition to create an alternative chain faster than the honest chain on the other side; on this alternative chain, the attacker can try to change one of his own transactions to take back money he already spent.
Admittedly, this would require the attacker to gain a lot of hashing power and spend a lot of money.
In essence, the Bitcoin blockchain’s immutability stems from the fact that a majority of miners don’t actually want to adopt an alternative chain. Why? Because it’s too difficult to coordinate enough hash power to get the network to adopt an alternative chain. Put another way, the probability of successfully creating an alternative chain is so low, it’s practically negligible.
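The "practically negligible" claim can be put in rough numbers using the back-of-the-envelope random-walk analysis from the Bitcoin whitepaper: if the attacker controls a fraction q of the hash power and the honest network controls p = 1 − q, the chance of the attacker ever catching up from z blocks behind is (q/p)^z.

```python
# Attacker catch-up probability from the Bitcoin whitepaper's analysis:
# with hash-power share q < 1/2, the chance of ever erasing a z-block deficit
# is (q / p) ** z, which shrinks exponentially as confirmations accumulate.

def catch_up_probability(q, z):
    p = 1 - q
    return 1.0 if q >= p else (q / p) ** z

for z in (1, 6, 12):
    print(z, round(catch_up_probability(q=0.3, z=z), 6))
# With 30% of the hash power: roughly 0.43 at one block behind, under 1% by six.
```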
Nakamoto vs. Traditional Consensus
For practical purposes, Nakamoto Consensus is Byzantine fault-tolerant. But it clearly doesn't achieve consensus the way consensus researchers would traditionally define it, which is why it was initially seen as falling completely outside the Byzantine fault-tolerance literature.
By design, Nakamoto Consensus makes open participation possible for any number of nodes, and no one has to know the full set of participants. The importance of this breakthrough can't be overstated. Thank you, Nakamoto.
Simpler than previous consensus algorithms, it eliminates the complexity of point-to-point connections, leader election, quadratic communication overhead, etc. You just launch the Bitcoin protocol software on any computer and start mining.
This makes it easily deployable in a real-world setting. It is truly the more “practical” cousin of PBFT.
And there you have it — the brief basics of distributed systems and consensus. It has been a long, winding journey of research, roadblocks, and ingenuity for distributed computing to get this far. I hope this post helps you understand the field at least a tiny bit better.
Nakamoto Consensus is truly an innovation that has allowed a whole new wave of researchers, scientists, developers, and engineers to continue breaking new ground in consensus protocol research.
https://medium.com/racecapital/defi-infrastructure-101-overview-market-landscape-78e096a85834
Chris McCann, Jun 2021
Decentralized Finance (DeFi) is redefining the future of finance. There is a major shift going on in the underlying infrastructure powering financial applications, and it’s changing the way we think about permission and control, transparency and risks.
The goal of this report is to provide an introduction of the new emerging area of DeFi infrastructure powering DeFi apps today. While it’s easy to get caught up in the hype and speculation within the space, I’ll focus on the key components of DeFi applications, their key differentiation compared to traditional finance, potential risks, and longer term implications these DeFi apps are causing.
The major categories of DeFi apps include decentralized exchanges, lending platforms, stablecoins, synthetic assets, insurance, among others. While diverse in scope, all of these DeFi apps share a major set of commonalities including:
Using underlying blockchains as the core ledger
Open source and transparent by default
Interoperable and programmable (composability)
Open and accessible to all (permissionless)
Using Underlying Blockchains as the Core Ledger
Compared to traditional financial applications which use core banking systems (Fiserv, Jack Henry, FIS, etc.) as the underlying ledgers of record, DeFi apps use blockchains as their underlying core ledger.
A few of the most prominent blockchains used to build DeFi apps include Ethereum, Solana, and Binance Chain, etc. These underlying blockchains store the ledger state of what is deposited into the DeFi apps, what is stored within the smart contracts, all of the transactions, and withdrawals.
All of the core accounting functions that ensure inputs and outputs match are handled by the blockchain itself. DeFi apps don't need to build external systems to reconcile balances, because all of the transactions are queryable across the various block explorers.
In addition, compared to the traditional system, there is no separate process for clearing and settling transactions. Transaction processing, clearing, and settlement all happen at the same time the transaction is broadcast, although it is advisable to wait ~21 blocks or more to ensure finality on the blockchain itself.
Open Source and Transparent by Default
Compared to traditional financial applications which are all closed-source and built on top of proprietary systems, DeFi applications are typically entirely open sourced and built on top of open underlying blockchains.
Banking “APIs”
This causes three interesting properties:
Composability — The DeFi app itself can be forked, remixed, and reused in many other applications (more on this below).
Transparency — Since the DeFi app is open source, it is completely auditable to know exactly what the smart contract is doing in terms of functions, user permissions, and user data.
Auditability — Since the underlying blockchain itself is open sourced, the entire flow of funds is completely auditable including collateral in the system, trading volume, defaults, etc.
Unlike the traditional financial system, which is opaque, runs on a fractional reserve system, and is prone to market shocks, the DeFi system is completely transparent and over-collateralized, which allows DeFi companies to weather downturns much more efficiently.
Interoperable and Programmable
In order for developers to gain the trust of users, the majority of the DeFi apps are completely open source — including the front end and the smart contracts themselves. In addition, since DeFi apps all run on top of a common platform (the underlying blockchain) these DeFi apps are completely interoperable with each other and can be programmed to work with any other DeFi app in the ecosystem.
Contrast this with the traditional financial system, where:
Infrastructure Fragmentation — Traditional financial apps are not built on top of common infrastructure.
Siloed Applications — Traditional financial apps are typically proprietary to one banking institution. For example, all of Wells Fargo’s “fintech apps” work together but not across different banking institutions.
Developer Unfriendly — Traditional financial apps are not made for other developers to build services on top of.
The traditional financial system does have common standards; however, it's extremely hard to reach consensus across market participants because financial institutions view their software as a competitive moat rather than differentiating on the products themselves.
One of the biggest reasons we have seen so much innovation in the DeFi space is that the systems are interoperable: interoperability gives the developer ecosystem more room for creative expression in the products and services they create. On top of this, developers don't need to waste time reinventing the wheel; they can build upon common frameworks and focus on the things that make their products special.
Open and accessible to all
With traditional financial applications, new users typically need to go through a lengthy onboarding process, income verifications, credit checks, or even in person meetings — just to be able to use a given financial product.
With DeFi applications, all you need is a wallet address to interact with these systems. DeFi apps don’t ask for income verification, they don’t need credit checks, and in most cases they don’t even need to know who you are outside of the wallet address you are using.
This is one of the most under-appreciated aspects of DeFi products.
Here is a more architectural diagram on the main technical differences between a traditional fintech app and DeFi app (simplified for brevity’s sake):
Here is a more direct comparison chart on some of the key differences between centralized and decentralized financial applications:
Below is a market map of two different DeFi ecosystems, one built on the Solana ecosystem and the other built on the Ethereum ecosystem.
The reason why I am picking these two ecosystems to focus on is to show the breadth of DeFi apps being built across two different underlying protocols. I also believe Solana is the most interesting new layer one protocol because of its high transaction throughput (50K+ transactions per second), sub second latency & transaction confirmation times, and fast growing ecosystem of developers building DeFi apps on top of the Solana protocol.
While similar in structure, each underlying protocol has its own ecosystem built on top which is largely independent of the other. Below are some of the further explanations of each layer and the tradeoffs between them.
Base Layer (Layer One)
The base layer is the blockchain in which the core ledger itself sits. Ethereum is the most dominant layer one today, and Solana is the most promising new entrant with faster transaction speeds, more throughput, and cheaper transactions.
Node Infrastructure
A never-ending amount of data about the underlying ledger needs to be queried (retrieving blocks, finding transactions, syncing data, writing transactions, etc.). In the Ethereum ecosystem, a whole industry sprang up to solve this need (Infura, Alchemy, etc.).
Contrast this with Solana where the underlying ledger is fast enough and in sync enough that teams can just query Solana’s RPC nodes directly (this might not last forever though).
Layer Two
On Ethereum, there are various layer two solutions, used primarily for scaling since Ethereum itself cannot handle all of the transaction volume on its own. Two of the more promising scaling solutions are Matic and Optimism.
On Solana, since there is only one layer to build upon (no layer two scaling solution is needed), there are no specialized integrations required and no mismatches with the underlying ledger that processes settlement.
Order Book Aggregation
When new DeFi projects are built on top of Solana (DEXs, AMMs, options, etc.), they can pull orders from Serum (a central limit order book protocol built on Solana) and push orders back into Serum, greatly reducing the cold start challenge most new financial applications face.
The best way to think about it is as "networked liquidity" and an "order management" system used by the majority of projects within the Solana ecosystem.
One of the more innovative examples of combining a CLOB (Serum) and an AMM is Raydium (very similar to Uniswap v3). Combining these systems allows passive LPs to be paired with active market making on Serum.
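For readers unfamiliar with the AMM half of that combination, here is a toy constant-product pool in the style popularized by Uniswap v2; Uniswap v3 and Raydium add more machinery (concentrated liquidity, order-book integration) on top of this basic idea, and the reserves, fee, and token pair below are invented for illustration.

```python
# Toy constant-product pool (x * y = k) in the style of Uniswap v2; reserves,
# fee, and token pair are invented. Uniswap v3 and Raydium layer concentrated
# liquidity / order-book integration on top of this basic pricing rule.

class ConstantProductPool:
    def __init__(self, reserve_x, reserve_y, fee=0.003):
        self.x, self.y, self.fee = reserve_x, reserve_y, fee

    def swap_x_for_y(self, amount_in):
        amount_in_after_fee = amount_in * (1 - self.fee)
        k = self.x * self.y
        new_x = self.x + amount_in_after_fee
        amount_out = self.y - k / new_x        # keeps x * y constant (after the fee)
        self.x, self.y = new_x, self.y - amount_out
        return amount_out

pool = ConstantProductPool(reserve_x=1_000, reserve_y=2_000_000)   # e.g. a SOL/USDC pool
print(round(pool.swap_x_for_y(10), 2))   # output per unit shrinks as trade size grows
```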
DeFi Toolset
There are a set of common tools needed to operate most of these DeFi apps, either from the perspective of developers or end users. These services don’t have direct traditional finance analogies but they include:
Wallets — The main interface people use to store assets & interface with DeFi apps.
Oracles — On-chain data feeds DeFi apps use to reference prices and execute transactions against (example: liquidations).
Block Explorers & Analytics — Tools like Block Explorers were created to allow people to query the blockchain ledger itself directly. These are used most often when verifying transactions.
Stablecoins — The two main assets used in DeFi ecosystems include the underlying native protocol token (ETH or SOL) and ideally on-chain stablecoins (USDC, Dai, or Pai).
Front-Ends — A new emerging layer of easy-to-use front-end applications for interacting with multiple DeFi projects at once, or for simplifying transactions. This includes Zapper.fi within the Ethereum ecosystem and Step Finance within the Solana ecosystem.
DeFi Apps
The DeFi apps themselves are composed of all of the core financial applications which can be used directly, or embedded into other various apps within the crypto ecosystem.
When comparing and contrasting DeFi infrastructure with traditional financial infrastructure, there are a few pieces that don't exist yet in the decentralized world and could be interesting to explore.
A few to highlight below:
Consumer Applications — In the traditional financial world, consumers typically interact with consumer apps (e.g., Robinhood, Chime, Transferwise), not the underlying protocols themselves. The front ends in the DeFi space could be greatly improved and could intermediate much more of the total consumer experience. In general, the UI/UX of most DeFi apps is still very difficult for consumers to use.
CRM — The DeFi space doesn't really have a concept of customer relationship management, nor does it typically collect any consumer data. While great from a privacy perspective, there is great value in understanding the customer better.
Notifications — Notifications or alerts don't really exist in the DeFi space at all. On a broader level, there aren't any great methods for communicating with users either.
Product Analytics — There are tools to measure blockchain activity, but not to measure engagement within DeFi applications.
Security — DeFi products do typically conduct security audits; however, none of the security audits guarantee the most common protections consumers are accustomed to in the traditional financial world. On top of this, the demand for security auditors outstrips the supply, so it’s a big bottleneck.
Transaction Rollbacks — In traditional finance, if you make a mistake, a financial institution can initiate a rollback of the transaction. This does not yet exist in DeFi.
Custody — Right now, most DeFi projects need to be interacted with from an individual wallet perspective. None of the custodians allow you to interact with DeFi apps.
Developer Platforms — Most of the developers in the crypto space are building right on top of the layer one protocol itself. There are no concepts of developer platforms or middleware just yet.
Identity — One of the biggest complaints from the traditional finance world about DeFi is the pseudonymity of users. Ideally there needs to be a way to keep out the bad actors while preserving consumer privacy.
After meeting hundreds of founders and seeing the progress teams are making, one thing is very clear: the pace of innovation in DeFi is 10x faster than that of traditional fintech apps.
In traditional finance:
The underlying ledgers are not open source nor developer friendly.
There are a whole host of “banking as a service” applications just to wrap underlying partner banks in developer friendly platforms.
Fintech apps face significant regulatory challenges and typically take years of development before releasing a single product.
Contrast that to DeFi where:
Everything is open source including the ledger itself.
All of the transactions are public.
Everything is built from the perspective of developers building applications on top of protocols.
New DeFi apps are built and released in weeks, not years.
We at Race Capital believe that DeFi developers will forever change how the finance world works. We are incredibly bullish about the DeFi infrastructure stack and community.
Thanks!
https://nakamoto.com/beginners-guide-to-defi/
Linda Xie, Jan 2020
In this beginner’s guide to decentralized finance (“DeFi”) we review the following:
Stablecoins. A building block of decentralized finance. Unlike cryptocurrencies like Bitcoin or Ethereum that are known for their price volatility, a stablecoin is engineered to remain “stable” at exactly 1.00 units of fiat. Most stablecoins are pegged to the USD, but some are in other fiat currencies like the Chinese RMB.
Decentralized lending. Programmatically take out a loan on the blockchain. No bank account required.
Decentralized exchanges. Buy and sell cryptocurrencies through a blockchain, rather than a centralized exchange like Coinbase. In principle, a machine can trade on these!
Collateralization. Provide digital assets to collateralize your decentralized loans, providing the lender some recourse in the event of default.
Decentralized Identity. Identities are used in the context of smart contracts for things like assessing your creditworthiness for a decentralized loan.
Composability. Snapping together DeFi functions that do different things, much like software libraries. For example, if one contract takes in crypto and generates interest, the second contract could automatically reinvest that interest.
Risk management. High returns in DeFi are often accompanied by even higher risks. Fortunately, new tools are arising to help hedge these risks.
Let’s go through these concepts one by one.
In general, the first two types of stablecoins have proven most popular. Whether it is fiat or crypto providing the collateral, people appear to want certainty around price stability. With that said, there are ongoing experiments around the third class of stablecoin, with people looking to combine both the crypto-collateralization and algorithmic elements.
Given functional stablecoins like USDC and DAI, we can start rebuilding pieces of the traditional financial system as automated smart contracts. One of the most fundamental is the concept of borrowing and lending.
To understand what is being shown in this snapshot, start with the left hand side. The first row represents a borrower in the market who is willing to take out a 30-day loan for $4,421.58 at a 0.0265% daily rate. Right below them is another borrower willing to take out a 30-day loan for $34,292.38 at a slightly lower 0.0263% interest rate. On the right hand side are the lenders. The first row represents a lender willing to lend out up to $8,199.32 for 2 days at a 0.027418% daily rate. The next row is someone willing to lend $255.68 for two days at the slightly higher rate of 0.027436% per day. And so on and so forth. This is how a centralized order book for loans works. In the above example, the highest rate a borrower is willing to accept is 0.0265% daily interest, while the lowest rate a lender is willing to give is 0.027418% daily interest. One of these two parties will give in and either raise or lower their price of money, and then a deal will be made. Bitfinex provides the service to set up the order book and match users, and then takes a cut off the top of each loan for their trouble.
Some decentralized services for lending and borrowing take this to the next level. Rather than building an order book and facilitating matches, they allow users to lend to or borrow from the smart contract itself, which dynamically raises or lowers interest rates to balance supply and demand. For example, if a large amount of crypto is borrowed from the smart contract, a higher interest rate is charged to borrowers. Moreover, in order to borrow funds, a user needs to post collateral to the smart contract worth more than the amount borrowed, so the loan is overcollateralized.
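A rough sketch of those two mechanics, with entirely made-up parameters (real protocols such as Compound and Aave use their own rate curves and collateral factors): the borrow rate rises with pool utilization, and a borrow is only allowed while it stays overcollateralized.

```python
# Sketch of the two mechanics described above, with made-up parameters:
# (1) the borrow rate rises with pool utilization, and (2) a borrow is only
# allowed while it stays overcollateralized.

def borrow_rate(total_borrowed, total_supplied, base_rate=0.02, slope=0.20):
    # utilization-based rate: the more that is borrowed from the pool,
    # the higher the rate charged to borrowers
    utilization = total_borrowed / total_supplied if total_supplied else 0.0
    return base_rate + slope * utilization

def can_borrow(collateral_value, borrow_value, min_collateral_ratio=1.5):
    # overcollateralization check: posted collateral must exceed the loan
    return collateral_value >= borrow_value * min_collateral_ratio

print(borrow_rate(total_borrowed=800, total_supplied=1_000))   # 0.18, i.e. 18%
print(can_borrow(collateral_value=150, borrow_value=100))      # True
print(can_borrow(collateral_value=120, borrow_value=100))      # False
```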
Decentralized crypto exchanges attempt to put services like Coinbase Pro on the blockchain. That is, they aim to facilitate trades of different cryptocurrencies between two parties.
To understand the use case, start with centralized cryptocurrency exchanges. These exchanges, like Coinbase Pro, act as an intermediary and custodian where two parties deposit their assets and are able to trade with each other. While they have worked at scale to facilitate billions of dollars in trades, centralized exchanges do present a single point of failure that can be hacked, censor transactions, or prevent certain people from trading.
Decentralized exchanges aim to address this by using smart contracts to reduce or eliminate middlemen. The dream is fully peer-to-peer exchange of all digital assets.
One issue with the decentralized lending and borrowing services mentioned thus far is that they require quite a lot of collateral. This overcollateralization requirement can be a highly inefficient use of capital -- and many people do not have the extra funds in the first place to provide as collateral. However, people are working on decentralized identity and reputation systems that will reduce the collateralization requirements. One of the first applications would be building blockchain analogs of fiat-based credit bureaus like Experian, TransUnion, and Equifax that institutions like banks rely on for credit scores. Now, to anticipate an objection, it’s certainly true that credit bureaus can put certain groups such as international and young people at a disadvantage. But newer services like Lending Club have addressed the problem of overreliance on FICO scores by offering additional data points like home ownership, income, and length of employment.
As DeFi matures, we should expect the libraries to start getting used outside the crypto community. Eventually you’ll be able to add one line of code to add a full decentralized marketplace to a video game, or another line of code to allow merchants in your e-commerce store to earn interest on their balance.
While DeFi is fascinating, it is important to acknowledge the risks that come with it. Let’s enumerate some classes of risk:
Smart contract risk. Many of these systems are new and need more time to be battle tested. When protocols are interacting with each other, the smart contract risk compounds. If one protocol has a critical smart contract bug, then it can cause the entire system to be vulnerable. It would be wise to avoid putting too much capital in any of these systems during the early days.
Collateralization and volatility risk. There are also risks associated with specific collateral types used to back loans. Overcollateralization reduces volatility risk, but if the price of the collateralized asset falls too quickly, a margin call isn’t guaranteed to cover the full amount that was borrowed. However this should be less of a risk with a reasonable collateralization ratio and vetted collateral types. Another potential issue is that the interest rate volatility on many DeFi platforms could make it impractical for someone to participate. There will likely be interest rate swaps or other methods to lock in a rate for a premium but this also adds its own complexities.
Regulatory risk. DeFi platforms have varying degrees of decentralization, and we have not yet seen court cases that test all the claims being made. We’ll have to see what happens here.
The DeFi space is large and growing larger. Hundreds of millions of dollars worth of cryptocurrency has already been deployed and the space’s future potential is massive. This article only covered a handful of use cases, but we’ve provided some further reading below to understand DeFi further and keep up with the developments.
Understanding DeFi
Keeping up with DeFi
Thank you to Balaji Srinivasan, Will Warren, and Jordan Clifford for reviewing this post!
Disclaimer: Linda Xie is a Managing Director of Scalar Capital Management, LLC, an investment manager focused on cryptoassets. This post is not investment advice.
https://newsletter.thedefiant.io/p/the-defiants-definitive-guide-to
Defiant, Mar 2021
At The Defiant, we are convinced finance will become increasingly decentralized and open. It just makes sense; our current financial system is running on infrastructure that was built in the 1970s: messaging networks like SWIFT and ACH that simply aren't made to transact value.
Blockchains have value at their core. They are money networks, and as such, they make for more efficient financial systems. They cut out intermediaries, improving speed and cost. They are permissionless and global, so that essential tools to grow savings and make payments and transfers are in the hands of anyone with an internet connection, anywhere in the world.
A financial system that’s running on distributed networks and powered by cryptocurrencies allows users to maintain control over their assets and information. For the first time ever, any individual can own a piece of the networks and applications they use. They can participate in these protocols’ governance and benefit from their growth. It’s a new paradigm for the internet and for money.
We are witnessing the birth of the internet of money. It’s here, growing and maturing. You can use it right now.
The Defiant is the leading information company at this intersection of tech and money. It's the only content platform producing unbiased, objective journalism focused on decentralized finance. We cover the news in this space day to day, across our newsletter, podcast, YouTube channel, and website.
For those who are new to this ecosystem (and even for those who have been around for longer), the flood of new terms, apps, and concepts can feel overwhelming.
This guide is meant to be a one-stop-shop to get started. These are all the DeFi basics you will need to start exploring the internet of money.
🤝 Share this guide with your friends, family, colleagues —anyone who you think would benefit from learning about the future of finance and how to start taking control of their assets.
Here are the first two articles you need to read as you get started on your DeFi journey.
Definition: Decentralized finance, or DeFi, is the ecosystem of financial applications being built with blockchain technology and without banks.
Characteristics:
Non-Custodial
Open
Transparent
Decentralized
History:
Origins of the term
Bitcoin and MakerDAO
In this guide we’ll be helping you take your first steps into the boundless world of Decentralized Finance, better known as DeFi.
We guide you through:
Setting up an account at a crypto exchange
Buying ETH
Setting up an Ethereum wallet
Withdrawing to MetaMask
How to use Uniswap and trade from ETH to DAI
How to lend stablecoins on DeFi protocols
How to connect your wallet to portfolio trackers
"I'm Not Super Bullish on DeFi. We're Using This Tech to Enrich a Small Group of People." James Prestwich.
"Basic Financial Services is a 21st-Century Fundamental Human Need." OMG Network's, Vansa Chatikavanij.
The Bangkok-based project formerly known as OmiseGo has spearheaded research and work on a scaling technology for Ethereum called Plasma. Scaling solutions will be key for Ethereum and therefore DeFi, to continue growing. The network is already at capacity, and gas prices are getting prohibitively high, especially for complex DeFi transactions.
"Decentralized Money Shouldn't be Traded on Centralized Exchanges." Loopring Founder, Daniel Wang.
"For the First Time, You Can Buy a Piece of the Art, and a Piece of the Gallery." MetaKovan.
This podcast episode is with MetaKovan, the pseudonymous investor who holds the largest known NFT collection in his Metapurse fund, which he leads with his (also pseudonymous) partner Twobadour.
"I'm Imagining a World Where Every Song Has an Investable Layer." DJ 3LAU.
"We've Been Creating Value for Instagram and TikTok With Very Little Actually Accruing to Us." Trevor McFedries.
"It Turns Out Music Does Have Value. We've Been Pricing It Incorrectly For 20 Years." RAC.
https://insights.flagshipadvisorypartners.com/insights/crypto-starting-to-realize-its-promise-in-payments
Anupam Majumdar and Dan Carr, Feb 2022
Despite its recent dip, 2021 was a breakthrough year for crypto, with its market capitalization rising by 188% to reach ~$3 trillion in November 2021. Crypto's profile and growth to date have derived from its utility as a store of value, but we are finally starting to see its potential as a 'medium of exchange'. In this article, we examine how crypto payments are gaining relevancy in mainstream commerce, illustrate the current payments use cases, and evaluate their growth outlook.
Bitcoin, the first cryptocurrency, was introduced in 2009 as an alternative medium of exchange to fiat-based currencies. This first generation of cryptocurrencies (bitcoin, then others) never gained meaningful traction as payment methods due to their high volatility, low transaction processing speeds, and lack of acceptance. Bitcoin was highly successful as a validation of blockchain technology, which subsequently inspired the development of new generations of crypto across a wide range of applications (as shown in figure 1). Today, select cryptocurrencies (e.g., Solana) and next-generation crypto innovations such as stablecoins (e.g., Tether, USDC, Dai) and central bank digital currencies (CBDCs) are driving a radical shift in consumer and merchant perceptions and accelerating usage of crypto payments.
FIGURE 1: Evolution of Crypto & Relevance as a Medium of Exchange
Source: Flagship market observations
Drivers of Crypto as a 'Medium of Exchange'
As we outline in figure 2, cryptocurrencies are evolving beyond a 'store of value' toward relevance as a 'medium of exchange'. The emergence of new technological innovations such as stablecoins, NFTs (non-fungible tokens, which are essentially digitized tokens of ownership against digital assets), and DeFi (decentralized finance, a new breed of financial securities and investment vehicles built on the blockchain network) has accelerated consumer curiosity and adoption. Traditional PSPs have invested in crypto as a growth vertical and have further invested in strategic M&A. Visa and Mastercard have also adapted their rails to settle select crypto directly. Lastly, while regulations continue to be a double-edged sword, greater clarity in some markets is encouraging adoption by traditional actors.
FIGURE 2: Building Blocks of Crypto Payments Adoption
*Non-fungible tokens (NFTs) and decentralized finance (DeFi)
Source: Theblockcrypto.com, Chainanalysis.com, Flagship Advisory Partners analysis
Current State of Crypto Payments
As illustrated in figure 3, crypto payments' primary use case today is on-ramping and off-ramping between fiat and crypto via exchanges (where volumes are huge). Several specialized fintechs that offer fiat on-ramps (buying crypto in exchange for fiat currency) have emerged on this basis (e.g., Ramp, MoonPay, Wyre), and many traditional PSPs have also thrived supporting these volumes (e.g., Checkout.com, Nuvei, Worldpay). New merchant segments beyond exchanges have also emerged (e.g., NFT marketplaces, GameFi, DeFi), accelerating the opportunity for C2B crypto acceptance.
In traditional ecommerce, we have seen an acceleration of crypto acceptance as a payment method by merchants in select verticals such as online gaming, luxury travel, and digital and adult services. In the last 12 months, Visa and Mastercard have partnered with over 80 crypto exchanges, enabling these exchanges to issue branded cards. These cards have accelerated the acceptance of crypto for C2B online commerce. Visa recently announced that total card volumes exceeded $2.5 billion in its fiscal Q1 of 2022, already equal to 70% of the prior fiscal year's volume.
Select businesses are also facilitating B2C and B2B payouts using crypto, though this is specific to select verticals, mainly addressing payouts to contract and gig workers (independent, online platform contractors). Crypto payments are also evident in C2C remittances today, providing an economical way to remit payments, especially for consumers in emerging markets.
FIGURE 3: Crypto Payments Proposition Models Overview
Source: Flagship market observations
As we illustrate in figure 4, we estimate crypto payment volumes at €0.5 trillion today, a tiny fraction of the global aggregated C2B, B2B and B2C payments turnover. C2B fiat on/off ramps account for 95% of this volume, so general purchasing and other use cases remain small.
FIGURE 4: Global Volumes: Traditional Payments vs. Crypto Payments (USD trn.; 2021)
Global includes payment volumes in B2B, B2C, C2B and C2C services
Source: Visa, Flagship analysis
While crypto payments are small today, we expect volumes to accelerate and grow rapidly in the next 12-18 months. As illustrated in figure 5, C2B crypto payment adoption will continue to grow as newer generations of crypto mature with the ability to support rapid, simple and low-cost payment in everyday life. We consider cross-border C2C to be potentially disruptive to traditional fiat remittance payments, as consumers are likely to benefit from economical price points and the ability to remit money in real time. B2C and B2B may take longer to mature, as traditional businesses will take time to embrace crypto.
FIGURE 5: Crypto Payments Maturity by Use Case
Source: Flagship market observations
Our expectation for the acceleration of crypto use cases is based on the growth drivers that we outline in figure 6. In the near term, we anticipate that mainstream PSPs will play a central role in driving the consumerization of crypto. New blockchain applications in the form of NFTs will gain momentum and become accessible through fiat rails. We anticipate that card schemes and regulators will play an important role in creating the network and the rules for direct crypto settlement. In the medium term, we do expect a tipping point when crypto becomes more mainstream, due to 1) regulatory clarity and the acceleration of sovereign CBDCs, and 2) the acceleration of new commerce activity on platforms such as metaverses and platforms built on Web 3.0.
FIGURE 6: Growth Drivers
Source: Flagship market observations
Payments have been an elusive aspiration of the cryptocurrency ecosystem since its inception. However, the key building blocks are now in place, and a series of growth drivers is emerging that we expect will accelerate the digital enablement of crypto payments in everyday commerce. We remain optimistic about crypto’s upside and its appeal to both consumers and merchants as a long-term vehicle of commerce.
Eric Hughes
Triple-entry accounting as a common ledger. Source: .
Ideated and developed by Todd Boyle and Ian Grigg in the 1990s and early 2000s, triple-entry accounting ensures that two parties can maintain a trustworthy shared record by exchanging signed messages of offer, acceptance, and validation. If Alice wants to update the shared record, she sends a signed message to Bob via a third party called Ivan; if Bob agrees with the update (and his agreement is required), he replies by accepting the update in another signed message via Ivan; finally, Ivan checks the validity of the signatures and, if everything is in order, signs off on the record as well.
The result is a signed receipt, which constitutes a common ledger implementing the WYSIWIS (“What You See Is What I See”) principle. In other words, Alice, Bob and Ivan can be certain (because of how cryptographic signatures work) that they all hold exactly the same record. If the entire set of receipts is read, each party’s balance can be known, which enables huge efficiency gains in accounting.
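To make the offer/accept/validate flow concrete, here is a minimal sketch in Python. It is illustrative only: real triple-entry systems use public-key signatures, whereas this stand-in uses HMACs with per-party secrets so it runs with the standard library alone; the names and amounts are made up.

```python
# Minimal sketch of a triple-entry signed receipt (illustrative only).
# Real systems use public-key signatures; HMAC with per-party secrets is a
# stand-in here so the example runs with the standard library alone.
import hmac, hashlib, json

KEYS = {"alice": b"alice-secret", "bob": b"bob-secret", "ivan": b"ivan-secret"}

def sign(party: str, payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(KEYS[party], msg, hashlib.sha256).hexdigest()

# 1. Alice offers an update to the shared record and signs it.
entry = {"from": "alice", "to": "bob", "amount": 10}
offer = {"entry": entry, "alice_sig": sign("alice", entry)}

# 2. Bob accepts by signing the same entry (his agreement is required).
acceptance = {**offer, "bob_sig": sign("bob", entry)}

# 3. Ivan verifies both signatures and countersigns, producing the receipt.
assert acceptance["alice_sig"] == sign("alice", entry)
assert acceptance["bob_sig"] == sign("bob", entry)
receipt = {**acceptance, "ivan_sig": sign("ivan", entry)}

# All three parties hold the same signed receipt: what you see is what I see.
print(receipt)
```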
At first sight, it appears that it is not. Stephan February offers an impeccable explanation of how you get from Grigg’s original triple-entry model to peer-to-peer digital cash (i.e., Bitcoin) by replacing Ivan with a distributed timestamp system. However, February himself argues that there is no third signature in this design. Furthermore, there is no second signature either!
February’s account of Bitcoin as a triple-entry system. Source: .
Can we then say that Bitcoin is not triple-entry accounting, because there are no second and third signatures? Not necessarily. As we have argued, even if those signatures were necessary, they are there upon closer examination.
Bitcoin’s UTXO model. Source: Joerg .
In Bitcoin, miners check for valid transactions and include them in blocks, incentivized by the block reward that, together with transaction fees, they receive for each block mined (the “coinbase transaction”). The community accepts a block by including its hash in the next accepted block (which forms the “chain” in blockchain). Together, community consensus and the coinbase transaction act as third-party validation of the transaction (for a more detailed explanation, see the of February’s article).
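A toy sketch of the chaining itself, under the simplifying assumption that a block is just a dictionary hashed with SHA-256 (real Bitcoin blocks use an 80-byte header, Merkle roots, and proof-of-work):

```python
# Toy illustration of how including the previous block's hash "chains" blocks:
# accepting block N implicitly re-confirms every block before it.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev_hash = "0" * 64  # genesis placeholder
for height in range(3):
    block = {
        "height": height,
        "prev_hash": prev_hash,                       # build on the accepted tip
        "coinbase": {"to": "miner", "amount": 6.25},  # block reward plus fees
        "txs": [f"tx-{height}-a", f"tx-{height}-b"],
    }
    prev_hash = block_hash(block)
    chain.append(block)

# Tampering with an old block changes its hash, breaking every later link.
chain[0]["txs"].append("forged-tx")
assert block_hash(chain[0]) != chain[1]["prev_hash"]
```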
To read our research in more detail, access our full paper .
Boyle, T. 2001g. The Shared Transaction Repository (STR) ver. 0.60 spec. GL Dialtone. Available at
February, S. M. 2020a. Grokking Bitcoin. TwoStack. Available at
February, S. M. 2020b. Triple Entry Accounting. TwoStack. Available at
Grigg, I. 2005. Triple Entry Accounting. Systemics Inc. Available at
Grigg, I. 2015. The Nakamoto Signature. Financial Cryptography. Available at
Ibañez, J. I., Bayer, C. N., Tasca, P., Xu, J. 2020. REA, Triple-entry Accounting and Blockchain: Converging Paths to Shared Ledger Systems. Social Science Research Network (SSRN). Available at
Ibañez, J. I., Bayer, C. N., Tasca, P., Xu, J. 2021. The Efficiency of Single Truth: Triple-entry Accounting. Social Science Research Network (SSRN). Available at
Ibañez, J. I., Bayer, C. N., Tasca, P., Xu, J. 2021. Triple-entry Accounting, Blockchain and Next of Kin: Towards a Standardization of Ledger Terminology. Social Science Research Network (SSRN). Available at
Essentially, a blockchain is a new type of distributed system. It started with the advent of Bitcoin and has since made a lasting impact in the field of distributed computing. So, if you want to really understand how blockchains work, a solid grasp of the principles of distributed systems is essential.
In the paper “Time, Clocks, and the Ordering of Events in a Distributed System,” Leslie Lamport shows how we can deduce whether one event happens before another by remembering two factors: events within a single process are ordered by that process, and a message must be sent before it can be received.
By determining which event happens before another, we can get a partial ordering of events in the system. Lamport’s paper also describes an algorithm which requires each computer to hear from every other computer in the system. In this way, events can be totally ordered.
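A minimal sketch of the logical-clock idea (not Lamport's full total-ordering algorithm): each process keeps a counter, incrementing it on local events and taking the maximum with any received timestamp, so timestamps respect the happens-before relation.

```python
# Sketch of Lamport logical clocks: a process bumps its counter on local events
# and, on receiving a message, jumps to max(local, received) + 1, so the
# resulting timestamps are consistent with "happens before".
class Process:
    def __init__(self, name: str):
        self.name, self.clock = name, 0

    def local_event(self) -> int:
        self.clock += 1
        return self.clock

    def send(self) -> int:
        self.clock += 1
        return self.clock  # the timestamp travels with the message

    def receive(self, msg_ts: int) -> int:
        self.clock = max(self.clock, msg_ts) + 1
        return self.clock

a, b = Process("A"), Process("B")
a.local_event()          # A: 1
ts = a.send()            # A: 2, message carries timestamp 2
b.receive(ts)            # B: 3 -> the receive is ordered after the send
print(a.clock, b.clock)
```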
But wait — there’s a huge caveat: coordinating otherwise independent clocks is a very complex computer science problem. Even if you initially set a bunch of clocks accurately, the clocks will begin to differ after some amount of time. This is due to “clock drift,” a phenomenon in which clocks count time at slightly different rates.
Although most real systems are designed to withstand Byzantine failures, some argue that these designs are too general and don’t take into account “rational” failures, wherein nodes can deviate if it is in their self-interest to do so. In other words, nodes can be both honest and dishonest, depending on incentives. If the incentives are high enough, then even a majority of nodes might act dishonestly.
In their 1985 paper “Impossibility of Distributed Consensus with One Faulty Process,” researchers Fischer, Lynch, and Paterson (aka FLP) show how even a single faulty process makes it impossible to reach consensus among deterministic asynchronous processes. Essentially, because processes can fail at unpredictable times, it’s also possible for them to fail at exactly the opportune time that prevents consensus from occurring.
Introduced in the 1990s, Paxos was the first real-world, practical, fault-tolerant consensus algorithm. It is one of the first widely adopted consensus algorithms, was proven correct by Leslie Lamport, and has been used by global internet companies like Google and Amazon to build distributed services.
In favor of offering flexibility of implementation, several specifications in key areas are left open-ended. Things like leader election, failure detection, and log management are only vaguely defined or left undefined.
In 2013, Ongaro and Ousterhout published a new consensus algorithm for a replicated state machine called Raft, whose core goal was understandability (unlike Paxos).
Trying to build a reliable computer system that can handle processes that provide conflicting information is formally known as the “Byzantine Generals Problem.” A Byzantine fault-tolerant protocol should be able to achieve its common goal even against malicious behavior from nodes.
The paper “The Byzantine Generals Problem” by Leslie Lamport, Robert Shostak, and Marshall Pease provided the first proof solving the problem: it showed that a system with x Byzantine nodes must have at least 3x + 1 total nodes in order to reach consensus.
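A trivial worked example of that bound, assuming nothing beyond the 3x + 1 formula itself:

```python
# Worked check of the 3x + 1 bound: tolerating x Byzantine nodes requires
# at least 3x + 1 total nodes.
def min_nodes(byzantine: int) -> int:
    return 3 * byzantine + 1

for x in (1, 2, 3):
    print(f"tolerating {x} Byzantine node(s) requires >= {min_nodes(x)} nodes")
# tolerating 1 Byzantine node(s) requires >= 4 nodes, and so on.
```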
The paper “Consensus in the Presence of Partial Synchrony” by Dwork, Lynch, and Stockmeyer (hence the name “DLS” algorithm) introduced a major advancement in Byzantine fault-tolerant consensus: it defined models for how to achieve consensus in a partially synchronous system.
Another Byzantine fault-tolerant algorithm, published in 1999 by Miguel Castro and Barbara Liskov, was called “Practical Byzantine Fault Tolerance” (PBFT). It was deemed to be a more “practical” algorithm for systems that exhibit Byzantine behavior.
For example, Tendermint is a newer consensus algorithm that is heavily influenced by PBFT. In its “validation” phase, Tendermint uses two voting steps (like PBFT) to decide on the final value. The key difference with Tendermint’s algorithm is that it’s designed to be more practical.
A major contribution of Nakamoto Consensus is the use of the gossip protocol. It’s more suitable for peer-to-peer settings where communication between non-faulty nodes can’t be assumed. Instead, we assume a node is only connected to a subset of other nodes and use a peer-to-peer protocol in which messages are gossiped between nodes.
And there’s an entirely new family of protocols being developed that go beyond Nakamoto Consensus. , anyone? ;) But I’ll save that for the next post — stay tuned!
DeFi is a developing market sector within the intersection of blockchain technologies, digital assets, and financial services. According to , the value of digital assets locked into DeFi applications grew 10X from less than $1 billion in 2019, to over $10 billion in 2020, and over $80 billion at its peak thus far in 2021. Yet DeFi applications and the underlying infrastructure are still in their nascent stage of development.
DeFi apps are financial applications with no central counterparties. In practice, this means there is no institution (e.g., a bank) that you interface with to access these financial applications; instead, users interface directly with programs (e.g., smart contracts) on top of the protocol itself. For more of a DeFi 101 primer, I highly recommend .
This is commonly referred to as the “composability” or “money legos” aspect of DeFi. All of these DeFi apps are like individual lego pieces that can be remixed to work with other lego pieces to build something new.
Because of these arbitrary rules set by financial institutions, these onboarding processes end up excluding many groups of people.
This is commonly referred to as DeFi apps being “permissionless.” If you have the funds in your wallet for the transaction you want to make, you can make it. There are no institutions or intermediaries to stop you or deny you service. It doesn’t matter what your background is or what country you come from; DeFi apps do not discriminate.
Unique to Solana, there is an additional layer occupied by a DeFi project named Serum, which provides a CLOB (central limit order book) that is used by all of the DeFi projects built on top.
Embeddable Wallets — Wallets are seen as external services; there aren’t yet offerings of white-label wallets that can be embedded directly into DeFi apps themselves. There are several initiatives such as , but these are still in their infancy.
If you are building the horizontal infrastructure layers of the new open-source financial stack, including trading, lending, borrowing, and/or any horizontal tools that all new DeFi projects will rely upon in the future, we want to chat with you. Send me a message >
Decentralized finance, also referred to as “DeFi” or open finance, aims to recreate traditional financial systems (such as lending, borrowing, derivatives, and exchange) with automation in place of middlemen. Once fully automated, the financial building blocks of DeFi can be composed to produce more complex capabilities. Today, the primary venue for decentralized finance is Ethereum, but in principle these ideas can be implemented on any smart contract platform.
If we try recreating traditional financial products on a blockchain, we are faced with an immediate problem: price volatility. Specifically, the native cryptocurrency of the Ethereum blockchain (namely ETH) experiences large fluctuations in the USD/ETH exchange rate, sometimes moving 10% or more in a single day. An instrument with this degree of price volatility is less than ideal for a number of traditional financial products. For example, if you take out a loan, you don’t want loan payments to oscillate by 10% right before a payment. That degree of volatility would make it hard to plan for the future. Stablecoins are one solution to this problem. These are cryptocurrencies specially engineered to remain “stable” at an exchange rate of approximately 1.00 units of fiat per coin. The and provide a good list of the top stablecoins. There are three general categories of stablecoins: centralized fiat-collateralized, decentralized crypto-collateralized, and decentralized algorithmic.
Centralized fiat-collateralized stablecoins are backed 1:1 by fiat in a bank account. For example, the USDC stablecoin issued by Coinbase is backed 1:1 by US dollars in a bank account. There is little risk to holding or using the coin as long as you trust the issuing entity and the underlying fiat. Another benefit is that there is a centralized entity that is liable if something goes wrong, which many individuals and businesses prefer. In the US, FDIC deposit insurance covers up to at least $250,000, while other countries have their own terms. While these sound like ideal features, not everyone has access to centralized stablecoins. For example, the user agreement for USDC states that it is only available in and users are prohibited from transacting USDC for certain activities.
Decentralized crypto-collateralized stablecoins don’t have a central operator or user agreement. This means anyone can use them without the permission of a company or government. However, the tradeoff to not directly backing a stablecoin with fiat is increased complexity around maintaining stability. Rather than the simple model of USDC where $1000 of USDC is backed by $1000 in a bank, crypto-collateralized stablecoins back $1000 of their coin with at least $1000 of (highly volatile) cryptocurrency. For example, Maker is a system built on Ethereum that governs a decentralized stablecoin called DAI. DAI aims to be pegged at 1.00 USD. The way the peg works is that DAI can be “minted” by anyone within the Maker system by locking up crypto as collateral (primarily ETH) and taking out a loan of DAI. The collateral provided needs to be greater than the amount borrowed so that the loan is overcollateralized. For example, you can lock up $200 worth of ETH as collateral to borrow $100 worth of DAI, which you can then use to trade on an exchange. The main reason to do this is leverage: if you believe the price of ETH will not drop significantly, you are effectively getting a “free” $100 to trade on crypto exchanges. If the price of ETH drops so that your $200 worth of ETH is now below the collateralization requirement, the Maker algorithm will seize your collateral and liquidate it to get back ~$100. In this fashion the Maker algorithm tries not to lose money on a loan. While the Maker system is significantly more complex than something like USDC, in theory an end user of DAI who isn’t minting wouldn’t need to understand the complexity, in much the same way that users of normal US dollars don’t need to understand the intricacies of monetary policy. With that said, DAI does present its own risks, including smart contract risk and the possibility of DAI breaking the peg and trading significantly above or below 1.00 USD/DAI.
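A back-of-the-envelope sketch of the overcollateralization check, using the $200-of-ETH-for-100-DAI example above; the 150% minimum ratio is an illustrative assumption, not Maker's actual parameter set.

```python
# Illustrative check of an overcollateralized DAI-style loan. The 150% minimum
# collateral ratio is an assumption for the example, not a protocol constant.
MIN_COLLATERAL_RATIO = 1.5

def is_safe(collateral_eth: float, eth_price_usd: float, debt_dai: float) -> bool:
    collateral_value = collateral_eth * eth_price_usd
    return collateral_value / debt_dai >= MIN_COLLATERAL_RATIO

# Lock $200 of ETH (1 ETH at $200) and borrow 100 DAI.
print(is_safe(1.0, 200.0, 100.0))   # True: the loan is 200% collateralized
# If ETH falls to $140, the ratio drops below 150% and the position becomes
# eligible for liquidation.
print(is_safe(1.0, 140.0, 100.0))   # False
```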
Decentralized algorithmic stablecoins are the third class of stablecoins. These do not have any collateral backing their system, relying solely on algorithms to keep the price stable. One example is Basis, which shut down before it launched. A concern some have with this model is that a well-funded and motivated entity could attack such a system and drive people to lose confidence in the stability of the peg. This could then lead to a death spiral and the collapse of the stablecoin.
There are a number of DeFi platforms which enable borrowing and lending of Ethereum tokens directly through smart contracts, like , , and . An impressive feature of these smart contracts is that a borrower need not find a lender or vice versa. Instead, the smart contract replaces the role of the middleman, and interest rates are calculated algorithmically according to supply and demand.
Before we explain how this works, it’s worth reviewing how some centralized crypto exchanges implement borrowing and lending. Here’s a visual of the Bitfinex :
But why is decentralized lending even useful? The answer is that it’s a massively scalable application. A crypto service which makes more interest than traditional bank accounts with lower risk could in theory attract billions in deposits. Compound is already at in deposits, and other services are growing fast. The main risks are related to smart contract bugs and crypto volatility, but the interest rates are also substantially higher than typical bank rates of 2% or lower. Here’s a snapshot of interest rates that can be earned for lending out stablecoins on different platforms, from :
There are a number of projects pursuing decentralized exchange of Ethereum-based tokens in various forms, like , , and . For example, Uniswap utilizes a so-called automated market maker (AMM) to algorithmically provide liquidity. Buyers and sellers pull liquidity from the smart contract directly and receive a price quote based on the token quantity desired and the liquidity available. Uniswap will always quote a price regardless of the order size by asymptotically increasing the price as the size of the order increases. Decentralized exchanges can currently handle only a fraction of the volume of centralized exchanges, and as such can’t really convert large amounts of money back and forth. Moreover, many of these projects are limited to trading Ethereum-based tokens on the Ethereum blockchain, which limits their access to large cap coins with their own chains. Still, there are promising technologies like and that may address these limitations.
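For intuition, here is a sketch of how a constant-product (x · y = k) AMM in the style of Uniswap quotes a price that worsens as order size grows; the pool sizes and the 0.30% fee are illustrative assumptions.

```python
# Sketch of a constant-product AMM quote (x * y = k). A quote always exists,
# but the effective price per token degrades as the order size grows.
def quote_out(reserve_in: float, reserve_out: float, amount_in: float,
              fee: float = 0.003) -> float:
    amount_in_after_fee = amount_in * (1 - fee)
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in_after_fee
    return reserve_out - k / new_reserve_in

# Hypothetical pool holding 1,000 ETH and 2,000,000 DAI (~2,000 DAI/ETH spot).
for size in (1, 10, 100):
    out = quote_out(1_000, 2_000_000, size)
    print(f"sell {size} ETH -> {out:,.0f} DAI ({out / size:,.0f} DAI/ETH)")
```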
Decentralized identity and reputation services could offer something similar by including attributes such as social media reputation, history of repayment of previous loans, vouching from other reputable users, and the like. Making this useful for actual financial decisions will require a lot of trial and error on the specific data points to use and the corresponding collateral requirements, and we are just at the beginning of that process. Interestingly, in the long run, DeFi with decentralized identity systems could become another option for people locked out of traditional financial systems. For example, there are without an official ID, and ~50% of women in low income countries lack an ID. However, many of these people do have smartphones. So it’s possible that once decentralized IDs work in the developed world, that they could be rapidly exported to the developing world as a leapfrog technology -- much like smartphones themselves.
We’ve covered decentralized stablecoins, loans, exchanges, and identity. But perhaps the most important aspect of building decentralized finance building blocks on a smart contract platform like Ethereum is the composability. Just like a software library, smart contracts for different financial applications can plug into each other like lego pieces. For example, if you want to add the ability to trade tokenized assets on your platform, you can easily make the assets tradable by integrating a decentralized exchange protocol. And these smart contract lego pieces can even create completely new concepts that haven’t been explored in the traditional world. One example is the project called that brings together DeFi and social media by allowing participants to use their Twitter accounts to “mint” new tokens for themselves, essentially generating digital dollars from their social capital. Popular accounts can put out premium content that is only accessible to a specific group of token holders, which allows them to monetize their following. One can then do interesting things like bet on certain Twitter accounts becoming more popular. Another example is the project called PoolTogether, which combines DeFi and lotteries to create a “no-loss” lottery. Users purchase lottery tickets on-chain, and all funds from ticket purchases earn interest on Compound. At the end of the lottery, everyone gets their funds back -- but one person gets all of the interest earned on the pooled money. It’s essentially a way to use lottery mechanics to incentivize savings and wealth creation!
Decentralized insurance like and are an area within DeFi that provides the ability to hedge some of these risks. Prediction markets like also address the hedging use case by allowing users to bet on the probability that there will be a smart contract issue with one of the protocols they are using. With that said, these hedging methods themselves are in their infancy, and add smart contract risk of their own. We do think they will mature, however. And if the DeFi space gets large enough, then traditional insurance companies might offer products too.
👉 Read the full post
Here is a compilation of our DeFi 101 short guides, plus their related news articles, podcasts, videos and guest op-ed pieces. These resources will get you from newbie to Jedi of the most cutting-edge corner of tech and finance. They’ll help you better leverage our , and , which will in turn help you stay updated on the space and take your understanding to the next level.
At a basic level, blockchain is nothing more than a type of database. Every time one accesses an online account, such as Twitter, Google, or Facebook, one links up to a database.
Interoperability happens when two or more systems can communicate and exchange value. The best way to understand interoperability is to see the early success stories of it in action.
Consider Ethereum first. It is the underlying blockchain architecture…
Layer 2 is a collective term for solutions designed to help scale your application by handling transactions off the main Ethereum chain…
As the Ethereum network gets busier, gas prices increase as transaction senders aim to outbid each other. This can make using Ethereum very expensive.
In short: DeFi is a global, open alternative to the current financial system. It differs through products and services built on open-source technology that anyone can use to…
DeFi today can be found mostly flourishing on Ethereum. New applications launch faster than you can keep up with…
Centralized wallets store your crypto assets on your behalf. To open an account using a centralized wallet, first verify your identity and fulfill know-your-customer…
Electronic cash is nothing new. In fact, over 150 years have passed since the first wire payment made by…
CeFi vs DeFi. Centralized exchanges (CEXs) versus decentralized exchanges (DEXs). Trading on CEXs involves…
Yield farming is one of the many memes that was created by the DeFi community. While it’s a term that gets tossed around loosely, there’s a narrower criteria to define what is yield farming…
Yield Farming is the hottest thing in blockchain right now drawing in millions of dollars in collateral as traders and investors look to capitalise on high interest rates and advantageous secondary token rewards. What's really going on here and is it sustainable?
You can lose 100% of your money in any DeFi app or protocol. So above all else, apply a simple rule of thumb to help mitigate this risk: don’t deposit more than you’re willing to lose.
As more apps and protocols launch, and more money pours into DeFi, it’s important that DeFi investors can mitigate the risk of losing their funds. With great power, comes great responsibility.
Yearn Finance, the yield aggregation protocol founded by Andre Cronje, has been hacked. One of the platform’s so-called vaults lost $11M, and the attacker got away with $2.8M.
Listen to the podcast episode .
Alpha Finance was hacked to the tune of $37m but this was a hack so specific and well-informed it was immediately assumed to have been an inside job…
After five attacks and millions of dollars lost due to an exploit, this week we're diving deep into the mechanism behind what made this possible: Flash Loans.
Another week, another hack… what have we learned this time?
In this guide we’ll be helping you take your first steps into the boundless world of Decentralized Finance, better known as DeFi.
MetaMask is a free cryptocurrency and NFT wallet that comes as a browser extension or a mobile app to interact with the Ethereum network, BSC, or any other layer 2 networks.
“Gas” is the ETH required to power every transaction on Ethereum. Gas is a term that was coined to describe the ETH (ether) required to transact on the Ethereum network.
Uniswap is a decentralized financial exchange, or DEX, which allows anyone to take part in the financial transactions of Ethereum-based tokens without a central body…
DeFi has the potential to be a wild place at times. Seemingly bulletproof protocols can rug in an instant or suffer exploits that token prices will never recover from.
In this podcast episode Camila Russo speaks with , who has focused on cross-chain interoperability as founder of Summa, and now with proof of stake blockchain Celo. They talked about how against his expectations, users have opted for wrapped tokens, instead of cross-chain swaps; it all comes down to ease of use. He believes in the future, all dapps will have to be built with cross-chain, cross-shard capabilities as a default.
This podcast interview is with Daniel Wang, CEO and founder of the protocol. The exchange built on the protocol launched a little over a month ago, with the goal of providing a non-custodial platform, meaning it allows users to keep control of their funds, with similar performance in throughput and cost as centralized exchanges.
DJ and music producer (Justin Blau) explores what that will look like with music. He envisions a world where tokens allow artists to connect directly with their audience, removing the need for intermediaries, and where fans can become investors in their idols.
, is behind , a 19-year-old aspiring pop star with almost 3M Instagram followers, striking deals with Calvin Klein and Prada. But what’s more surprising is that she’s not actually human. She’s a computer-generated character. The company that Trevor co-founded, called Brud, created her image and life story.
DJ Andre Anjos, aka RAC, makes electronic and dance music, and has been involved in crypto in one way or another since 2016. Most recently, he tokenized his latest album and sold it on the digital goods marketplace ZORA. The token, which has the ticker , is linked to a physical, limited edition cassette tape that token holders can redeem.
Please do not hesitate to contact Anupam Majumdar at or Dan Carr at with comments or questions
https://medium.com/the-bitcoin-times/what-is-lighting-50edf410110e
Aleksandar Svetski, Dec 2019
A high level primer on Bitcoin’s most well known second layer tech.
We’re now in the home stretch of the first edition of the Bitcoin Times.
In the last chapter, we closed out with comparisons between Bitcoin & the internet. In the next few chapters, we’ll give some high-level understanding of the concepts of second layer tech, with the focus being on Lightning.
Giving Lightning the explanation and time it deserves is out of the scope of this paper, but I’ll attempt to give you enough of an understanding to go further down the rabbit hole with….
Imagine a network, where each of the participants are not only route ‘users’, but also route operators. Where every participant becomes a node, that strengthens and broadens the network for not only themselves, but for everybody using it.
The best analogy I can think of is the internet (once again). The internet really exploded, when we became not only ‘consumers of content’, but also creators and routers of this data and content.
Testament to this explosion is the company which arguably capitalized on it the most. Facebook is barely 15 years old and is one of the largest companies in the world. It gave everyone a forum to consume, create and share content; in other words, everyone was a node that made Facebook more valuable (and not just in relation to its market cap).
Conceptual way to understand the network effect layers have
What happens when you apply that same concept of read / write / route to money and payments?
In short — it changes the game.
Whilst not entirely accurate: Bitcoin is like the internet (one transformed information, the other money) and Lightning is a little like Facebook in that it makes money a content type that everyone can collectively participate in.
Money has never had that kind of fluidity, and it’s this fluidity that Lightning represents at a high level.
BUT… you might say: “Wait a minute. Not everyone can route money. They’re not a bank! How can we trust them?” That’s where Bitcoin comes in.
Lightning is technically able to be applied or “anchored” onto other networks, but its maximum utility comes from doing so on a network that gives the highest guarantee of immutability.
That’s the entire point, and how we unlock Lightning’s potential.
If you can refer back to something that has prioritized security, stability, and resistance to censorship and shutdown, then you can begin to really abstract and build financial complexity on top of it, without worrying about the potential for error, compromise, fraud or failure. Bitcoin + Lightning is where the future is at.
Lightning enables:
Instant Payments: Because we’re not worrying about block confirmation times, payment speed is measured in milliseconds to seconds. It’s truly peer to peer, and as fast as data can move.
Scale: When all participants are also nodes, you don’t get the congestion we have in today’s archaic, centralised payment networks. You get true scale; capable of millions to billions of transactions per second across the network. This blows away any high-speed blockchain or any other legacy payment rails by many orders of magnitude.
Low Cost: Non-custodial micropayments (e.g., pay per action/click) are truly possible at fractions of a cent. This is the foundation for a set of use cases yet to emerge.
Security: By anchoring to a source of truth (aka Bitcoin) with simple, robust “smart contracts”, one can ensure the integrity of the second layer without recording every transaction on chain (via a complex version of netting).
Lightning is a second-layer technology. By using the native smart-contract scripting language of a network (such as Bitcoin) to anchor or connect to, it’s possible to create a secure second-layer ‘network’ of participants who are able to process and route transactions at high volume and high speed.
For example:
Dingus & Wingus decide they want to transact. Lots of times. Instead of bothering everyone on the core network and having every single validator on the core network have to record their transactions, they decide to open up what’s called a “payment channel”. Think of it like putting some money on your transaction account.
They can then transact between each other, back and forth, as much as they want — each time netting off against the prior transaction. After a certain period of aggregating these transactions, and updating the final net state; they could choose to close out the channel by broadcasting the final, net result to the underlying network (e.g: Bitcoin) and settle.
It’s important to note that this final, net entry can be closed out at any time by either party — without any trust or custodianship — by broadcasting the most recent version to the blockchain.
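A toy model of that lifecycle, assuming away signatures, time-locks, and penalty transactions entirely and keeping only the bookkeeping: many off-chain updates, one net on-chain settlement.

```python
# Toy model of a two-party payment channel: many off-chain updates, one
# on-chain settlement of the final net balances (names and amounts illustrative).
class Channel:
    def __init__(self, a_deposit: int, b_deposit: int):
        self.balances = {"dingus": a_deposit, "wingus": b_deposit}
        self.version = 0  # the highest-version signed state wins at close

    def pay(self, payer: str, payee: str, amount: int):
        assert self.balances[payer] >= amount, "insufficient channel balance"
        self.balances[payer] -= amount
        self.balances[payee] += amount
        self.version += 1  # both parties co-sign this new state off-chain

    def close(self):
        # Either party can broadcast the latest signed state to the blockchain.
        return self.version, dict(self.balances)

ch = Channel(a_deposit=50_000, b_deposit=50_000)   # amounts in satoshis
for _ in range(1_000):
    ch.pay("dingus", "wingus", 10)
    ch.pay("wingus", "dingus", 7)
print(ch.close())  # 2,000 off-chain updates settle as one net on-chain result
```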
A very sophisticated, technical diagram.
Closing a channel is also how the network deals with cases of attempted fraud or “bad acting”. The last valid, signed set of transactions between both parties wins, and there are incentive/disincentive rules that help ensure it’s in everyone’s economic self-interest to do the right thing (i.e., if you attempt to defraud the other user but the last signature shows otherwise, you lose the funds you committed to the channel).
This is all similar to how legal contracts function. One does not go to court every time a contract is made (that’s analogous to doing ‘everything on the blockchain’). Only in the event of non-cooperation is the court involved, and by making the transactions and scripts parsable and thus “anchoring” to the underlying network, these smart-contracts can be enforced and the result settled.
A payment channel between two participants is just the beginning. It’s a building block for a larger network. In fact, the network only forms when numerous payment channels join to form a web. In this way, two participants who are not directly connected can transact with each other.
Let’s say Dingus wants to pay Pingus. He can still do it even if he doesn’t have a direct connection (payment channel) with him, as long as Wingus from earlier can connect them via a chain of channels, i.e. a route.
Another… very sophisticated, technical diagram.
The exciting part is that as the network grows, you won’t necessarily even need to set up a dedicated channel to send funds to a certain person. Instead, you will be able to send payment to someone using channels that you’re already connected with. The system will automatically find the shortest route.
By creating an entire network of these two-party ledger entries, it’s possible to find payment paths across the network, similar to how packets are routed on the internet.
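As a sketch, routing over the channel graph can be thought of as a shortest-path search; real Lightning implementations also weigh fees, capacity, and privacy, which this toy breadth-first search ignores.

```python
# Toy route-finding over a channel graph: breadth-first search for the
# shortest hop path between two participants (graph is illustrative).
from collections import deque

channels = {
    "dingus": ["wingus"],
    "wingus": ["dingus", "pingus"],
    "pingus": ["wingus"],
}

def find_route(src: str, dst: str):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in channels.get(path[-1], []):
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None

print(find_route("dingus", "pingus"))  # ['dingus', 'wingus', 'pingus']
```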
As all the pieces of the puzzle come together, one starts to see the magnitude of this innovation.
This is how everything else in nature works, along with the functional systems of cooperation we’ve built throughout the millennia.
You abstract the small, you settle when you need to — whether that be at closure, or on disagreement.
In fact, it’s a big part of how modern banking evolved (because it’s more efficient).
The difference (and beauty) with Bitcoin is that you can’t influence or manipulate it (remember: immutability as a service) so it will be the ultimate arbiter & settle as per the original rules.
It’s time to move to a new model where the arbiter / settlement function is digital and owned by the commons, not by the few.
What could the future hold?
Aside from the potential of doing billions of transactions per second (seeing as speed is what everyone wants), and transforming payments and value transfer from things that happen at a time and place into something that “streams” over time and space, perhaps something a little easier to imagine is the launch of a Bitcoin bank.
One where anyone, anywhere in the world could set up an account in seconds, and begin participating in global commerce.
Where all reserves are held and denominated in Bitcoin, on Bitcoin — and the organisation can be held 100% accountable because it’s all transparent and able to be queried.
Could we open up the ability to lend, borrow, spend, save, trade and interact globally — without worrying about exchange rates, inflation and manipulation?
I don’t know. Maybe. Or maybe I’m thinking way too small. I don’t know.
What I do know is that the real innovation is yet to come — and those innovations (like Facebook and cat videos on the internet) will not be skeuomorphic.
They will not be something we can predict or even imagine today. Myself and my team at Amber are making inroads in the new world, and we’ll continue to be at the cutting edge — but we are well and truly at the beginning — and all we can do is keep pushing the envelope.
Sometimes images speak louder than words
In the next & final section of The Bitcoin Times Edition 1, we’ll review Money as the fabric of society, how money functions, and we’ll close out the paper with some final thoughts.
Thank you for sticking with us along this journey.
https://defiprime.com/defi-yield-farming
William M. Peaster, Jun 2020
Actual farmers measure yield as the total amount of a crop that’s grown. Accordingly, DeFi proponents have now latched onto the farming metaphor and memed into existence “yield farmers,” i.e. folks who measure yield as the amount of interest that’s grown atop underlying crypto assets like Dai, USDC, and USDT when put to use in DeFi platforms like Compound.
The DeFi arena was already catching fire in 2020 before yield farming exploded onto the scene, but things have definitely kicked into overdrive in the sector thanks to the beginning of Compound’s COMP governance token distribution system in June. Simply put, the scheme rewards Compound users with COMP.
Before the distribution system started, Compound was the second-largest DeFi project per total value locked in its smart contracts. Yet just days after the system’s launch, Compound’s now decisively the largest DeFi project and its COMP token has the largest market cap of any DeFi token at press time. Why? Droves of traders have migrated to Compound to use the platform in order to “farm” COMP.
This buzz around the biggest happening in DeFi this year so far has had more than a few crypto users renewing their focus on various yield farming activities already available in the ecosystem, e.g. through projects like Balancer, Curve, and Synthetix. That said, yield farming is not necessarily new, but the surge of attention around such cryptonative opportunities absolutely is. Let’s dive deeper into DeFi’s hottest meme right now to better wrap our heads around what it means for us users.
InstaDApp’s made yield farming easy for Compound users.
To that end, BAL is the governance token of Balancer. Of the 100 million BAL ever to be minted, up to 65 million have been set aside to reward liquidity providers. 145,000 BAL are now being distributed to these providers on a weekly basis, which has led to traders parking their funds in Balancer pools to yield farm the BAL rewards.
Back in March, Synthetix launched an incentives program for liquidity providers of sUSD, the native stablecoin of Synthetix, through the Curve and iearn exchange protocols. The 4-week test campaign was designed so that 32,000 SNX tokens would be “distributed pro-rata to liquidity providers who stake their Curve LP tokens.”
The system worked as follows: users deposited sUSD and another supported stablecoin (think USDC, USDT, or Dai) into iearn, at which point they’d receive an allocation of Curve.fi sUSD/y.curve.fi tokens. Depositors could then take these tokens to Mintr, the Synthetix ecosystem’s decentralized minting hub, and stake them in order to qualify for the trial’s SNX rewards. The idea, then, was that liquidity providers would get regular pool APY and incentives in the form of SNX, for providing liquidity for the Synthetix ecosystem. Obviously, that’s a very attractive campaign for yield farmers, where you’re earning interest on your parked digital assets as well as additional liquid digital assets you can instantly sell for profits on any DEX.
Because of the unique way the system has been set up, participants who provide WBTC, renBTC, and sBTC liquidity to the new BTC Curve liquidity pool will simultaneously earn BAL, SNX, REN, and CRV (Curve’s coming reward token). That’s a yield farmer’s dream come true, but how’s it possible? First off, the Synthetix and Ren teams have created a Balancer pool composed of SNX and REN tokens. This pool will generate BAL from Balancer’s liquidity mining campaign, as well as liquidity provider (LP) rewards in the form of BPT, which is simply a wrapped combination of SNX and REN.
For 10 weeks, then, this generated BAL and BPT will go to depositors who provide liquidity to the BTC Curve liquidity pool. Additionally, the LPs will also be allocated CRV tokens for servicing Curve.
Why only 3 days? Because the demand to use Futureswap was so explosive in that span that the project’s builders saw enough and decided to shutter the Alpha platform early just to be cautious. But those who used the exchange while it was live saw potential. In a follow-up analysis, the Futureswap team said high volume during the Alpha “translated into the outperforming of holding equal value amounts of ETH/DAI for liquidity providers of over 550% annually.” That margin will certainly get most yield farmers’ attention.
Be curious, but don’t be reckless. Yield farming is new and isn’t going anywhere, so there’s no need to rush in. And in zooming out a bit, the fact that yield farming isn’t going anywhere makes it yet another ace for Ethereum when it comes to fostering interesting things that users want.
https://incentivized.substack.com/p/where-do-defi-yields-come-from
Yuga.eth, Jan 22
One of the most distinctive characteristics of Decentralized Finance (DeFi) is the prevalence of the concept of “yield.” Every day, new protocols advertise absurdly high numbers in an attempt to attract customers: 97% APR on ThisCoin, 69,420% APY on ThatCoin, and so forth. Of course, this property of DeFi is also what makes newcomers suspicious. How could a new protocol on a blockchain you’ve never heard of possibly earn a billion percent when the rate for a typical savings account is 0.5%?
This article will demystify the notion of DeFi yield. We will lay out the most common mechanisms by which DeFi protocols provide yield to their customers, and in particular, we will make the following argument:
DeFi yield comes from the value of the underlying DeFi protocol. Thus, locking up your money with a yield-generating DeFi protocol is a bet that the protocol itself is intrinsically valuable.
Let’s dive in.
Notice how I used the word “value” above. How does one denominate “value”? The fact is, it’s up to you: understanding denomination of value is the key to understanding DeFi yield.
As an example, let’s suppose that a DeFi protocol offers 1,000% yield for staking its native asset, ThisCoin. ThisCoin is currently trading at $2 per coin. You take $100, buy 50 ThisCoins, and stake the entire sum and expect to receive 50 * 1,000% = 500 ThisCoins at the end of a year.
A year passes, and you indeed do receive 500 ThisCoins back: you have achieved 1,000% yield! But now, each ThisCoin costs just 1 cent. That means that your 500 ThisCoins are worth $5, and in dollar terms, you suffered a 95% loss. On the other hand, if each ThisCoin costs $20 per coin, then you now have a $10,000 stack, for a 10,000% yield in dollar terms.
The point is this: whether you value the 1,000% yield in ThisCoin more than the 95% loss in US Dollar value is up to you, depending on whether you believe ThisCoin holds more intrinsic value than the US Dollar. (In this case, it probably doesn’t.)
The example serves to illustrate that to understand a protocol’s yield, you must understand the assets by which the yield is being disbursed. The potential fluctuations in the value of the yield-disbursed assets is a risk that needs to be taken into consideration.
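Restating the ThisCoin example as arithmetic (the helper below is hypothetical, just to make the denomination point explicit):

```python
# The ThisCoin example in numbers: the same 1,000% token-denominated yield can
# be a large loss or a large gain once restated in dollars.
def dollar_return(staked_usd: float, entry_price: float,
                  token_yield: float, exit_price: float) -> float:
    tokens_staked = staked_usd / entry_price
    tokens_out = tokens_staked * token_yield      # 10.0 corresponds to 1,000%
    return tokens_out * exit_price / staked_usd - 1

print(dollar_return(100, 2.00, 10.0, 0.01))  # -0.95 -> 95% loss in USD terms
print(dollar_return(100, 2.00, 10.0, 20.0))  # 99.0  -> ~10,000% gain in USD terms
```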
Even with the proper denomination of value, there can be a great deal of variability in the way that yield is calculated. Here are some common aspects that you should be aware of when you see a sky-high number for the yield.
Lookback Period: The APY figures you see on DeFi protocols can be based on data from a time period in the past: it could be the past day, the past week, the past year, or anything else. Given the variability of market fluctuations, protocol performance, etc. the projected APY can change wildly. Thus, projected APY may or may not resemble actual APY depending on past and future market conditions. Mitigate this risk by understanding the time horizon on which protocol APY figures are computed.
For example, in the last 24 hours, the TraderJoe USDC-AVAX pool received $156K in trading fees, disbursed to the LPs. Given the pool’s overall $166M value, this corresponds to an APR of $156K * 365 / $166M = 34.2% for liquidity providers.
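The same arithmetic as a quick sanity check of the annualization:

```python
# Annualize one day of trading fees against total pool value to get the APR.
daily_fees_usd = 156_000
pool_value_usd = 166_000_000
apr = daily_fees_usd * 365 / pool_value_usd
print(f"{apr:.1%}")  # ~34%, in line with the figure quoted above
```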
But fundamentally, the source of yields for DEX and AMM LPs is clear: they come from trading fees. Traders are willing to pay these fees because the DEXs and AMMs provide a valuable service: a liquidity pool and an automatic algorithm to swap assets. The more popular the trading pair, the greater the trading volume, the greater the fees, and thus, the greater the yields for LPs. In other words, the yield derives from the value of the underlying protocol.
Borrowing / lending pools are similar to DEXs / AMMs in that they are all about providing liquidity. The difference is that the liquidity is provided in the form of debt rather than sales.
For example, borrowers may want to take out a loan in USDC in order to pay for their groceries. To do this, they deposit ETH. Two scenarios would result in a liquidation of their ETH and a closure of their loan: 1) If the price of ETH dropped enough, or 2) If the loan accrued enough interest. So long as neither of these events happen, the loan remains open. Once the loan is paid back, the borrower gets back their ETH.
Again, likening this to the TradFi setting: in the centralized world, it is the banks and other institutions (e.g. Fannie Mae and Freddie Mac) that capture the interest rate fees, passing on what are generally very low rates to depositors. In DeFi, the lenders themselves get to capture this value, and the interest rates are set algorithmically by open-source code.
Many protocols use the term “staking” to refer to any locking-up-of-assets, but this is not correct. Staking, in its true sense, is to risk loss of value for potential gain of value.
Proof-of-Stake is about ensuring that the transactions on a blockchain have integrity - there is no double-spending, fake accounting, and so on. To ensure that a series of transactions have integrity, validator nodes do the work of checking that every transaction is valid according to the rules of the blockchain.
How can we trust that the validators will do honest work? This is where Proof-of-Stake comes in. At a high level, it works as follows:
Users stake some amount of their asset with a validator.
Once staked, there is usually a minimum lock-up period, ranging from weeks to months. Eventually, users can unstake their asset.
For every honest piece of work that the validator does (e.g. computing transaction integrity, voting on correct blocks etc.), they are rewarded.
The rewards are disbursed to the stakers, and this is the yield.
For every dishonest piece of work that the validator does (e.g. violates rules, cheats, goes offline, etc.), they are slashed (i.e. some portion of funds are taken away).
The penalties are taken proportionately from the stakers.
Validators verify and vote on each other’s work to decide on honesty / dishonesty.
As a de facto successor to Proof-of-Work, Proof-of-Stake is a monumental achievement in the field of distributed consensus. By locking up your assets with a validator, you are earning yield by contributing to the security of the underlying network, which is intrinsically valuable, and you are risking the possibility of losses if the validator is dishonest or broken.
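A toy sketch of how delegated rewards and slashing flow through to stakers pro-rata, with made-up stake sizes and amounts; real networks add commission rates, lock-up periods, and partial-slashing rules.

```python
# Toy model of delegated staking: rewards and penalties are shared among
# stakers in proportion to what they delegated (numbers are illustrative).
stakes = {"alice": 600, "bob": 300, "carol": 100}   # tokens delegated
total = sum(stakes.values())

def distribute(amount: float) -> dict:
    """Split a reward (positive) or a slash (negative) pro-rata across stakers."""
    return {who: amount * stake / total for who, stake in stakes.items()}

print(distribute(+50))   # honest work: the validator reward is shared with stakers
print(distribute(-100))  # slashing: the penalty is also borne pro-rata
```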
If you don’t want to think too hard about how to generate yield, an optimizer might be for you. These are giga-brained mechanisms that take your asset, perform a bunch of DeFi operations behind the scenes, and return a larger amount of the asset to you in the end. The best analogy for this type of operation might be a mutual fund or hedge fund: you aren’t really doing the investing; you are outsourcing that to an algorithm.
Gro Protocol’s Capital Allocation.
Yield optimizers are attractive because they amalgamate a diverse range of strategies so that if any single one fails, you might still end up making money with the other ones. They also save you the time of having to research how exactly any single protocol works. That said, you assume the risk of trusting the authors of the optimizer strategy - the yield you’ll receive is a function of how good the optimizing strategy is.
Derivatives are a double-edged sword. On one hand, the ingenuity of these protocols are filling a niche that will undoubtedly become more valuable as DeFi matures as an industry. On the other hand, these are fairly complicated pieces of financial technology that the average DeFi participant cannot be expected to be knowledgeable about. If you are going to get your yield from derivative-based protocols, it’s important that you understand how the structure of the derivative directly relates to your yield!
Governance tokens are one of the largest sources of advertised yield, and yet they are also arguably the hardest to understand.
When DeFi protocols are starting out, they often reward Liquidity Providers and other early participants with the protocol’s own governance tokens, a process known as “liquidity mining.” They will also bootstrap liquidity pools on DEXes for their governance tokens, so that the tokens become exchangeable for common assets like USDC and ETH. This induces a market price for the token.
The moral of the story here is that if a significant portion of the yield for a DeFi protocol comes in the form of its native governance token, you should apply extra scrutiny to it, because in this case, your yield literally derives from the value of the underlying protocol.
Rebasing tokens are fundamentally defined by the eponymous mechanism of rebasing, which just means that if you lock up the currency in a staking contract, the amount of the currency that you hold increases by a certain percentage at a regular time period, like 8 hours. The frequency of this compounding is what leads to astronomically high APYs. For example, currently, OlympusDAO offers rewards of 0.3265% per epoch (8 hours) on staked OHM, which results in a 3,450% APY.
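The advertised figure follows directly from compounding the per-epoch reward; a quick reproduction:

```python
# Compound a small per-epoch reward three times a day for a year.
reward_per_epoch = 0.003265      # 0.3265% per 8-hour epoch
epochs_per_year = 3 * 365
apy = (1 + reward_per_epoch) ** epochs_per_year - 1
print(f"{apy:.0%}")              # ~3,450%, in line with the advertised figure
```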
Of course, if you’ve been reading this post closely, you’ll easily see that an increase in the underlying currency amount does not necessarily imply an increase in value. That is, if the number that represents the amount of currency does not meaningfully correspond to something intrinsically valuable - like software that facilitates financial exchange, or a real-world asset like the US dollar, or a share of future revenues of a protocol - then the currency amount going up does not necessarily imply that you own more value.
Recently, rebasing tokens had a reckoning as more participants realized that their high APYs didn’t actually mean they owned more value. Here are the charts of OHM, TIME (Wonderland) and BTRFLY ([redacted]) market cap:
OHM market cap has declined 85% over 3 months.
TIME market cap has been extremely volatile and declined 52% over the past 3 months.
BTRFLY market cap has declined 48% in 3 months.
“Reserve Currencies” showcase how DeFi can quickly become untethered from reality. As new participants see extremely high numbers, they become excited and buy in. This drives the price up, leading to more participants buying and staking, creating a flywheel effect. If at any point people had stopped to consider why the numbers were going up they might have realized that their high APYs were not based on value, but on an inflationary monetary instrument.
TIME in particular has seen quite a bit of drama recently, with accusations that the manager of the treasury is a convicted financial felon:
But if the currency doesn’t even attempt to make its “amount” meaningfully reflect an underlying source of value, then they shouldn’t be seen as a reliable deliverer of DeFi Yield.
There are many ways of earning yield through DeFi. Some are low-risk and some are high-risk, but in all cases, it behooves you to understand where the yield is coming from. The straighter the line between the underlying value of the protocol and the yield being disbursed to you, the surer you can be that there won’t be a rugpull down the line. Since DeFi Yield comes from the value of the underlying protocol, your key job is to determine the value of DeFi protocols. And I’ll be here to help you do that :)
https://jumpcrypto.com/yield-farming-for-serious-people/
Jump Crypto
This piece uses a conceptual lens to study yield farming, i.e., earning compounding returns on crypto assets. It illuminates the fundamental exchange of value that yield farming entails.
Yield farmers passively provide five forms of value: operating the network, lending to traders, providing liquidity, governing protocols, and raising visibility.
Yield farmers are compensated for these activities by a mix of users, non-farming protocol owners, and each other.
Warren Buffett warns that you should "never invest in a business you cannot understand." So an investor could be forgiven for looking at yield farming — a landscape full of free money ("airdrops") and scams ("rug pulls"), ruled by characters calling themselves "degens" and "apes" — and running away.
However, a closer examination reveals that yield farming is simply another type of business activity, one which provides value by bearing risk. Obscured by the frenzy of new protocols, misspelled slang, and coordinated capital flows, yield farming rewards entrepreneurial efforts to build new platforms.
In this article, we examine yield farming through a fundamental economic interpretation. In particular, we ask two questions:
What core value do yield farmers create, for which they are compensated?
Who pays that compensation, whether explicitly or implicitly?
For many traditional businesses, the answers are straightforward. For instance, our local sandwich shop provides a range of services — sourcing quality ingredients, assembling sandwiches, and wiping tables — and diners pay for those services directly. Yield farming is not much more complicated. Jargon aside, most yield farming strategies engage in five core types of economic activity, in passive and delegated ways:
Farmers support network operations, such as validating transactions.
Farmers provide lending for traders.
Farmers provide liquidity to token holders.
Farmers provide management and governance to protocols.
Farmers enhance protocols’ marketing efforts.
This piece examines yield farming through these simple, abstract lenses to help current and potential farmers understand their opportunities and evaluate their risk profiles. Since specific strategies are constantly rising and falling, we will avoid diving too deep into the mechanics of any particular trade. Instead, we will explore the underlying conceptual bedrock, which is broadly applicable to both current and future farming opportunities.
Managing passive strategies: Yield farming is hard work, and farmers must constantly find ideas, manage risks, and rebalance positions. However, we consider these strategies passive because, once the upfront cost to find an opportunity is incurred, the deployed position earns a return with little to no further action. This framing contrasts with active ways of earning compensation, e.g., running validator nodes or managing algorithmic market-making strategies, which require ongoing technical labor.
Well-defined interest: To make our definition more useful, we focus on strategies that have some well-defined schedule of interest payouts. There are many forms of well-defined interest schedules: fixed and floating, simple and complex. The schedule itself is what distinguishes yield farming from simple buy-and-hold strategies, e.g., buying coins or NFTs hoping for price appreciation, as these do not have any definite payouts.
Yield farming can further be defined, with respect to a traditional buy-and-hold strategy, as a way to earn extra returns while holding exposure to those same positions. To help better understand this definition, consider some examples from the world of traditional finance.
An investor who holds cash in her standard bank account would not be yield farming, since this is the default action and does not involve managing a strategy. But an investor who constantly opens new high-yield savings accounts to earn bonuses and promotional rates would be yield farming cash positions.
Similarly, a consumer who uses a single credit card would not be yield farming, but one who actively manages a portfolio of cards to maximize miles, bonuses, and other rewards would be. In this case, the underlying asset would be a line of credit.
An investor who simply holds a stock would not be yield farming, since again this is the default buy-and-hold action and earns no well-defined interest. However, an investor who boosts returns by lending their shares to short sellers would be.
Fixed-income investors who buy and hold fall into a grey area. If we consider cash as the underlying asset, an investor who lends that cash to a borrower would indeed be yield farming, since they would be earning interest passively on a cash position. But in many cases, we tend to think of a fixed-income security, e.g., a bond, as the underlying asset, in which case the investor is not earning any extra yield.
Extra returns do not come freely, of course. In exchange for this compensation, yield farmers take on risks and provide a range of passive benefits to the protocol. We will now outline the five ways they do so.
The most basic function in crypto is correctly and securely operating the network. This is done by node operators, usually called validators, who process transactions in exchange for payments in the network’s native token. While many networks rely on computationally-intensive problem-solving to incentivize good behavior ("proof-of-work" networks), others rely on validators posting collateral ("proof-of-stake" networks). That collateral can be partially or fully seized if a validator underperforms or misbehaves.
In turn, the first major form of yield farming is for farmers to delegate tokens to high-quality validators, i.e., validators with reliable and honest performance, in exchange for a share of proceeds. If yield farmers allocate to low-quality validators, those validators will face negative consequences, i.e., forfeited collateral, and farmers will bear that burden.
Consider the two original questions: what economic value do farmers create and who compensates them?
Yield farmers allocate their holdings towards high-quality validators, allowing the network to run more efficiently and securely.
Rewards are paid by network participants, who pay fees to validators in exchange for using the network. Validators then remit a portion of those fees back to farmers.
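To make the arithmetic concrete, here is a minimal sketch of the delegation economics; the commission and fee figures are purely hypothetical and vary widely across networks.

```python
# Hypothetical sketch of delegated-staking economics. The commission and fee
# figures below are illustrative placeholders, not taken from any network.

def delegator_reward(delegated, validator_total_stake, annual_fees_to_validator,
                     commission=0.10):
    """Estimate a delegator's annual share of a validator's fee income."""
    # The validator keeps a commission and remits the rest pro rata to delegators.
    distributable = annual_fees_to_validator * (1 - commission)
    share = delegated / validator_total_stake
    reward = distributable * share
    return reward, reward / delegated        # absolute reward and yield on the stake

reward, apr = delegator_reward(delegated=1_000, validator_total_stake=100_000,
                               annual_fees_to_validator=8_000)
print(f"reward = {reward:.0f} tokens, yield = {apr:.1%}")   # 72 tokens, 7.2%
```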
The explosion of decentralized protocols, known as "DeFi Summer" in mid-2020, dramatically changed the face of crypto lending activity. Prior to this point, lending relied mostly on large and centralized institutions, but since 2020 a new wave of decentralized protocols has allowed individual traders to participate in a wide range of lending activities.
Thus, the second major form of yield farming is a generalization of the first. Rather than only lending to validators, traders can lend cryptocurrency positions to anyone. In particular, yield farmers place tokens in funding pools, and borrowers automatically borrow from those pools using suitable collateral.
Many examples of these types of protocols exist, with Aave, Compound, and Anchor as some of the most popular. These protocols typically accept deposits in a base asset — one that already exists outside the protocol, e.g., UST in the case of Anchor — which can be lent out to borrowers. These protocols keep track of deposits by issuing the lender a new synthetic token (e.g., "aUST" for Anchor), which lenders can use to redeem the original position and the accrued interest later.
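As a rough sketch of how such synthetic deposit tokens can track accrued interest, consider the simplified exchange-rate accounting below. This is an illustrative model of the general pattern, not any protocol's actual implementation, and the rates are made up.

```python
# Simplified, hypothetical deposit-token accounting: the pool tracks an
# exchange rate between the synthetic receipt token and the base asset, and
# that rate drifts upward as borrowers pay interest.

class LendingPool:
    def __init__(self):
        self.exchange_rate = 1.0                 # base asset per synthetic token

    def accrue(self, borrow_rate_per_period, utilization):
        # Lenders earn the borrow rate scaled by the share of the pool lent out.
        self.exchange_rate *= 1 + borrow_rate_per_period * utilization

    def deposit(self, base_amount):
        return base_amount / self.exchange_rate       # synthetic tokens minted

    def redeem(self, synthetic_amount):
        return synthetic_amount * self.exchange_rate  # base asset returned

pool = LendingPool()
minted = pool.deposit(1_000)                     # deposit 1,000 base units
for _ in range(365):                             # one year of daily accrual
    pool.accrue(borrow_rate_per_period=0.20 / 365, utilization=0.8)
print(round(pool.redeem(minted), 1))             # ~1173.5: ~17.3% earned passively
```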
For now, almost all protocols have focused on over-collateralized lending. Lenders thus face the risk of the collateral depreciating more quickly than it can be liquidated. However, a few protocols such as TrueFi and Goldfinch are expanding into the uncollateralized lending space by vetting borrowers, using their knowledge of some off-chain, real-world information. Future yield farmers will likely choose between making fully collateralized loans and taking on the default risk of borrowers more directly.
As before, we ask ourselves the same two questions: What economic value do farmers generate and who pays them?
Yield farmers allocate holdings towards capital-constrained traders, which allows for those traders to express views on asset prices more efficiently. In the future, yield farmers may also provide value by allocating holdings towards higher-quality borrowers and projects.
Farmers are compensated by borrowers, who pay continuous interest back to farmers (with protocols taking a cut). While some protocols temporarily guarantee fixed interest rates, most use floating rates that balance supply and demand.
Liquidity provision, like lending, has been democratized and massively expanded by DeFi protocols. Previously, only centralized exchanges and professional market makers could successfully muster the capital, technical expertise, and ongoing attention needed to provide liquidity. Today, retail traders can passively do so too.
This is the third major form of yield farming. Farmers deposit cryptocurrency positions into common liquidity pools (known as "automated market makers" or AMMs). Traders who need liquidity can swap tokens against these pools — often paying explicit fees in addition to spreads to do so. Liquidity providers earn these fees and/or spreads by facilitating two-way liquidity, but also bear the risk of capital losses if the fundamental exchange rate changes (and does not revert). This contrasts with active liquidity providers, who frequently adjust their positions as the exchange rate drifts.
Examples of major protocols running liquidity pools for yield farmers include Curve, Uniswap, Sushiswap, among others. These pools act as centralized liquidity hubs, facilitating trades between many different pairs of assets.
In returning to the same two questions, the answer to the first question — what value do farmers provide? — is straightforward.
Yield farmers provide liquidity to those who need it, allowing them to enter and exit positions with minimal market impact.
But answering the second question — who pays for that value? — is more complicated. In particular, there are three ways that liquidity providers are compensated.
First, AMMs distribute direct rewards — namely, trading fees and spreads — to liquidity providers. The rewards are directly paid by the users who take liquidity from the pool.
Second, AMMs issue rewards in their own native tokens, e.g., Curve issues the CRV token. (Note that Curve was indeed the first protocol to successfully operationalize this model, which has since become wildly popular and widely imitated.) In addition to their direct economic value, these tokens usually come with special rewards schemes (and governance rights).
For instance, a farmer on Curve can boost their rewards on a pool up to 2.5x from the base amount by holding a substantial amount of CRV tokens relative to the liquidity they provide.
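A sketch of how such a boost can be computed is below. It follows the gauge-weighting formula commonly attributed to Curve (a "working balance" capped at the farmer's own liquidity, giving a maximum boost of 1 / 0.4 = 2.5x), but treat the exact coefficients as an assumption rather than a specification.

```python
# Hypothetical sketch of a veToken reward boost, modeled on the gauge
# weighting commonly attributed to Curve: a farmer's "working balance" is
# capped at their actual liquidity, so the boost tops out at 1 / 0.4 = 2.5x.

def boost(liquidity, pool_liquidity, ve_balance, ve_total):
    working = min(0.4 * liquidity + 0.6 * pool_liquidity * (ve_balance / ve_total),
                  liquidity)
    return working / (0.4 * liquidity)

# No veCRV: base rewards only.
print(boost(liquidity=10_000, pool_liquidity=1_000_000, ve_balance=0, ve_total=1e8))     # 1.0
# Holding 1% of veCRV while providing 1% of the pool's liquidity maxes the boost.
print(boost(liquidity=10_000, pool_liquidity=1_000_000, ve_balance=1e6, ve_total=1e8))   # 2.5
```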
As a result, rewards are paid by two constituencies. First, farmers who have small CRV positions earn lower fees from liquidity provision, implicitly subsidizing farmers with large CRV positions. Second, non-farmers who hold CRV positions are diluted through CRV supply inflation, thus again subsidizing farmers.
Third, individual protocols may pay for rewards to those who provide liquidity for their specific token. In practice, they operationalize this by buying governance tokens of the large liquidity protocols (directly or indirectly) to redirect extra rewards to their token’s pool. This method is discussed more in the next section.
Although "code is law" is frequently cited as the blockchain ideal, much of crypto remains a hands-on activity. Capital needs to be directed, protocols upgraded, and systemic threats addressed. Most such tasks are accomplished via active and decentralized means — individual stakeholders write and review code, vote on proposals, etc. But in the last few years, pooled management systems have grown in popularity, particularly in directing capital.
The fourth major form of yield farming is to power those pooled systems and thus manage tokens in passive and delegated ways. For instance, the protocol Convex has been highly successful in directing liquidity on the Curve platform across liquidity pools. The protocol Yearn has found similar success at a higher level, allocating assets across multiple lending and liquidity protocols.
This reminds us that a single yield farming strategy may provide value in multiple ways. For instance, a yield farmer could provide liquidity on Curve directly, or she could provide liquidity on Curve through Convex. She earns rewards for providing liquidity in both cases, but earns extra rewards for doing so more efficiently in the latter case.
The answers to our two core questions — the value provided and the compensation received — are more subtle than in previous sections. In particular, there are two ways in which yield farmers earn their keep through managing protocols. The first subset is value-creating:
Yield farmers create surplus through more efficient management. Using the same examples of Convex and Yearn, these protocols can reallocate liquidity towards particular markets more quickly and cheaply than a group of disaggregated traders. (Note that aligning incentives towards the most useful markets is a subject of lively and evolving debate.)
Farmers are compensated both explicitly and implicitly:
First, farmers earn better returns by redirecting resources towards high-value opportunities. They are then compensated by their new end users, such as borrowers or liquidity utilizers.
Second, farmers save on transaction costs by utilizing such services. This can be highly significant — fixed fees on the Ethereum network sometimes exceed $100 per transaction, which makes larger transactions much more attractive. (Imagine an ATM that charged $100 to withdraw. You’d want to take out a lot each time!) Pooled protocols amortize these fees, so the portfolio can be rebalanced more frequently. Farmers can also save on individual effort through automated portfolio management, where the protocol automatically identifies and transfers funds towards the current best opportunities.
However, farmers can also profit from management through value extraction:
Farmers can take advantage of protocol-based rewards (as with Curve) either through pooled structures ("aggregators") or individually, but aggregators make more efficient use of reward-boosting mechanisms. Thus, on the margin, the increased compensation is funded by small liquidity providers and token holders who do not provide liquidity. In short, value is being transferred rather than created.
The fifth and final form of yield farming is providing enhanced visibility and trust through asset allocation. To keep that value within the system, protocols often reward illiquidity as a means of "leasing" attention and usage, which gives them time to build and mature. Specifically, protocols ask farmers to purchase and lock tokens in exchange for token distributions, with larger rewards for longer lockups. Of course, part of this exchange is that locked holders, who cannot respond to market conditions, bear substantial macroeconomic price risk relative to liquid ones.
Looking at our core questions of fundamental economic value and compensation for the last time, we arrive at the following:
Yield farmers provide increased TVL to a given protocol, driving higher awareness and more usage.
Yield farmers are compensated by the protocol, which typically provides rewards in the native token. This means that, in the short run, non-farmers pay for these rewards by bearing the inflationary burdens. However, in the long run, the protocol hopes to be successful in creating value and attracting new users. In this case, later generations of holders pay for the marketing benefits provided by the early (yield farming) generations.
Conceptually, this channel is the most nebulous and prone to Ponzi-like dynamics. Indeed, protocols must grapple with key questions, such as whether they can keep users after the implicit marketing budget ends. But there is some precedent for success from the startup world, where "blitzscaling" — the strategy by which VC-funded startups massively subsidize users to gain market share before raising prices — has proved effective if finicky. Crypto is now copying that playbook, incurring deep expenditures upfront with the chance of realizing enormous value over time. Channeling the Lindy effect, protocols hope to become a fixture on the crypto scene in the short run to ensure success in the long run.
Rewards-driven marketing is a tightrope that must be walked carefully. Protocols that offer no rewards will languish without attention. On the other hand, highly aggressive protocols (e.g., ones that offer 1000% staking yields) attract only yield farmers, and there are few non-farmers to power the rewards. This leads to short-term spikes from pools of "mercenary liquidity," but long-term collapses.
Despite their numerous variations, yield farming strategies at their core are fairly simple. Farmers passively provide value to protocols, in exchange for which they receive both direct and indirect compensation. However, there is one final ingredient in the mix: financialization. Crypto has developed a robust ecosystem of financial protocols, allowing yield farmers to transfer assets freely and to take generous leverage. This is how many yield farmers multiply high base yields to even more enticing levels (e.g., 20% to 100%).
While a further discussion of the use of leverage in yield farming is out of scope for this piece, we will mention one common example: liquid staking. This is where farmers deposit a base token into a protocol and receive a synthetic tradable token in exchange. As long as the synthetic token is accepted as collateral by other lending protocols, farmers can leverage their position. Namely, a farmer could deposit the base token (e.g., ETH), receive a synthetic token representing that claim (e.g., sETH), borrow ETH against that synthetic token as collateral, deposit that ETH, receive sETH against it, etc. This sequence is by no means the only one, and the ability to short or fractionalize adds even more complexity to the mix.
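A minimal sketch of the looping arithmetic, assuming a hypothetical loan-to-value limit and ignoring interest rates, fees, and liquidation risk:

```python
# Hypothetical looping arithmetic: deposit the base asset, borrow against the
# synthetic receipt at some loan-to-value (LTV) limit, redeposit, and repeat.
# Total exposure converges toward the geometric series 1 / (1 - LTV).

def looped_exposure(initial_deposit, ltv, rounds):
    total_deposited, tranche = 0.0, initial_deposit
    for _ in range(rounds):
        total_deposited += tranche   # deposit and receive the synthetic token
        tranche *= ltv               # borrow base asset against it, then repeat
    return total_deposited

print(round(looped_exposure(1.0, ltv=0.7, rounds=10), 2))   # ~3.24x exposure
print(round(1 / (1 - 0.7), 2))                              # theoretical cap: ~3.33x
```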
But it is important to stress that, while financialization can amplify yield farming opportunities, the underpinning theory remains the same. Farmers are compensated for bearing risk and passively providing value, while financialization connects different parts of the ecosystem and expands the menu of risk-return profiles.
At their core, yield farmers and traditional farmers are not so different. Both take risks (whether price risk or weather risk), provide value (whether creating buzz or creating food), and earn returns. We hope that, with the fog of jargon and insider lingo lifted, yield farming can be seen as just another economic activity that helps the crypto ecosystem run more efficiently.
But, like traditional farming, yield farming is not an easy business. Despite the conceptual simplicity, yield farming in practice involves an ability to find under-appreciated opportunities, move crypto positions rapidly, and understand nuanced risks in smart contracts. Aspiring yield farmers should brace themselves for mistakes and only deploy the holdings they are prepared to lose.
As the industry matures, we believe that these problems will ease and that a broader set of individuals will be able to participate safely. Yield farming will then deliver on crypto’s broader promise — democratizing finance and allowing anyone, regardless of sophistication or wealth, to provide these core sources of value.
https://blog.angle.money/angle-explains-amms-22daddf51f32
Angle, Jul 2022
AMM stands for “Automated Market Maker”. It’s basically a smart contract that manages liquidity provided by Liquidity Providers to create an on-chain marketplace. We use them every time we do on-chain swaps.
The best-known AMMs (Uniswap and its forks, Curve, Balancer) respect certain properties:
They do not need any human input and are described by mathematical invariants, which dictate what swaps they authorize. The best known is Uniswap V2's invariant x*y = k. The smart contract holds x tokens X and y tokens Y, and allows any swap of dx for dy that preserves the invariant, i.e., any swap such that (x+dx)*(y-dy) = k.
In addition to being driven by a mathematical formula, they are "path independent", meaning that, if we don't account for accrued fees, their state depends only on the current price. So as an LP, if you add liquidity and remove it at the same price, you can't have made a loss.
Let’s do a step-by-step example to better understand. Assume you have 1000€ to invest on the agEUR / ETH pair, and that the starting price of ETH is 1000€. If you do nothing and hold only one of the 2 assets, your returns in € after a year would be:
Holding agEUR versus holding ETH
It may seem complex, but this graph basically says that you’ll have 1000€ if you’ve held agEUR or whatever 1 ETH will be worth a year from now if you’ve held ETH.
Now assume you want to provide liquidity in Uniswap V2. You'll have to bring 500 agEUR and 0.5 ETH at first. Would it be better than holding those 500 agEUR and 0.5 ETH? Without taking fees accrued by the AMM into account, after a year you'll have:
Uni V2 LP without fees versus holding 50% agEUR / 50% ETH
See the difference between the red curve and the purple curve? This is the infamous Impermanent Loss a.k.a. “IL”, that is to say what you’ve lost by enabling others to swap your assets.
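For a 50/50 constant-product pool, the gap between the LP curve and the holding curve has a simple closed form. The sketch below uses that standard AMM result (not anything specific to Angle) to show IL as a function of the price change:

```python
# Impermanent loss for a 50/50 constant-product pool (fees excluded), relative
# to simply holding the two assets. This is the standard AMM result, shown
# only to put a number on the gap between the two curves.
from math import sqrt

def impermanent_loss(price_ratio):
    """price_ratio = final price / initial price of the volatile asset."""
    return 2 * sqrt(price_ratio) / (1 + price_ratio) - 1

for r in (0.5, 1.0, 2.0, 4.0):
    print(f"price x{r}: IL = {impermanent_loss(r):.2%}")
# price x0.5: IL = -5.72%    price x1.0: IL = 0.00%
# price x2.0: IL = -5.72%    price x4.0: IL = -20.00%
```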
There are numerous proposals out there for protocols or magical techniques to "eradicate" IL, but it is largely inherent to AMMs: if you allow traders to swap your liquidity, you'll be left with a different asset balance than what you brought, which may be worse than having let the assets sit idle in the first place. So how do LPs earn? Thanks to fees and the "path independent" property: with a reasonable price change, after some time the pool will be back in a state close to its original one and will have accrued some fees.
Let's add these fees to the previous curve. Let's assume the volume is 100 agEUR per day for a year and the pool has a 0.25% fee. The graph would then be:
Uni V2 LP versus holding 50% agEUR / 50% ETH
Now we see all the potential benefits of LPing: in this scenario, without price change, there would be a ~9% APR!
Let's now deal with Uniswap V3. Briefly, it is a more complex version of Uniswap V2: the mathematical invariant is the same, but LPs can "concentrate" their liquidity on a price range by adding "virtual" liquidity to the pool. Let's not dig into the maths this time but focus on what the implications for LPs are.
When adding liquidity on Uniswap V3, you choose a price range, which defines the amount of "virtual" liquidity you're also adding to the pool. The smaller your range, the more "virtual" liquidity you add to the pool, which means more IL, but also that bigger swaps (and hence more fees) are needed to move the pool price.
In the extreme case of a range 0-infinity, you’re back in the Uni-V2 case. Let’s take the example of a 500–2000 range and compare Uniswap V3, V2 and holding:
Uni V2 LP versus Uni V3 LP versus holding 50% agEUR / 50% ETH, without fees
What happens? When the price reaches 500, everything has been swapped to ETH so the Uniswap V3 position only contains ETH, and when it reaches 2000, there is only agEUR left in the position. The IL is much stronger than in Uniswap V2. But it also means that more liquidity needs to be swapped to move the price of the AMM, so there will be more arbitrages and more fees earned by LPs. But how much?
For the purpose of this simulation, we therefore consider 2 types of volume:
a daily "intrinsic" volume: all the swaps that are not arbitrages and are not made in order to move the price of the pool. It includes, for example, all the retail spot orders. This volume does not depend on the AMM type.
a volume due to arbitrages between market places, that aims to move the price of the pool. This one can be estimated given a daily price change and the formula of the AMM: you can compute the swap size needed to move the pool price, hence the fees earned.
Let’s assume a mean price deviation of 10%, implying more volume and fees for the UniV3 pool:
Uni V2 LP versus Uni V3 LP versus holding 50% agEUR / 50% ETH
Uni V3 versus holding ETH until 1500, then holding agEUR
https://medium.com/dydxderivatives/decentralized-lending-an-overview-1e00fdc2d3ee
Antonio Juliano, May 2019
To date, the biggest sector by far for decentralized applications has been lending & borrowing crypto assets. Several high quality products have been built that allow users to borrow and lend directly on the Ethereum blockchain with no intermediaries. Decentralized lending products are available to anyone, anywhere, and require only an Ethereum wallet to use. These products are already seeing real usage today with total USD volumes in the hundreds of millions.
Crypto holders can lend on decentralized lending platforms to earn passive income on their holdings through interest fees paid by borrowers. This is an attractive option to lenders as they can earn relatively low risk interest on their existing holdings without entrusting their private keys to a 3rd party centralized service.
The dominant use case for borrowing crypto is margin trading. Borrowing allows traders to get leverage which multiplies gains and losses while trading, as well as short selling, a trading strategy which makes money when the price of an asset goes down.
Margin trading involves borrowing an asset, and then immediately selling it. For example, you could borrow DAI (a stablecoin) and then buy ETH with it, which would let you buy more ETH than you would otherwise be able to (giving you leverage on ETH). Of the 4 top lending protocols discussed below, all except for dYdX focus on just the borrowing & lending side of margin trading, meaning traders have to go to other exchanges to execute the sell of the borrowed funds.
Currently, all relevant decentralized lending platforms use a form of borrowing called collateralized borrowing. Collateralized borrowing means that borrowers must lock up collateral of greater value than the value of their borrow. The collateral serves to ensure lenders will be repaid even if the borrower never repays the loan.
For example, say you want to borrow $100 worth of ZRX, you would have to lock up more than $100 worth of collateral in another asset. Say you choose to lock up $150 worth of ETH. Now if you default on your loan, the lender who lent you $100 of ZRX can just seize your ETH, which is worth even more.
However, the price of both ZRX and ETH can change over time. Say the price of ETH falls, and now your collateral is only worth $90. Now the lender could not get their money back by seizing your ETH anymore, because it wouldn’t be worth as much as your debt. This is where the concept of a liquidation comes in.
A liquidation is when your borrow is automatically repaid by selling off some of your collateral to buy back the asset you owe to the lender. Liquidations occur when your borrow falls below some required level of collateralization (usually between 115%–150%).
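To make the check concrete, here is a minimal sketch of a collateralization test; the 150% threshold simply mirrors the illustrative figure above rather than any particular protocol's parameter.

```python
# Illustrative collateralization check; the 150% minimum mirrors the example
# above and is not the parameter of any particular protocol.

def collateral_ratio(collateral_value_usd, debt_value_usd):
    return collateral_value_usd / debt_value_usd

def should_liquidate(collateral_value_usd, debt_value_usd, min_ratio=1.50):
    return collateral_ratio(collateral_value_usd, debt_value_usd) < min_ratio

print(should_liquidate(150, 100))   # False: $150 of ETH backing $100 of ZRX
print(should_liquidate(90, 100))    # True: ETH fell, position is under-collateralized
```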
MakerDAO is both the most complex and widely used decentralized lending platform available today. MakerDAO is the creator of DAI, which is a cryptocurrency with a target price of $1 (known as a stablecoin).
On MakerDAO, there are no lenders, and the only asset available to borrow is DAI. Borrowers can borrow a newly created supply of DAI by locking up ETH as collateral, and must maintain a 150% collateralization ratio. The interest rate on DAI is global, and is set through governance by MKR token holders. The interest rate has recently been fairly volatile, increasing from 2.5% to 19.5% in a bit over a month.
Opening a MakerDAO CDP via cdp.makerdao.com
Assets: DAI (Borrow Only), ETH (Collateral Only) [MakerDAO plans to add more collateral assets with their upcoming release of multi-collateral DAI]
Interest Rates: Variable. Set through governance by MKR token holders
Collateral Requirement: 150% minimum
Liquidation Penalty: 13%
Compound uses a money market based approach, with global pools of capital for each supported asset. Each asset has a global borrow and lend interest rate which all borrowers pay & lenders earn. These interest rates are variable and set algorithmically based on the percent of each pool that is being borrowed.
Borrowers must collateralize their accounts with 150% of the value being borrowed. For example, if you want to borrow 1 ETH from Compound, you could deposit 225 DAI into the Compound DAI pool and then borrow 1 ETH from the ETH pool (assuming 1 ETH = 150 DAI — this would be 150% collateralized). Borrows on Compound have unlimited duration.
Borrowing REP & Lending DAI on compound.finance
Assets: DAI, ETH, ZRX, REP, BAT
Interest Rates: Variable. Set algorithmically based on supply & demand
Collateral Requirement: 150% minimum
Liquidation Penalty: 5%
Dharma utilizes peer-to-peer lending to match individual lenders directly with borrowers. On Dharma, borrowers and lenders enter into fixed term / interest rate loans.
Lenders can use Dharma to offer fixed-term loans of up to 90 days, and start earning interest only after they are matched with a borrower. Lenders’ funds are locked for the duration of the loan.
Borrowers on Dharma lock up collateral equal to 150% the value of the assets being borrowed. Borrows have a maximum 90 day duration, and fixed interest rates for the duration of the loan.
Borrowing DAI on dharma.io
Assets: DAI, ETH
Interest Rates: Fixed. Rates set centrally by Dharma
Collateral Requirement: 150% initial, 125% minimum
Liquidation Penalty: Unclear (?)
The main difference between dYdX and the previously mentioned lending platforms is dYdX natively supports trading in addition to borrowing / lending, meaning that traders can margin trade without leaving the platform for another exchange. dYdX’s product is targeted at margin traders, and is more complex than either Compound or Dharma, while also supporting more functionality.
Under the hood dYdX uses a pooled based lending approach and algorithmic variable interest rates similar to Compound. There are no lockup periods or maximum durations while lending on dYdX.
On dYdX borrowers must lock up 125% collateral, meaning dYdX offers the highest leverage (up to 4x) of any decentralized lending platform. Borrows / positions on dYdX are limited to 28 days.
Opening a Leveraged Long position on trade.dydx.exchange
Assets: DAI, ETH, USDC
Interest Rates: Variable. Set algorithmically based on supply & demand
Collateral Requirement: 125% initial / 115% minimum
Liquidation Penalty: 5%
Within the past year we’ve seen the emergence of quality decentralized lending platforms. The ability to borrow and lend on a completely open platform is a fundamental advancement in financial markets, and is already seeing real volume today.
https://defining.substack.com/p/deep-dive-1-amms?utm_source=twitter&s=r
Emiri, Apr 2022
2021 was the year of the L1 trade. From SOLUNAVAX at the beginning of the year to FOAN towards the end of the year (shoutout white wolf), every L1 caught a bid. Something even a blind person could see is that whenever a new L1 caught a bid, there would be a massive rotation of capital into that ecosystem. Every time that happened, the native DEX always performed the best.
DEXs are a cornerstone piece of any DeFi ecosystem. The majority of the liquidity within a DeFi ecosystem will flow through its native DEX. As you may know, DEXs use something called an Automated Market Maker (AMM) to facilitate smooth on-chain trading. The innovation in the AMM space over the last 3 years has been rampant. Everyone is trying to make the next AMM that is faster, more capital efficient, more gas-optimized, and better for Liquidity Providers (LPs). I was doing some research and wanted an organized list of all the different types of AMMs out there, but the information was very fragmented. So I said fuck it, and decided to make my own handbook that dives into all the different types of AMMs that are currently out there. I will only be covering the AMMs that are truly distinct from one another; I will not be covering forks because I'll probably have kids by the time I finish writing.
Why AMMs?
If you have only interacted with CEXs or are from TradFi then the concept of an AMM might be new to you. All centralized exchanges use something called a Centralized Limit Order Book or CLOB for short. A CLOB simply matches bids and asks according to price and time priority. Users can make 2 types of orders here, a limit order & a market order. This works well for centralized exchanges, so why do we need AMMs for DEXs?
The first problem is gas fees. Most of us have probably lost mini-fortunes in gas fees. Using a CLOB on-chain would probably cost the equivalent of a small country's GDP in gas fees. Constantly placing and adjusting orders would simply not be viable because you get charged gas fees on every single transaction.
Another problem is frontrunning. The transparent nature of the blockchain becomes a problem for CLOBs. This is not an issue for market orders (and AMMs only allow market orders anyway); the problem is with limit orders. Suppose a large trader has a limit order set for a specific market: their orders will be streamed to every market participant and will almost certainly be frontrun. It becomes extremely difficult to execute trades the way you want. In centralized exchanges, these orders are mostly hidden from other market participants, which makes frontrunning difficult.
In comes AMMs.
What are AMMs?
If you already know how AMMs work, then skip this section
To facilitate gas-efficient trading on-chain, AMMs use something called liquidity pools. A liquidity pool is basically a smart contract that holds the tokens of a trading pair in a certain ratio. To get liquidity into these pools you have users called liquidity providers or LPs. Suppose the trading pair is ETH/USDC. LPs are incentivized to provide equal amounts of liquidity to this pool (usually 50/50) and in return they get an LP token. This LP token represents their position in the liquidity pool and it entitles them to receive a portion of the trading fees generated from the pool. When LPs want to take their liquidity out, they simply burn the LP token. Every time a swap is facilitated by a liquidity pool, there will be a price adjustment which is determined by a deterministic pricing algorithm. This is the Automated Market Maker. Now traders can directly buy and sell from these different liquidity pools through market orders making an on-chain trading experience feasible.
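As a rough illustration of the LP-token accounting described above, the sketch below tracks pro-rata shares of a two-asset pool. Real AMMs add fees, rounding rules, and minimum-liquidity guards; the names and numbers here are hypothetical.

```python
# Simplified pro-rata LP-token accounting for a two-asset pool. Real AMMs add
# fees, rounding rules, and minimum-liquidity guards; the figures here are
# hypothetical and only show how LP tokens track a provider's share.
from math import sqrt

class Pool:
    def __init__(self, reserve_x, reserve_y):
        self.x, self.y = reserve_x, reserve_y
        self.total_lp = sqrt(reserve_x * reserve_y)     # initial LP-supply convention

    def add_liquidity(self, dx, dy):
        # Deposits are assumed to match the current reserve ratio (50/50 by value).
        minted = self.total_lp * dx / self.x
        self.x, self.y, self.total_lp = self.x + dx, self.y + dy, self.total_lp + minted
        return minted

    def remove_liquidity(self, lp_tokens):
        share = lp_tokens / self.total_lp
        out_x, out_y = self.x * share, self.y * share
        self.x, self.y = self.x - out_x, self.y - out_y
        self.total_lp -= lp_tokens
        return out_x, out_y

pool = Pool(reserve_x=100.0, reserve_y=200_000.0)   # e.g. 100 ETH and 200k USDC
lp = pool.add_liquidity(10.0, 20_000.0)             # mints ~9.1% of the LP supply
print(pool.remove_liquidity(lp))                    # ~(10.0, 20000.0) redeemed back
```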
As time has gone on, different issues were encountered with AMMs which drove devs to make their own versions of this model. This deep-dive will look at all the different variations that exist.
This will be the order of this guide, so you can skip to the AMMs you want to read about:
1. Uniswap
2. Curve Finance
3. Balancer
4. Bancor
5. Crocswap
6. Muffin
7. Pendle
8. Primitive
9. Cowswap
10. Osmosis
11. Dopex
12. YieldSpace
13. Sudoswap
1) Uniswap
Uniswap has been live since November 2018. Since it's been live, it has gone through 3 stages of evolution. From V1 to V3, the team has always tried to bring innovative solutions to make the user experience better.
Uniswap V1:
The first version of Uniswap established the general structure of how it would work. They have basic liquidity pools where each pool is its own smart contract. Only two tokens can be swapped in these pools, and when LPs provide liquidity they have to do so in a 50/50 ratio. So if the pair is ETH/DAI, they have to provide both tokens in equal proportions. For doing so, the LPs on Uniswap receive rewards: there is a standard 0.30% trading fee collected by Uniswap across all pools, which is proportionally redistributed to LPs. The AMM model used by Uniswap is called the "Constant Product Market Maker". This follows the infamous x*y=K formula, which basically means that the product of the quantities of the two tokens in a pool must always remain constant. At the time of release, Uniswap was one of a kind. But they had one major issue with V1: it only supported ERC-20/ETH pairs.
Uniswap V2:
While ERC-20/ERC-20 swaps were one of the main features, there were other notable features. The first one was on-chain price feeds through the use of price oracles. Price oracles constantly feed the smart contract information about the price of a certain asset. The oracles were made to be resistant to price manipulation and fulfilled the purpose of improving the user experience. The other feature was flash swaps, which allow a user to receive the output tokens from the contract before giving the input tokens. While it may seem counter-intuitive to give tokens before they've been paid for, Ethereum transactions are atomic, so if the user does not pay the input token amount by the end of the transaction, the transaction will get rolled back. Flash swaps are good for capital-free arbitrage or instant leverage.
While the Uniswap team could’ve been satisfied with their success so far, they wanted to take it a step further to improve the user experience.
Uniswap V3:
V3 came a year after V2 with the main focus being on capital efficiency. This is done through concentrated liquidity & multiple fee tiers. Previously, when LPs would provide liquidity to a pool it would be in the 0 to infinity price range. Their liquidity would be distributed evenly across the price curve. This is not efficient because all the liquidity does not get used. Take the example of a USDC/DAI pair. This pair would mostly trade within the $0.99 to $1.1 price range. With liquidity being distributed evenly across the curve, the majority of it is not being used. The same applies to any other token pair.
Hence, they introduced concentrated liquidity. With this LPs can pick a range within which they would like to provide liquidity and can keep adjusting their position based on market conditions. For our above example most LPs would keep their range as $0.99 to $1.1. But for an ETH/DAI pair for example, some LPs may choose a $2k to $3k range while others may choose a $2.8k to $3.2k. The tighter the range made by the LP and the longer price trades within their range, the more fees they accumulate as rewards. So LPs are incentivized to actively manage their positions to keep the Uniswap AMM very capital-efficient.
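To see what concentration buys an LP, the sketch below uses the standard Uniswap V3 relationship between liquidity, price, and token amounts for a price inside the chosen range. It is an illustration of the math, not the contract code, which works in ticks and fixed-point square-root prices.

```python
# Token amounts held by a Uniswap-V3-style concentrated position while the
# current price P is inside the chosen range [p_a, p_b]. Standard V3 math,
# shown only as an illustration of capital efficiency.
from math import sqrt

def position_amounts(liquidity, price, p_lower, p_upper):
    sp, sa, sb = sqrt(price), sqrt(p_lower), sqrt(p_upper)
    amount_token0 = liquidity * (sb - sp) / (sp * sb)   # e.g. ETH
    amount_token1 = liquidity * (sp - sa)               # e.g. DAI
    return amount_token0, amount_token1

# Same liquidity L with ETH at 2,500 DAI: a tight 2k-3k range vs. a very wide one.
print(position_amounts(liquidity=1_000, price=2_500, p_lower=2_000, p_upper=3_000))
print(position_amounts(liquidity=1_000, price=2_500, p_lower=100, p_upper=60_000))
# The tight range needs far fewer tokens to back the same liquidity, which is
# the sense in which concentrated positions are more capital-efficient.
```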
2) Curve Finance
Curve is a DeFi powerhouse, it’s such an integral part of DeFi that a literal war has been fought over it. Let’s look at why.
Curve's liquidity is organized into several pool types: plain pools, lending pools, metapools, and factory pools. The plain pools are like any other liquidity pool: they facilitate swaps between ERC-20 tokens. Some pools have more than 2 tokens in them, the most popular being the tripool (DAI/USDC/USDT). In the lending pools, the underlying tokens are lent out to different protocols such as Compound or Yearn, and a wrapped version of the token can be traded in the lending pools. Suppose the token is DAI: if it's cDAI then it's on Compound, but if it's yDAI then it's on Yearn. This way you can earn lending interest as well as trading fees. The metapool is a way for LPs of the plain pools mentioned above to earn additional trading fees by depositing LP tokens into the metapool. In the metapool, stablecoins are paired against base pool LP tokens. Suppose you are an LP for the tripool. You receive the LP token 3CRV. You can now deposit this 3CRV token into the GUSD metapool. People can trade this, but LPs get the new LP token gusd3CRV, which can be put in gauges to earn additional rewards. Factory pools were made in collaboration with Yearn; they are basically a permissionless way for anyone to create their own metapool. This can be an individual or a protocol, but it is most often a protocol.
The Gauges are responsible for giving LPs their CRV rewards. The gauges monitor the activity and usage by LPs and automatically assign a portion of the CRV inflation to be directed to LPs. LPs can then lock their CRV as the infamous veCRV to get more governance control.
Curve has now launched a V2 of their protocol. V2 involves the launch of crypto pools, a new way to trade non-pegged assets with low risk. The whitepaper is extremely complex, but I'll try and simplify it. They basically use concentrated liquidity just like Uni V3, but rather than concentrating liquidity around price = 1 like they do for stableswaps, it is concentrated around the current price. The additional feature is their internal oracle, which is used to automatically adjust the range where liquidity is provided rather than having users do it manually like they do on Uni V3.
3) Balancer
The team at Balancer saw the constant product market making formula of having two assets in a pool at a fixed ratio as a problem. There was no customizability and a lack of options. Hence, Balancer created smart pools. The Balancer smart pools use the constant mean formula. This allows the creation of liquidity pools with up to 8 tokens, with no fixed ratios and varying weights allocated to each token. This opens up options for both traders and liquidity providers because there is more customizability. This customizability also means that there is a third player in the Balancer AMM, beyond traders and LPs: the controllers. They are responsible for managing the pool. This typically involves just rebalancing the weights depending on market demand and volatility. An added benefit is that it is very easy to create your own pool. Anybody can do it. Just go to the "pool management" tab, plug in your wallet, provide the liquidity, and you're good to go. This is good for individuals if there is no pool to their liking or if they see an opportunity to create a pool that will be demanded by others. Projects can also use this feature to attract liquidity.
4) Bancor
Bancor is an OG DeFi protocol, they created the first ever AMM. Their AMM is called omnipool. Unlike other AMMs where you can create a pool with any token, Bancor’s omnipools require all tokens to be paired against their native token BNT. The problem with traditional AMMs is fragmented liquidity, some pairs will have deeper liquidity than other pools which makes the trading experience worse. This is because there is no common denominator. For example, a random token (Token ABC for example) will have deeper liquidity when paired against ETH, and weaker liquidity when it’s paired against USDT or USDC. To combat this liquidity issue, Bancor decided to have a common denominator in the form of BNT. Now users have a central liquidity pool to trade from which gives every token equally good liquidity thereby making the overall trading experience much smoother.
However, Bancor saw 2 key problems with its AMM design, involuntary token exposure and impermanent loss. Hence, they created Bancor v2.1
Bancor v2.1 allows single-sided exposure and provides impermanent loss insurance. For single-sided exposure, Bancor allows LPs to provide only one token rather than forcing them to provide both. The LPs will have 100% exposure only to the token which they have provided to the pool and will accrue fees and rewards based on that. Impermanent loss insurance works like this: if a user deposits $100k of a token into a pool, Bancor matches it with a $100k deposit of BNT. Now both the user and the protocol are accruing fees and rewards. Once the user withdraws liquidity, both the user's and Bancor's LP tokens get burned at the same time. The protocol checks to see if the user has suffered any impermanent loss and accordingly distributes the accumulated fees from their LP position to the user.
5) Crocswap
Crocswap is some big brain stuff, it took me a lot of time to wrap my head around this one but I finally got it.
It’s an Ethereum-based AMM where the entire DEX is run on one single smart contract. Typically, DEXs are run by having a separate smart contract for each liquidity pool, but by running the entire DEX on one contract you will see major gas & tax savings since tokens aren’t always being reshuffled between contracts. Within this smart contract, there are multiple lightweight data structures which represent individual liquidity pools. This opens up the doors to smoothly conduct many different multi-pool operations.
Crocswap also makes significant improvements for LPs. They have a dual-liquidity model: the classical liquidity (or ambient liquidity) of Uniswap v2 and the concentrated liquidity of Uniswap v3 both co-exist in Crocswap. Combined with this, LPs provide liquidity through range orders. Basically, there is a pre-determined price grid where LPs can place orders. As price hits the different ticks, liquidity is added or removed depending on the nature of the order at that tick. This LP model has two key benefits: the first is that it makes the ambient liquidity LP tokens fungible, and the second is that the LP's position automatically compounds rewards rather than having to manually collect rewards. All in all, Crocswap combines the best features of multiple different AMM variations and puts them into a sleek, cheap, & efficient one-contract exchange design.
6) Muffin
Muffin (formerly Deliswap) focuses on two key features for their AMM. Concentrated liquidity & multiple fee tier pools.
We already know what concentrated liquidity is, it’s the same mechanism used by UniV3 and Curve V2. An issue that other AMMs have is fee-tiers. Some AMMs like Uniswap have a fixed fee tier at 0.30% while other AMMs offer 1 or 2 fee-tiers per pool. If you want more fee-tiers then you have to make a new pool of the same token which fragments liquidity. Muffin wants to offer LPs more granular control over where they provide liquidity to generate the most fees rather than having liquidity distributed evenly across all fee-tiers. Hence, they created the “multiple fee-tiers per pool” model. Within one pool, there can be up to 6 fee-tiers. Each fee-tier will act like its own inner-pool which will have its own liquidity and price. Depending on the state of the market, LPs can choose what price range and fee-tier they would like to provide liquidity in. This creation of more choice makes the system more capital-efficient and generates more rewards for LPs.
7) Pendle
The ultimate aim of the AMM is to minimize the time-dependent impermanent loss that arises from using tokens with a time decay. To do this, they use a formula similar to Uniswap's x*y=k, but with additional accounting for weight and time. So the pool starts off with a curve similar to Uniswap's, but as time passes the curve shifts in such a way as to ensure that the price of the YT keeps falling. The curve shifting is governed by a time-decay pricing model which is similar to the options pricing model in tradfi.
8) Primitive
Primitive is a spot & derivative token exchange, with the key innovations coming from the derivative token side. For the spot exchange, they use the concentrated liquidity method. When it comes to derivatives, they use a variation on AMMs called a Replicating Market Maker (RMM). RMM is a way to make the on-chain derivative trading experience more user-friendly and more efficient. The Primitive team saw two problems. First, current on-chain derivatives depend solely on oracles; oracles present security risks and can be a central point of failure. The other issue is that LPs get an LP token to represent their share in the pool, which is essentially a derivative, and while there are trading markets for these LP tokens, none of these derivatives are inherently financially useful. Using the RMM, you can create or offer any type of derivative payoff product. I'll go over some examples outlined by the team to give you some clarity. In lending markets like Fuse these LP tokens can be shorted, which expands the derivative payoff range for call & put options; for liquidity strategy protocols like Ribbon and Charm you can create a basket of LP tokens to create any type of novel DeFi product; lastly, you can also create binary options by being able to sell the rights to one side of the LP token.
9) Cowswap
Cowswap is built on top of the CoW protocol. To understand Cowswap, let's understand the CoW protocol first.
There are 2 key features that make the CoW protocol unique: batch auctions & Coincidence of Wants (hence the name CoW). Batch auctions are when orders are placed off-chain, after which they are aggregated into batches to eventually be settled on-chain. The benefit of batch auctions is that they simplify Coincidences of Wants (CoWs). A CoW occurs when two parties each hold an item that the other wants; they can exchange the items directly without the need for any type of intermediary. Similarly, when two traders hold an asset that the other wants, they can exchange them easily through batch auctions. This means that the CoW protocol doesn't need direct access to on-chain liquidity, which gives them significant MEV protection.
Built on top of this is the DEX Cowswap. It acts as a meta DEX aggregator or a DEX aggregator of DEX aggregators because it gives users the best price and cheapest execution across all AMMs and aggregators on Ethereum. To do this it doesn’t use any existing AMM or constant function market maker model, it replaces these mechanisms with a new party called “solvers”. The solvers are incentivized to submit the most optimal settlement solution for a designated batch. The solvers are incentivized to compete against each other to find the most optimized solution. The winning solver is then appropriately rewarded with tokens. Anyone can become a solver after fulfilling some of the minor prerequisites.
10) Osmosis
Osmosis is an AMM native to the Cosmos ecosystem. For those of you who aren't familiar with Cosmos, it follows a hub-and-zone architecture where each zone is an application-specific blockchain. Osmosis is one of these zones living in the interoperable Cosmos ecosystem. So it's an L1 AMM with IBC built in, which means it is interoperable with every other zone in the Cosmos ecosystem. The main feature of Osmosis is customizability.
Other AMMs have most things hard-coded. They have a "constant product", "constant sum", "constant mean", or some other type of formula; the bonding curves are pre-determined; and the fee structures and tiers are also pre-determined. Osmosis essentially gives users the tools to create their own AMM pool with maximum customizability. People can set their own parameters for bonding curves, have two-token pools or multi-weighted asset pools, and use any of the already established AMM formulas or make their own. For LPing, the rewards are decided through governance; when coupled with customizability, this creates different strategic incentive games.
With this customizability for liquidity pools, there needs to be some form of governance for decision making. So in Osmosis, the LP tokens received don’t only accrue rewards but they also represent the share of decision-making power a LP has for that specific pool. The longer a LP is locked into a pool the more rewards and governance power they get.
Essentially, with this customizability, Osmosis creates an "AMM as a service" infrastructure model. With the complexity of new types of assets in DeFi, you need options in terms of finding the optimal AMM for each type of asset, while not requiring people to take on the massive task of building their own AMM.
11) Dopex
Dopex is a very ambitious protocol. Their aim is to create a crypto options protocol that runs entirely on-chain.
At the heart of Dopex are their Single Staking Options Vaults (SSOVs). These SSOVs allow users to deposit their assets in the contract; the protocol then sells these assets as call options to buyers while the depositor earns rewards for locking up their assets. Other than SSOVs, Dopex also has multiple other types of pools to allow users to trade options, including option pools and volume pools. Underlying this architecture is the Dopex AMM.
Before we get into the AMM, you need to know what a volatility smile is. A volatility smile is a result of plotting the strike price and implied volatility of a group of options onto a graph. When implied volatility is plotted at each strike price, the graph forms a smiley face. With that established let’s get into the AMM.
The Dopex AMM uses a Black-Scholes pricing model. This pricing formula gives traders a theoretical price of an option by using option volatility as an input. Using option volatility helps estimate how the price of an asset will move in the future. So Dopex uses Chainlink adapters to get information on implied volatility and asset prices, and this information is used with the Black-Scholes pricing formula to determine the volatility smile. This gives users accurate option prices on-chain while taking into account real market risk and behavior.
12) YieldSpace
The Yield Protocol created the concept of fyTokens. These fyTokens are synthetic tokens that have a certain maturity date. Once this maturity date is reached, the fyToken holder can redeem the original asset or target asset at its original price. Before maturity, the price will be free-floating. Suppose the target asset is DAI; then the fyToken will be fyDAI, which will be free-floating until maturity, when it can be redeemed for 1 DAI. For those familiar with tradfi, it is similar to a zero-coupon bond: a debt security where traders can buy the bond at a significant discount and then redeem it for its face value or par value at maturity. The difference between the purchase price and par value is the profit. While existing AMM formulas can be used to trade tokens with a maturity, they are by no means optimal. Using Uniswap's constant product formula would lead to higher price impact and fees for traders when close to maturity, making it capital-inefficient. Using the constant sum formula doesn't allow for price discovery at all, which would discourage trading. Hence, the YieldSpace AMM uses something called the "constant power sum formula":
X^(1-t) + Y^(1-t) = k
X represents the reserves of the target token, Y represents the reserves of the fyToken, and t represents the time to maturity. This formula is basically a combination of the constant product and constant sum formulas, making it optimal for trading tokens with maturity dates. When t=1 the formula acts like a constant product formula, and when t=0 it acts like a constant sum formula.
This is done to ensure that the marginal interest rate of the fyToken that is offered by the pool is equal to the ratio of the fyToken and original token reserves in the pool. Suppose the reserves in the pool are 110 fyDAI and 100 DAI the marginal interest rate offered will be 10%. So the allocation of fyToken to the pool changes according to the current interest rate offered by the fyToken. If interest rate rises then fyToken allocation rises and if interest rate falls then fyToken allocation falls.
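To see where that marginal rate comes from, differentiate the invariant: dY/dX = -(Y/X)^t, so a marginal unit of the target token sold into the pool buys (Y/X)^t fyTokens. A small, purely illustrative sketch:

```python
# Marginal pricing implied by the YieldSpace invariant X^(1-t) + Y^(1-t) = k.
# Differentiating gives dY/dX = -(Y/X)^t, so selling a marginal unit of the
# target token into the pool buys (Y/X)^t fyTokens. Illustrative only.

def marginal_fytokens_per_target(target_reserve, fytoken_reserve, t):
    return (fytoken_reserve / target_reserve) ** t

# Reserves of 100 DAI and 110 fyDAI, as in the example above:
for t in (1.0, 0.5, 0.0):
    rate = marginal_fytokens_per_target(100, 110, t) - 1
    print(f"t = {t}: marginal rate = {rate:.1%}")
# t = 1.0 -> 10.0% (constant-product-like pricing; the reserve ratio sets the rate)
# t = 0.5 -> 4.9%
# t = 0.0 -> 0.0% (constant-sum-like pricing; fyDAI trades 1:1 at maturity)
```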
13) Sudoswap
I saved Sudoswap for last because this is the most unique AMM: it's an AMM for NFTs. Rather than using off-chain orderbooks, Sudo uses on-chain liquidity pools to allow low-slippage swaps for NFTs. There are 3 main types of pools that users will interact with: a buy-only pool, a sell-only pool, and a two-sided pool. Each pool is a separate contract managed by a single address. A buy-only pool will always have ETH and will always be ready to quote a price to buy an NFT; a sell-only pool will always contain NFTs and will always be ready to quote a price to sell an NFT for ETH; and a two-sided pool holds both a whole number of NFTs and some ETH, so users can either buy from that pool, which increases the amount of ETH in the pool, or sell to it, which decreases the amount of ETH in the pool. This means that each NFT will have multiple pools for it.
From the LP side, they get better control of price ranges because the quotes that each pool gives are determined by a bonding curve. The bonding curve that each pool uses is determined and stated when the pool is created. There are three types of bonding curves currently used. A constant bonding curve means that the pool always quotes the same price, a linear bonding curve means that the price quoted by the pool increases/decreases linearly with buys/sells, and an exponential curve works by increasing/decreasing the price by a certain percentage depending on the buys/sells.
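A rough sketch of how those three curve types update a pool's quote after successive buys is shown below; the starting price and deltas are hypothetical placeholders, and the real contracts also handle fees and rounding.

```python
# Illustrative NFT bonding-curve quoting: after each NFT bought from a pool,
# the next quote stays flat, steps up by a fixed delta, or steps up by a fixed
# percentage. The start price and deltas are hypothetical placeholders.

def quotes(start_price, curve, delta, n_buys):
    price, out = start_price, []
    for _ in range(n_buys):
        out.append(round(price, 4))
        if curve == "linear":
            price += delta            # e.g. +0.1 ETH per NFT bought
        elif curve == "exponential":
            price *= 1 + delta        # e.g. +10% per NFT bought
        # "constant": the quote never moves
    return out

print(quotes(1.0, "constant", 0.0, 4))      # [1.0, 1.0, 1.0, 1.0]
print(quotes(1.0, "linear", 0.1, 4))        # [1.0, 1.1, 1.2, 1.3]
print(quotes(1.0, "exponential", 0.10, 4))  # [1.0, 1.1, 1.21, 1.331]
```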
The main benefit of the Sudo AMM is that the market becomes more liquid: users get an instant quote on their NFTs rather than waiting for a bid and then settling by simply accepting the best offer on Opensea. It is also a good decentralized alternative. Platforms like Opensea are basically web2 platforms; they have all the control. We have seen them randomly blacklist accounts and prevent the selling of certain NFTs as they please. A problem with Sudoswap is that it is not as gas-efficient as the current marketplaces. Even though the team has done all they can to minimize gas expenditure, it will take a lot of work to reach the point of being competitive with other NFT marketplaces in that respect.
Personal Thoughts
Now that all the different types of AMMs are in one place, what I've noticed is that most of the innovation after Uniswap comes from improving the experience on the LP side. Liquidity is everything in DeFi; without it, a product is pretty much rendered useless. Hence, making the experience easier and more profitable for LPs has a direct effect on the success of a protocol. It also helps improve capital efficiency, which is a major point of focus for all new AMMs.
From the trader's perspective, the mechanisms are pretty much figured out. Most DeFi users are very familiar with the swap experience. The main thing is for the UI to keep improving. It should be as easy as possible for traders to navigate and execute on these platforms. Something that should be added to improve the trading experience on DEXs is limit orders. As I outlined earlier, this is tough to do because of frontrunning, but if a team can figure out a way to do it, or a variation of it, they will be very successful.
Most of the AMMs mentioned here haven’t been launched as yet which is why AMMs like Uniswap still remain dominant, but when they launch it will be interesting to see whether the improvements for the user will be significant enough to take them away from an experience they are already familiar with.
That is all for this Deep-dive into AMMs.
https://medium.com/dragonfly-research/what-explains-the-rise-of-amms-7d008af1c399
Haseeb Qureshi, Jul 2020
Imagine a college friend reached out to you and said, “Hey, I have a business idea. I’m going to run a market making bot. I’ll always quote a price no matter who’s asking, and for my pricing algorithm I’ll use x * y = k. That’s pretty much it. Want to invest?”
You’d run away.
Well, turns out your friend just described Uniswap. Uniswap is the world’s simplest on-chain market making operation. Seemingly from nowhere, it has exploded in volume in the last year, crowning itself the world’s largest “DEX” by volume.
If you haven’t paid close attention to what’s happening in DeFi in the last year, you’re probably wondering: what is going on here?
For the uninitiated: Uniswap is an automated market maker (AMM). You can think of an AMM as a primitive robotic market maker that is always willing to quote prices between two assets according to a simple pricing algorithm. For Uniswap, it prices the two assets so that the number of units it holds of each asset, multiplied together, is always equal to a fixed constant.
That’s a bit of a mouthful: if Uniswap owns some units of token x and some units of token y, it prices any trade so that the final quantities of x and y it owns, multiplied together, are equal to a fixed constant, k. This is formalized as the constant product equation: x * y = k.
This might strike you as a weird and arbitrary way to price two assets. Why would maintaining some fixed multiple between your units of inventory ensure that you quote the right price?
Let’s say we fund a Uniswap pool with 50 apples (a) and 50 bananas (b), so anyone is free to pay apples for bananas or bananas for apples. Let’s assume the exchange rate between apples and bananas is exactly 1:1 on their primary market. Because the Uniswap pool holds 50 of each fruit, the constant product rule gives us a * b = 2500 — for any trade, Uniswap must maintain the invariant that our inventory of fruit, multiplied together, equals 2500.
So let’s say a customer comes to our Uniswap pool to buy an apple. How many bananas will she need to pay?
If she buys an apple, our pool will be left with 49 apples, but 49 * b still has to equal 2500. Solving for b, we get 51.02 total bananas. Since we already have 50 bananas in inventory, we’ll need 1.02 extra bananas for that apple (we’ll allow fractional bananas in this universe), so the price we have to quote her is 1.02 bananas / apple for 1 apple.
Note that this is close to the natural price of 1:1! Because it’s a small order, there is only a little slippage. But what if the order is larger?
You can interpret the slope at each point as the marginal exchange rate.
If she wants to buy 10 apples, Uniswap would charge her 12.5 bananas for a unit price of 1.25 bananas / apple for 10 apples.
And if she wanted a huge order of 25 apples—half of all the apples in inventory—the unit price would be 2 bananas / apple! (You can intuit this because if one side of the pool halves, the other side needs to double.)
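To make the arithmetic concrete, here is a minimal sketch (not Uniswap’s contract code; fees are ignored) that reproduces the quotes above directly from the constant product rule:

```python
# Minimal constant-product quote sketch (illustrative only, ignores the 0.3% fee).
def quote_cost(x_reserve: float, y_reserve: float, x_out: float) -> float:
    """Bananas (y) a buyer must pay to take `x_out` apples (x) from the pool,
    keeping x * y = k constant."""
    k = x_reserve * y_reserve
    new_y = k / (x_reserve - x_out)
    return new_y - y_reserve  # extra bananas the pool must end up holding

pool_apples, pool_bananas = 50.0, 50.0
for apples in (1, 10, 25):
    cost = quote_cost(pool_apples, pool_bananas, apples)
    print(f"{apples} apples -> pay {cost:.2f} bananas ({cost / apples:.2f} bananas/apple)")
# 1 apple  -> ~1.02 bananas (1.02 bananas/apple)
# 10 apples -> 12.50 bananas (1.25 bananas/apple)
# 25 apples -> 50.00 bananas (2.00 bananas/apple)
```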
The important thing to realize is that Uniswap cannot deviate from this pricing curve. If someone wants to buy some apples and later someone else wants to buy some bananas, Uniswap will sweep back and forth through this pricing curve, wherever demand carries it.
Uniswap sweeping back and forth through its pricing curve after a series of trades.
Now here’s the kicker: if the true exchange rate between apples and bananas is 1:1, then after the first customer purchases 10 apples, our Uniswap pool will be left with 40 apples and 62.5 bananas. If an arbitrageur then steps in and buys 12.5 bananas, returning the pool back to its original state, Uniswap would charge them a unit price of only 0.8 apples / banana.
Uniswap would underprice the bananas! It’s as though our algorithm now realizes it’s heavy on bananas, so it prices bananas cheap to attract apples and rebalance its inventory.
Uniswap is constantly performing this dance — slightly moving off the real exchange rate, then sashaying back in line thanks to arbitrageurs.
This should give you a sense for how Uniswap pricing works. But this still begs the question — is Uniswap good at what it does? Does this thing actually generate profits? After all, any market maker can quote prices, but it’s another thing to make money.
Uniswap charges a small fee for every trade (currently 0.3%). This is in addition to the nominal price. So if apples and bananas always and forever trade at 1:1, these fees will simply accumulate over time as the market maker sweeps back and forth across the exchange rate. Compared to the baseline of just holding those 50 apples and bananas, the Uniswap pool will end up with more fruit at the end, thanks to all the fees.
But what if the real exchange rate between apples and bananas suddenly changes?
Say a drone strike takes out a banana farm, and now there’s a massive banana shortage. Bananas are like gold now. The exchange rate soars to 5 apples : 1 banana.
What happens on Uniswap?
The very next second, an arbitrageur swoops in to pick off the cheaply priced bananas in your Uniswap pool. They size their trade so that they purchase every banana that’s priced below the new exchange rate of 5:1. That means they’ll need to move the curve until it satisfies the equation: 5b * b = 2500.
Running the math out, they’d purchase 27.64 bananas for a grand total of 61.80 apples. This comes out to an average price of 2.2 apples : 1 banana, way under market, netting the equivalent of 76.4 free apples.
And where does that profit come from? Of course, it comes at the expense of the pool! And indeed, if you do the accounting, you’ll see that the Uniswap pool is now down exactly 76.4 apples worth of value compared to someone who’d held the original 50 apples and 50 bananas. Uniswap sold off its bananas too cheaply, because it had no idea bananas had become so valuable in the real world.
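You can check those numbers with a few lines of arithmetic. The sketch below assumes no trading fee and that the arbitrageur trades until the pool’s marginal price matches the new 5:1 rate:

```python
import math

k = 50.0 * 50.0    # constant product of the apple/banana pool
new_price = 5.0    # apples per banana after the shock

# On the curve the marginal price (apples per banana) equals a / b,
# so the arbitrageur trades until a = 5b and a * b = k.
b_final = math.sqrt(k / new_price)   # ~22.36 bananas left in the pool
a_final = k / b_final                # ~111.80 apples in the pool

bananas_bought = 50.0 - b_final      # ~27.64 bananas purchased
apples_paid = a_final - 50.0         # ~61.80 apples paid

pool_value = a_final + b_final * new_price   # ~223.6 apples' worth
hodl_value = 50.0 + 50.0 * new_price         # 300 apples' worth if we had just held
print(f"bought {bananas_bought:.2f} bananas for {apples_paid:.2f} apples, "
      f"pool is down {hodl_value - pool_value:.1f} apples vs. holding")
```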
This phenomenon is known as impermanent loss. Whenever the exchange rate moves, this manifests as arbitrageurs sniping cheap assets until the pool is correctly priced. (These losses are “impermanent” because if the true exchange rate later reverts back to 1:1, then now it’s like you never lost that money to begin with. It’s a dumb name, but oh well.)
Pools make money through fees, and they lose money via impermanent loss. It’s all a function of demand and price divergence — demand works for you, and price divergence works against you.
In retrospect, it’s incredibly elegant, one of the simplest possible products you could have invented, and yet it arose seemingly from nowhere to dominate DeFi.
Since Uniswap’s rise, there has been an explosion of innovation in AMMs. A legion of Uniswap descendants have emerged, each with its own specialized features.
Different curves are better suited for certain assets, as they embed different assumptions about the price relationship between the assets being quoted. You can see in the chart above that the Stableswap curve (blue) approximates a line most of the time, meaning that in most of its trading range, the two stablecoins will be priced very close to each other. Constant product is a decent starting place if you don’t know anything about the two assets, but if we know the two assets are stablecoins and they are probably going to be worth around the same, then the Stableswap curve will produce more competitive pricing.
Seeing the growth in CFMM volume, it’s tempting to assume that they are going to take over the world — that in the future, all on-chain liquidity will be provided by CFMMs.
But not so fast!
CFMMs are dominating today. But in order to get a clear sense of how DeFi evolves from here, we need to understand when CFMMs thrive and when they do poorly.
Let’s stick to Uniswap, since it’s the simplest CFMM to analyze. Let’s say you want to be a Uniswap LP (liquidity provider) in the ETH/DAI pool. By funding this pool, there are two simultaneous things you have to believe for being an LP to be better than just holding onto your original funds:
The ratio in value between ETH and DAI will not change too much (if it does, that will manifest as impermanent loss)
Lots of fees will be paid in this pool
To the extent that the pool exhibits impermanent loss, the fees need to more than make up for it. Note that for a pair that includes a stablecoin, to the extent that you’re bullish on ETH appreciating, you’re also assuming that there will be a lot of impermanent loss!
The general principle is this: the Uniswap thesis works best when the two assets are mean-reverting. Think a pool like USDC/DAI, or WBTC/TBTC — these are assets that should exhibit minimal impermanent loss and will purely accrue fees over time. Note that impermanent loss is not merely a question of volatility (actually, highly volatile mean-reverting pairs are great, because they’ll produce lots of trading fees).
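For a standard 50/50 constant product pool there is a well-known closed-form expression for impermanent loss as a function of how far the relative price moves (ignoring fees). The sketch below also reproduces the roughly 25% loss from the apple/banana example, where the price ratio moved 5x:

```python
import math

def impermanent_loss(price_ratio: float) -> float:
    """Value of an LP position vs. simply holding, for a 50/50 x*y=k pool,
    when the relative price of the two assets changes by `price_ratio` (fees ignored)."""
    return 2 * math.sqrt(price_ratio) / (1 + price_ratio) - 1

for r in (1.0, 1.25, 2.0, 5.0):
    print(f"{r}x price move -> {impermanent_loss(r):+.1%} vs. holding")
# 1.0x -> +0.0%, 1.25x -> -0.6%, 2.0x -> -5.7%, 5.0x -> -25.5%
```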
We can accordingly draw a hierarchy of the most profitable Uniswap pools, all other things equal.
Mean-reverting pairs are obvious. Correlated pairs often move together, so Uniswap won’t exhibit as much impermanent loss there. Uncorrelated pairs like ETH/DAI are rough, but sometimes the fees can make up for it. And then there are the inverse correlated pairs: these are absolutely awful for Uniswap.
Imagine someone on a prediction market going long Trump, long Biden, and putting both longs in a Uniswap pool. By definition, eventually one of these two assets will be worth $1 and the other will be worth $0. At the end of the pool, an LP will have nothing but impermanent loss! (Prediction markets always stop trading before the markets resolve, but outcomes are often decided well before the market actually resolves.)
So Uniswap works really well for certain pairs and terribly for others.
But it’s hard not to notice that almost all of the top Uniswap pools so far have been profitable! In fact, even the ETH/DAI pool has been profitable since inception.
This demands explanation. Despite their flaws, CFMMs have been impressively profitable market makers. How is this possible? To answer this question, it pays to understand a bit about how market makers work.
Market makers are in the business of providing liquidity to a market. There are three primary ways market makers make money: designated market making arrangements (traditionally paid by asset issuers), fee rebates (traditionally paid by an exchange), and by pocketing a spread when they’re making a market (what Uniswap does).
You see, all market making is a battle against two kinds of order flow: informed flow, and uninformed flow. Say you’re quoting the BTC/USD market, and a fat BTC sell order arrives. You have to ask yourself: is this just someone looking for liquidity, or does this person know something I don’t?
If this counterparty just realized that a PlusToken cache moved, and hence selling pressure is incoming, then you’re about to trade some perfectly good USD for some not so good BTC. On the other hand, if this is some rando selling because they need to pay their rent, then it doesn’t mean anything in particular and you should charge them a small spread.
So a market maker’s principal job is to differentiate between informed and uninformed flow. The more likely the flow is informed, the higher the spread you need to charge. If the flow is definitely informed, then you should pull your bids entirely, because you’ll pretty much always lose money if informed flow is willing to trade against you.
(Another way to think about this: uninformed flow is willing to pay above true value for an asset — that’s your spread. Informed flow is only willing to pay below the true value of an asset, so when you trade against them, you’re actually the one who’s mispricing the trade. These orders know something you don’t.)
The very same principle applies to Uniswap. Some people are trading on Uniswap because they randomly want to swap some ETH for DAI today. This is your uninformed retail flow, the random walk of trading activity that just produces fees. This is awesome.
Then you have the arbitrageurs: they are your informed flow. They are picking off mispriced pools. In a sense, they are performing work for Uniswap by bringing its prices back in line. But in another sense, they are transferring money from liquidity providers to themselves.
For any market maker to make money, they need to maximize the ratio of uninformed retail flow to arbitrageur flow.
But Uniswap can’t tell the difference between the two!
Uniswap has no idea if an order is dumb retail money or an arbitrageur. It just obediently quotes x * y = k, no matter what the market conditions.
So if there’s a new player in town that offers better pricing than Uniswap, like Curve or Balancer, you should expect retail flow to migrate to whatever service offers them better pricing. Given Uniswap’s pricing model and fixed fees (0.3% on each trade), it’s hard to see it competing on the most competitive pools — Curve is both more optimized for stablecoins and charges 0.04% on each trade.
Over time, if Uniswap pools get outcompeted on slippage, they will be left with majority arbitrageur flow. Retail flow is fickle, but arbitrage opportunities continually arise as the market moves around.
This failure to compete on pricing is not just bad — its badness gets amplified. Uniswap has a network effect around liquidity on the way up, but it’s also reflexive on the way down. As Curve starts to eat the stablecoin-specific volume, the DAI/USDC pair on Uniswap will start to lose LPs, which will in turn make the pricing worse, which will attract even less volume, further disincentivizing LPs, and so on. So goes the way of network effects — it’s a rocket on the way up, but on the way down it incinerates on re-entry.
Of course, these arguments apply no less to Balancer and Curve. It will be difficult for each of them to maintain fees once they get undercut by a market maker with better pricing and lower fees. Inevitably, this will result in a race to the bottom on fees and massive margin compression. (Which is exactly what happens to normal market makers! It’s a super competitive business!)
But that still doesn’t explain: why are all of the CFMMs growing like crazy?
Let’s take stablecoins. CFMMs are clearly going to win this vertical.
Imagine a big traditional market maker like Jump Trading were to start market making stablecoins on DeFi tomorrow. First they’d need to do a lot of upfront integration work, then to continue operating they’d need to continually pay their traders, maintain their trading software, and pay for office space. They’d have significant fixed costs and operating costs.
Curve, meanwhile, has no costs at all. Once the contracts are deployed, it operates all on its own. (Even the computing cost, i.e. the gas fees, is paid by end users!)
And what is Jump doing when quoting USDC/USDT that’s so much more complicated than what Curve is doing? Stablecoin market making is largely inventory management. There’s not as much fancy ML or proprietary knowledge that goes into it, so if Curve does 80% as well as Jump there, that’s probably good enough.
But ETH/DAI is a much more complex market. When Uniswap is quoting a price, it isn’t looking at exchange order books, modeling liquidity, or looking at historical volatility like Jump would — it’s just closing its eyes and shouting x * y = k!
Compared to normal market makers, Uniswap has the sophistication of a refrigerator. But so long as normal market makers are not on DeFi, Uniswap will monopolize the market because it has zero startup costs and zero operating expense.
Here’s another way to think about it: Uniswap is the first scrappy merchant to set up shop in this new marketplace called DeFi. Even with all its flaws, Uniswap is being served up a virtual monopoly. When you have a monopoly, you are getting all of the retail flow. And if the ratio between retail flow and arbitrageur flow is what principally determines the profitability of Uniswap, no wonder Uniswap is raking it in!
But once the retail flow starts going elsewhere, this cycle is likely to end. LPs will start to suffer and withdraw liquidity.
But this is only half of the explanation. Remember: long before we had Uniswap, we had tons of DEXes! Uniswap has decimated order book-based DEXes like IDEX or 0x. What explains why Uniswap beat out all the order book model exchanges?
I believe there are four reasons why Uniswap beat out order book exchanges.
This is not a small point. Once next generation high-throughput blockchains arrive, I suspect the order book model will eventually dominate, as it does in the normal financial world. But will it be dominant on Ethereum 1.0?
The extraordinary constraints of Ethereum 1.0 select for simplicity. When you can’t do complex things, you have to do the best simple thing. Uniswap is a pretty good simple thing.
Third, it’s extremely easy to provide liquidity to Uniswap. The one-click “set it and forget it” LP experience is a lot easier than getting active market makers to provide liquidity on an order book exchange, especially before DeFi attracts serious volume.
This is critical, because much of the liquidity on Uniswap is provided by a small set of beneficent whales. These whales are not as sensitive to returns, so the one-click experience on Uniswap makes it painless for them to participate. Crypto designers have a bad habit of ignoring mental transaction costs and assuming market participants are infinitely diligent. Uniswap made liquidity provision dead simple, and that has paid off.
Recall that one of the three ways that traditional market makers make money is through designated market making agreements, paid by the asset issuer. In a sense, an incentivized pool is a designated market maker agreement, translated for DeFi: an asset issuer pays an AMM to provide liquidity for their pair, with the payment delivered via token airdrop.
But there’s an additional dimension to incentivized pools. They have allowed CFMMs to serve as more than mere market makers: they now double as marketing and distribution tools for token projects. Via incentivized pools, CFMMs create a sybil-resistant way to distribute tokens to speculators who want to accumulate the token, while simultaneously bootstrapping a liquid initial market. It also gives purchasers something to do with the token—don’t just turn it around and sell it, deposit it and get some yield! You could call this poor man’s staking. It’s a powerful marketing flywheel for an early token project, and I expect this to become integrated into the token go-to-market playbook.
That said, I don’t believe Uniswap’s success will last forever. If the constraints of Ethereum 1.0 created the conditions for CFMMs to dominate, then Ethereum 2.0 and layer 2 systems will enable more complex markets to flourish. Furthermore, DeFi’s star has been rising, and as mass users and volumes arrive, they will attract serious market makers. Over time, I expect this to cause Uniswap’s market share to contract.
Five years from now, what role will CFMMs play in DeFi?
In 2025, I don’t expect CFMMs the way they look today to be the dominant way people trade anymore. In the history of technology, transitions like this are common.
In the early days of the Internet, web portals like Yahoo were the first affordance to take off on the Web. The constrained environment of the early Web was perfectly suited to being organized by hand-crafted directories. These portals grew like crazy as mainstream users started coming online! But we now know portals were a temporary stepping stone on the path to organizing the Internet’s information.
The original Yahoo homepage and the original Google homepage
What are CFMMs a stepping stone to? Will something replace it, or will CFMMs evolve alongside DeFi? In my next post, entitled Unbundling Uniswap, I’ll try to answer this question.
For the next 4 years, Compound is offering “liquidity mining” rewards for liquidity providers. This means that anyone using the protocol in this span will be rewarded with a proportional allocation of COMP, of which 2,880 are distributed daily.
The start of this reward system caught the attention of plenty of traders. In the days since, many folks have moved their assets into Compound in order to start yield farming COMP distributions. Additionally, third-party projects are helping to facilitate COMP farming, like the smart wallet project InstaDApp, which rolled out a “Maximize $COMP mining” widget to help users easily hop in on the action in just a few clicks. Essentially, people borrow and deposit assets simultaneously in order to get more COMP. You can perform the same actions manually without a smart wallet, but something like InstaDApp makes this yield farming much easier, in just a few clicks.
Balancer is an automated market maker (AMM) that allows users to create liquidity pools composed of multiple ERC20 tokens, in contrast to the two-token pairs used by Uniswap. This makes Balancer a flexible protocol, but it’s also newer. Its builders want its governance to be fully decentralized and also need to do some bootstrapping. For these reasons, the Balancer team recently kicked off its own liquidity mining program.
Another intriguing opportunity for yield farmers comes courtesy of a new partnership between , , and the interoperability project and involves a 10-week incentivized .
is a decentralized futures exchange that’s billed as being both for “traders and yield seekers.” That said, users get paid for providing liquidity on the platform. The project hasn’t fully entered the limelight yet, but users got a taste of what may be to come after the exchange ran a earlier this year.
Yield farming has exploded onto the DeFi scene and captured people’s imaginations, and the fervor at hand makes it easy to envision this exciting meme going on to attract people to Ethereum for years to come. But remember: it’s not all fun and games. As Ethereum stalwart Eric Conner recently noted, yield farming does have liquidation risks and smart contract risks, and as such you should never farm with money you’re not willing to lose while things are still early.
For the purposes of this article, we will define yield to be the rate at which a financial position expects to accrue value. This is roughly consistent with TradFi’s definition of yield to maturity, which is the interest rate at which the bond reaches par value given current prices. So broadly speaking, if you hold 1.0 unit of value, and you deposit it into a protocol with a 10% yield, you should expect to have 1.1 units of value after a year.
APR vs. APY: The annual percentage rate is the amount of additional value you accrue per year. The annual percentage yield is the amount of additional value you accrue per year assuming compounding (i.e. reinvestment of dividends). Depending on the specific protocol, assuming compounding may or may not make sense; use the figure that is a better fit for the protocol. See for more details.
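A small sketch of the distinction, using a hypothetical 10% rate (whether compounding is realistic depends on the protocol):

```python
def apr_to_apy(apr: float, periods_per_year: int) -> float:
    """APY assuming rewards are reinvested `periods_per_year` times per year."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

apr = 0.10                      # hypothetical 10% annual percentage rate
print(apr_to_apy(apr, 1))       # 0.10    -> no compounding: APY equals APR
print(apr_to_apy(apr, 365))     # ~0.1052 -> daily compounding lifts the effective yield
```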
Decentralized Exchanges (DEXs) and Automated Market Makers (AMMs) are the canonical example of yield-generating protocols. They are a seminal achievement in financial engineering, to the point of being known as “.”
They work as follows: Liquidity Providers (LPs) deposit pairs of tokens into liquidity pools. The purpose of these pools is to allow traders to exchange one token for another without an intermediary; the exchange rate is determined algorithmically according to a formula like the constant product formula. In return for depositing these tokens, LPs receive some portion of the fees charged to traders, proportional to the amount of the pool’s liquidity they provided. Fees are generally charged to traders in denominations of the pool tokens.
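As a rough illustration of how an LP’s fee income scales with pool share (a simplified sketch; real pools track entitlement with LP tokens and fee accumulators, and all numbers here are hypothetical):

```python
def lp_fee_income(daily_volume: float, fee_rate: float, pool_share: float) -> float:
    """Fees accruing per day to an LP who owns `pool_share` of the pool's liquidity."""
    return daily_volume * fee_rate * pool_share

# Hypothetical numbers: $2M daily volume, 0.3% fee, LP owns 1% of the pool.
print(lp_fee_income(2_000_000, 0.003, 0.01))  # $60 per day, before impermanent loss
```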
The analogous setting in traditional finance (“TradFi”) would be a limit order book, which keeps track of the highest bids and lowest asks of the assets in the market. The big difference is that in the TradFi setting, it is the exchanges and brokers that capture the trading fees, whereas in DeFi, it is the LPs that capture those same fees. There is a beauty to the fact that in DeFi, anybody can participate in any aspect of the market-making.
LPing is not without risk: if the relative values of the tokens change significantly, LPs can be subject to impermanent loss. With any DeFi protocol, there is also smart contract risk from hacks and exploits.
Let’s take a lending protocol as an example. Users can deposit their funds into a lending pool and expect to earn either a variable or fixed interest rate on it. On the other side, borrowers can deposit assets as collateral, and take out a loan of any other asset up to the value of a certain percentage of their collateral (the collateralization ratio). As long as the value of the collateral does not go below a certain threshold relative to the loan, the loan remains “healthy” and the borrower has unlimited time to repay the loan, with interest. When the value of the collateral goes lower than the threshold, however, it is liquidated (i.e. sold off) and the debt is canceled.
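Below is a minimal sketch of that health check; the 80% liquidation threshold and the dollar figures are hypothetical rather than any particular protocol’s parameters:

```python
def is_healthy(collateral_value: float, debt_value: float,
               liquidation_threshold: float = 0.80) -> bool:
    """Loan stays open while the debt is below the threshold fraction of collateral value."""
    return debt_value <= collateral_value * liquidation_threshold

# Deposit $10,000 of ETH as collateral and borrow $6,000 of a stablecoin.
print(is_healthy(10_000, 6_000))  # True: the loan is healthy
# If ETH falls and the collateral is now worth $7,000, the position can be liquidated.
print(is_healthy(7_000, 6_000))   # False: collateral is sold off and the debt cancelled
```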
Clearly, this type of service has intrinsic value: the ability to automatically cover, originate, liquidate, and close loans without a financial intermediary is valuable in and of itself: it eliminates the possibility of human error, allows greater capital efficiency, and so on. And if you lend your assets towards this end, you are being compensated for contributing value to the protocol. After all, if , you wouldn’t lend them, would you?
In my opinion, the best example of how staking produces yield is the Proof-of-Stake mechanism, adopted by such blockchains as Cardano, Solana, and eventually Ethereum. These algorithms can be abstruse, with names like and , but the intuition behind them is simple.
The security of Proof-of-Stake networks comes from the fact that the amount of capital it would take to attack the network is prohibitively large.
Yearn’s vaults may be the most famous example of this. They accept a bunch of different assets and employ a wide range of strategies to earn yield on your assets. For example, the LINK vault had the following description:
Supplies LINK to to generate interest.
Supplies LINK to to earn VSP. Earned tokens are harvested, sold for more LINK which is deposited back into the strategy.
Supplies LINK to to generate interest and earn staked AAVE tokens. Once unlocked, earned tokens are harvested, sold for more LINK which is deposited back into the strategy. This strategy also borrows tokens against LINK. Borrowed tokens are then deposited into corresponding yVault to generate yield.
Stakes LINK in vault and mints DAI. This newly minted DAI is then deposited into the DAI yVault to generate yield.
Supplies LINK to to earn LEAG. Earned tokens are harvested, sold for more LINK which is deposited back into the strategy.
Another great example of a yield optimizer is . [Full disclosure: I am a member of the G-Force, Gro’s decentralized marketing arm, and have unvested GRO.] Gro is a risk-tranched stablecoin yield optimizer, so it accepts assets like USDC, USDT, and DAI. It deposits these into a number of different protocols downstream to generate trading fees (i.e. from DEXes), lending income (i.e. from lending pools), and protocol incentives (i.e. governance tokens), and liquidates them back into the original stablecoin.
Financial derivatives are an increasingly popular area in DeFi, with ever more sophisticated products becoming available. One example is a decentralized options exchange that allows users to earn yield through Single Staking Option Vaults. Another great example is , which provides mechanisms for users to write covered call options and sell put options on their DeFi assets. In these instances, the yield being generated comes from assuming the risk of certain price action on the underlying asset.
As the name suggests, these tokens give holders the right to participate in governance of the underlying DeFi protocols, through proposals and on-chain voting. These changes can encompass something as minimal as a parameter change (e.g. “increase the interest rate for this vault by 0.1%”) to an overhaul of the entire ecosystem (e.g. the ).
The question then becomes: why does a governance token have value? One standard answer is that governance tokens are analogous to equity in a company. You are theoretically entitled to some portion of the future cashflow of the protocol, and you have fractional decision-making power over it as well. In practice, however, this may or may not be the case: depending on how the protocol is structured, the revenue could flow mostly to the treasury rather than to governance token holders. And the amount of decision-making power you have will only be proportional to the amount of tokens you hold due to .
Governance tokens become particularly difficult to reason about when they exhibit a recursive structure - i.e. holding the tokens results in rights to control or partake in the allocation of future such tokens. This dynamic was covered extensively in .
Rebasing Tokens, or “reserve currencies” as they sometimes call themselves, refer to OHM and its forks, as we detailed in .
To be fair, there has been discussion by some Rebasing Token communities about , likely triggered by the recent price crash. Furthermore, these tokens are backed by treasuries and streams of revenue, like .
The phrase "yield farming" is tossed around in many contexts. Our first task is to settle on a concrete and precise definition. Yield farming is managing passive strategies to earn well-defined interest on cryptocurrency positions.
There are many examples, but consider two prominent ones that are more retail-facing. First, traders on Coinbase have the option to stake their Ether on the platform, i.e., delegate their Ether to Coinbase as it participates in upgrading the Ethereum network to Ethereum 2.0, in exchange for interest of around 5% (at the time of writing). Second, Terra traders can use the app to stake their Luna, i.e., delegate their Luna tokens to one of several different validators who process transactions on the Terra network, in exchange for rewards.
Some activities look like liquidity provision, but can be subtly different. For example, Olympus DAO famously offers extremely high yields (currently 900%) for staking its OHM tokens. However, these yields are granted through hefty token dilution, and they are largely compensation for marketing (which we discuss more in the final section).
In some cases, farmers can be directly “bribed” to allocate tokens to specific pools. These bribes frequently come from new protocols aiming to bootstrap liquidity, incentivize usage, and attract attention. But this is even costlier for independent farmers, who see rewards diverted away from their own pools, and non-farming token holders, for whom all rewards represent inflation.
As builders, we at Jump hope that management and governance will continue to evolve, increasing the ratio of value-creating to value-extractive activities over time. For instance, yield farmers could be incentivized to support useful code upgrades just as they are currently rewarded for selecting high-quality validators. Defenses like may also help make bribery, as well as other methods for large players to amass voting power, less viable.
Finally, yield farming allows protocols to leverage one critical and practical insight: big numbers are great marketing! In particular, the more TVL ("total value locked," i.e., assets allocated) a protocol has, the more attention it garners, the more trust it earns, and the more likely it is to emerge as a leader in a crowded field. TVL may also influence a protocol’s perceived valuation. While the concept of "fair value" is still nascent in crypto, TVL multiples are commonly cited, similar to the way companies use multiples of book value and asset managers use multiples of AUM. Amidst today’s Cambrian explosion of protocols, getting on the can be one of the most effective ways to stand out.
Chris Dixon outlines this mechanism more generally in his post on token incentives. For users, token distribution helps bootstrap additional utility, which takes years to create, by exchanging some of it for financial utility that can be realized immediately. As a bonus, lockups restrict supply (mitigating selling pressure) and align incentives for yield farmers towards the protocol’s success. That is: at the beginning you are there for the rewards, at the end you are there for the utility — and the token distribution plan succeeds to the extent that it facilitates a smooth transition.
Please let us ( and ) know what we got wrong, as we'd like to understand this subject matter thoroughly and correctly. Thanks to the research team at and especially to and for feedback. This note does not constitute financial advice.
Although “interest” in traditional finance refers more strictly to value paid by borrowers for extension of credit, we will continue with the broader crypto definition here.
In particular, these yields are powered through frequent rebasing of the token (three times daily). Rebases do not create value, but just redistribute it. For instance, a 900% APY requires a 10x increase in the token supply, which in turn translates to each token being worth 90% less.
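To spell out that arithmetic (a sketch; the three-rebases-per-day cadence is the one mentioned above):

```python
apy = 9.0                    # a 900% APY means ending the year with 10x the starting amount
rebases_per_year = 365 * 3   # three rebases per day

per_rebase = (1 + apy) ** (1 / rebases_per_year) - 1
print(f"{per_rebase:.4%} per rebase")   # ~0.2105% supply growth per rebase

# If market cap is unchanged, a 10x supply increase means each token is worth 90% less.
print(1 / (1 + apy))                     # 0.1 -> each token retains 10% of its prior price
```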
This may sound suspect, but “bribe” in DeFi is a standard term for a reward given to supporters of a given proposal, not a suggestion of improper or illegal inducements.
A recent study showed that most of us are losing money when providing liquidity to Uniswap V3. How come? Probably because Automated Market Makers are complex objects. But it’s worth digging into: besides being the most used DeFi protocols, they are also an interesting and beautiful mathematical concept.
In this scenario, the Uni V3 liquidity was able to generate more arbitrage trades, leading to more fees and hence more revenue for LPs, but at the cost of a larger IL in most cases. Basically, for the same price deviation more of your liquidity will be swapped, which generates more fees but also more IL: check for yourself using .
Another cool feature of Uni V3 is that by adding multiple LP positions, or choosing your range carefully, you can reproduce . Let’s do just one example: starting at 1 ETH = 1000 agEUR, this is what you’ll get when providing liquidity on the range 1450–1550 versus going all in ETH and taking profits at 1500.
I hope this first article has helped to make AMMs clearer. Feel free to reach out and to play with our that we use internally for decision-making !
MakerDAO lets you borrow DAI through a margin account known as a CDP (collateralized debt position). Of all the discussed lending platforms, MakerDAO is also the most difficult to use from a UX perspective.
Compound is fairly simple to use, and allows lenders and borrowers to interact directly with the protocol. Lenders can deposit funds into lending pools and will continuously earn interest. Lenders can withdraw their assets at any time, as there are no fixed loan durations.
Dharma has built a platform where users can go to borrow and lend, and it is the only mentioned product that does not require the use of an Ethereum wallet such as MetaMask. Currently Dharma’s product (though smart contract based) is centralized, but they have solid plans to decentralize over time.
In May 2020, Uniswap introduced V2. The main feature was the introduction of pools that supported ERC-20/ERC-20 pairs. This opened up plenty more opportunities for traders: they no longer had to route everything through ETH and could, for example, trade directly between stablecoins. Another benefit is that LPs no longer had to use their ETH to provide liquidity, which would have exposed them to . This change sent Uniswap’s growth parabolic, especially since it arrived at the time of DeFi summer. It was so successful that it made Uniswap the most forked protocol for a couple of months.
We all know the tremendous success stablecoins have seen; this led to the creation of hundreds of different stablecoins (shameless self-promo: if you want to learn more about stablecoins, check ). With so many stablecoins comes demand for a place to easily swap between them. Curve facilitates this with low-slippage, low-fee stableswaps. The magic of Curve is in the structural design of its exchange. They created multiple types of pools, which are used to create interesting incentive games when combined with governance.
Pendle unlocks the full potential of yield-bearing assets with its AMM. It allows for the creation of limitless yield-trading strategies. It is currently available on Ethereum and Avalanche and is built on top of first-degree protocols such as Aave, Compound, Redacted Cartel, Benqi, and Trader Joe. Users can bring their yield-bearing assets and deposit them into Pendle. They get two tokens in return, an OT and a YT. OT, or ownership token, represents ownership of the underlying asset, while YT, or yield token, represents the entitlement to the yield of that asset. Users can trade OTs on exchanges like Sushiswap or Trader Joe, and they can trade YTs on Pendle’s AMM. YTs have a time decay: people buying YTs are buying rights to the yield, and currently there isn’t a way to force buyers to make consistent payments on-chain to keep those rights, so as the YT gets closer to expiry its value decreases. Once expired, the OTs can be used to redeem the underlying assets. Let’s look at the Pendle AMM where the YTs are traded. This is basically similar to a TradFi options market but for yield, so the AMM made to facilitate such activities on-chain is very complex. I will do my best to simplify it here, but if you want the technical explanation with all the math, graphs, and formulas then check .
Uniswap v2 volume.
(If you’re already familiar with Uniswap and AMMs, skip ahead to the section titled “.”)
The answer is: it depends! Specifically, it depends on a concept known as . Here’s how it works.
This is Uniswap in a nutshell. You can go a of course, but this is enough background for you to understand what’s happening in this space.
Since its launch in 2018, Uniswap has taken DeFi by storm. This is especially amazing given that the original version of Uniswap was only about ! (AMMs themselves have a , but constant product market makers are a .) Uniswap is completely permissionless and can be funded by anyone. It doesn’t even need an oracle.
Uniswap, Balancer, and Curve trading volume.
Though they all inherited the core design of Uniswap, they each come with their own specialized pricing function. Take , which uses a mixture of constant product and constant sum, or , whose multi-asset pricing function is defined by a multi-dimensional surface. There are even shifted curves that can run out of inventory, like the ones uses to sell limited edition goods.
The Stableswap curve (blue), used in Curve.
Of course, there are infinitely many specific curves an AMM could adopt for pricing. We can abstract over all of these different pricing functions and call the whole category : constant function market makers.
Uniswap returns for ETH/DAI pool (vs holding 50/50 ETH/DAI).
As a market maker, you make money on the uninformed flow. Uninformed flow is random — on any given day, someone is buying, someone is selling, and at the end of the day it cancels out. If you charge each of them the spread, you’ll make money in the long run. (This phenomenon is why market makers will pay for order flow from Robinhood, which is mostly uninformed retail flow.)
First, Uniswap is extremely simple. This means there is low complexity, low surface area for hacks, and low integration costs. Not to mention, it has low gas costs! This really matters when you’re implementing all your trades on top of the equivalent of a .
Second, Uniswap has a very small regulatory surface. (This is the same reason why Bram Cohen believes .) Uniswap is trivially decentralized and requires no off-chain inputs. Compared to order book DEXes that have to tiptoe around the perception of operating an exchange, Uniswap is free to innovate as a pure financial utility.
The last reason why Uniswap has been so successful is the ease of creating incentivized pools. In an incentivized pool, the creator of a pool airdrops tokens onto liquidity providers, juicing their LP returns above the standard Uniswap returns. This phenomenon has also been termed “liquidity farming.” Some of Uniswap’s highest volume pools have been incentivized via airdrops, including AMPL, sETH, and JRT. For Balancer and Curve, all of their pools are currently incentivized with their own native token.
These factors go a long way toward explaining why Uniswap has been so successful. (I haven’t touched on “,” but that’s a topic for another day.)
https://medium.com/@MindWorksCap/liquid-staking-jostling-for-position-in-a-post-shanghai-ethereum-8e522c1841e6
MindWorks Capital, 12-Jan-23
The battle for staked Ether intensifies as withdrawals are slated to open in March after the Shanghai upgrade
Ethereum’s successful transition to Proof of Stake last year bid farewell to the Proof of Work miners and replaced them with Ether stakers, who now secure the blockchain.
Participating in Ethereum’s proof of stake normally requires a user to deposit 32 ETH to activate validator software, which then stores data, processes transactions and adds new blocks to the blockchain.
However, this could create a lot of barriers for ordinary users to participate in proof of stake as it requires a high upfront cost (32 ETH is currently ~$42,000 as of Jan 9th, 2023), high hardware requirements to run the software as well as the technical know-how to run the validator nodes.
Traditional staking is also hugely illiquid, as staked coins cannot be withdrawn immediately, even after the Shanghai upgrade. Instead, validators have to queue to exit the staking chain in order to preserve network security.
Because of the high barrier to entry and lack of convenience, liquid staking has become a hugely popular method for users to stake their ETH without having to run any software while maintaining liquidity. Lido, the largest Ethereum liquid staking provider, has over 4.6 million Ether staked on its platform accounting for 29% of all ETH staked.
The Ethereum staking landscape is also poised to undergo a huge transformation after the Shanghai upgrade, the next time developers will upgrade the Ethereum network. Shanghai will be highly impactful for ETH staking as it introduces the long-awaited feature: withdrawals. Before this upgrade, ETH deposited onto the beacon chain to participate in proof of stake has no way to be withdrawn back to the mainnet.
Liquid staking’s process is actually pretty simple. Users deposit tokens into a smart contract, which then are deposited into affiliated validators. Users will receive a token that represents the amount they have deposited into the liquid staking platforms.
Let’s use Lido as an example. Users deposit ETH into Lido and receive stETH in return. The stETH token represents the ETH that the user has deposited into Lido. The stETH token will automatically rebase and increase according to the rate of rewards earned by Lido affiliated validators. Users can also freely trade and transfer their stETH without being subjected to lockups.
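As a toy illustration of that rebasing balance (the reward rate here is hypothetical; Lido derives the actual rebase from validator rewards reported by its oracles):

```python
def rebase_balance(steth_balance: float, daily_reward_rate: float, days: int) -> float:
    """stETH balance after `days` of rebases at a constant daily reward rate."""
    return steth_balance * (1 + daily_reward_rate) ** days

# Hypothetical: 10 stETH earning ~4% APR, rebased daily for a year.
print(rebase_balance(10.0, 0.04 / 365, 365))  # ~10.41 stETH
```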
You might be questioning at this point how liquid staking has seemingly solved all the problems of traditional staking: no upfront costs (besides purchasing the ETH), no hardware or software to run, and the ETH stays liquid. Liquid staking has indeed solved most of these problems, although it comes with several drawbacks of its own.
Firstly, liquid staking is not trustless. By participating in liquid staking, you are trusting the validators chosen by your liquid staking platform not to act maliciously as it could then be subjected to slashing, which would result in the ETH deposited being permanently lost.
Secondly, the token you receive in return for depositing, such as stETH, will not necessarily maintain its monetary value: you cannot currently withdraw it back into ETH and need to trade it on the secondary market to convert it. In fact, stETH has consistently traded below peg since launch, so traders who have to exit quickly would usually face a small loss in terms of ETH.
Lastly, by relying too heavily on liquid staking platforms, Ethereum could become more centralized, as one platform could reach a critical share of the overall validator set. Some in the Ethereum community have already spoken out about Lido’s dominance and whether it could pose a danger to Ethereum.
While liquid staking is available on most proof-of-stake blockchains, we want to focus on Ethereum as it is the largest proof-of-stake blockchain by market capitalization, and because its withdrawals could open as soon as March.
As we can see in the chart above, Lido currently dominates the Ethereum liquid staking market with 75% market share. A big contributor to this dominance is its first-mover advantage, as well as broader adoption by other DeFi applications such as Aave, which gives stETH more use cases.
While Lido currently has a stranglehold over liquid staking on Ethereum, the market is about to enter a new phase of adoption, as ETH staked on the beacon chain can be withdrawn once withdrawals are enabled by the Shanghai upgrade.
This means that ALL ETH currently staked could potentially be withdrawn and staked elsewhere, whether it is currently in a liquid staking pool or staked solo at home.
Given the advantages of liquid staking mentioned above, in all likelihood we should see the amount of Ether staked in liquid staking increase as withdrawals open due to its superior user experience when compared to other solutions such as home staking and permissioned staking pools.
To understand Lido’s dominance, we need to understand why stETH is by far the most popular liquid staking derivative (LSD) on Ethereum.
Firstly, stETH has the deepest liquidity of all the LSDs. According to DefiLlama, one could trade $500 million worth of ETH into stETH and still suffer less than 1% of slippage on the trade. Compared to cbETH, the second largest LSD by deposits, a $10m trade into cbETH would likely suffer slippage larger than 1%. So, for large whales or crypto funds concerned about liquidity, stETH is likely the only viable choice.
Secondly, use cases are key for LSDs. One of the key use cases stETH has that others don’t is leveraged staking. Leveraged staking is a process in which you use your LSD as collateral to borrow ETH, converting that into more of the LSD so that you can multiply the yield through leverage. Aave, the largest lending platform on Ethereum, accepts stETH as the sole LSD on its platform. Meaning if you would like to do leveraged staking on the most liquid lending platform, stETH is again your only viable option.
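A rough sketch of how that leveraged staking loop compounds yield. The staking rate, borrow rate, loan-to-value ratio, and number of loops are all hypothetical, and the sketch ignores rate changes, swap slippage, and liquidation risk:

```python
def leveraged_staking_apr(staking_apr: float, borrow_apr: float,
                          ltv: float, loops: int) -> float:
    """Net APR from repeatedly using an LSD as collateral to borrow ETH and restake it."""
    staked = 1.0      # start by depositing 1 ETH worth of the LSD
    borrowed = 0.0
    deposit = 1.0
    for _ in range(loops):
        new_borrow = deposit * ltv   # borrow ETH against the latest deposit
        borrowed += new_borrow
        deposit = new_borrow         # swap it for the LSD and deposit it again
        staked += deposit
    return staked * staking_apr - borrowed * borrow_apr

# Hypothetical numbers: 4% staking APR, 2% ETH borrow APR, 70% LTV, 5 loops.
print(f"{leveraged_staking_apr(0.04, 0.02, 0.70, 5):.2%}")  # ~7.9% on the original 1 ETH
```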
To answer the question, it is currently difficult to forecast anyone taking significant market share from Lido due to the aforementioned advantages.
That said, some of the competing protocols are innovating to attract stakers onto their platforms. For example, RocketPool is putting decentralization at the heart of its messaging, allowing anyone to run a validator on its protocol with 16 ETH. Another competitor, Frax Finance, is leveraging its stablecoin protocol and an innovative LSD token design to attract liquidity and increase staking APY for holders.
The stakes are high for liquid staking protocols in a post-Shanghai world, but the heightened competition should result in an overall better user experience and new use cases that will further enrich the DeFi landscape.
An innovation we would like to see is the enabling of leveraged staking on L2 networks. This would simultaneously increase the adoption for the L2 network and the liquid staking protocol, as well as enabling retail users to participate without being subject to high fees on the mainnet.
As we count down towards the Shanghai upgrade, we cannot wait to see the battle unfold in this new chapter of the staking landscape on Ethereum.
Disclaimer: MindWorks Capital has not invested in any liquid staking platforms. Our team members hold ETH, cbETH, stETH and FXS. These statements are intended to disclose any conflict of interest and should not be misconstrued as a recommendation to purchase any token. This content is for informational purposes only, and you should not make decisions based solely on it. This is not investment advice.
https://blockworks.co/the-investors-guide-to-staking/
Blockworks, Apr 2022
Proof-of-stake (PoS) is the term used to describe a class of blockchains that rely on an internal resource (‘stake’) rather than an external resource, such as energy (as with traditional proof-of-work blockchains), to achieve network consensus.
Proof-of-stake represents the next evolution of blockchain innovation. It achieves the same security guarantees as Bitcoin and other proof-of-work networks, but at a fraction of the energy cost. Today, thriving DeFi ecosystems operate on proof-of-stake networks with sustainable environmental usage.
By decoupling from an external source of network security, proof-of-stake networks offer token holders “skin-in-the-game” incentives through staking. What’s more, as the paradigm has grown popular in Web3 ecosystems, more user-friendly implementations have been created to simplify the process. As it stands, prospective stakers have two main paths to entry.
Most proof-of-stake networks support two methods of staking tokens. These are staking and delegating. A number of factors determine which to choose, including a token holder’s level of technical expertise, the number of tokens they have available and the effort they’re willing to invest.
Blockchain networks are decentralized. So, how does a proof-of-stake network remain secure and free from centralized control? Moreover, who decides which blocks get added to the ledger?
The answer to all of these lies in staking.
Staking rewards participation in proof-of-stake protocols. It aligns network security with individual incentives.
Stakers lock their crypto in a blockchain’s validator nodes. Successful validators have the right to bundle and process transactions into blocks on the blockchain. Their efforts are rewarded with the blockchain’s native crypto tokens.
Once tokens are staked, they are locked up and essentially untouchable for a period of time. How long it takes to unlock staked tokens varies by network and should be considered before staking.
Staking was designed as an alternative to the traditional proof-of-work consensus model that some networks, such as Bitcoin, use. It has since become the default model for DeFi blockchains. While most benefits of staking apply to both paths, deciding between staking and delegating is pretty straightforward, as each was created to appeal to a specific kind of participant.
Staking has two primary components: staked crypto and healthy validator nodes.
The setup, security and maintenance of a validator node is not difficult for those with some degree of familiarity with the subject, so the technical barrier to entry isn’t typically high. This makes staking directly an attractive option for retail that’s interested in Web3 and wants to actively participate.
Staking vs. delegating
Staking requires the token holder to set up a validator node and maintain the process themself. This is popular among individuals with the relevant skills who prefer to manage their own validators.
Delegation is a convenient, low-risk alternative for individuals or institutions looking for an easy process without logistical limitations such as node management or operation.
When delegating, a token holder ‘delegates’ their tokens to a trusted validator. This allows the token holder to take a share of the rewards without any of the setup and maintenance. The token holder always retains full control and ownership of their tokens, which are allocated to the validator for staking purposes.
Some delegation platforms also guarantee a node is completely owned by the delegator, just operated by the platform, to preserve the autonomy and influence of delegators over their stake.
Staking will generally provide higher yields than delegation, but it requires significant additional effort and is a more active investment. Delegation will generally provide lower yields but requires very little effort and is a more passive investment.
Delegation is becoming a very popular option across blockchains, as it is the more attractive option for many users.
Energy cost
Compared to its predecessor, proof-of-stake has significantly lower energy costs. Instead of elaborate mining rigs and expensive GPUs, running a proof-of-stake node simply requires the use of your internet-connected computer — and staking pools are even less demanding. In addition to resource efficiency, proof-of-stake offers a range of use cases that aren’t compatible with a proof-of-work model, like sharding.
Scalability
Sharding is one of the most anticipated features of ETH2’s roadmap. It is a form of database partitioning where large “blocks” are split into smaller, faster blocks. These shards eliminate the need for each node to process the transaction load of the entire network.
Proof-of-work, by comparison, requires each node to process every ever-growing block. Shards work in parallel to maximize transactions per second—requiring less processing power and storage per node. Sharding allows the network to scale its transaction throughput by a multiple of the number of shards a chain uses.
Ease of operation
Earn yield
Staking lets token holders earn yield on tokens that would otherwise be idle. Because of that, it is a popular investment model.
While a number of factors can affect your expected yield, it’s common to earn 6% to 12% APY across DeFi protocols. This is advantageous for both finite-supply assets and for outpacing inflation on higher-issuance assets and fiat currencies.
Airdrop eligibility
Airdrops, or ‘stakedrops’, are a fun and lesser-known advantage of staking. New projects often airdrop tokens to validators on their chain. This means that stakers on popular chains are often the first beneficiaries of innovation.
Contribute to decentralization
Proof-of-work was designed to power decentralized blockchains, but the high investment needed leads to a centralization of network power. Proof-of-stake is more conducive to decentralization, as the entry barrier is nearly nonexistent. You can use staking pools to participate, even with small amounts of crypto. Institutions and those that want to validate at scale also benefit more than with proof-of-work. There isn’t a sunk-cost requirement of building mining rigs and expending power, and validation is shared by all participants, instead of requiring competition.
Boost network security
The more holders that stake, the more secure the network becomes. With more validators processing transactions and checking other validators’ work, malicious actors and modified blocks can be dealt with increasingly efficiently.
Participate in governance
Depending on each blockchain’s specific implementation of PoS, validators hold varying degrees of network influence. Typically, there are consistent stake benchmarks that grant validators a specific amount of power. For example, to run one node on the coming ETH 2.0, one would need to stake 32 ETH per node.
With staking pools, those nodes can be broken down into smaller portions split between all participants that pool their resources to operate nodes together.
The first step in becoming a validator is to decide which network to stake on. This decision depends on which assets and networks you believe will thrive in the long term and which align with your investment ethos. Once decided, a prospective validator can set up a node, stake their tokens on their chosen blockchain and enjoy the benefits.
Delegating to a trusted validator
For institutions or investors looking for scale, or for users unfamiliar with the technical processes and security standards of staking, delegating stake to an established validator makes the process simple.
Using a liquid staking solution
Traditional staking locks a validator’s crypto in a smart contract that has a chain-dependent delay in unstake time. To ensure validators can unstake and access their crypto freely, liquid staking solutions have been developed by different protocols and companies.
Liquid staking allows delegated validators to shoulder the unstake wait times and grant users instant access to funds instead of making them wait. This way, users can enjoy both the yield-farming benefits and the flexibility of instant fund access.
Examples of Liquid Staking Protocols
All investment models should be carefully weighed against one’s risk tolerance and ethical convictions, but staking has a few method-specific risks:
Slashing
The biggest risk to staking is slashing, or losing your principal staking amount due to being pegged as a “bad actor” on the network. When a smart contract detects an attempt to manipulate the blockchain or network, it programmatically punishes perpetrators by forfeiting their staked amount.
Validators not attempting to maliciously change or exploit the network should have nothing to worry about, but with different expectations on each network, it can be tricky to remain compliant. Additionally, validators who fail to perform their functions (due to internet outages, for example) may also be at risk of slashing mechanisms, depending on the blockchain.
Lockups
Lockups are the default staking mechanism for most blockchains. With some lockup periods taking days or weeks to withdraw from, fund accessibility is important to consider. Liquid staking solutions resolve this risk.
Even though blocks are added by a different mechanism than in proof-of-work, proof-of-stake networks still use nodes to achieve consensus. Running a blockchain node can be complicated for beginners, but proof-of-stake minimizes this complexity. You don’t need banks of mining nodes, you only need to run a single validator. Plus, accessible solutions like Blockdaemon’s simplify things even further. You just need enough cryptocurrency and the right infrastructure partner to start earning.
For example, Blockdaemon has validator nodes for more than 50 blockchains and supports quick and secure delegated staking on protocols like:
– High-performance blockchain that lets builders build and use dApps globally –
– Ecosystem of interoperable and sovereign blockchain apps and services –
– Platform for decentralized video streaming –
– Framework that allows users to build custom blockchains and dApps on top of the network –
– “Reverse” liquid staking using tokens already circulating in DeFi protocols
on Polkadot – the first liquid staking product in the Polkadot ecosystem
for Ethereum & Solana – multi-platform liquidity for staked assets
Institutional Liquid Staking Solution – fully compliant institutional-grade staking on ETH2
Slashing insurance has been introduced by some delegated validators to resolve the risk of slashing.
https://www.coinbase.com/learn/market-updates/around-the-block-issue-14
Justin Mart and Ryan Yi, May 2021.
Where are all the DeFi insurance markets?
Insurance may not be the most exciting part of crypto, but it is a key piece that’s missing in DeFi today. The lack of liquid insurance markets prevents the maturation of DeFi and holds back additional capital from participating. Let’s take a look at why, and explore the different paths to providing insurance protection.
Insurance: The “so what?”
Insurance empowers individuals to take risks by socializing the cost of catastrophic events. If everyone was nakedly exposed to all of life’s risks, we would be much more careful. Readily available insurance coverage gives us confidence to deploy capital in emerging financial markets.
Let’s look at the relationship between risk and yield. If you squint, risk and yield are inextricably linked -- higher yields imply more implicit risk. At least, this is true for efficient and mature markets. While DeFi isn’t a mature market today, the significant yields are still an indication of higher latent risk.
Principally, this risk comes from the complexity inherent to DeFi and programmatic money. Hidden bugs in the code are nightmare fuel for investors. Even worse, quantifying this risk requires a mix of rare technical skill combined with what seems like black-magic guesses. The industry is simply too nascent to have complete confidence in just how risky DeFi really is. This makes insurance even more critical.
Clearly, strong insurance markets are a critical missing primitive and would unlock significant new capital if solved. So why haven’t we seen DeFi insurance markets at scale?
There are a few challenges in sourcing liquidity:
Who acts as underwriter, and how is risk priced? No matter the model, someone has to underwrite policies or price insurance premiums. Truth is, nobody can confidently assess the risks inherent in DeFi, as this is a new field and protocols can break in unexpected ways. The best indication of safety may well be the Lindy Effect -- the longer protocols survive with millions in TVL (total value locked), the safer they are proven to be.
Underwriter yield must compete with DeFi yield. When DeFi yields are subsidized by yield farming, even “risk-adjusted” positions often favor participating directly in DeFi protocols instead of acting as an underwriter or participating in insurance markets.
Yield generation for underwriters is generally limited to payments on insurance premiums. Traditional insurance markets earn a majority of revenue from re-investing collateral into safe yield-generating products. In DeFi what is considered a “safe” investment for pooled funds? Placing them back in DeFi protocols re-introduces some of the same risks they are meant to cover.
And there are a few natural constraints on how to design insurance products:
Insurance markets need to be capital efficient. Insurance works best when $1 in a pool of collateral can underwrite more than $1 in multiple policies covering multiple protocols. Markets that do not offer leverage on pooled collateral risk capital inefficiency, and are more likely to carry expensive premiums.
Proof of loss is an important guardrail. If payouts are not limited to actual losses, then unbounded losses as a result of any qualifying event can bankrupt an entire marketplace.
These are just some of the complications, and there is clearly a lot of nuance here. But given the above we can start to understand why DeFi insurance is such a challenging nut to crack.
So what are the possible insurance models, and how do they compare?
We can define different models by looking at key parameters:
Discrete policies or open markets: Policies that provide cover for a discrete amount of time and with well-defined terms, or open markets that trade the future value of a token or event? These coincide with liquid vs locked-in coverage.
On-chain or off-chain: Is the insurance mechanic DeFi native (and perhaps subject to some of the same underlying risks!) or more traditional with structured policies from brick-and-mortar underwriters?
Resolving claims: How are claims handled, and who determines validity? Are payouts manual, or automatic? If coverage is tied to specific events, be careful to note the difference between economic and technical failure, where faulty economic designs may result in loss even if the code operated as designed.
Capital efficiency: Does the insurance model scale beyond committed collateral? If not, there may be natural constraints on the amount and price of available coverage.
Let’s look at a few of the leading players to see how they stack up:
Straddling the DeFi and traditional markets, Nexus Mutual is a real Insurance Mutual (even requiring KYC to become a member), and offers traditional insurance contracts with explicitly defined coverage terms for leading DeFi protocols. Claim validity is determined by mutual members, and they use a pooled-capital model for up to 10x capital efficiency.
This model clearly works, and they carry the most coverage in DeFi today with $500M in TVL underwriting $900M in coverage, but this still pales in comparison to the $50B+ locked in DeFi today.
Bundling these models together, there are several projects building either prediction markets or futures contracts, both of which can be used as a form of insurance contracts.
In the case of futures contracts, short selling offers a way to hedge the price of tokens through an open market. Naturally, futures contracts protect against pure price risk, paying out if the spot price declines beyond the option price at expiry. This includes the whole universe of reasons why a token price could decline, which includes exploits and attacks.
Prediction markets are a kind of subset of options markets, allowing market participants to bet on the likelihood of a future outcome. In this case, we can create markets that track the probability of specific kinds of risks, including the probability that a protocol would be exploited, or the token price would decline.
Neither options nor prediction markets target insurance as a primary use case, which makes them less efficient than pure insurance plays: they generally struggle with capital efficiency (with limited leverage or pooled models today) and inefficient payouts (prediction markets have an oracle challenge).
Exploits in DeFi protocols are discrete attacks, bending the code to an attacker’s favor. They also leave an imprint, stranding the state of the protocol in a clearly attacked position. What if we can develop a program that checks for such an attack? These programs could form the foundation for payouts on insurance markets.
This is the fundamental idea behind Risk Harbor. These models are advantageous, given that payouts are automatic, and incentives are aligned and well understood. These models can also make use of pooled funds, enabling greater capital efficiency, and carry limited to no governance overhead.
However, it may be challenging to design such a system. As a thought experiment, if we could programmatically check if a transaction results in an exploit, why not just incorporate this check into all transactions up front, and deny transactions that would result in an exploit?
DeFi yields can be significant, and most users would happily trade a portion of their yield in return for some measure of protection. Saffron pioneers this by letting users select their preferred risk profile when they invest in DeFi protocols. Riskier investors would select the “risky tranche” which carries more yield but loses out on liquidation preferences to the “safe tranche” in the case of an exploit. In effect, riskier participants subsidize the cost of insurance to risk averse participants.
Traditional insurance
For everything else, traditional insurance companies are underwriting specific crypto companies and wallets, and may someday begin underwriting DeFi contracts. However this is usually rather expensive, as these underwriters are principled and currently have limited data to properly assess the risk profiles inherent to crypto products.
The fundamental challenges around pricing insurance coverage, competing with DeFi yields, and assessing claims, in combination with limited capital efficiency, have kept insurance from gaining meaningful traction to date.
These challenges collectively result in the largest bottleneck: capturing enough underwriting capital to meet demand. With $50B deployed in DeFi, we clearly need both a lot of capital and capital efficient markets. How do we solve this?
One path could be through protocol treasuries. Most DeFi projects carry significant balance sheets denominated in their own tokens. These treasuries have acted as pseudo-insurance pools in the past, paying out in the event of exploits. We can see a future where this relationship is formalized, and protocols choose to deploy a portion of their treasury as underwriting capital. This could give the market confidence to participate, and they would earn yield in the process.
Another path could be through smart contract auditors. As the experts in assessing risk, part of their business model could be to charge an additional fee for their services, and then back up their assessments by committing proceeds as underwriting capital.
Whatever the path, insurance is both critical and inevitable. Current models may be lacking in some areas, but will evolve and improve from here.
https://medium.com/intotheblock/a-primer-on-defis-risks-f3fdd1f55341
Lucas Outumuro, Mar 2022
A practical guide to risks in crypto, based on $5B+ in losses across hacks, rug pulls and economic incidents
Crypto’s rapid expansion has attracted hundreds of billions. Along with this growth, unfortunately exploits and scams have become widespread. These risks are still widely misunderstood both at the protocol and the individual level. Through this piece we hope to clarify the most relevant technical and economic risks inherent to DeFi protocols, analyzing some of the most prominent exploits and factors users should consider in order to manage their exposure to these risks.
DeFi protocols are exposed to a variety of risks — from rug pulls to hacks to economic attacks. These terms are often used interchangeably… but what do they really mean?
Before differentiating these attacks, it helps to first understand the types of risks involved. These can be broadly categorized the following way:
Technical risks — programmatic functions used in an adversarial manner to withdraw funds from protocols
Economic risks — using levers key to protocols in unintended ways to create imbalances which result in losses for depositors (and gains for the attacker)
Meme-ification of DeFi risks
In this context, we would classify hacks as purely technical, external attacks, rug pulls as internal, deliberate misuse of technical factors and economic exploits as those taking advantage of economic protocol imbalances.
In order to understand how susceptible DeFi protocols are to each type of risk, we delve into the 50 largest incidents that have taken place to date. These include bridge hacks worth hundreds of millions of dollars, economic collapses of algorithmic stablecoins and outright robbery of user funds.
Across these 50 incidents, we estimate users have lost over $5 billion in DeFi applications. As we’ll cover throughout the piece, the largest exploits have come from bridge hacks, as well as the particularly large collapse of an algorithmic stablecoin.
Here is the distribution of the main risk factors behind the 50 largest DeFi attacks.
Approximately two thirds of the largest incidents have stemmed from a technical risk. Meanwhile just under a quarter have been due to economic imbalances resulting in vulnerabilities and 10% was a mix between the two types of risks.
We further classify programmatic attacks depending on whether they are due to smart contract bugs, private key management, front-end exploits or rug pulls.
Here we can observe the vast majority of technical attacks are due to unintended bugs present in protocols’ smart contracts. In fact, 46% of the 50 attacks considered originated from such risk. Among these, some of the most common were re-entrancy bugs such as the ones exploited in the infamous hack of The DAO.
While the Ronin bridge did use a multi-sig wallet, many of the other attacks in this subcategory are due to a single address controlling access to protocol funds. Such improper private key management can lead to both hacks and rug pulls. This acts as a central point of failure for hackers to exploit, while also granting developers the ability to deliberately withdraw user funds.
Protecting one’s funds against these risks may seem daunting, but it’s not impossible. Towards the end of this piece we’ll provide actionable steps users can take to mitigate technical risks, but before that let’s dive into economic risks, which have led to even larger losses.
Despite technical factors being behind most attacks in DeFi, a greater dollar value has actually been lost due to economic risks.
Economic risks can be further classified into four subcategories: supply-side, demand-side, stability mechanisms and asset health. These factors are interdependent to a certain extent, though typically losses can be traced back to one or two of these subcategories per incident.
Supply-side risks deal mostly with the inflows and outflows of liquidity and its concentration. Here the dynamics are very different in comparison to the ones observed in technical attacks.
For instance, let’s look at a recent economic incident that resulted in at least $80 million in losses for depositors of a Curve pool. The MIM stablecoin was one of DeFi yield farmers’ go-to assets for high returns up until January 26, 2022. That day, it was exposed that the project’s founder Daniele Sesta had been collaborating on another project, Wonderland, with an anonymous co-founder who is an ex-convict associated with the millions in funds lost through the Quadriga centralized exchange in Canada. Additionally, part of the collateral backing MIM was in Wonderland’s TIME token. As the co-founder’s identity was revealed, there were vast economic shocks throughout DeFi.
Michael Patryn, who had been pseudonymously known as Sifu, was CFO of Wonderland, and Abracadabra’s founder admitted to knowing his true identity. This led to significant losses in projects tied to Daniele Sesta. In the case of the Curve MIM pool, depositors rushed to withdraw liquidity.
Approximately $2 billion in liquidity was withdrawn from the pool within hours. Since the pool consisted of MIM and 3Crv (a token representing another pool split roughly equally between USDT, USDC and DAI), depositors opted to withdraw funds in 3Crv to avoid risks related to MIM. This caused the composition of assets in the pool to become imbalanced.
As the pool shifted to being primarily MIM, depositors were charged increasingly high exit fees if they opted to withdraw funds in 3Crv (or any of its components). With liquidity in the pool getting thinner and MIM losing its peg to the dollar, exit fees grew to the point where one address realized an $80 million loss from a withdrawal. Aggregate losses across all depositors from this incident were likely even larger.
While this is not an exploit per se, the Curve MIM incident highlights how economic risks can result in sizable losses for DeFi users. When it comes to economic attacks, the most frequent variable involved is price manipulation, typically of an asset with a relatively low market capitalization or little liquidity. These attacks take advantage of vulnerabilities in protocols’ stability mechanisms, particularly the oracles they use.
The lending protocols Cream Finance and Compound have fallen victim to this, where attackers use flash loans to manipulate prices of assets, allowing them to artificially inflate the price of their collateral and grow their borrowing power to an unsustainable point. Since Cream and Compound at the time used on-chain oracles, the attacker was able to do this all in one block through a flash loan.
Despite flash loans facilitating many of these attacks, they are not the main reason behind them. These are signs of protocols being susceptible to manufactured imbalances, and sometimes can even be done manually without flash loans as was the case with the $200M exploit of the Venus protocol in BSC. We will further discuss how stability risks involving oracles, liquidators and arbitrageurs can be mitigated by users.
Overall, these economic risks can be very complex, but can be monitored by looking at factors such as the changes in liquidity in these protocols and the oracles that they use. There is more users can do to protect themselves from these risks as well as technical ones.
As previously noted, 66% of the largest 50 incidents in DeFi were due to technical risks, primarily smart contract bugs. Since the vast majority of people are not well-versed when it comes to smart contract code, it begs the question: what can we do to prevent these risks?
The first and most simple step to take is to check if the protocol has been audited. In terms of incidence, a substantial number of the exploited protocols analyzed were unaudited.
It is worth noting that protocols reviewed by more than one auditor can still be exploited. For instance, the Poly Network, which suffered a $611M hack, had been audited by both Certik and NCC Group. Given the track records of these auditors, users can assess the value they provide and potentially assign weights to the likelihood of protocols being exploited based on previous incidents. Here, however, it is worth looking not only at the number of attacks suffered by protocols each auditor reviewed, but also at the number of protocols they have audited without incident and the value locked those protocols contain.
Aside from smart contract bugs, we also pointed out the risk that private key management can pose. It is recommended that users conduct due diligence on who has access to private keys behind a protocol. Ideally, not only do these protocols have multi-sigs with several (10+) addresses, but also include reputable people outside their organization. This is the equivalent of directors for DeFi protocols, except with greater exposure to vulnerabilities than in the traditional sense. By adhering to this standard, protocols are less likely to have their private keys accessed and to rug pull users since they would not be programmatically able to do so.
From the economic perspective, it is worth users monitoring key metrics that can affect the safety of their deposits. For deposits in AMMs, particularly stableswaps like Curve, it is worth keeping an eye on liquidity and its composition across assets. Furthermore, the concentration of liquidity in whales’ addresses can also be helpful to assess how vulnerable positions can be to slippage or exit fees if these addresses withdraw funds.
When it comes to lending protocols, it is paramount that they use oracles that either track off-chain data or use time-weighted average prices (TWAPs) for assets. These help prevent price manipulations that can result in losses for depositors. Similarly, if there are illiquid, small-cap assets listed in shared liquidity pools, these can also be artificially inflated by attackers looking to extract funds.
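To make the TWAP idea concrete, here is a minimal sketch, assuming a toy `TwapOracle` class and made-up prices rather than any specific protocol’s oracle, of how averaging over a window of blocks blunts a single-block price manipulation:

```python
from collections import deque

class TwapOracle:
    """Toy time-weighted average price oracle over the last `window` blocks."""

    def __init__(self, window: int):
        self.observations = deque(maxlen=window)  # one spot price per block

    def record(self, spot_price: float) -> None:
        self.observations.append(spot_price)

    def price(self) -> float:
        # Blocks are assumed equally spaced, so the TWAP is a simple mean.
        return sum(self.observations) / len(self.observations)

oracle = TwapOracle(window=30)
for _ in range(29):
    oracle.record(1.00)   # the asset trades at $1.00 for 29 blocks
oracle.record(5.00)       # an attacker pumps the spot price to $5.00 for one block

print("spot price a naive oracle would report: $5.00")
print(f"TWAP used to value collateral: ${oracle.price():.2f}")  # ~$1.13
```

A protocol valuing collateral at the ~$1.13 TWAP rather than the $5.00 spot leaves a flash-loan attacker with far less inflated borrowing power.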
These cover risks from the protocol side, but there are also best practices users can implement to avoid further downside. The most common of these include using a hardware wallet, having one “burner” wallet for active use of small funds and avoiding clicking on suspicious links or interacting with phishing scams. There is significant information out there on these, so we will not dive into them in further detail.
Risks abound throughout crypto and DeFi is no exception. While DeFi protocols do provide higher yield opportunities than those available in traditional finance, they are susceptible to greater risks, with very different dynamics. Although it is certainly complex to keep track of all these risks, it is worth keeping these in mind, especially if you are depositing large amounts.
Although incidents have become larger as crypto has grown in magnitude, the industry is establishing best practices to better mitigate these risks. More protocols now opt for audits despite their hefty costs, and it has become increasingly rare for them not to use multi-sigs for management of deposits. Oracles used for prices are also more resilient as developers learn from previous attacks. Being open-source, crypto can provide transparent insights into these attacks, strengthening the industry as a whole. Ultimately, risks are likely to remain throughout crypto, but there is more and more that developers and users alike can do to mitigate them.
https://finematics.com/lending-and-borrowing-in-defi-explained/
Jakub, Nov 2020
Have you ever wondered how lending and borrowing works in DeFi? How are the supply and borrow rates determined? And what is the main difference between the most popular lending protocols such as Compound and Aave? We’ll answer all of these questions in this article.
What is Lending and Borrowing
Let’s start with what lending and borrowing is.
Lending and borrowing is one of the most important elements of any financial system. Most people at some point in their life are exposed to borrowing, usually by taking a student loan, a car loan or a mortgage.
The whole concept is quite simple. Lenders a.k.a. depositors provide funds to borrowers in return for interest on their deposit. Borrowers or loan takers are willing to pay interest on the amount they borrowed in exchange for having a lump sum of money available immediately.
Traditionally, lending and borrowing is facilitated by a financial institution such as a bank or a peer-to-peer lender.
When it comes to short term lending & borrowing, the area of traditional finance that specializes in it is called the money market. The money market provides access to multiple instruments such as CDs (certificates of deposits), Repos (repurchase-agreements), Treasury Bills and others.
Lending and Borrowing in Crypto
In the cryptocurrency space, lending and borrowing is accessible either through DeFi protocols such as Aave or Compound or by CeFi companies, for instance, BlockFi or Celsius.
CeFi or centralized finance operates in a very similar way to how banks operate. This is also why sometimes we call these companies “crypto banks”. BlockFi, for example, takes custody over deposited assets and lends them out to either institutional players such as market makers or hedge funds or to the other users of their platform.
Although the centralized lending model works just fine, it is susceptible to the same problems as centralized crypto exchanges – mainly losing customer deposits to hacks or other forms of negligence (bad loans, inside jobs etc.).
You can also argue that the CeFi model basically goes against one of the main value propositions of cryptocurrencies – self-custody of your assets.
This is also where DeFi lending comes into play.
Lending and Borrowing in DeFi
DeFi lending allows users to become lenders or borrowers in a completely decentralized and permissionless way while maintaining full custody over their coins.
DeFi lending is based on smart contracts that run on open blockchains, predominantly Ethereum. This is also why DeFi lending, in contrast to CeFi lending, is accessible to everyone without a need of providing your personal details or trusting someone else to hold your funds.
Aave and Compound are two main lending protocols available in DeFi. Both of the protocols work by creating money markets for particular tokens such as ETH, stable coins like DAI and USDC or other tokens like LINK or wrapped BTC.
Users, who want to become lenders, supply their tokens to a particular money market and start receiving interest on their tokens according to the current supply APY.
The supplied tokens are sent to a smart contract and become available for other users to borrow. In exchange for the supplied tokens, the smart contract issues other tokens that represent the supplied tokens plus interest. These tokens are called cTokens in Compound and aTokens in Aave and they can be redeemed for the underlying tokens. We’ll dive deeper into their mechanics later in this article.
It’s also worth mentioning that in DeFi, at the moment, pretty much all of the loans are overcollateralized. This means that a user who wants to borrow funds has to supply tokens in the form of collateral that is worth more than the actual loan that they want to take.
At this point, you may ask the question – what’s the point of taking a loan if you have to supply tokens that are worth more than the actual amount of the loan taken. Why wouldn’t someone just sell their tokens in the first place?
There are quite a few reasons for this. Mainly, the users don’t want to sell their tokens but they need funds to cover unexpected expenses. Other reasons include avoiding or delaying paying capital gain taxes on their tokens or using borrowed funds to increase their leverage in a certain position.
So, is there a limit on how much can be borrowed?
Yes. The amount that can be borrowed depends on 2 main factors.
The first one – how much funds are available to be borrowed in a particular market. This is usually not a problem in active markets unless someone is trying to borrow a really big amount of tokens.
The second one – what is the collateral factor of supplied tokens. Collateral factor determines how much can be borrowed based on the quality of the collateral. DAI and ETH, for example, have a collateral factor of 75% on Compound. This means that up to 75% of the value of the supplied DAI or ETH can be used to borrow other tokens.
If a user decides to borrow funds, the value of the borrowed amount must always stay lower than the value of their collateral times its collateral factor. If this condition holds there is no limit on how long a user can borrow funds for.
If the value of the collateral falls below the required collateral level, the user would have their collateral liquidated in order for the protocol to repay the borrowed amount.
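As a rough sketch of this rule (the ETH prices and the $12,000 debt below are made-up numbers; only the 75% collateral factor comes from the example above):

```python
def max_borrow_value(collateral_value_usd, collateral_factor):
    """Upper bound on the USD value that can be borrowed against one collateral position."""
    return collateral_value_usd * collateral_factor

def is_liquidatable(collateral_value_usd, collateral_factor, borrowed_value_usd):
    """A position becomes liquidatable once the debt exceeds collateral value * factor."""
    return borrowed_value_usd > max_borrow_value(collateral_value_usd, collateral_factor)

# 10 ETH supplied at $2,000 each, with the 75% collateral factor mentioned above
collateral = 10 * 2_000
print(max_borrow_value(collateral, 0.75))         # 15000.0 -> up to $15,000 can be borrowed
print(is_liquidatable(collateral, 0.75, 12_000))  # False: debt is within the limit
# If ETH drops to $1,500, the same $12,000 debt now exceeds the $11,250 limit
print(is_liquidatable(10 * 1_500, 0.75, 12_000))  # True: the position can be liquidated
```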
The interest that lenders receive and the interest that borrowers have to pay are determined by the ratio between supplied and borrowed tokens in a particular market.
The interest paid by borrowers is the interest earned by lenders, and because only a fraction of the supplied tokens is borrowed at any given time, the borrow APY is higher than the supply APY in a particular market.
The interest APYs are calculated per Ethereum block. Calculating APYs per block means that DeFi lending provides variable interest rates that can change quite dramatically depending on the lending and borrowing demand for particular tokens.
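The exact rate curves differ per market and are set by each protocol’s governance; the sketch below uses a hypothetical linear model purely to show how both APYs fall out of the utilization ratio:

```python
def borrow_and_supply_apy(supplied, borrowed, base_rate=0.02, slope=0.20, reserve_factor=0.10):
    """Toy interest rate model: rates rise linearly with utilization."""
    utilization = borrowed / supplied if supplied else 0.0
    borrow_apy = base_rate + slope * utilization
    # Interest paid by borrowers is spread across all suppliers, minus a protocol reserve cut.
    supply_apy = borrow_apy * utilization * (1 - reserve_factor)
    return borrow_apy, supply_apy

# 1,000,000 DAI supplied and 600,000 DAI borrowed -> 60% utilization
borrow, supply = borrow_and_supply_apy(1_000_000, 600_000)
print(f"borrow APY: {borrow:.2%}, supply APY: {supply:.2%}")  # 14.00% vs 7.56%
```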
This is also where one of the biggest differences between Compound and Aave comes in. Although both protocols offer variable supply and borrow APYs, Aave also offers a stable borrow APY. The stable APY is fixed in the short term, but it can change in the long term to accommodate changes in the supply/demand ratio between tokens.
On top of stable APY, Aave also offers flash loans where users can borrow funds with no upfront collateral for a very short period of time – one Ethereum transaction. More on the flash loans here.
To better understand how the DeFi lending protocols work, let’s dive into an example.
How Does It Work
Let’s dive deeper into the mechanics of Compound and cTokens.
In our example, a user deposits 10 ETH into Compound. In exchange for 10 ETH, Compound issues cTokens in this case cETH.
How many cETH tokens will the user receive? This depends on the current exchange rate for a particular market, in this case, ETH. When a new market is created the exchange rate between cTokens and underlying tokens is set to 0.02. This is an arbitrary number, but we can assume that each market starts at 0.02. We can also assume that this exchange rate can only increase with each Ethereum block.
If the user supplied 10 ETH when the market was just created they would’ve received 10/0.02=500 cETH. Because the ETH market has been operating for a while we can assume that the exchange rate is already higher. Let’s say it is 0.021.
This means that the user would receive 10/0.021=~476.19 cETH. If the user decided to immediately redeem their ETH, they should receive roughly the same amount as it was deposited, which is around 10 ETH.
Now, here is when the magic happens. The user holds their cETH. This is just another ERC20 token and can be sent anywhere. The main difference is that cETH is necessary to redeem the underlying ETH from Compound. On top of that, cETH keeps accumulating interest, even if it is sent from the original wallet that initiated the deposit to another wallet.
With each Ethereum block, the exchange rate would increase. The rate of the increase depends on the supply APY which is determined by the ratio of supplied/borrowed capital.
In our example, let’s say that the exchange rate from cETH to ETH increases by 0.0000000002 with each block. Assuming that the rate of increase stays the same for a month we can easily calculate the interest that can be made during that time.
Let’s say on average we have 4 blocks per minute. This gives us the following numbers.
0.0000000002*4*60*24*30=0.00003456. Now we can add this number to the previous exchange rate. 0.021+0.00003456=0.02103456.
If the user decides to redeem their ETH they would receive 476.19*0.02103456=~10.0165 ETH. So the user just made 0.0165 ETH in a month which is around a 0.16% return on their ETH. It’s worth noting that the original amount of cETH that the user received hasn’t changed at all and only the change in the exchange rate allowed the user to redeem more ETH than was initially deposited.
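The same arithmetic as a short script; the 0.021 starting exchange rate, the per-block increase, and the four-blocks-per-minute assumption are the example’s own numbers, not live Compound data:

```python
deposit_eth = 10
exchange_rate = 0.021                       # current cETH -> ETH exchange rate
ceth_minted = deposit_eth / exchange_rate
print(f"cETH received: {ceth_minted:.2f}")  # ~476.19

rate_increase_per_block = 0.0000000002
blocks_per_month = 4 * 60 * 24 * 30         # ~4 blocks per minute for 30 days
new_exchange_rate = exchange_rate + rate_increase_per_block * blocks_per_month
print(f"exchange rate after one month: {new_exchange_rate:.8f}")  # 0.02103456

redeemable_eth = ceth_minted * new_exchange_rate
print(f"ETH redeemable: {redeemable_eth:.4f}")                    # ~10.0165
print(f"monthly return: {redeemable_eth / deposit_eth - 1:.3%}")  # ~0.165%
```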
Aave uses a similar model with interest being accumulated every single block. The main difference is that aTokens’ value is pegged to the value of the underlying token at a 1:1 ratio. The interest is distributed to aToken holders directly by continuously increasing their wallet balance. aToken holders can also decide to redirect their stream of interest payments to another Ethereum address.
When it comes to borrowing, users lock their cTokens or aTokens as collateral and borrow other tokens. Collateral earns interest, but users cannot redeem or transfer assets while they are being used as collateral.
As we mentioned earlier the amount that can be borrowed is determined by the collateral factor of the supplied assets. There is also a smart contract that looks at all the collateral across user’s account and calculates how much can be safely borrowed without getting liquidated immediately. To determine the value of collateral Compound uses its own price feed that takes prices from several highly liquid exchanges. Aave on the other hand relies on Chainlink and falls back to their own price feed if necessary.
If a user decides to repay the borrowed amount and unlock their collateral, they also have to repay the accrued interest on their borrowed assets. The amount of accrued interest is determined by the borrow APY and it is also increased automatically with each Ethereum block.
Risks
DeFi lending, although reducing a lot of risks associated with centralized finance, comes with its own risks.
Mainly the ever-present smart contract risks, but also quickly changing APYs. For example, during the last yield farming craze, the borrow APY on the BAT token went up to over 40%. This could cause unaware users who were not tracking Compound interest rates daily to get liquidated by having to repay more than expected in the same period of time.
https://messari.io/article/liquid-staking-with-lido
Ashu Pareek, Jun 2022
Lido is a non-custodial liquid staking protocol for Ethereum, Solana, Kusama, Polygon, and Polkadot.
Lido abstracts away the challenges and risks around maintaining staking infrastructure by allowing users to delegate their assets, in any sum, to professional node operators.
Stakers receive liquid, tokenized staking derivatives, also known as Lido-staked assets (stAssets), to represent their claim on the underlying stake pool and its yield.
stAssets effectively unlock liquidity and remove the opportunity cost of staking since they can be used on a number of popular DeFi protocols to generate additional yield.
Node operators are added to Lido through a DAO vote and are responsible for the actual staking.
Lido is currently the fourth largest protocol by total value locked (TVL) and accounts for almost one-third of all staked ETH.
Staking, a cryptoeconomic primitive that allows participants to earn yield in exchange for locking tokens, has taken center stage over the past two years. Much of the attention comes from the shift to Proof-of-Stake (PoS) as the dominant consensus mechanism for smart contract platforms. Under PoS, instead of using computational power, validators lock (“stake”) a certain amount of the network’s native cryptoasset as collateral to create new blocks. In return, they earn inflationary rewards and transaction fees.
Beyond reducing energy consumption and increasing throughput, the shift to PoS also expands participation in the consensus process. However, most PoS networks still have high barriers to entry and opportunity costs for prospective stakers. Large minimum capital (stake) requirements, technical complexity around the validation process, and extended lockup periods stand in the way of their ability and willingness to stake.
For Ethereum, Solana, Kusama, Polygon, and Polkadot tokenholders, Lido is simultaneously opening up the opportunity to stake while reducing the opportunity cost of staking. Not only does this democratize access and create a more robust DeFi ecosystem, but it can also lead to more secure decentralized PoS networks as Lido progresses along in its roadmap.
The Lido liquid staking protocol launched a few weeks after the Beacon Chain in December 2020. After gaining traction, Lido went multi-chain, adding support for Terra (March 2021), Solana (September 2021), Kusama (February 2022), Polygon (March 2022), and Polkadot (June 2022). Recently, however, the Lido DAO voted against launching on the Terra reboot (Terra 2.0). Lido also continued to diversify its validator set by onboarding additional node operators through governance.
The Lido DAO governs the five Lido liquid staking protocols. While each of the five supported PoS networks, Ethereum, Solana, Kusama, Polygon and Polkadot, have differences in design, the general mechanics around their liquid staking protocols are similar.
Node Operators
The first critical component of a liquid staking protocol is its node operators because they are responsible for the actual staking. As of now, node operators are added and removed through the Lido DAO.
Lido is also non-custodial, meaning node operators can’t directly access user funds. Instead, they must use a public validation key to validate transactions with staked assets. In order to align incentives, Lido node operators are compensated with a commission on the staking rewards generated from delegated funds.
Staking Contracts
Users delegate stake to node operators through Lido’s smart contracts. The three main smart contracts are the NodeOperatorsRegistry, the staking pool, and the LidoOracle.
The NodeOperatorsRegistry holds the list of approved node operators.
The staking pool is the protocol’s central smart contract. Users interact with the staking pool by depositing and withdrawing their cryptoassets and minting/burning stAssets. The staking pool distributes the deposits uniformly (round-robin) to node operators using their addresses and validation keys. The staking pool contract is also responsible for distributing fees to the Lido DAO treasury and node operators.
The LidoOracle is responsible for keeping track of staking balances. The net staking reward, the difference between the staking yield and any slashing penalties, is tallied up daily and sent to the staking pool contract. The staking pool distributes 10% of the net staking reward by minting a proportional amount of the stAsset: 5% goes to node operators and 5% to the Lido DAO treasury. The remaining 90% of net staking rewards go to stAsset holders. Depending on the network, the rewards either show up as increases in the stAsset through its balance (via the rebasing mechanism) or its exchange rate.
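A minimal sketch of that split, using the 10% fee and the 5/5 division described above with a made-up daily net reward of 100 units:

```python
def distribute_daily_rewards(net_staking_reward, fee=0.10):
    """Split one day's net staking reward per the fee schedule described above."""
    protocol_cut = net_staking_reward * fee
    return {
        "node_operators": protocol_cut / 2,                  # 5% of the net reward
        "lido_dao_treasury": protocol_cut / 2,               # 5% of the net reward
        "staker_rebase": net_staking_reward - protocol_cut,  # remaining 90% to stAsset holders
    }

print(distribute_daily_rewards(100.0))
# {'node_operators': 5.0, 'lido_dao_treasury': 5.0, 'staker_rebase': 90.0}
```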
Lido-staked Assets (stAssets)
In exchange for depositing assets into one of Lido’s liquid staking protocols, users receive a staking derivative (stAsset). This tokenized claim on the stake pool effectively unlocks the liquidity of staked assets while they continue to secure their respective networks and earn rewards. stAssets come in two different forms: rebase and shares.
Rebasing tokens (stETH, stKSM, stDOT) are minted at a 1:1 ratio with the deposit asset. In order to match the underlying stake, the token balance rebases every day to factor in accrued staking rewards. The daily rebase occurs regardless of where the stAsset is acquired; whether it’s directly from Lido, a decentralized exchange (DEX), or another holder.
Value-accruing tokens (stSOL, stMATIC) earn staking rewards through appreciated value, reflected in the stAsset to deposit asset (e.g., stSOL:SOL) exchange rate. Rebase tokens can be converted to value-accruing tokens by being “wrapped.”
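The two accounting styles can be sketched with simplified toy classes (not Lido’s actual contracts): a rebasing token grows your balance, while a value-accruing token grows its exchange rate.

```python
class RebasingStAsset:
    """stETH-style: the balance grows, 1 token stays ~1 unit of the deposit asset."""
    def __init__(self, balance):
        self.balance = balance

    def apply_daily_reward(self, reward_rate):
        self.balance *= 1 + reward_rate          # balance rebases upward

class ValueAccruingStAsset:
    """stSOL-style: the balance is fixed, the token's exchange rate appreciates."""
    def __init__(self, balance, exchange_rate=1.0):
        self.balance = balance
        self.exchange_rate = exchange_rate

    def apply_daily_reward(self, reward_rate):
        self.exchange_rate *= 1 + reward_rate

    def underlying_value(self):
        return self.balance * self.exchange_rate

steth = RebasingStAsset(10.0)
stsol = ValueAccruingStAsset(10.0)
steth.apply_daily_reward(0.0001)
stsol.apply_daily_reward(0.0001)
print(steth.balance)                             # ~10.001 stETH held
print(stsol.balance, stsol.underlying_value())   # still 10 stSOL, now worth ~10.001 SOL
```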
As of this writing, stETH accounts for over 98% of the value of stAssets in circulation. The stETH token is currently a purely synthetic, closed-end derivative since it can’t be directly redeemed for its underlying ETH until after The Merge. Instead, holders looking to convert their stETH to ETH rely on exchange (e.g., Curve, Uniswap, and FTX) pricing/liquidity.
DeFi Integrations
DEXs
The price of 1 stETH should never really go above 1 ETH. This “ceiling” is in place because 1 ETH can always be used to mint 1 stETH through the Lido staking contract. However, the arbitrage mechanics aren’t as clear the other way around.
Since stETH can’t be burned for its underlying ETH on the Lido protocol, the exchange rate currently relies on the market’s price discovery under the ceiling. A number of factors come into play for the current (and historical) discount including the fact that stETH has less liquidity, less utility (e.g., can’t be used to pay gas fees), and more technical (smart contract) risk than ETH. However, the stETH price does not usually dip very far below 1:1 with ETH because it then starts offering arbitrageurs an attractive discount at future (post-unlock) redemption value.
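For intuition, here is a rough sketch of that redemption arbitrage; the 0.96 price, 4% staking APR, and nine-month horizon are made-up illustrative inputs, and real returns would also need to price in smart contract and Merge-timing risk:

```python
def discount_arbitrage_return(steth_price_in_eth, staking_apr, months_until_redemption):
    """Rough annualized, ETH-denominated return from buying discounted stETH and
    holding it until 1:1 redemption (ignores smart contract and timing risk)."""
    years = months_until_redemption / 12
    total_growth = (1 / steth_price_in_eth) * (1 + staking_apr) ** years
    return total_growth ** (1 / years) - 1

# stETH bought at 0.96 ETH, ~4% staking APR, redemption assumed possible in nine months
print(f"{discount_arbitrage_return(0.96, 0.04, 9):.1%}")   # ~9.8% annualized in ETH terms
```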
Even for other stAssets, DEX liquidity is still useful because it gives holders the option to exit positions instantly, without having to wait through the stake deactivation period.
Lending and Borrowing
While Lido stakers can just hold their tokens or provide low risk (from impermanent loss) liquidity to a DEX, they multiply their opportunities when they start using stAssets as collateral. Some notable lending protocol integrations include Aave and MakerDAO for stETH and Solend for stSOL.
The most popular strategy so far has been recursive borrowing to get further leverage on stAssets. An example is leveraging stETH on Aave, which allows users to borrow up to 70% of collateral value. Repeatedly borrowing ETH and then resupplying stETH under this parameter allows users to triple their staking rewards, albeit with added risk to themselves and stETH.
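A back-of-the-envelope sketch of that looping strategy, using a hypothetical helper and the 70% loan-to-value figure from the paragraph above, shows why the exposure converges to roughly 3.3x, i.e., about triple the base staking yield:

```python
def looped_exposure(initial_steth, ltv=0.70, loops=20):
    """Total stETH exposure from repeatedly borrowing ETH against stETH
    and restaking it, at a fixed loan-to-value ratio."""
    exposure, deposit = 0.0, initial_steth
    for _ in range(loops):
        exposure += deposit
        deposit *= ltv      # each round, borrow ltv * deposit worth of ETH and restake it
    return exposure

print(looped_exposure(1.0, loops=5))    # ~2.77x after five loops
print(looped_exposure(1.0, loops=50))   # ~3.33x, the geometric-series limit 1 / (1 - 0.70)
```

The extra exposure also multiplies the downside: a drop in the stETH:ETH rate hits the looped position several times over, which is why the added risk noted above cuts both ways.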
The biggest difference between Rocket Pool and Lido is the validator set. Lido’s approach concentrates validators with professional, carefully selected node operators. Rocket Pool’s goal is to allow permissionless entry into the validator set and to secure stake through economic incentives rather than reputation/past performance. While Rocket Pool’s system does lead to wider participation in the validation process, it also creates capital inefficiency (i.e., requiring node operators to put up 16/32 ETH for each validator), which makes scaling a challenge.
Another major difference between the two protocols is around liquidity. The Lido DAO currently spends over 4 million LDO per month to incentivize liquidity across chains and their respective DEXs, with the vast majority of spend on the stETH:ETH pair. Rocket Pool, on the other hand, has no spend allocated towards liquidity. Lido’s incentive system boosts demand for stETH by simultaneously reducing slippage and creating a sort of “built-in”, base yield for stETH holders.
The bottleneck around onboarding stake, less liquidity and yield for rETH, along with the almost 1 year late start makes it difficult for Rocket Pool to catch up at this point. Thus, odds are high that stETH is already the schelling point for non-custodial liquid staked ETH.
The other major player in liquid staked ETH is Binance. Binance, however, is not a direct competitor given its custodial nature. And, while it does issue a liquid tokenized derivative (bETH), the token doesn’t accrue value when outside of the staker’s Binance wallet, making it considerably inferior to stETH and Rocket Pool’s rETH. bETH also lacks critical mass in terms of DeFi integrations. Zooming out, the three largest custodial staking solutions (Kraken, Coinbase and Binance) have deposited almost 2.7 million ETH combined, or about 60% of Lido’s stake. Looking forward, this gap is only widening since Lido is responsible for the vast majority (70%) of recent Eth2 inflows.
The distribution breakdown of the initial supply is 22.2% to early investors, 20% to initial Lido developers, 15% to future employees, and 6.5% to validators and withdrawal key signers. These groups were given roughly 64% of the total supply, with a 1-year lock, followed by a 1-year vesting period. The token generation event (TGE) took place on December 17, 2020; thus, these tokens will fully vest by December 17, 2022.
Given the Lido DAO’s heavy exposure to Ethereum, any setback around the timeline or execution of The Merge could be catastrophic. While a deep discussion around Ethereum 2.0 is out of scope, adverse events such as further delays in the transition date or a crisis of confidence around the transition itself could blow out the ETH:stETH discount. Given stETH’s rehypothecation, cascading liquidations could then put further downward pressure on the price. However, if The Merge executes smoothly, Lido will have a central role in the largest PoS chain.
The second part of the plan addresses the concentration of LDO ownership. It will give stETH holders the ability to provide oversight and veto decisions made by the DAO.
However, with great power comes great responsibility. Lido will play a central role in securing the largest Layer-1 blockchain, and, directly or indirectly, billions of dollars in value, as of this writing. Contrary to what critics might say, this isn’t inherently wrong. The long-term consequences of Lido’s dominance on the Ethereum network, and the crypto ecosystem as a whole, will now depend primarily on the Lido DAO’s decisions around creating a safe and truly decentralized protocol. Regardless, the fact that Lido has ecosystem stakeholders thinking in years rather than days, weeks, or months, is a testament to the protocol’s projected staying power. Combining a robust ecosystem with a powerful flywheel effect is all it takes.
For the next part of this piece we examine the underlying factors behind these risks through the lens of the 50 largest incidents in DeFi. For those interested, the raw data used is available from the source article linked above.
Moreover, a significant portion of attacks were due to private key management, as recently seen with the hack of the Ronin network behind Axie Infinity. These incidents are due to hackers being able to access private keys that have control over the protocols’ smart contracts. In Ronin’s case, there was a multi-sig wallet that required 5 out of 9 addresses to approve transactions, with 4 of these belonging to Sky Mavis, the company behind the network. Ronin’s newsletter points to the attack being socially engineered, suggesting that an impersonator sent a link to Sky Mavis’ team that, after being opened, granted access to their private keys. The fifth address compromised belonged to the Axie DAO, which seemingly had one of its members fall for the same attack.
As a result, an entire industry called liquid staking has spawned to give tokenholders simple, flexible, and capital-efficient access to staking. The leader of this industry is Lido, a non-custodial, cross-chain liquid staking protocol. Lido abstracts away the challenges and risks around maintaining staking infrastructure by allowing users to delegate their assets, in any sum, to professional node operators. In return, stakers receive a tokenized derivative that represents their claim on the underlying stake pool and its yield. These liquid staking derivatives, known as Lido-staked assets (stAssets), can then be traded or used as collateral on a number of popular DeFi protocols.
The Lido DAO was founded in 2020 by a group of prominent individuals and organizations including P2P Validator, ParaFi Capital, Stani Kulechov (Aave), and Twitter personality Jordan Fish. The initial goal was to resolve some of the user experience issues in the Ethereum staking process, i.e., the significant upfront capital investment (32 ETH minimum), technical challenges around the validation process, and illiquid funds (locked until after The Merge).
How Ethereum staking works on Lido. Source: Messari
The two main parties involved are the users (stakers) and node operators (validators). The key protocol components are the staking smart contracts, the tokenized staking derivatives (stAssets), and the external DeFi integrations (e.g., Curve).
The whitelisting process starts with the Lido Node Operator Sub-Governance Group (LNOSG). The group currently consists of a representative from each of Lido’s twenty-one Ethereum node operators. Applications open up when Lido is launching on a new network or if the LNOSG thinks a network can handle and benefit from additional node operators. The committee evaluates applicants based on several factors including reputation, past performance and the security, reliability, and novelty (uncorrelated nature) of their setup. Once the LNOSG evaluates an applicant pool, it submits a list of recommended node operators to the Lido DAO for a tokenholder vote. A good validator set is critical for Lido since earnings and slashing penalties are socialized across all stakers in a given liquid staking protocol (e.g., all stETH holders).
Lido Ethereum staking, under the hood.
How the LidoOracle (“Oracle”) interacts with the staking pool on Ethereum.
In order to keep stETH liquid, the Lido DAO incentivizes the Curve stETH:ETH pool, currently the deepest AMM pool in DeFi. Lido DAO token (LDO) incentives help attract liquidity by bolstering the pool’s APY. This pool, along with others like Uniswap and Balancer, gives stETH holders the ability to exit their staked positions for ETH before the unlock.
Recently, structured products like Index Coop’s have started offering leveraged stETH through Aave while mitigating some of the risks associated with managing collateralized debt. However, an event like a hack, governance attack, slash across multiple node operators, or market-wide liquidity crunch would still result in cascading liquidations and a large dislocation in the stETH/ETH pair. Nonetheless, the ability to borrow assets gets at the heart of capital efficiency and the illiquidity/lockup dilemma Lido is trying to solve.
As shown above, there’s been massive growth in stETH on Aave since the end of February 2022. As of this writing, nearly 45% of stETH in circulation (almost 1.5 million stETH) is deposited in Aave. The second largest pool of stETH provides liquidity to the Curve stETH:ETH pair. Together, almost two thirds of stETH is split between the two protocols.
At the beginning of May, Lido briefly overtook Curve to become the largest DeFi protocol by total value locked (TVL). Since then, Lido’s overall TVL rank has hovered around fourth.
In May 2022, about 70% of new Eth2 staking deposits came from Lido. Overall, Lido accounts for almost one-third of all staked ETH.
However, Lido has a dominant share of the non-custodial, decentralized liquid staking category.
The next closest competitor, Rocket Pool, has gained some ground and fanfare since launching in November 2021. Nevertheless, despite more than tripling its staked ETH, Rocket Pool’s market share in non-custodial liquid ETH staking still remains under 4%.
Even though Ethereum is clearly Lido’s bread-and-butter, Solana staking paints a more nuanced picture around Lido’s multichain expansion. First of all, Lido is not the category winner for liquid staking on Solana; that title goes to Marinade, which currently has roughly 2.5 times Lido’s stake on Solana. However, more broadly, liquid staking is an incredibly small market on Solana, seeing as the top two protocols (Marinade and Lido) hold roughly 2.5% of the total stake. In Eth2, Lido and Rocket Pool (the two top liquid staking protocols) make up almost 36% of the total stake. Some of the reasons behind this could be Solana’s native delegation feature combined with the higher liquidity of staked SOL thanks to the 2–3 day unbonding period. While this complicates the outlook for Lido’s multichain expansion, in terms of competition and product-market fit, its brand, cross-chain synergy, and giant, somewhat cornered, market on Ethereum still paint a rosy picture overall.
LDO is the Lido DAO’s Ethereum-based, ERC-20 governance token. The Lido DAO is the organization responsible for making decisions around the DAO itself (e.g., treasury) and the staking protocols (e.g., node operators, fees, etc.). A tokenholder’s voting power is commensurate with the LDO locked in their voting contract.
The remaining tokens, 36.2% of the total supply, belong to the Lido DAO treasury. They continue to be used on an ad-hoc basis, per DAO governance; uses so far have included liquidity incentives and other DAO initiatives.
The choice to gradually decentralize has allowed Lido to optimize for speed and scalability. While Lido has a first-mover advantage over competitors like Rocket Pool, Lido’s small node operator set has raised concerns about centralization on Ethereum. The LNOSG is a committee of insiders that controls the initial curation process around node operator selection. Even though there is eventually a Lido DAO vote on the finalized list, the LDO token has concentrated insider ownership as well. This has resulted in a system where 21 professional node operators manage all of Lido’s 32.5% share of ETH on Ethereum’s Beacon Chain. Furthermore, the withdrawal credentials for the ETH staked before July 15, 2021, are held by a 6-of-11 multisig. Once withdrawals are enabled, if more than five signatories lose their keys or go rogue, roughly 600,000 ETH (~15% of current total Lido ETH2 stake) could become locked. However, the Lido DAO team plans to migrate this stake to the 0x01 (upgradeable smart contract) withdrawal credentials as soon as it’s possible to do so.
In addition to this, Lido has a plan for removing many of the other, remaining trust surfaces and decentralizing stake.
One part of the plan involves Distributed Validator Technology (DVT), which will allow Lido to onboard new, unknown, untrusted node operators by pairing them with trusted (whitelisted) node operators. These new validator groups will work together to propose and attest to blocks, while keeping each other in check.
At the moment, only a small fraction of all ETH is staked. After The Merge, stakers will have the option to withdraw, and staking rewards could double. If this catapults the proportion of ETH staked to the current average for top PoS chains, the percent of total ETH staked could increase by a factor of six. In addition to benefiting from a rising tide, Lido will also allow stakers to circumvent what could be a multi-month validator activation queue and immediately start earning staking rewards. Whether Lido offers lower yields or uses some of its treasury to plug the gap, Lido will have another great opportunity to cement itself as the market leader in Ethereum liquid staking.
https://defiyield-app-guides.medium.com/why-impermanent-loss-calculators-are-wrong-and-how-to-avoid-incorrect-assessment-of-the-money-waste-d349607706fc
DeFi Yield, Nov 2020
AMM, or Automated Market Maker, technology is one of the key building blocks that make DeFi an open, decentralized financial ecosystem. To make a long story short, an AMM makes it possible to exchange digital assets like DAI for ETH without the need to interact with centralized service providers. In contrast to traditional exchanges such as Coinbase, Binance, or CEX, an AMM is just a set of smart contracts running on a distributed ledger like Ethereum. Compared to centralized counterparties, AMMs do not suffer from censorship, regulations, or pressure on token buyers’ and sellers’ privacy.
An AMM’s operation is completely user-driven and relies on the liquidity provided by its users. Each time liquidity providers commit their digital assets, as token pairs, to liquidity pools, the system gets “re-fueled”. As a live example, an ETH-for-DAI swap on Uniswap is a direct trade of assets against the liquidity pool.
With this brief explanation of AMMs, we can move forward and dive deeper into “impermanent loss”.
Essentially, impermanent loss can be defined as a loss of funds during liquidity provision. Put another way, it is the difference between the value of the funds a user holds in an AMM and the value of the same funds simply held in a wallet.
Impermanent loss usually occurs in standard liquidity pools where the liquidity provider is obligated to keep both assets in a fixed ratio while the token prices are volatile and diverge in one direction or the other; the larger the divergence, the greater the impermanent loss.
Let’s consider the Uniswap DAI/ETH 50/50 liquidity pool as an example. Generally, if one of the tokens goes up or down, the pool has to rely on continuous arbitrage to reflect the real price of the pair and rebalance the value of both tokens. When this happens, the upside from the token’s price move is captured by arbitrageurs rather than the liquidity provider. What is more, impermanent loss becomes permanent when liquidity providers decide to withdraw their assets (liquidity). Nonetheless, in rare cases, the loss might be reversed if token prices in the AMM return to their original state.
In order to understand how impermanent loss occurs, we can review an example of DAI/ETH tokens pair on the Uniswap pool.
Let’s consider a 50/50 liquidity supply in the pool. Note, to achieve it, liquidity providers have to ensure an equal value for both DAI and ETH.
In our case, the price of ETH goes up and rises on Coinbase to $550 per ETH. This is the exact spot where arbitrageurs get triggered. The difference in prices between Coinbase and Uniswap is a direct arbitrage opportunity. Uniswap’s AMM keeps the ratio of tokens in the pool balanced, so as ETH is continuously bought from the pool, its price in the pool becomes higher and higher. Obviously, this continues until the price of ETH stabilizes between the exchanges. So how much ETH does the arbitrageur have to buy to remove the discrepancy?
Using the AMM’s formulas and the ETH price on Coinbase, we can determine when the ETH price on Uniswap will increase to $550. This will happen once the pool reaches the following composition:
10,488.09 DAI
19.07 ETH
The calculation shows us that the arbitrageur will buy 0.93 ETH to bring Uniswap’s and Coinbase’s ETH prices into line. This will cost 488.09 DAI, an average price of 524.83 DAI per ETH.
ETH that was bought on Uniswap can be sold for any stablecoin on any other exchange to earn approximately $20–25.
According to the example above, liquidity providers would have had $23.41 more by simply holding their assets in their wallets. As already mentioned, the losses could be canceled (remain impermanent) if the price of ETH returned to $500 and no withdrawal were made by liquidity providers.
If you examine different price movements, you can see that even small changes in the price of ETH cause liquidity providers to suffer from the impermanent loss. The graph below shows that a 500% price increase against the trading pair can result in a 25% loss of providers’ liquidity reserve:
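Under the standard constant-product (x*y=k) model, these numbers can be reproduced in a few lines. The initial reserves of 10,000 DAI and 20 ETH are inferred from the final figures quoted above (small differences come from rounding in those figures), and the divergence-loss formula at the end is the standard 50/50-pool formula, which lands close to the graph’s ~25% figure for a 5x price move:

```python
import math

def pool_after_arbitrage(dai, eth, target_price):
    """Reserves of a constant-product (x*y=k) pool once its internal price hits target_price."""
    k = dai * eth
    new_eth = math.sqrt(k / target_price)
    return k / new_eth, new_eth

dai0, eth0, new_price = 10_000.0, 20.0, 550.0        # inferred starting reserves at $500/ETH
dai1, eth1 = pool_after_arbitrage(dai0, eth0, new_price)
print(f"new reserves: {dai1:,.2f} DAI, {eth1:.2f} ETH")   # ~10,488.09 DAI and ~19.07 ETH
print(f"ETH sold to the arbitrageur: {eth0 - eth1:.2f}")  # ~0.93 ETH for ~488.09 DAI

value_if_held = dai0 + eth0 * new_price              # $21,000
value_in_pool = dai1 + eth1 * new_price              # ~$20,976
print(f"impermanent loss: ${value_if_held - value_in_pool:.2f}")  # ~$24, close to the $23.41 above

def divergence_loss(price_ratio):
    """Standard impermanent-loss formula for a 50/50 pool, as a fraction of the hold value."""
    return 2 * math.sqrt(price_ratio) / (1 + price_ratio) - 1

print(f"loss at a 5x price move: {divergence_loss(5):.1%}")  # ~-25.5%, in line with the graph
```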
The possibility of losing money due to impermanent loss started the process of developing strategies to, at least, mitigate it. An important starting point for the in-depth studies was the realization that the risk of impermanent loss can be reduced by minimizing divergence in tokens pair prices. If prices between tokens remain constant for AMM, liquidity providers can trade with less fear of losing their funds. So-called “mirror assets” (sETH/ETH, DAI/USDC, etc. ) — token pairs with a balanced price ratio have proved their resistance against impermanent loss due to notable liquidity and stable profit earnings.
Nonetheless, AMMs based on tokens with minimal divergence in price are limited to stablecoins or wrapped/synthetic versions of the same asset. Additionally, it is not possible to take a long position without holding extra assets as “backup” while liquidity is provided.
The mirror-assets concept marked the beginning of further investigation into token pairs. The idea of combining stablecoins and volatile tokens into pairs appeared almost instantly. This leads us to pools whose proportions differ from 50:50, set instead to 80:20, 95:5, or even 98:2 for a stablecoin/volatile token pair.
Such pool schemes work for liquidity providers who are interested in keeping a high exposure to a certain asset and wish to mitigate the risk of funds loss when the price of volatile token goes down. But what happens to the impermanent loss when volatile tokens occupy only 2–5% of the entire pool value?
Generally, pools with an unbalanced proportion of assets can reduce the impact of impermanent loss (depending on the weights in the pool). The higher the price of a token, the less the difference of holding the token in the wallet and providing liquidity to the pool. At first glance, the 3% difference in 95:5 and 98:2 should not be noticeable, but don’t judge too quickly. We prepared a comparative review of potential impermanent loss on 50:50, 95:5, and 98:2 pools of USDT/XYZ pair.
⚡ NOTE: You can substitute “XYZ” with any volatile token, such as YFI, YF, sETH, etc.
In the case below, we simulate a 50% drop in the XYZ token price at each step. Such a drop may seem like a rare event, but it turns out to be common in real-life examples. All of the calculations below are based on our own formulas and the concept of death spirals. Using these, we illustrate the iterative nature of token price drops and reproduce possible scenarios of pool correction per balancer swing.
Before we move on to the comparison, let’s define the concept of death spirals.
A death spiral is a phenomenon of value loss that occurs when a pool repeatedly bleeds its resources. The concept refers to situations in which the pool falls into a spiral of continuous value drops until it is completely depreciated. Death spirals are constructed to show essentially the worst-case scenario, in which pool degradation is progressive and a rollback to the initial state is unlikely, if not impossible. Put simply, a death spiral is a combination of factors that repeatedly work against the pool: constant price drops, liquidity withdrawals, and a decreasing level of trust together trigger the complete devaluation of the pool.
A particularly striking example of a death spiral can be observed in so-called “rug pulls”, where the creator of a farming pool hides a backdoor that allows an infinite amount of tokens to be printed. Typically, such a backdoor is well designed and disguised inside extremely complex smart contracts. Since yield contracts are very difficult to read and the pool creator has promised that the token supply is limited, users invest without much fear. As soon as the pool gains the necessary weight, the pool owner triggers the special function and unloads the freshly printed tokens on the market. This is the classic rug pull: a move that destroys trust, transparency, and finally the price of the token.
In our case, we will not focus on the specific causes of the losses, but rather on how the degradation progresses across different pool types and which of them are less susceptible to losses. We will also track the percentage of impermanent loss for each pool to reveal the dependency between asset proportions and how the damage progresses.
Below you will see three tables based on the death-spiral concept. The first row of each table corresponds to the initial state of the pool, while the last row shows what happens to the pool’s resources and what percentage of impermanent loss we end up with after repeated 50% drops in the price of the token.
Columns definition:
Balancer swing — an iteration during which the balancer equalizes the value of the assets in the pool by selling/buying one of the tokens in the pair. It is worth mentioning that the time between swings may vary, so price corrections may occur irregularly.
USDT Balance — the amount of USDT assets per one balancer swing
XYZ Balance — the amount of XYZ assets per one balancer swing
Buy XYZ — the amount of XYZ tokens that need to be purchased to equalize the value of assets in the pool
Sell USDT — the amount of USDT tokens that need to be sold to equalize the value of assets in the pool
Pool $Value — the total value of assets in the pool
Pool Loss $Values — percentage by which a pool is worth less in comparison to the initial Pool $Value.
Impermanent loss — the percentage by which a pool is worth less than it would have if liquidity providers had just held their tokens outside of the pool
#1 Pool 50:50 example
#2 Pool 95:5 example
#3 Pool 98:2 example
Pay close attention to the impermanent loss values in row #10. They show that after just 10 balancer swings the impermanent loss is as follows:
Pool #1 (50:50): 93.73%
Pool #2 (95:5): 25.57%
Pool #3 (98:2): 11.17%
As you can see, losses in the 95:5 pool (25.57%) are more than twice as large as in the 98:2 pool (11.17%), and the gap widens with each balancer attempt to stabilize the assets in the pool. Nonetheless, both are still in good shape compared to the 50:50 pool, where losses reached 93.73%, which literally means liquidity providers lost almost everything they invested.
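As a sanity check, the row #10 figures can be reproduced (up to rounding) by applying the standard weighted-pool value formula after ten consecutive 50% drops. This is a sketch under that assumption, not the exact spreadsheet behind the tables:

```python
def impermanent_loss(price_ratio: float, volatile_weight: float) -> float:
    """Loss vs. holding for a two-asset pool where only the volatile asset's price changes."""
    stable_weight = 1 - volatile_weight
    pool = price_ratio ** volatile_weight                 # pool value relative to the start
    hold = volatile_weight * price_ratio + stable_weight  # buy-and-hold value relative to the start
    return pool / hold - 1

ratio_after_10_swings = 0.5 ** 10   # ten consecutive -50% moves
for label, w in (("50:50", 0.50), ("95:5", 0.05), ("98:2", 0.02)):
    print(f"pool {label}: {impermanent_loss(ratio_after_10_swings, w):.2%}")
# Prints roughly -93.8%, -25.6% and -11.2%; per-swing rounding accounts for the small
# differences from the 93.73% / 25.57% / 11.17% figures in the tables.
```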
In practice, tokens will likely drop in price gradually (e.g. $100 > $75 > $50 > $25 > …), so it is hard to predict the exact percentage of losses.
However, it is also possible for the XYZ token price to drop almost instantly, say from $100 to $1 (-99% in value). If this is a single, final drop, the balancer will stabilize the price between the tokens by selling the required amount of USDT and buying XYZ. Such a scenario ends up with a fairly low loss of pool value and a low percentage of impermanent loss.
Sadly, in real life it won’t stop at just one correction cycle. Yet this single-correction case is exactly what most of the currently known impermanent loss calculators simulate, and that is where the bone of contention lies. This method of calculation is limited: it does not take into account further balancer corrections, future price drops, or liquidity withdrawals, which in turn leads to badly misleading impermanent loss percentages.
In our example, on the other hand, we calculated multiple drops, cutting the volatile token’s price by 50% each time the balancer performs a swing. This approach is not a panacea either, since the XYZ price fluctuations between swings may vary. Still, it provides a forward-looking forecast of potential losses, letting us observe not just a single point in time but the whole progression of loss growth.
Are 98:2 pools the ultimate solution to the problem of impermanent loss?
No, they certainly are not, but they are definitely worth considering. Although 98:2 pools are much less risky than 95:5 pools, there is still a chance that even a 1–4% daily yield gets “eaten” by 5–15% of impermanent loss. A small exposure of only 2% of the entire pool value does not rule out losses completely, so it is always better to investigate before investing and to calculate the impermanent loss impact upfront.
How can I correctly calculate the possible Impermanent Loss?
In order to calculate impermanent loss at a given point in time, you can use the following weighted-pool formula:
Impermanent Loss = ((1 + PC1)^W1 ∗ (1 + PC2)^W2) / (W1 ∗ (1 + PC1) + W2 ∗ (1 + PC2)) − 1
Where:
PC1 — priceChange1 in asset #1
PC2 — priceChange2 in asset #2
W1 — the weight of asset #1
W2 — the weight of asset #2
The above formulas should only be used to calculate impermanent loss at a certain point in time and do not show a possible percentage of final losses.
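A small sketch of such a point-in-time calculation using the variables defined above (PC as a fractional price change, W as a pool weight with W1 + W2 = 1). It follows the commonly used weighted-pool formula; the calculator described below may differ in presentation:

```python
def impermanent_loss(pc1: float, pc2: float, w1: float, w2: float) -> float:
    """Point-in-time impermanent loss for a two-asset weighted pool (no fees)."""
    r1, r2 = 1 + pc1, 1 + pc2                  # price ratios of asset #1 and asset #2
    pool_value = (r1 ** w1) * (r2 ** w2)       # LP position value relative to the start
    hold_value = w1 * r1 + w2 * r2             # buy-and-hold value relative to the start
    return pool_value / hold_value - 1

# Example: stablecoin unchanged (PC1 = 0), volatile token down 90% (PC2 = -0.9).
for w_stable, w_volatile in ((0.50, 0.50), (0.95, 0.05), (0.98, 0.02)):
    il = impermanent_loss(0.0, -0.9, w_stable, w_volatile)
    print(f"{int(w_stable * 100)}:{int(w_volatile * 100)} pool -> {il:.2%}")
```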
The calculator is free to use so anyone interested is welcome to play around with values and decide whether it’s worth enrolling in a particular lending opportunity or not. To get started, just fill in the required fields and press the “Calculate” button.
Deposit Value USD — the amount of the deposit in USD that you invest in the pool.
Asset 1/2 — asset name. Can be determined by either token name or token address
Asset Pool Weight — a percentage share of assets in the pool. Used to set token pair ratio (pool type) for example, 80:20, 95:5, 98:2, etc.
Asset Price Change — total asset price change in percentage, e.g. +50% or -90%
Detailed Configuration (optional setting) — here you can configure the so-called Curve Slope, since market price swings can be drastic (e.g. a -50% price collapse) or gradual (e.g. consecutive 2–5% drops until 50% of the starting price is reached), by selecting one of the following options:
- Sharp slope corresponds to a swift price increase/decrease
- Normal slope corresponds to a medium price increase/decrease
- Mild slope corresponds to a gradual price increase/decrease
Below you will find an example of calculations for 50:50, 95:5, and 98:2 pools with a pair consisting of a stable coin — USDT and a volatile token — YFV. Since USDT is stable, we suppose that its Asset Price Change is 0. On the other hand, the Asset Price Change of YFV is set to -90%.
Pool 50:50
Input data and impermanent loss calculation for the 50:50 pool
Pool 95:5
Input data and impermanent loss calculation for the 95:5 pool
Pool 98:2
Input data and impermanent loss calculation for the 98:2 pool
⚡ NOTE: defiyield.info impermanent loss calculator allows you to filter assets shared in the pool. To do so, simply click on the appropriate button next to the filtered columns section.
Impermanent loss is a direct threat to the popularization of AMM principles and to decentralized passive-income markets for anyone with idle assets. However, recently developed risk-minimization strategies are turning automated market makers into an efficient solution for maintaining decentralized liquidity.
Impermanent loss mitigation has a direct impact on the yield farming industry; “mirror asset” pairs and low volatile-token exposure in pools with 95:5 or 98:2 proportions are among the best options offered so far.
The 3% difference between 95:5 and 98:2 pools translates into roughly 2x the losses when one token in the pair suffers a significant price drop.
https://coinmarketcap.com/alexandria/article/hedging-against-impermanent-loss-a-deep-dive-with-finnexus-options
Ryan Tian, Aug 22
AMM stands for an automated market maker. This is a class of decentralized exchanges that rely on mathematical formulas to price assets. Different from using order books like in traditional finance, assets are priced according to a pricing algorithm coded in the smart contracts. Liquidity pools are created for each trading pair of tokens, while projects like Balancer even allow for the creation of pools with more tokens.
The pool provides liquidity for asset trading, in an automated manner. In other words, traders don’t need to find someone else to sell their coins to or buy their coins from. Transaction fees are distributed automatically among all the liquidity providers, according to the shares they are holding in the liquidity pool.
XYK Model
Though the liquidity is pooled and shared, the mechanisms behind AMMs may vary.
The XYK Model is also called the “x∗y=k market maker.” The idea is that you have a contract that holds x coins of token X and y coins of token Y, and always maintains the invariant that x∗y=k for some constant k. The value of the token X and token Y always stay the same, or the pool has 50/50 shares for both tokens. The changes in the numbers of tokens will change the price.
Suppose there are no transaction fees, anyone can buy or sell coins by essentially shifting the market maker’s position on the x∗y=k curve as below; if they shift the point to the right, then the amount by which they move it right is the amount of token X they have to put in, and the amount by which they shift the point down corresponds to how much of token Y they get out.
According to XYK, in the chart above, x1∗y1= x2∗y2=(x1+Δx)∗(y1-Δy)=k
For example, consider the case where a liquidity provider adds 40,000 DAI and 100 ETH to a pool (for a total value of $80,000), the price of ETH is 400 DAI at this time. The constant k=40,000∗100=4,000,000
Suppose there are no transaction fees. Someone wants to sell ETH for DAI, so he/she sells 5 ETH into the pool. Then, the pool has 105 ETH. With the XYK mechanism, k stays constant as 4,000,000. Therefore, the number of DAI in the pool becomes 4,000,000/105=38,095.24. The ETH seller gets 1904.76 DAI in return. The pool price of ETH becomes 38,095.24/105=362.81 DAI. The real ETH exchange price to the seller is 1904.76/5=380.95 DAI.
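A minimal sketch of that swap, with no fees assumed:

```python
dai, eth = 40_000.0, 100.0     # initial pool, ETH at 400 DAI
k = dai * eth                  # constant product, 4,000,000

eth_in = 5.0                   # seller adds 5 ETH to the pool
new_eth = eth + eth_in         # 105 ETH
new_dai = k / new_eth          # 38,095.24 DAI
dai_out = dai - new_dai        # 1,904.76 DAI paid out to the seller

print(f"seller receives {dai_out:,.2f} DAI")
print(f"new pool price: {new_dai / new_eth:.2f} DAI/ETH")        # ~362.81
print(f"effective price to the seller: {dai_out / eth_in:.2f}")  # ~380.95
```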
Impermanent loss is not the total loss of the net worth measured in USD, as we usually evaluate the financial behavior of a portfolio. It is the loss in the pool compared with the case when just holding the assets.
Following the example above, suppose there are no transaction fees, 1 DAI is worth $1, and the ETH market price changes to the same price as in the pool.
The impermanent loss is calculated as the difference at T2 between the value of the tokens if they had simply been held outside the pool and their value in the pool as a liquidity provider:
IL = $76,281 − $76,190.48 = $90.52
The impermanent loss seems to be not much in this case, but it may grow a lot larger if the price moves more dramatically in either direction.
The blue line below is the change in value for just holding 100 ETH and 40,000 DAI. The yellow line is the value when one puts them into the 50/50 AMM liquidity pool. The difference between the two lines is the impermanent loss.
As we may notice, the IL grows when the ETH price moves in either direction, away from the one when he/she makes inputs — and it grows more significant as the price moves further away. If the transaction fees and mining incentives allocated are not enough to compensate for the IL, the liquidity providers will end up with a loss.
For a 50/50 pool this ratio can be written as LR = 2∗√(p2/p1) / (1 + p2/p1) − 1, where LR is the impermanent loss ratio and p1, p2 are the prices at times T1 and T2 respectively.
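A quick check of the $90.52 figure using that ratio; rounding in the article’s intermediate values explains the small difference:

```python
import math

p1, p2 = 400.0, 362.81          # ETH price at T1 and at T2 (the pool price after the swap)
d = p2 / p1
lr = 2 * math.sqrt(d) / (1 + d) - 1          # impermanent loss ratio, about -0.12%

hold_value = 100 * p2 + 40_000               # value at T2 of the 100 ETH + 40,000 DAI if held
print(f"LR = {lr:.4%}")
print(f"IL in dollars = {-lr * hold_value:,.2f}")  # ~$90.7; the article's rounded figures give $90.52
```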
Impermanent loss happens no matter which direction the price changes — instances when the price drops can be more dangerous than when the price increases.
Perpetuals and futures are common instruments for hedging against price movement risks, especially in the spot crypto market. However, from the analysis above, the losses suffered by liquidity providers in AMM pools are not linear, but bidirectional: IL is incurred whichever way the price moves. Perpetuals and futures are linear hedging tools, so they cannot effectively protect liquidity providers in both directions.
Options are ideal instruments for hedging IL risks.
Options’ profit and loss are not linearly distributed. Unlike perpetuals or futures, option buyers have only rights, not obligations. This means they can simply stay put when the market moves in an unfavorable direction. The loss is capped at the option premium, but the potential gains are not limited. The P&L of call and put holders is shown in the following chart.
Although the profit is linear once option holders are in the money (the green part of the chart), we can still combine options with different strike prices and quantities to offset the convexity of the IL.
As an option buyer/holder, one just needs to pay the option premium without needing to lock any additional collateral/margin in the contracts. Option holders have no risks of being liquidated. The premium is all the cost, that one has to pay for the rights of buying or selling the underlying assets in the future.
Also, IL becomes more significant when the price moves away, to a greater extent, from the point when one contributes assets into the pool. Therefore, a trader may need to buy OTM puts and calls to hedge against the loss. The strike price is one of the key factors in pricing options. For OTM options, the prices will be much lower than ATM or ITM options and can lower the cost for liquidity providers.
The shape of the IL curve appears to have some convexity, so the basic idea is to make the curve as horizontal as possible, ideally like the red line below.
To make a good hedge, the P&L of the instrument needs to perfectly offset that of the IL, as with the blue curve below. Yes, it can be difficult. Let’s have a trial with a call and a put and check how it works.
With a strategy of longing a strangle, which holds both a call and a put on the same underlying asset with different strike prices, as it is shaped below, we may offset the IL to some extent.
Let’s try with one set of calls and puts, with ±30% of the spot as the strike price, and we hold the same amount of options as the unstable assets in the liquidity pool.
Following the same example above, the liquidity provider buys 100 units of call options with the strike price $520/ETH and 100 units of put options with the strike price $280/ETH, with an expiry of 30 days.
From the chart above, it is observed that:
1. The IL is totally balanced out once the price increases by more than 40%. The call option ends up in profit, more than enough to hedge the IL, as shown in the right wing of the orange curve.
2. In the left wing, the put hedge initially works well and ends up in profit. However, because the loss accelerates as the price collapses further, this put option is eventually not enough to compensate for the loss, and the return finally ends up negative.
3. For holding calls and puts together, with the same amount as the ETH in the liquidity pool, the fixed cost of this hedging strategy is high. According to the calculation, the fixed cost of the option premiums is 2.25% of the initial investment, and it is a protective strategy for 30 days.
The cost can reach 27% yearly if the strategy repeats every month. The cost is high.
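These observations can be checked with a rough payoff-at-expiry sketch, reusing the earlier 100 ETH / 40,000 DAI pool (deposited at 400 DAI/ETH) and the 2.25% premium quoted above. It ignores trading fees and the options’ remaining time value, so it only approximates the chart:

```python
import math

K = 4_000_000.0                           # constant product of the example pool
ETH0, DAI0, P0 = 100.0, 40_000.0, 400.0   # deposited amounts and entry price
CALLS, CALL_STRIKE = 100, 520.0           # 100 calls at +30%
PUTS, PUT_STRIKE = 100, 280.0             # 100 puts at -30%
PREMIUM = 0.0225 * 80_000                 # ~2.25% of the $80,000 position, as stated

def il_dollars(p: float) -> float:
    """Hold value minus LP value at ETH price p (no fees)."""
    return (ETH0 * p + DAI0) - 2 * math.sqrt(K * p)

def hedged_pnl(p: float) -> float:
    strangle = CALLS * max(p - CALL_STRIKE, 0) + PUTS * max(PUT_STRIKE - p, 0)
    return strangle - il_dollars(p) - PREMIUM

for move in (-0.9, -0.6, -0.3, 0.0, 0.3, 0.4, 1.0):
    p = P0 * (1 + move)
    print(f"ETH {move:+.0%}: IL ${il_dollars(p):9,.0f}   hedged P&L ${hedged_pnl(p):9,.0f}")
# With these assumptions the upside is balanced out at around +40%, matching observation 1.
# Observation 2 (the left wing ending up negative) depends on premium and pricing details
# that this payoff-only sketch simplifies away.
```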
Some adjustment can be made for a better hedging result.
1. The IL ratios when the ETH price goes up and down are not symmetric;
2. Multiple combinations of calls and puts should be used for upward and downward hedging;
3. The quantities of options can be adjusted to make the yellow line bend upward less and fit different strategies;
4. The effective hedging range does not necessarily need to cover every price movement from -100% to +500%.
Due to the linear characteristics in the profit behavior against the price changes, when the options are in-the-money, it is impossible to perfectly hedge against the IL (which has a convex nature) with just one kind of calls and puts. However, with combinations of multiple puts and calls, we may find a way to offset the IL more effectively while adjusting the weights of different options with various strike prices.
Let’s take a look at the following strategy. We follow the same example above.
1. Hedging range of prices: from -60% to +100%;
2. Hedging period: 30 days;
3. Call options: 6 ETH calls with strike price $480 (+20%), 8 ETH calls with strike price $520 (+30%), 10 ETH calls with strike price $560 (+40%);
4. Put options: 8 ETH puts with strike price $360 (-10%), 10 ETH puts with strike price $320 (-20%), 15 ETH puts with strike price $280 (-30%), 5 ETH puts with strike price $240 (-40%).
The orange curve after hedging is much flatter within the hedging range, and the cost of the option premiums is more acceptable than in the previous strategy.
With different combinations of calls and puts, the results can vary widely and be tailored to different strategies.
These strategies work only in a highly liquid options market. Even on Deribit, which accounts for over 90% of the total volume, the order books for BTC and ETH options are still not deep enough, especially for OTM options, which are exactly what this IL hedging strategy needs.
Is this a dead end? No.
Decentralized options with the peer-to-pool model show us the way out. With pooled liquidity, the collateral pool acts as the sole counterparty for all options and the buyers can customize options with much flexibility, while with little price slippage. Liquidity is shared, and when buying options, the buyer is trading against the pool, rather than matching with a specific option writer.
The liquidity for OTM options is the same as for any other kind.
Typical decentralized option platforms such as Hegic and FinNexus provide these choices.
FinNexus has pioneered this IL hedging strategy. One of the many advantages of the FinNexus Protocol for Options (FPO) model is the creation of a stablecoin USDC pool, with options traded and settled in stablecoins.
This hedging strategy can pair well with vault mining on AMM platforms, and it is also useful as a protective plan for institutions and individuals contributing to AMM liquidity pools. FinNexus expects to build an easy UI for IL hedging in the future.
https://insights.deribit.com/market-research/why-i-have-changed-my-mind-on-tokens/
Deribit Insights, Dec 2020
Governance tokens (though I prefer the term equity tokens, really) typically entitle the holder to a share of the project’s fees and some voting power in its governance.
Take for example SUSHI, the native token of the Sushiswap exchange. When staked in the Sushibar contract, stakers receive fees equal to 0.05% of all trading volume. They also receive “Sushipowah” that represents voting rights in Sushiswap’s off-chain governance system.
Tokens like SUSHI are among the most hotly debated topics over the last years and divide most crypto fans and researchers into two camps. The first camp sees tokens as a liability to be minimized, often accompanied by cries of “why does project XYZ need a token?” I used to fall firmly into this first camp, but now find myself increasingly in the second, which sees tokens both as a necessity and an important incentive mechanism.
Let me explain how I changed my mind.
I’ll start by steelmanning the first camp’s thesis. It comes down to three main arguments:
Governance itself is an attack vector because it allows bad actors to change the rules of the protocol and, in the worst case, steal user deposits. This defeats the whole purpose of using smart contracts to begin with.
Our goal in crypto is to replace rent-seeking companies and institutions with open and fair protocols. Extracting rent from users is regressive and violates this core value.
Since protocols are open-source and can be forked by anyone, the equilibrium rent is always zero. It’s only a matter of time until we converge on that equilibrium, and then the value of governance tokens will likewise collapse to zero. That’s why anyone selling them today must be a scammer.
To make one thing very clear, I find the first argument to be highly sound. Much of the value of crypto networks and applications comes from being hard to change. This allows users to trust that the application will do what it says, and developers to build on them without platform risk.
Tacking on governance to a system that doesn’t need it turns this logic on its head. When we allow humans to change a system in a top-down way, we lose the aforementioned assurances. And since some of the possible changes are very bad for users, we need to pay these governors fees in order to bribe them to prefer honest behavior over malicious behavior. In other words, the security model of that application changes from cryptographic to economic (guaranteed by economic incentives) – which is strictly worse.
This argument goes way back to BTC itself. Some critics point out that we pay miners a lot of money to protect the network, but the only people who can attack the network are the miners themselves! So do we actually pay protection money to these thugs?
If we could get rid of miners and save that money, that would be highly preferred. But alas, we need *human inputs* in Bitcoin to order transactions and blocks. And so we need to pay enough money to the human workers to incentivize good behavior.
Need for human input -> Need for incentive -> Need for fees
This argument holds for many systems in Defi in their current iterations. Compound or Maker couldn’t work without human input and hence couldn’t work without fees. That is because the risk from changes isn’t isolated. Someone needs to control what collateral gets added because one bad piece of collateral could destroy the entire system.
The same isn’t true for Uniswap or Sushiswap. Every pool is a standalone entity. And if one pool gets drained because one of its tokens went to zero, that risk does not spread to the other pools. As a result, there is no need for governance to manage what pools can exist.
So already, some projects need human inputs to work, and those projects are naturally exempt from the arguments that fees are unethical or that fees will be forked away, because similar projects can’t exist without fees.
That doesn’t mean Uniswap and Sushiswap shouldn’t have a token. In fact, I will now explicitly argue that they should. Token maximalism simply shouldn’t go along with governance maximalism just because developers feel a need to give additional functionality to their tokens. Governance minimalism is still king, even if the project has a native token.
Occasionally it’s good to be reminded how all of our wealth is the result of the capitalist system. Capitalism is great because it aligns the incentives of society as a whole with the incentives of the individual. It allows people to be selfish and still benefit society, by serving each other.
I think it’s quite illogical to call people who perform a service to others (e.g. by building a crypto application) in return for some compensation “unethical”. I don’t think it’s a surprise that defi is the most rapidly innovating market I’ve ever seen. Smart people have an incentive to work there because they – among other things – can get rich in the process.
If we have an ethical responsibility in this space, it shouldn’t be to minimize the rent we seek, and thereby walk away from capitalism. Instead, we should ensure that the social standards we set for this space are compatible with how humans want to behave, and funnel that energy into a better world for everyone. The mechanisms of the market (competition, open-source code, etc.) will themselves ensure over time that this rent won’t be larger than necessary.
The token-skeptics argue that because protocols can be forked, the equilibrium rent will be zero. I think this increasingly looks like a pipe dream for two reasons.
First, without rent, there is nothing worth forking to begin with: it’s simply too hard to compete with incumbent networks if you can’t reward your early adopters. And these rewards need to be taken from somewhere.
Proponents of the first camp will now typically say, but Bitcoin got there without rent. Yes, Bitcoin got there without rent, but not without a token, which in many ways serves the exact same purpose. Bitcoin wasn’t useful early on, but people knew that if it became useful later, every BTC would be worth a lot of money. And hence they bought and traded it, and in the process of doing that increased Bitcoin’s liquidity and public profile.
Based on a German fable, Baron Münchhausen pulled his own hair to lift himself from the swamp.
If you’re early to Bitcoin, and Bitcoin ever takes off, you will be rewarded handsomely. And hence there is a real incentive to be early. But if you compare that to Uniswap, Ethereum’s biggest DEX, it has no such incentive baked in. If you’re an early liquidity provider or trader to Uniswap, then you get a much worse deal: the UX is shit, the market’s illiquid, and there are no organic takers.
It doesn’t mean there can’t be early adopters, but they must find the system immediately useful to them, and that is a huge restriction for early networks. Imagine if BTC couldn’t appreciate in price, and the only people who have a direct incentive of owning it would be people who need it to make immediate transactions – it likely wouldn’t even exist today, because none of these use cases would have materialized to a meaningful degree.
That’s why it’s so important for networks and two-sided market places to be able to reward early adopters with something taken from late adopters.
And that is why I think the generalization of mining pioneered by Synthetix and Compound is such a big deal. Projects have found a hack to bridge the difficult early adoption phase by funneling the utility of future adopters to early ones, thereby smoothing out their respective utility functions.
We have touched on how capitalism works exactly because it leans into people’s self-interest. People need to be able to experiment in serving each other, and outside of fairytale socialist ideals, that simply doesn’t happen at scale when you preclude the ability to profit from your own work exclusively via some concept of equity.
This is what tokens (and pre-mines, to be precise) afford as well: They allow projects to raise money and hire developers and designers and community managers etc. most of which would not work for free.
Now people typically bring up two counterarguments, depending on what community they are from.
Bitcoiners say “but Bitcoin didn’t have a premine and look where it is today”. Bitcoin has the ambition to solve one of the world’s most fundamental problems, hard money. That allows it to attract volunteers who contribute their work for ideological reasons instead of money (and besides, most BTC contributors work on grants today). But not every project can do that, and shouldn’t have to. There are thousands of smaller problems to solve that – all taken together – can change the world just as much.
Ethereans would say, Uniswap was founded with a grant from the Ethereum foundation (which is true). But the EF’s money was itself premined, and Uniswap would never be where it is today if it hadn’t raised venture money very soon after to hire and retain more talent. These investors didn’t give Uniswap their money because they wanted to use Uniswap – a token launch was always planned from the start, it’s what made the investment and the success of Uniswap possible.
No other project since Bitcoin has managed to bootstrap itself without rewarding its early contributors AND been more than a mere copycat (thus excluding Litecoin). Even Monero, which is often seen as another fair-launch coin, faces credible allegations of a premine via crippled mining. Maybe we shouldn’t condemn that, but simply acknowledge that very few people are willing to dedicate years of their life to a highly speculative endeavour without any financial upside.
At this point, we have established that protocols that tokenize late adopter utility to reward early adopters will most likely outcompete protocols that don’t and rely entirely on organic growth. Now the final question is, if a project from the first category has gotten big, can someone come along and fork it to remove the fee/token, to socialize its utility?
First, remember that protocols reliant on human input can never remove the fee, because it’s necessary to incentivize miners/governors. That leaves projects that function without any human work necessary.
Those can still have tremendous network effect that a new fork would have to overcome. The network effect of Maker and Synthetix, for example, exists in the form of their synthetic tokens. Any fork would start with zero collateral locked and zero synthetics in circulation.
Without a direct financial incentive, it’s incredibly difficult to get both sides of a market to stop whatever they are doing and move to a new system together. A new system that – even if it’s slightly cheaper to use – would have a weaker brand, weaker liquidity, no developers, no community, no integrations with other projects, etc. Overall, I think there’s a very good chance that beyond a certain size, projects are basically resistant to forks. It’s just a matter of getting to that point.
Crypto is already at a disadvantage to traditional companies because everything is open source, making it harder to monetize innovation. That’s why all the highly successful crypto projects will rely on network effect, enabled by the superior properties (credible neutrality, permissionless access, etc.) of public blockchains, as their moat.
But new networks are extremely hard to bootstrap. Tokens, and liquidity mining in general, are a brilliant innovation in bootstrapping liquidity in two-sided market places that crypto urgently needs to overcome the network effect of incumbent networks.
Don’t just ask “why does xyz need a token?” but also “how can xyz support a token”? Because if it can, its chances of success will be greatly increased.
https://medium.com/flipside-governance/to-ve-or-not-to-ve-9e3d14d4ccc5
Raphael Spannocchi, Jun 2022
Yet since its inception, DAO governance has been looking for a way to align the behavior of token holders with the long-term.
One emerging way is requiring holders to lock tokens in escrow before they are eligible to vote and allot more votes the longer tokens are locked. In theory this should mean that token holders want to see the protocol flourish, because their tokens are locked up there. This scheme is called voted escrow tokens — or veTokens.
Well, we want to know: does it work? To find out, we’re going to dive deep into the concept and then analyze some data. At Flipside we have access to very detailed databases filled with Ethereum and Solana transactions (among others) that give us unique insights into what’s actually happening on-chain. So we asked our analysts to tell us if veTokens do what they say on the tin. TL;DR: they don’t! At least not in the long term. But they do other things that you, dear reader, can profit from.
Let’s dive in.
Curve also has its own token, $CRV, that gets distributed as a reward to depositors of its liquidity pools, and which has a rather unique use case. Owners of $CRV can direct emission by voting on which pool they should go to. More emissions mean higher yield, of course, and we all know that DeFi is awash in mercenary capital. More yield attracts liquidity, and more liquidity results in a tighter peg & less slippage. A win-win-win. More CRV emissions, more liquidity, less slippage. This triple whammy makes owning tons of CRV almost mandatory for stable coin issuers, who want to incentivize liquidity of their pools.
Curve developers’ artistic touch shines through in the Windows 3 style interface on top of the most sophisticated technology in DeFi.
Wanting to “align token holders’ interests with the long-term success of the project”, Curve introduced veCRV, given out in proportion to how long you locked your tokens up. You can also boost your earnings by up to a 2.5x if you lock your tokens for the maximum duration of four years. So not only do you get voting power by locking up your tokens, but you also get more rewards! But veCRV cannot be transferred. You need to HODL yourself.
If a protocol or a stablecoin issuer wants to control CRV emissions, they are now likely choosing to buy and lock CRV for the full four years. Remember: more voting power means directing more CRV emissions to your pool, attracting more liquidity. This voting is sometimes called gauge voting, because the number of votes is gauged and rewards adjusted according to the votes plus the size of the pool. Bigger pools (meaning more liquidity deposited), need more CRV to get to the same rewards multiple.
Curve profits from this model because sell pressure is reduced. Token holders and the DAO are aligned. To up the ante, veCRV voting power decreases over time, incentivizing users to top up their stash so they stay relevant. Pretty smart, right?
Let’s hold here for a second and review what we just learned from a 30,000ft perspective. So what happened? Pools get CRV emissions, CRV’s utility lies in directing CRV emissions, and locking them up means more power to direct. I see a circle-j@@k forming, but I digress. Let’s see what happened next.
From a customer perspective the veCRV model created a new pain point. Locking up tokens for four years feels like burying them and losing the map with the location. Will Curve even be around in four years? Will CRV be worth anything? You’d be hard pressed to answer these questions with anything but hot air.
Curve’s tokenomics also have a Catch-22: more veCRV mean more rewards from your staking pool. But as more funds are locked in that pool, even more veCRV are needed to get the maximum boost.
Convex offers 50% yield for the CRV you deposit. That’s more than double what Curve offers. Plus the cvxCRV you get are liquid and can be used for further DeFi shenanigans. “How’s that possible?”, you might ask. Here’s the magic:
Convex takes your CRV and gives you liquid cvxCRV instead
It locks your CRV on Curve for the max of four years and farms yields
In addition to those yields it gives you CVX tokens.
As you might expect, you can then lock the CVX to get slightly better yield and voting rights on Convex
Just below the staking interface (screenshot above) is a sweet little caveat telling you that “Converting CRV to cvxCRV is irreversible. You may stake and unstake cvxCRV tokens, but not convert them back to CRV. Secondary markets however exist to allow the exchange of cvxCRV for CRV at varying market rates.” Remember the four-year lock up? That’s why. Once you give your CRV to Convex, they’re theirs.
So Convex gets CRV deposits which they get yield from, plus they accrue CRV voting power. As you can imagine Convex becomes a massive Curve whale after some time. Read on in the next chapter to find out exactly what happened.
Just how big of a CRV whale Convex has become is a sight to behold. Look at what Flipside Analysts found out: Convex is just scooping up CRV non-stop and is currently hodling 47% of all circulating tokens. That’s more than eight times the amount of the number two, yearn.finance. This means that Convex is in charge of Curve emissions.
Now, what does that mean for CRV’s price? Do protocols still need to buy a metric ton of CRV to direct yields their way? Quite frankly, the answer is No! Convex is so dominant that even the largest buyers of CRV would have no way to direct yields in other directions than those that Convex chooses.
So instead of buying CRV to influence emissions, protocols started to buy CVX to influence what Convex did on Curve and on Convex. Two birds with one stone. Amazing, right?
We mentioned that Convex also had a token, CVX. Well, Convex enables their token holders to vote on how they should put their CRV to work, but of course only token holders who lock up their CVX to get vlCVX (vote-locked CVX) can vote. Don’t let the ‘l’ fool you, this is another veToken. Every CVX controls about six CRV, making CVX holdings an efficient way to direct Curve yields, as advertised.
Convex pays very modest yields for locking up CVX, usually between 2% and 3%. But the lock duration is just 16 weeks plus seven days, or around four months.
If you look at Convex as a way to make voting on Curve more efficient, you can probably see where this is going. A couple of clever developers had the idea that they could make voting on Convex more efficient, by aggregating CVX liquidity and yields. This became [Redacted] Cartel, which quickly turned into a DeFi powerhouse.
[Redacted] offered an Olympus like rebasing strategy with insanely high yield, paid out in BTRFLY tokens. Bonding CRV or CVX offers the highest BTRFLY yields, and these two tokens currently make up almost 70% of their treasury. Soon [Redacted] became a powerful Convex whale and could also control a good portion of Curve yields that way.
Apart from the yields of locking CRV and CVX, [Redacted] actively pursued bribing incomes on behalf of their depositors. Let’s see how that works.
Protocols pay CVX holders to vote for them. This is called bribing. They do so because it’s cheaper to bribe holders than to buy CVX themselves. How much a protocol needs to pay to convince CVX holders depends on the competition, so bribes change semi-weekly. They usually range between $0.40 to $0.80 per CVX vote. One CVX costs $9 at the time of this writing, May 2022, which makes bribing a great way to maximize votes to your pools!
Convex holders profit, too. Instead of earning ~3% from locking they can earn 58% in bribes. Pretty neat! But voting takes time and transaction fees.
Is your head spinning by now? Let’s break it down step by step:
Convex controls Curve emissions by owning so much veCRV
vlCVX control Convex
Protocols bribe vlCVX holders to make them vote their way
Votium automates the voting process for maximum bribes
Llama Airforce Union aggregates bribe rewards for users
[Redacted] Cartel aggregates bribing and staking yields for users
As you can see veTokens used for emissions and voting power gave birth to a whole ecosystem, an entire economy. But what did they do for the price of CRV?
Let’s look at what the Curve wars did for the price of CRV and CVX, as well as the interplay with Tokemak’s TOKE and [Redacted] Cartel’s BTRFLY token. Tokemak is another yield and bribe aggregator that covers multiple protocols, not just Convex, like Votium does.
Convex’s launch depressed CRV price significantly. The chart above has a logarithmic scale on the y-axis, so the “tiny” drop in May 2021 is actually a precipitous fall from $3.78 to $1.32. CRV price recovered very slowly and only surpassed its former level more than half a year later. We can see very clearly from CVX’s price that the value accrued to Convex, not to Curve.
[Redacted] cartel’s BTRFLY token suffered the fate of many other ultra-high yield DeFi tokens inheriting from OHM: A short pump followed by a long, steep and painful decline. BTRFLY is currently down 99% from its all-time high.
You may ask if the veToken model is unique to Ethereum? Let’s look over to the Solana blockchain where there’s a highly liquid, low slippage stablecoin exchange with gauge voting too.
Saber only operates pools of similar cryptocurrencies with little to no slippage. Thanks to its efficiency it became a Solana mainstay, with up to $4bn in TVL. Unfortunately, its SBR token had high inflation and little utility to start with. To dry up supply and boost use cases, the Saber team copied the veCRV model with a few small tweaks and launched veSBR.
Just as on Curve, veSBR wasn’t transferable and holders could use it to influence SBR emissions. And just like on Curve, another platform appeared that made locked SBR liquid and directing SBR emissions more efficient. Enter Sunny Aggregator and their SUNNY token.
Sunny Aggregator is the Convex of Solana. Sunny also allows users to earn more yields from their SBR tokens, by aggregating SBR and issuing SUNNY on top. Sunny also features a sophisticated bribe network where depositors can earn additional income.
Unfortunately for both Sunny and Saber, the tokenomics and especially the high inflation rates of SBR meant neither token ever took off. SBR is down 99% from its all-time high and Sunny even more. Saber and Sunny are a great example that veTokens are not tokenomics Valhalla. Without good underlying incentives design, tokens are doomed.
Which leads us to the question: What is the dark side of veTokens?
So far, we have seen that veTokens drive demand, and that entire economies can be created on top of these illiquid and non-transferable tokens. But what about the downsides?
First, the long token lock periods are very prohibitive for many guppies. The more funds you have the easier it is to lock a portion of your portfolio for the maximum duration. And only the maximum duration gets the full benefits.
This allows whales to consolidate their control. Richer users or protocols usually need less liquidity. Protocols with large treasuries who can justify large deposits as market making can make it hard for smaller protocols and especially for retail investors to gain any influence.
Then, there’s a lot of rug-pull, hack, and fraud potential when tokens are locked and illiquid.
Four year lock-ups on a protocol with a large TVL gives hackers a lot of time and incentive to find flaws to exploit.
Of course, hacks and rug-pulls happen on protocols without voted escrow as well, but the long lock-up periods make this especially painful, because you can’t get your funds out, even if a flaw is discovered and you’d want to withdraw as a precaution. Many crypto projects fail to be maintained for four years, and if users cannot move their funds out, they will be dragged down alongside it.
If you’re designing a veToken model, we advise you to choose shorter locking periods. That goes a long way to mitigate almost all the problems outlined in this chapter.
Let’s circle back to Curve and Convex, because these two are part of the longest running veToken experiment. We have seen that veTokens can create a whole economy of aggregator and bribing protocols. We have also seen that after an initial spike in prices the value of these tokens was falling rapidly.
This is not a surprise. veTokens create incentives to lock tokens up, but ongoing CRV emissions on a growing TVL result in a steadily growing supply. And a portion of users will want to swap CRV gains to more useful tokens or stablecoins, instead of locking them up, so selling pressure doesn’t go away. If the sole utility of a token is directing emissions and nothing else, the portion of users who wants to sell will always be substantial.
CRV circulating supply, Source: Messari
Let’s take a look at another smaller-scale experiment. Platypus Finance tried to implement the ve model in a different way. Platypus is a stablecoin savings protocol, where users can stake USDT, USDC and DAI and receive PTP rewards. You can lock PTP to generate vePTP, which is neither transferable nor tradable. One staked PTP generates 0.014 vePTP every hour, and the more vePTP you have, the more yield your deposit gets. You’re incentivised to lock up every PTP to get more and more yield, until you reach 100x the vePTP of your staked PTP, which happens after about ten months.
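A quick arithmetic check of that accrual schedule, using the rates as stated above:

```python
VE_PER_PTP_PER_HOUR = 0.014   # vePTP generated per staked PTP per hour, as stated
CAP_MULTIPLE = 100            # maximum vePTP per staked PTP

hours_to_cap = CAP_MULTIPLE / VE_PER_PTP_PER_HOUR
print(f"{hours_to_cap:,.0f} hours = {hours_to_cap / 24:,.0f} days "
      f"= about {hours_to_cap / (24 * 30):.1f} months to hit the cap")
# ~7,143 hours, i.e. roughly 298 days, or just under ten months.
```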
How did this experiment turn out?
PTP price in USD. Source: coinmarketcap.com
veTokens have a strong financial component, and little other utility. In the long term this makes them hard to justify as part of a portfolio. CRV, CVX, SBR, and SUNNY are all emitted as yield. But yield in a token with no value except the ability to direct more yield simply has low fundamental value. This creates incentives to head for the door as long as the music is still playing and increases selling pressure.
Snapshot.org allows analysts to access their data via REST APIs or a JavaScript library, and our analyst Nhat Nguyen took a closer look at what happened on Curve and Convex votes. Let’s look at Convex votes first. Convex upgraded its smart contracts to v2 in March 2021. We separated v1 and v2 voting behaviour.
You probably notice two things: First, weighted votes have at least an order of magnitude more participants than single-choice votes. Second: voting participation does increase over time.
Why is that? Weighted votes are gauge votes that govern emissions. Participation is extremely high because this is where the money is. Bribed voters have to participate to be eligible for payout and emissions are money in the bank.
Voting increased over time because only locked CVX can vote. A 17-week lock exposes users to volatility, and if they don’t vote they can’t receive bribes and have no influence. That would be like setting money on fire.
The same pattern repeats on Convex v2, as you can see in the chart below:
As time progresses and more and more tokens are locked up, voting increases to justify lock up and reap the rewards. The plateau in both charts coincides with a plateau in additional rewards that users could get via Votium or [Redacted] Cartel. DeFi capital is notorious for its mercenary nature and only stays as long as milk and honey flow, or until even juicier liquidity incentives are launched elsewhere. The amount of CVX locked also plateaued over time as investors started to sell their yield instead of locking it up.
We can see that requiring holders to lock their tokens in escrow does increase voter participation, but not as much as when holders actually get paid to vote. Paying for participation is a can of worms, because it introduces a sense of entitlement and a professional voting class that has substantial incentives to vote on ever larger voting remuneration. Convex gauge votes don’t have this problem because CVX emissions are integral to the protocol.
The question remains: To ve or not to ve?
But we have learned a lot in this blog post, and it leads us to a better question: What’s the token’s utility? If it is only good for creating ve, the design is circular and ultimately will not work. veTokens were brand new when Curve and Convex first duked it out on the interwebz. By now the token design is old hat and will not, by itself, get investors excited to jump on board.
Imbuing a token with actual utility and good tokenomics is the first step. Only after you have figured that out can you apply the ve model to create further demand.
How else could you align users and the long-term success of a project?
We want to give a short outlook on Soulbound tokens.
You’ve probably heard the proverb “those who most want to rule are those least suited to do it”. When you follow this line of thinking you can see that tokens that represent influence should not be transferable. Transferable and financialized influence means accumulation in the hands of those most versed in finance, which aren’t necessarily those aligned with your long-term success.
Soulbound NFTs are tokens awarded to individuals in recognition of their contributions. They are not transferable and cannot be sold. Individuals might still be open to bribes, though, but we think they represent something more profound, more oriented towards the transcendent good.
veTokens are here to stay, in the meantime. We hope this blog post equips you with the necessary background to use and design them well.
https://fehrsam.xyz/blog/value-of-the-token-model
Fred Ehrsam, Jan 2017
Governance People will want to have governance over their own communities separate from the global governance of Ethereum (or any other base blockchain). A token is necessary for this sub-governance. Not having sub-governance would be like anyone who owns USD being able to walk into a Google shareholder meeting and voting without owning Google stock just because Google shares happen to be denominated in USD. Or like everyone in the US being able to vote on the bylaws of a social club in San Francisco, just because it happens to exist in the US.
A token offers security to community governance. While anyone can buy a token with Ether, in order to meaningfully influence the governance of that token community they’d have to buy a lot, increasing the price drastically while doing so. Finally, if they voted on something bad for the community, they would be destroying the value of what they own.
It may be that the rules around some communities and their protocols can be static and thus don’t need governance — but I imagine most will want the option to evolve.
Monetary policy Ethereum may make monetary policy decisions like “let’s do 1% inflation to support the ongoing development of the Ethereum protocol.” A token built on Ethereum might want to do the same. And you’d want each to be able to do that independently. This is similar to what we are seeing with the Euro: you don’t necessarily want monetary union without fiscal union.
Addressing the original critiques “Just using Ether is more efficient because it eliminates the cost to convert between Ether and tokens”
The marginal cost argument for why it’s cheaper to do a contract on ETH is weak in my opinion. The cost of exchanging a token for ETH is already low at ~0.25% and will approach 0.
The more interesting economic force, in my opinion, is that Ether holders have a strong economic incentive to take any token protocol that works and port it directly to Ether. Let’s say Filecoin gets 30% of the Ethereum market cap. If Ether holders successfully port it — a simple matter of copying the code of the smart contracts — they absorb that value, and the value of their Ether goes up. So it seems like it will be tried. The question is whether or not it will work. In cases where a token is really needed for the reasons described above, I think the answer is no.
Separately, I wonder what happens here if/when tokens start to span multiple base blockchains. For example: what if Filecoin is its own virtualized blockchain, where its ledger state is maintained across blockchains. If that is possible, you can imagine a token not being dependent on any one blockchain.
“People will prefer to use Ether instead of tokens because it will be more stable in value”
Holding/pricing in Ether instead of a token can be both a benefit and a drawback. On the benefit side, Ether may be more stable. On the drawback side, these communities may not want to be subject to changes in the price of Ether. This becomes especially true if tokens can start living across multiple base blockchains. In any case, I think this point is least important since holding tokens is not required because they can instantly be converted in/out of Ether.
Thanks to Linda Xie.
https://multicoin.capital/2017/12/08/understanding-token-velocity/
Kyle Samani, Dec 2017
Basically, all token pitches include a line that goes something like this: “There is a fixed supply of tokens. As demand for the token increases, so must the price.”
This logic fails to take into account the velocity problem.
In this post, I’ll explain the velocity problem by providing an in-depth example. Then I’ll examine mechanisms that reduce velocity.
A high-velocity example
There’s a reasonable case to be made that tickets for live events should be issued on blockchains. If venues come to accept blockchain-issued tickets, this solution should stomp out all fraud. You can’t double spend blockchain-based assets.
Issuing tickets on blockchains can bring other benefits, including disallowing resale, profit sharing on resale back to the venue, capping resale amounts, etc. Ticketing on-chain should create a lot of value for venues, artists and consumers: it eliminates fraud, reduces scalping, and reduces fees to middlemen like Ticketmaster and StubHub.
Suppose the ticketing protocol issues its own proprietary token; let’s call it Karn. Consumers want to pay for tickets denominated in dollars. They may purchase Karn tokens as part of the ticket acquisition process, but they won’t hold the tokens for more than a few minutes at a time. There’s simply no incentive to hold them and incur price risk relative to the dollar.
Venues also don’t have an incentive to hold Karn tokens because they, too, want to avoid price risk. After consumers trade their tokens for concert tickets, venue hosts will trade Karn tokens for their preferred currency. Note that this cycle can be completed in seconds by leveraging decentralized exchanges such as 0x:
No one actually wants to hold Karn tokens. The presence of a proprietary token actually creates a worse UX for consumers by introducing an unnecessary layer of friction into the ticket purchasing process. The moment anyone receives Karn tokens, they exchange them for something else – either a ticket (consumer) or dollars (venue).
Even if Karn becomes the global standard for ticket issuance, no one will want to hold it. BTC/ETH/USD-denominated trading volume for Karn tokens may skyrocket as the platform becomes the global ticketing standard, but the price will grow sub-linearly relative to transaction throughput.
The primary stakeholder group who will profit from the rise in trading volume of Karn tokens will be market makers who provide liquidity for those entering and exiting the market. This is not a bad thing. As asset pairs increase in volume and become highly liquid, bid-ask spreads collapse to near 0 percent, which is good for consumers and venue hosts.
To be clear, in this scenario the venue hosts still win by cutting out scalpers, and consumers win because of increased fraud protection. But despite delivering real, tangible benefits to marketplace participants, our fictional Karn token won’t actually capture the value the protocol is creating.
QUANTIFYING VELOCITY
Here’s where velocity comes in. It’s defined as:
Velocity = Total Transaction Volume / Average Network Value
Therefore:
Average Network Value = Total Transaction Volume / Velocity
Velocity can be measured over any time span, but is normally measured annually. Trading volume can be difficult to measure. This not only includes trading volume that occurs on exchanges, but over-the-counter trades and actual usage of the platform.
In the case of a proprietary payment token that nobody wants to hold, velocity will grow linearly with transaction volume. Per the second equation above, transaction volume could grow a million-fold and network value could remain constant. Almost all utility tokens suffer from this problem.
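A toy illustration of that identity, with made-up numbers: if every token is passed on as soon as it is received, velocity scales with transaction volume and network value never needs to grow.

```python
def average_network_value(total_transaction_volume: float, velocity: float) -> float:
    return total_transaction_volume / velocity

# Hypothetical pure payment token that nobody holds: volume grows a million-fold,
# but velocity grows in lockstep, so the network value stays flat.
print(average_network_value(1_000_000, 20))                   # 50,000.0
print(average_network_value(1_000_000_000_000, 20_000_000))   # 50,000.0
```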
REDUCING VELOCITY
There are a few ways a protocol can reduce the velocity of its associated asset.
2) Build staking functions into the protocol that lock up the asset: This includes proof-of-stake mechanisms for achieving network-layer consensus. However there are far more compelling reasons to stake than simply to achieve node consensus. For example, FunFair is a platform that powers online casinos. FunFair only supports one-against-one games such that the player is playing the house directly (therefore no poker). The house must maintain reserves to pay out highly unlikely events, such as a user winning big in slots or winning 10 times in a row in blackjack. The casino operators will need to lock up far more than 50 percent of all tokens.
3) Balanced burn-and-mint mechanics: Factom is the best, and perhaps only, example.
A number of protocols have implemented the burn concept (without minting), notably FunFair. I am highly skeptical of currencies that are explicitly deflationary to create upwards price pressure on the value of the token. In the long run, deflationary currencies will create weird incentives for holders, causing unnecessary volatility due to excessive speculation. Burn-and-mint addresses this problem.
In Factom, the cost of using the protocol is fixed at $0.001 per use, regardless of the price of FCT. Users burn tokens to use the protocol as designed. Independently, the protocol mints 73,000 new tokens each month and distributes them to validators (Factom is its own chain, not an ERC20 token). If users burn fewer than 73,000 tokens in a month, supply increases, which should exert downward price pressure. Conversely, if users burn more than 73,000 tokens per month, supply decreases, exerting upward price pressure. In the long run, there should be a linear relationship between usage of the protocol and price.
Note: see the Multicoin Capital Factom analysis and valuation.
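A minimal simulation of that burn-and-mint dynamic is sketched below; the usage counts and token price are hypothetical, while the $0.001 fee and the 73,000-token monthly issuance come from the description above.

```python
# Sketch of Factom-style burn-and-mint, per the description above.
# Usage counts and token price are hypothetical; the $0.001 fee and the
# 73,000-token monthly issuance are taken from the text.

MONTHLY_MINT = 73_000          # new tokens paid to validators each month
USD_FEE_PER_USE = 0.001        # protocol usage is priced in dollars, not tokens

def monthly_supply_change(usage_count: int, token_price_usd: float) -> float:
    """Net change in circulating supply for one month."""
    usd_spent = usage_count * USD_FEE_PER_USE
    tokens_burned = usd_spent / token_price_usd   # users burn tokens worth the dollar fee
    return MONTHLY_MINT - tokens_burned

for usage in [10_000_000, 365_000_000, 1_000_000_000]:
    change = monthly_supply_change(usage, token_price_usd=5.0)
    trend = "net inflation" if change > 0 else "net deflation"
    print(f"{usage:>13,} uses/month -> supply change {change:>+12,.0f} tokens ({trend})")
```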
4) Gamification to encourage holding: Let’s revisit ticketing. Since many concerts sell out quickly, venues could prioritize customers based on having held X tokens for Y days. If enough venues adopt this mechanic, velocity will fall.
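A sketch of what such a priority rule might look like in practice; the thresholds and holder records below are hypothetical.

```python
# Hypothetical sketch of a "held X tokens for Y days" priority rule that a venue
# could apply when a show is oversubscribed. All thresholds and records are invented.

from dataclasses import dataclass

@dataclass
class Holder:
    address: str
    balance: float        # tokens currently held
    days_held: int        # days the balance has been continuously held

MIN_BALANCE = 100.0       # X: tokens required for priority access
MIN_DAYS = 30             # Y: days those tokens must have been held

def has_priority(h: Holder) -> bool:
    return h.balance >= MIN_BALANCE and h.days_held >= MIN_DAYS

buyers = [
    Holder("0xabc", balance=250.0, days_held=90),   # qualifies
    Holder("0xdef", balance=250.0, days_held=3),    # bought tokens yesterday: no priority
    Holder("0x123", balance=10.0, days_held=400),   # long-time holder, too few tokens
]

print("priority buyers:", [b.address for b in buyers if has_priority(b)])
```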
One reason to hold an asset is an expectation that it will increase in price. In theory, this should dampen velocity and drive up the price of the asset. This basically defines bitcoin today: Bitcoin’s value is a function of a speculative value game, not of its intrinsic utility as a payment system.
Another reason to hold an asset is the expectation that its value will be stable. A number of stablecoin projects such as Maker and Basecoin are trying to create trustless assets that are price-stable on the open market.
Becoming a general-purpose store of value is extremely difficult. There are only a handful of projects even attempting to fulfill this vision today. It’s not clear how dominant the long-run winner will be. You can make perfectly rational arguments for a handful of currencies with 20–30 percent of global value each, a 75-5-5-5-5-5 percent split, or an 80-20 percent split. Although money has strong network effects, it’s not clear how strong those effects are, or how much the market will demand viable competition to mitigate macro-level risk associated with a single mega currency.
CONCLUSION
Velocity is one of the key levers that will influence long-term, non-speculative value.
Most utility tokens don’t provide a compelling reason for token holders to hold the token for more than a few seconds. Absent speculation, assets with high velocity will struggle to maintain long-term price appreciation.
Hence, protocol designers will be well served to incorporate mechanisms into their protocols that encourage holding, not just usage.
https://medium.com/@FEhrsam/blockchain-governance-programming-our-future-c3bfe30f2d74
Fred Ehrsam, Nov 2017
This post describes why blockchain governance design is one of the most important problems out there, its critical components, current approaches, potential future approaches, and concludes with suggestions for the community.
For these reasons, I believe blockchain governance system design is one of the highest leverage activities known.
It’s rare that a new government or central bank gets created, and rarer still to see experimentation with a new form of governance when one is.
1. Incentives
2. Mechanisms for coordination
Since it’s unlikely all groups have 100% incentive alignment at all times, the ability for each group to coordinate around their common incentives is critical for them to effect change. If one group can coordinate better than another, it creates power imbalances in their favor.
In practice, a major factor is how much coordination can be done on-chain vs. off-chain, where on-chain coordination makes coordinating easier. In some new blockchains, on-chain coordination allows the rules or even the ledger history itself to be changed.
What follows is a dissection of the benefits and drawbacks of today’s two largest blockchains: Bitcoin and Ethereum. We are currently in the primordial ooze phase of blockchain governance. Systems are simple and little has been tried.
Bitcoin was the first successful attempt to create a standalone blockchain. Let’s examine it as a base:
1. Incentives
Developers: increase value of existing token holdings, social recognition, maintain power for control over future direction.
Miners: increase value of existing token holdings, expected future block rewards, and expected future transactions fees.
Users: increase value of existing token holdings, increase functional utility (e.g. store of value, uncensorable transactions, file storage).
2. Mechanisms for coordination
Resulting system
The checks and balances system created is somewhat analogous to the US government and has a number of benefits. Similar to the Senate submitting new bills, developers submit pull requests. Similar to the judiciary, miners decide whether or not to actually adopt the laws in practice. Similar to the executive branch, the nodes of the network can veto by not running a version which aligns with what the miners are running. And similar to citizens, the users can revolt. Finally, economic incentives dictate that it is in everyone’s best interest to maintain trust in the system. For example: if miners alienated all the users, the tokens would decrease in value and they would go out of business. As the first system of its kind, it’s incredible that Bitcoin is still going strong.
The systemic incentives and mechanisms of coordination in Ethereum are similar to Bitcoin at the moment.
New blockchains are making it much easier to coordinate by enabling on-chain governance.
Tezos
This also enables users to directly coordinate on-chain, dramatically increasing their power and reducing the power of miners compared to a system like Bitcoin or Ethereum.
DFINITY
DFINITY is maximally flexible. Depending on what parts of the protocol Tezos allows to be changed, it is possible that protocol changes effectively let you re-write the ledger, as in DFINITY. As a result, it’s likely these systems will have different voting thresholds for different changes, perhaps requiring a supermajority for some things and a simple majority for others.
The most interesting learnings will come from exploring the balance of mutability so systems can evolve and immutability for stability.
Next we’ll talk about future governance strategies with potential which have yet to be tried.
Futarchy
This system could be incredibly powerful for a few reasons. First, voting becomes extremely simple: people don’t need to vote on individual issues; they are just asked one thing once a year: their satisfaction. Second, people do not need to develop extensive knowledge of candidates or bills. This is important because candidates are often persuasive and bills are complex to the point where it is hard for a domain-specific researcher to understand their implications, let alone an elected official or an average citizen. Instead, we rely on the wisdom of the markets. As with trading stocks, only people who are extremely well informed on a topic will bet on it; otherwise they are likely to lose money to others who are better informed. Finally, it is a system where market incentives are aligned with societal values.
Liquid Democracy
This seems like it will be used in proof of stake blockchains given its simplicity.
Quadratic Voting
A major problem with one person = one vote systems on the blockchain is their susceptibility to sybil attacks. Near zero cost to create infinite accounts means it’s easy to generate infinite votes. This is why the default model in proof of stake and Ethereum-based token governance is one token = one vote.
As mentioned in quadratic voting, other mechanisms which weight community members differently independent of real-world identity are likely to evolve. For example, a new token holder may have diminished voting power until they have been a member of the community for a while, similar to not being able to vote until you are a full citizen of a country.
In any case, today’s world would look a lot different if modern governments ran on voting with money, so this change in defaults is not to be taken lightly.
Finally, reputation within a token community will be critical. This is already shown through indirect means, where Vitalik’s suggestions carry a lot of weight in the Ethereum community. In a liquid democracy system reputation manifests in the number of votes delegated to a particular person. Someone with high reputation and no money could have 10 million Ether delegated to them and have tremendous governing power.
Futures markets on Bitcoin’s SegWit2x
We’ve seen many examples of forking so far, and this is great! In physical nations forking is nearly impossible. This was also the case in software until blockchains emerged. They make it easy to take all the code and state of a system and try a new path. In the Web 2.0 world, forking would be the equivalent of Facebook allowing any competitor to take its entire database and codebase and start a rival service. Don’t like how Facebook is operating its newsfeed? Create a fork with all the same code, social connections, and photos.
Reducing network effects. Not everyone is speaking the same language anymore.
Creating work. Anyone who was using the forked protocol probably had their code broken. In a world that is increasingly interconnected through transparent and trustless code execution, these effects compound.
Reducing trust. Now that we’ve had a breaking change, those previously referencing the protocol must now go outside the blockchain and somehow figure out what the “right” new version is to use.
Because of the dramatically reduced friction for exit, the need for effective voice (governance) is more critical than ever. It is trivial to fork a blockchain and copy all of its code and state. So the value isn’t in the chain of data, it’s in the community and social consensus around a chain. Governance is what keeps communities together and, in turn, gives a token value.
For users: Spend more time looking at the governance system of your blockchain, less time on the issue of the day. Current events are just a manifestation of the larger system that caused them. So while it’s easy to get riled up by the news, the highest leverage point for change comes from designing or changing the system, not arguing about its current manifestations.
For everyone: Watch and learn from the experiments that will be run on the new on-chain governance systems.
I believe governance should be the primary focus of investors in the space. The fundamentals of cryptoeconomics and overarching governance schemas of these networks are critical to survival, under-appreciated, and poorly understood. Investors can add significant value through the luxury of being able to observe and learn from multiple projects at one time. They should be active in the governance of the tokens they participate in and transparent with a community if they feel the design of the system can be improved.
There are many tools for estimating impermanent loss, but most of them are inaccurate or limited in functionality. To rectify this, the defiyield.info team developed its own free-to-use impermanent loss calculator.
DeFi is changing finance in many ways, while also bringing in additional risks. Impermanent loss in automated market maker (AMM) liquidity pools is one of the best-known new types of risk to spring out of the DeFi boom. Can impermanent loss be hedged in a decentralized way? And if so, how? FinNexus is a decentralized cross-chain options platform with a peer-to-pool model: it pools all the liquidity together in a collateral pool that collectively acts as the seller for writing and settling options. Now live on both Ethereum and Wanchain, it provides keys to hedging against the impermanent loss potentially suffered by AMM pool participants. This article will explain how combinations of options may work together to tackle this problem.
To understand impermanent loss (IL), we need to grasp several basic concepts and models.
There are a number of AMM projects on the market. As compared to centralized exchanges (CEXs), we tend to call these projects decentralized exchanges (DEXs) with AMM mechanisms.
The XYK model is the most well-known and widely applied AMM mechanism in DEXs, and I have written about this model before. Uniswap and SushiSwap, both of which had over 1 billion USD locked as of Nov. 2, 2020 and ranked among the top two DEXs, are representative projects of the model.
Impermanent loss is the loss suffered by liquidity providers in AMM pools. It happens when you provide liquidity to a pool and the price of your deposited assets changes relative to when you deposited them. The bigger the change, the more you are exposed to impermanent loss. But if the price shifts back to where it was when you made the deposit, the loss disappears. That is why the loss is called “impermanent.”
Impermanent loss is usually observed in standard liquidity pools where the liquidity provider has to provide both assets in a fixed ratio and one of the assets is volatile in relation to the other; for example, in a Uniswap DAI/ETH 50/50 liquidity pool.
With the same logic as above, we can derive the formula for the size of the impermanent loss in terms of the ratio between the price when liquidity was supplied and the price now; the derivation itself is published separately.
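For a 50/50 constant-product (XYK) pool, that derivation leads to a well-known closed-form result, sketched below; here r is the current price of the volatile asset divided by its price at deposit time.

```python
# Standard impermanent-loss formula for a 50/50 constant-product (XYK) pool:
#   IL(r) = 2 * sqrt(r) / (1 + r) - 1
# where r = (price now) / (price at deposit). A negative value means the LP
# position is worth less than simply holding the two assets.

from math import sqrt

def impermanent_loss(price_ratio: float) -> float:
    return 2 * sqrt(price_ratio) / (1 + price_ratio) - 1

for r in [0.5, 1.0, 1.25, 2.0, 4.0]:
    print(f"price ratio {r:>4}: impermanent loss {impermanent_loss(r):+.2%}")
# A ratio of 1.0 gives 0%: if the price returns to the deposit level, the loss
# disappears, which is why it is called "impermanent".
```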
Of course, we can be more flexible with the plan and shape the hedged payoff curve in a more conservative or more aggressive manner.
This article was inspired by work from Huobi.
Any DAO that has token-based voting faces the same inherent issue: votes can be bought, and sold, quickly if those tokens are traded on exchanges. This has led to predatory voting attacks, where enough tokens to ram a proposal through were bought solely for voting and sold quickly afterwards. (You can read our blog post on an extreme version of this behavior.)
veTokens were pioneered at Curve. Curve’s main use case is as an extremely efficient trading engine that minimizes stablecoin swap slippage, which has propelled the decentralized exchange to a place in DeFi heaven.
What if there was an opportunity to have voting power and rewards boosts on CRV without locking your tokens for what feels like an eternity? If that sounds attractive, this is just what Convex built.
Percentage of CRV tokens owned by other protocols.
Enter Votium! From their website: “Delegating to votium.eth automates voting for the best $ / vlCVX possible.” Votium makes it easy for CVX holders: instead of voting on the weekly allocations yourself, you simply delegate to Votium, and they’ll maximize rewards on your behalf. Since protocols pay bribes in their native tokens, users can end up with a plethora of minuscule rewards whose gas fees for claiming often exceed what the rewards are worth. Don’t despair, there’s another aggregator.
This aggregator manages your Votium rewards by aggregating them and buying even more cvxCRV to stake.
It worked for a few months before investors headed for the door. As PTP’s price tanked, yields shrank and the incentive to keep PTP locked went with them. That led to more PTP selling pressure, even less yield, and so on: negative network effects set in and the price was caught in a vicious cycle.
Snapshot is one of the most widely used tools for off-chain voting in the DAO space. We’ve covered the basics in an earlier post, so please make sure to read that if you want more information.
Since my last post on app coins (I think the more appropriate term is now tokens, or blockchain tokens) we’ve seen more projects use this model. The tide is turning. While venture capital funded most Bitcoin and blockchain based startups up through 2015, 2016 saw over $100m of crowdsourced, non-venture money fund 60+ projects.
As momentum has increased, there have been some intelligent critiques of the model which help hone in on the value a token really provides. I especially enjoyed one critique in which the author argues it would be more efficient to port the smart contracts of a token to use Ether directly, eliminating the token. He argues this will happen because 1) it eliminates the cost of converting between Ether and tokens, and 2) Ether will be a “better money” than any token because it will be more broadly used, and thus more stable, so people will prefer to just use Ether. I want to succinctly describe the core value of the token model and address some of the common critiques below.
Aligning incentives
Tokens align incentives between developers, contributors, users, and investors. They allow everyone who wants to contribute to a project early the opportunity to get in on the ground floor. This overcomes the classic chicken-and-egg problem of bootstrapping a new network.
Conclusion
As with early internet startups, some token models don’t make sense. For every success there will be many failures, so we shouldn’t be surprised when some fail. However, the fundamentals of the token model are valuable and powerful. They allow communities to govern themselves and their economics, and to rally around a shared goal in ways that will allow open systems to flourish as never before.
Ticket fraud (literally reprinting and selling a ticket multiple times) is a real problem for events.
Although I love this use case for blockchains, there is no reason that I, as a full-time crypto investor (speculator), want to actually hold the tokens of any of the blockchain-based ticket issuance platforms on the market. Even if these platforms become widely used and process tens of billions of dollars of transactions, their underlying token mechanics are not structured so that the price of the underlying token will materially appreciate.
Consider a hypothetical ticketing platform that we’ll call Karn.
We can say that an asset has a velocity of 0 if, over the course of a year, no one buys or sells it. The lack of liquidity would cause the asset to trade at a discount to its “intrinsic” value. Assets need some velocity to achieve their full intrinsic value; the difference is known as a liquidity discount.
1) Introduce a profit-share (or buy-and-burn) mechanism: For example, the Augur ($REP) network pays REP holders for performing work for the network. REP tokens are like taxi medallions: you must pay for the right to work for the network. Specifically, REP holders must report event outcomes to resolve prediction markets. A profit-share mechanism reduces token velocity because, as the market price of the asset decreases, its yield increases. If the yield becomes too high, market participants seeking yield will buy and hold the asset, increasing its price and reducing velocity.
Also, a cash-flow stream makes a token easier to value using a traditional discounted cash flow (DCF) analysis. Note: see the Multicoin Capital analysis and valuation of Augur.
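A sketch of both ideas in this item follows: the yield floor that buying creates, and a simple perpetuity-style DCF value for a token with a profit share. All cash flows and rates below are hypothetical.

```python
# Hypothetical sketch of valuing a profit-share token two ways:
# (1) the yield implied by its market price, and (2) a simple perpetuity DCF.
# All numbers are invented for illustration.

annual_cash_flow_per_token = 2.0   # e.g. fees earned by each token doing work
discount_rate = 0.25               # steep rate reflecting the riskiness of the asset

# (1) Yield rises as price falls, which is what attracts yield-seeking buyers.
for market_price in [100.0, 50.0, 10.0]:
    implied_yield = annual_cash_flow_per_token / market_price
    print(f"price ${market_price:>6.2f} -> implied yield {implied_yield:6.1%}")

# (2) A no-growth perpetuity DCF value for the same cash-flow stream:
dcf_value = annual_cash_flow_per_token / discount_rate
print(f"perpetuity DCF value at a {discount_rate:.0%} discount rate: ${dcf_value:.2f}")
```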
The burn-and-mint dynamic is possible because Factom is its own chain. ERC20 tokens do not have network validators who can be compensated via inflation. Burn-and-mint is possible for ERC20 tokens, albeit trickier: there’s no generic, obvious set of network participants who should receive the tokens generated by inflation. Also, more technically, inflation is tricky to implement because smart contracts cannot run as daemons that auto-inflate; they must be triggered.
Another example: YouNow is rolling out a proprietary in-app cryptocurrency called PROPS that allows users to tip content creators during live video broadcasts. YouNow also has a “discover” tab, and the service is more likely to rank a creator’s content highly if the creator holds PROPS. This creates an interesting dynamic in which content creators are paid in PROPS but need to convert to fiat to pay the bills; on the other hand, they want to hoard tokens to become more discoverable, fueling more attention and generating more tip-based income.
5) Become a store of value: This is by far the most difficult to achieve, as it’s not a function of a specific design mechanic but rather a question of broader technical viability and market acceptance. If people genuinely come to believe in a token as a store of value, they will be far more willing to hold onto excess tokens rather than sell them for something else.
As with organisms, the most successful blockchains will be those that can best adapt to their environments. Assuming these systems need to evolve to survive, initial design is important, but over a long enough timeline, the mechanisms for change matter most.
As a result, I believe governance is the most vital problem in the space. Other fundamental problems, like scalability, can be solved by using governance to set the right incentives for people to solve them. Yet little research has gone into governance and it remains poorly understood.
Evolutionary tree of life.
Satoshi showed us the immense power of releasing a blockchain-based incentive structure into the world. A 9-page white paper spawned a $150bn cryptocurrency, a vast computer network, and a diverse ecosystem of developers, users, and companies. This was arguably one of the highest leverage actions in human history. It showed the power of blockchains as networks that can connect everyone and bootstrap themselves into existence if well constructed.
We are increasingly living in digital networks: time spent on them in the US, over half of it on internet-connected devices, is growing 11% each year. However, these networks are highly centralized (Facebook, Google, Apple, Twitter). In the current model, all of the profit and power of a network is within one company, and you’re either inside or you’re outside. It’s important that the networks we live in serve our best interests. With blockchains emerging as the new global infrastructure, we have the opportunity to create vastly different power structures and program the future we want for ourselves.
Blockchains are unique because they 1) allow thousands of governance systems and monetary policies to be tried at the speed of software, with 2) in some cases, much lower consequences of failure. As a result, there will be a Cambrian explosion of economic and governance designs where many approaches are tried in parallel at hyperspeed. To be clear, I am including economic design and monetary policy (said another way, incentive structure) in governance because, like other aspects of the system, they can be modified as time passes.
Many of these attempts will be spectacular failures. With millions of algorithmic central banks we will have millions of crypto monetary experiments, and plenty of blow-ups among them. Through this process, blockchains may teach us more about governance in the next 10 years than we have learned from the “real world” in the last 100 years.
Each group in the system has its own incentives. Those incentives are not always 100% aligned with all other groups in the system. Groups will propose changes over time which are advantageous to themselves. Organisms are biased towards their own survival. This commonly manifests in changes to the reward structure, monetary policy, or balances of power.
Mostly off-chain. Developers coordinate through the Bitcoin Improvement Proposals (BIP) process. Miners can coordinate on-chain in the sense that they are creating the chain itself.
Bitcoin as branches of the US government.
There are risks to the system caused by asymmetries in incentives. Miners push for changes which increase future cumulative transaction fees, while developers don’t care as long as the value of Bitcoin keeps going up. Developers’ direct economic incentives are weak. New developers have little incentive to work on Bitcoin because there is no direct way to earn money by doing it. As a result, they often work on new projects, either by creating Ethereum tokens, entirely new chains, or companies. With no new blood entering, the perception and reality of early developers as the most knowledgeable and experienced only grows. This results in a self-reinforcing cycle of more power becoming concentrated in a small group of early core developers, slower technological advancement, and conservatism. Developers are at risk of being bribed since they have a lot of power but weak economic incentives. Some companies and institutions have sponsored developers, but with limited impact thus far.
Similarly, asymmetries in the ability to coordinate give miners disproportionate power. Communication amongst miners is easier because they are a small and concentrated group. Since mining is a business with economies of scale, we’d expect a continued trend towards concentration in mining and an even greater coordination advantage. Miners can also gain disproportionate power by bribing developers or hiring their own. Finally, the checks and balances system of Bitcoin relies on some level of transparency: for example, users becoming aware of a single miner gaining more than 51% of the hashing power, or developers having some level of independence. Yet a miner who was able to gain more than 51% of hashing power would be incented to remain anonymous. Rather than sparking a specific catastrophic event, this would cause an unknowing descent into a centralized world of control through censorship and asset freezing.
Dynamics will change as Ethereum moves to proof of stake. The power of miners will be replaced by anyone who holds a sufficient amount of Ether to run a virtual miner (a “validator”). This is especially true as pooled staking solutions will allow even the smallest Ether holders to participate, flattening the distinction between a miner and a user, and potentially reducing the biggest centralization risk in Bitcoin.
The incentives of core developers remain the same. Coordination around challenging issues has been swifter and smoother than in Bitcoin to date. This is due to 1) a culture more open to change, and 2) direction from Vitalik, who is widely trusted in the community.
Current weaknesses in the model include 1) over-reliance on its creator (Vitalik) and 2) like Bitcoin, limited ways to incentivize core development, forcing more projects to create tokens to support themselves. Vitalik is gradually reducing his own influence, which will be a delicate process.
In Tezos, anyone can propose a protocol change in the form of a code update. An on-chain vote occurs and, if it passes, the update makes its way onto a test network. After a period of time on the test network, a confirmation vote occurs, at which point the change goes live on the main network. They call this concept a “self-amending ledger”.
Such a system is interesting because it shifts power towards users and away from the more centralized group of developers and miners. On the developer side, anyone can submit a change, and, most importantly, everyone has an economic incentive to do so: contributions are rewarded by the community with newly minted tokens. This shifts from the current Bitcoin and Ethereum dynamics, where a new developer has little incentive to evolve the protocol and power tends to concentrate amongst the existing developers, to one where everyone has equal earning power.
One step further would be a system which allows on-chain votes to change not only the rules of the system, as in Tezos, but the ledger itself. In other words, if something happens that tokenholders do not like (ex: a hack, a marketplace selling drugs), they can roll back or edit the ledger in addition to the rules of governance themselves. DFINITY, an in-development blockchain, is taking this approach. Proponents of this system point to events like the hard fork caused by The DAO hack and suggest such events would be much smoother if everyone could just vote to undo them. On the flip side, this system allows direct censorship and people’s tokens to be forcibly taken. As we saw with Ethereum’s hard fork to revert The DAO hack, this is possible with existing blockchains, but it requires higher friction through off-chain coordination and hard forking instead of on-chain coordination with no forking.
On-chain governance is a double-edged sword. On the upside, it helps make sure a process is consistently followed, which can increase coordination and fairness, and it allows for quicker decision making. On the downside, it’s risky because the metasystem becomes harder to change once instituted. Like anything put directly into code, it can be exploited more quickly and easily if flawed. Vlad Zamfir, a principal architect of Ethereum’s proof of stake, argues that “the risks far outweigh the rewards” and that such systems “represent an extremely risky proposition”.
For some use cases, tending towards being static may be good. This may be especially true for a store of value. Perhaps lower level protocols should lean toward stasis and conservatism ("measure twice and cut once") while higher level protocols should be more flexible ("move fast and break things"). Like established companies, some of the more established protocols may be able to watch what new protocols do and adopt techniques which seem to be working. This seems especially true of Ethereum, which has shown a willingness to hard fork and the ability to maintain network value through them. Consequently, I’d expect to see the most innovation in the next few years from Ethereum tokens and entirely new chains.
It’s probable we haven’t found the best governance systems yet, which means a more general system that allows many different methods to be tried is valuable in its own right.
In futarchy, society defines its values and then prediction markets are used to decide which actions will maximize those values. Said another way: vote on values, bet on beliefs. It was proposed in 2000 by Robin Hanson, an economics professor at George Mason University.
Ralph Merkle has a particularly eye-opening proposal for a blockchain implementation of futarchy. In his proposal, every citizen is polled once a year and asked the question “how satisfied were you this year on a scale of 0 to 1?”. Averaged together, these give an overall societal welfare score. A prediction market on this welfare score is developed for each of the next 100 years, where traders can speculate on the welfare score for any year into the future. An overall future welfare score is then created by averaging the scores for the next 100 years, weighting earlier years more than later years. When a new bill is introduced, there is a 1-week period during which markets speculate on whether the overall welfare score will go up or down if the bill is passed. If the bill is passed, the traders who bet on the overall welfare going up now own the overall welfare contracts they bet on. They will make money if they are right and lose money if they are wrong.
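A sketch of that aggregate welfare score follows. The exponential weighting and the sample forecasts are assumptions; the proposal only specifies that nearer years count for more.

```python
# Sketch of the aggregate welfare score described above: average the predicted
# yearly satisfaction scores (each between 0 and 1) over the next 100 years,
# weighting earlier years more heavily. The exponential decay is an assumption.

def overall_welfare(yearly_scores, decay: float = 0.97) -> float:
    weights = [decay ** year for year in range(len(yearly_scores))]
    weighted = sum(w * s for w, s in zip(weights, yearly_scores))
    return weighted / sum(weights)

# Hypothetical market forecasts: a policy that helps soon but hurts later...
front_loaded = [0.8] * 20 + [0.5] * 80
# ...versus one that sacrifices the near term for the long term.
back_loaded = [0.5] * 20 + [0.8] * 80

print(f"front-loaded policy score: {overall_welfare(front_loaded):.3f}")
print(f"back-loaded policy score:  {overall_welfare(back_loaded):.3f}")
```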
Example of futarchy for deciding whether or not to fire a CEO, where the value to maximize is revenue.
In futarchy the devil is in the implementation details. Unresolved issues include the governance metaproblem of how to decide on the societal value(s) to maximize in the first place, and how to make sure people aren’t incented to report an extreme satisfaction score of 0 or 1 just to swing policy.
Setting goal functions is both important and tricky, as there are always unforeseen consequences. In the case of capitalism, this can manifest in rising wealth inequality and environmental externalities. In the case of artificial intelligence, it can manifest in a system rapidly maximizing something at the unexpected expense of everything else, commonly illustrated through the example of a paperclip maximizer which destroys everything to build as many paperclips as possible. These are serious concerns, as I believe these systems will become enormously powerful by using tokenized incentives for everyone to feed them the best data and algorithms. If this is true, blockchain governance is the largest determinant of our future trajectory as a species. More on this in a future post.
Liquid democracy is a system where everyone has the ability to vote themselves, to delegate their vote to someone else, and to remove that delegation at any time. In the U.S. we do not have a liquid democracy because we cannot vote on many bills directly (our representatives do that for us) and, once we elect a representative, they are typically in office for 4 years.
Quadratic voting is a system of buying votes in which the total cost grows with the square of the number of votes bought: money buys votes, but with strongly diminishing returns. Vitalik has described a variant of this he calls “quadratic coin lock voting”, where N coins let you make N * k votes by locking up those coins for a time period of k². This is a nice modification because it aligns incentives over time: more voting power requires living with your decisions for longer. In a tokenized world with little friction to entering or leaving a community, this is especially important.
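The coin-lock rule above is easy to state in code; the sketch below just restates it (N coins locked for k² periods grant N * k votes).

```python
# Quadratic coin lock voting as described above: locking N coins for k^2 time
# periods grants N * k votes, so additional voting power costs quadratically more time.

def votes_for_lock(coins: float, lock_periods: int) -> float:
    """Votes granted for locking `coins` for `lock_periods` periods, where k^2 = lock_periods."""
    k = lock_periods ** 0.5
    return coins * k

for periods in [1, 4, 16, 64]:
    print(f"lock 1,000 coins for {periods:>2} periods -> {votes_for_lock(1_000, periods):>8,.0f} votes")
```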
Blockchain-based identity systems can help enable one person = one vote systems. However, anonymity is likely to be preserved in most cryptocurrencies, and identity gives each coin its own unique history, which can be subjectively judged to be more or less clean than that of another coin, causing fungibility to break down. One potential approach is a balance between identity and money: a fully verified identity gets 100% of the voting power of their money, a partially verified identity gets 50%, and an entirely anonymous identity gets 25%.
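A sketch of that identity-weighted balance; the 100/50/25 percent tiers come from the sentence above, and everything else is scaffolding for illustration.

```python
# Sketch of identity-weighted voting as suggested above: fully verified identities
# vote with 100% of their tokens' weight, partially verified with 50%, anonymous with 25%.

WEIGHTS = {"verified": 1.00, "partial": 0.50, "anonymous": 0.25}

def voting_power(token_balance: float, identity_tier: str) -> float:
    return token_balance * WEIGHTS[identity_tier]

ballots = [("alice", 1_000, "verified"), ("bob", 1_000, "partial"), ("carol", 1_000, "anonymous")]
for name, balance, tier in ballots:
    print(f"{name:<6} ({tier:<9}) -> {voting_power(balance, tier):,.0f} votes")
```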
Simple off-chain futures markets have already shown themselves to be a powerful tool. In the recently proposed and contentious SegWit2x Bitcoin fork, futures markets speculated on the expected value of SegWit2x vs. non-SegWit2x chains. The markets consistently valued a SegWit2x chain at a fraction of a non-SegWit2x chain for 3 weeks. Supporters of SegWit2x then called off their forking efforts because they felt they had “not built sufficient consensus”. While it’s hard to know exactly what caused them to reach this conclusion, the futures markets were a strong indicator of the lack of support.
Other tools are being built for governance and standardization at different layers. Standard smart contract libraries are commonly being used as the base of Ethereum token systems, covering things like token sale mechanics, token vesting, and access controls to the treasury of a project. Other projects are trying to create standard implementations of these systems in the same way Delaware C corporations implement corporations in a standard way.
It’s worth noting that forking is always an option. Applying Albert Hirschman’s classic paradigm for effecting change in a system: voice is governance, weak exit is selling your coins, and strong exit is forking.
The ability to fork significantly reduces lock-in and increases diversity, allowing many more approaches to be tried than we would ever see in modern governments, central banks, or Web 2.0 companies. As in biological speciation, forking is also beneficial when two niche chains can more effectively serve distinct needs than one chain ineffectively serving both sets of needs.
However, it is still valuable to avoid hard forks when possible. A hard fork is a non-backwards-compatible change. Downsides of hard forks include:
For developers: try experimenting with these governance mechanisms. And if you are creating a new token using a simple 1 token = 1 vote system, consider quadratic coin lock voting as a low-risk, high-return alternative.
Like organisms, the ability of a blockchain to succeed over time is based on its ability to evolve. This evolution will bring about many decisions on direction, and it is the governance around those decisions which most strongly determines the outcome of the system. If programming incentives into the system is important, the governance of the system itself is most important.
We are birthing into existence systems which transcend us. In the same way that democracy and capitalism as systems determine so much of the emergent behavior around us, blockchains will do the same with even greater reach. These systems are organisms which take on lives of their own and are more concerned with their own survival than with the individuals who comprise them. As technology stretches these systems to their limits, the implications become more pronounced. So we’d be wise to carefully consider the structure of these systems while we can. Like any new powerful technology, blockchains are a tool that can go in many different directions. Used well, we can create a world with greater prosperity and freedom. Used poorly, we can create systems which lead us to places we do not want to go.
https://dirtroads.substack.com/p/-43-first-principles-of-crypto-governance?r=k87cd&s=w&utm_campaign=post&utm_medium=web
https://www.kanaandkatana.com/valuation-depot-contents/2019/4/11/the-case-for-a-small-allocation-to-bitcoin
Wences Casares, Apr 2019
Summary
Bitcoin is a fascinating experiment but it is still just that: an experiment. As such it still has a chance of failing and becoming worthless. In my (subjective) opinion the chances of Bitcoin failing are at least 20%. But after 10 years of working well without interruption, with more than 60 million holders, adding more than 1 million new holders per month and moving more than $1 billion per day worldwide, it has a good chance of succeeding. In my (subjective) opinion those chances of succeeding are at least 50%. If Bitcoin does succeed, 1 Bitcoin may be worth more than $1 million in 7 to 10 years. That is 250 times what it is worth today (at the time of writing the price of Bitcoin is ~ $4,000).
I suggest that a $10 million portfolio should invest at most $100,000 in Bitcoin (up to 1% but not more as the risk of losing this investment is high). If Bitcoin fails, this portfolio will lose at most $100,000 or 1% of its value over 3 to 5 years, which most portfolios can bear. But if Bitcoin succeeds, in 7 to 10 years those $100,000 may be worth more than $25 million, more than twice the value of the entire initial portfolio.
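The sizing arithmetic above can be checked with a short sketch; the allocation, upside multiple, and probabilities are the author’s own (subjective) figures from the text, and the treatment of the remaining outcomes is a simplification.

```python
# Sketch of the asymmetric-payoff arithmetic above, using the author's own
# (subjective) figures: a ~1% allocation and a ~250x upside if Bitcoin succeeds.

portfolio = 10_000_000          # $10M portfolio
allocation = 100_000            # 1% position in Bitcoin
upside_multiple = 250           # ~$1M per BTC vs. ~$4,000 at the time of writing
p_success = 0.50                # author's stated chance Bitcoin succeeds
p_failure = 0.20                # author's stated chance Bitcoin fails outright

loss_if_fails = allocation                        # at most 1% of the portfolio
value_if_succeeds = allocation * upside_multiple  # $25M, more than 2x the whole portfolio

# Simplification: treat the remaining probability mass as roughly breaking even.
expected_value = (p_success * value_if_succeeds
                  + p_failure * 0
                  + (1 - p_success - p_failure) * allocation)

print(f"worst case loss: ${loss_if_fails:,} ({loss_if_fails / portfolio:.0%} of portfolio)")
print(f"success case:    ${value_if_succeeds:,}")
print(f"naive expected value of the position: ${expected_value:,.0f}")
```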
In today’s world where every asset seems priced for perfection, it is hard, if not impossible, to find an asset that is so mispriced and where the possible outcomes are so asymmetrical. Bitcoin offers a unique opportunity for a non-material exposure to produce a material outcome.
It would be irresponsible to have an exposure to Bitcoin that one cannot afford to lose because the risk of losing the principal is very real. But it would be almost as irresponsible to not have any exposure at all.
What is interesting about the Bitcoin Blockchain?
Throughout this essay I refer to the “Bitcoin Blockchain” when I am referring to the Bitcoin platform as a whole, including the Bitcoin Blockchain and the Bitcoin currency. Many different systems for different use cases may one day run on top of the Bitcoin Blockchain. When I refer to “Bitcoin” I am referring to Bitcoin the currency, that can be bought, sold, sent, received, held, etc. You can think of the Bitcoin currency as the first system to run on top of the Bitcoin Blockchain.
The current state of the Bitcoin Blockchain is similar to the state of the Internet in 1992. Back then the Internet was very nascent and experimental. Just as in the early days of the Internet, there are many bold claims about how the Bitcoin Blockchain will revolutionize the world and solve many problems, and many of these claims are exaggerated or wrong. Even though right now most of us feel like we do not fully understand the Bitcoin Blockchain, over time we will all understand it as well and as intuitively as we understand the Internet today. You do not need to know the technical underbelly of the Internet to understand it and, similarly, you do not need to know the technical intricacies of the Bitcoin Blockchain to understand it. If the Bitcoin Blockchain succeeds, the investors who develop this understanding and intuition earlier will have an advantage over the investors who take longer to do so.
Understanding the Bitcoin Blockchain’s first principles will allow you to form your own judgment about its potential applications without having to trust any expert. To understand those first principles, let’s look at what changed when the Bitcoin Blockchain first started running in January 2009. All of the Bitcoin Blockchain’s separate components (public key cryptography, distributed databases, open databases, tokens and proof of work) existed many years before Bitcoin went live. So what changed when Bitcoin went live? What was new and potentially revolutionary? The only thing that changed is that all of those components were combined in a new, creative and intelligent way to create the first potentially sovereign computer platform. Up until that moment, all computer platforms belonged to a person, to a company or to a government, and those platforms had to obey the will of their owners and the rules of the jurisdiction where they resided. A sovereign only obeys its own rules; no one can impose rules on a sovereign. Kings and queens used to be sovereign, then nation states became sovereign and now, for the first time, a humble computer platform has the aspiration to be sovereign. That is potentially revolutionary.
The Bitcoin Blockchain is sovereign in that no one can change the transactions that already exist in its database and nobody can keep the system from accepting new transactions.
The main resources securing the Bitcoin Blockchain’s sovereignty are the Bitcoin miners and the Bitcoin nodes. If my laptop were the only computer mining Bitcoin in the world and also the only Bitcoin node in the world, the Bitcoin Blockchain would not be a sovereign platform; anyone who used it would simply be using my platform and trusting me. The Bitcoin miners and the Bitcoin nodes make sure that each transaction is valid, that new bitcoins are not being created out of thin air, and so on. The more miners and nodes that join the Bitcoin network, the more sovereign the Bitcoin Blockchain is.
In the world of crypto you see the word “decentralized” a lot, often hailed as an end in itself when in reality decentralization is the means by which the Bitcoin Blockchain achieves the end goal of sovereignty.
Today the Bitcoin mining network draws more than 5 GW of electricity, roughly the output of the largest hydroelectric dam in the United States. This exorbitant electricity consumption is often cited as a criticism of Bitcoin because of its environmental impact. I believe those criticisms are misplaced: the Bitcoin Blockchain’s value to society is proportional to its electricity consumption. If the Bitcoin Blockchain did not consume any electricity it would not be sovereign, and it would be worthless. Only if you believe that society gets no value from having a sovereign platform can you conclude that the Bitcoin Blockchain’s electricity consumption is an enormous waste.
Bitcoin miners secure the Bitcoin Blockchain because they get paid in bitcoins to do so. The Bitcoin Blockchain is secured, to an important degree, by the bitcoins that the miners earn. If you were to remove the bitcoins, most miners would stop mining and, therefore, the Bitcoin Blockchain would not be very robust and not very sovereign. In corporate circles, especially in financial institutions, it has become fashionable to say “I am interested in the Blockchain but not in Bitcoin”, which is the same as saying “I am interested in the web but not interested in the Internet” (remember intranets?), not understanding that the web could not exist without the Internet. The only innovation of the Blockchain is its sovereignty, the only sovereign Blockchain so far is the Bitcoin Blockchain, and the fuel that keeps it sovereign is the Bitcoin currency. It is like a boa eating its own tail.
If a group of people wanted to take away the Bitcoin Blockchain’s sovereignty today, they would not only need an extraordinary amount of capital and the capacity to develop specialized mining hardware in very large quantities, but they would also need access to the equivalent of the United States’ largest hydroelectric dam for an extended period of time. That would be hard to do, but not impossible. Every day that goes by it gets even harder to “break” the Bitcoin Blockchain’s sovereignty. That sovereignty has been attacked in the past (in fact, one of those attacks found me on the wrong side of history, and that is how I painfully learned many of these lessons, but that’s another story...) and so far it has always survived intact. We can expect the Bitcoin Blockchain’s sovereignty to come under attack from more and more resourceful bad actors, coalitions of bad actors, or even nation states eventually. Only time will tell if Bitcoin is truly sovereign or not.
Where can a sovereign platform add value?
It is a lot easier to see where the Bitcoin Blockchain will NOT add any value. For any Blockchain to add value it has to be the ultimate arbiter of truth: nothing should be able to contest it or change it. For any use case in which the Blockchain information can be contested or changed by a government, by a registrar of deeds, by a court, by the police, by the SEC or by any other authority, it does not make sense to use a Blockchain. Claims that the Blockchain can solve property titles, securities settlement, supply chain management, the authenticity of works of art and many other similar cases are misplaced. It is true that the systems we use today in all of those cases are old, antiquated and inefficient. And it is true that all of those cases involve many stakeholders that use different data formats and transaction protocols that are often proprietary, but all of those problems would be better solved if those stakeholders agreed to use open standards and better technology. Most often the word “Blockchain” is being waved frantically by consultants who want to scare their corporate customers into buying new technology projects; by executives at those corporations who do not yet understand the Blockchain but understand that they may get the budget they want if they say their project is using “Blockchain”; or by entrepreneurs who think they are more likely to get the funding or press coverage they want if they add the word “Blockchain” to whatever they are doing.
So, where does a sovereign platform add value? As an example, an identity system may benefit from a sovereign platform. We would rather not keep all of our identity information (full name, social security number, date of birth, names of our parents, spouses and kids, our address, passport information, payment information, etc.) on our phone, which can be easily hacked, but we also do not want to give all that information to Google, Facebook or our government. A sovereign system that no one can corrupt or control, that keeps our information safe and asks us every time someone wants a piece of it, may make sense. With this example we are simply trying to be creative and guess one possible use case; I am sure we will be surprised by creative and revolutionary entrepreneurs coming up with use cases that take full advantage of a sovereign platform and that we cannot imagine right now.
But there is a use case that makes a lot of sense and, in fact, it is already working quite well. That is to use this sovereign platform to run a global system of value and settlement which is what Bitcoin, the currency, may become. Similar to what gold was for 2,000 years and similar to what the US dollar has been for the last 70 years. Bitcoin is potentially superior to gold and to the US dollar as a global non-political standard of value and settlement because there will never be more than 21 million bitcoins and because Bitcoin is open and uncensorable. There will never be more than 21 million bitcoins because it runs on a sovereign platform so no one can change or inflate that number. Additionally, Bitcoin is uncensorable because it runs on a sovereign platform so no one can change the transactions that already exist in the system and no one can keep the system from accepting new transactions. This allows for unprecedented economic freedom in the same way the internet allowed for unprecedented freedom of information. Gold has the advantage that it is tangible and many people (especially older ones, who tend to have more capital) strongly prefer something that they can touch. Gold also has in its favor that it has been around for over 2,000 years, and it may be impossible for Bitcoin to match that history and reputation. The dollar has the advantage that it is already easily understood and accepted globally and it is a platform with remarkable network effects. These qualities may be too much for Bitcoin to overcome. Or it may be that we collectively come to appreciate the advantages of a digital unit that cannot be inflated or censored. Only time will tell.
Bitcoin is not a productive asset: it does not produce earnings or dividends and it does not generate interest. Bitcoin also has no intrinsic value. Bitcoin is simply money, and most forms of good money have no intrinsic value. Gold, the US dollar and national currencies do not have any intrinsic value either, but because they have had monetary value for a long time most people perceive them as being intrinsically valuable, which is a big advantage. The main hurdle Bitcoin has to clear to become successful is to develop a similar widespread social perception of value, and achieving that is quite an ambitious goal.
What does a world in which Bitcoin succeeded look like?
If Bitcoin succeeds it will most likely not replace any national currency. It may be a supranational currency that exists on top of all national currencies. If Bitcoin succeeds it may be a global non-political standard of value and settlement.
The world already has a global non-political standard of measure in the meter, and a global non-political standard of weight in the kilo. Could you imagine a world in which we changed the length of the meter or the weight of the kilo regularly according to political considerations? Yet that is what we are doing with our standard of value. Today we use the US dollar as a global standard of value which is much better than nothing but quite imperfect: it has lost significant value since inception, it is hard to know how many dollars will be outstanding in the future and, increasingly, the ability or inability to use it as a platform depends on political considerations. The world would be much better off with a global non-political standard of value.
The same is true for a global non-political standard of settlement. Only banks can participate in most settlement networks (like SWIFT, Fedwire, ACH in the US, CHAPS in the UK, SEPA in Europe, Visa and Mastercard, etc). Individuals, corporations and governments can only access these settlement networks through banks. Using these settlement networks takes time (sometimes days), the process is opaque and costly and, increasingly, the ability to use them is determined by political considerations. Imagine an open platform where any individual, corporation or government could settle with any other individual, corporation or government anywhere in the world, in real time and for free, 24/7 and 365 days of the year. This would do for money what the Internet did for information.
In a world in which Bitcoin succeeds all currencies may be quoted in satoshis (the smallest fraction of a Bitcoin). When your granddaughter asks what is the price of the New Zealand dollar she may receive an answer in satoshis: the New Zealand dollar is 72 satoshis today. And the price of the Turkish Lira? 21 satoshis today. The US dollar? 107 satoshis today. A barrel of oil? 5,600 satoshis today. Global GDP? 97,356,765 bitcoins. The GDP of Indonesia? 1,417,007 bitcoins. The reserves of the South African Reserve Bank? 53,230 bitcoins. You get the idea. Then all of these values would be easily comparable across time and across geographies.
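The conversion behind those quotes is simple; the sketch below just expresses a dollar price in satoshis given a (hypothetical) dollar price for Bitcoin.

```python
# Sketch of quoting prices in satoshis: 1 BTC = 100,000,000 satoshis, so an asset's
# satoshi price is its dollar price divided by the dollar price of one satoshi.
# The Bitcoin price below is hypothetical, chosen only to illustrate the quotes above.

SATS_PER_BTC = 100_000_000

def price_in_sats(price_usd: float, btc_price_usd: float) -> float:
    usd_per_sat = btc_price_usd / SATS_PER_BTC
    return price_usd / usd_per_sat

btc_price_usd = 930_000   # hypothetical future Bitcoin price
for name, usd in [("New Zealand dollar", 0.67), ("Turkish lira", 0.20), ("US dollar", 1.00)]:
    print(f"{name:<20} = {price_in_sats(usd, btc_price_usd):.0f} satoshis")
```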
When your granddaughter asks “Grandpa, how did you guys keep track of all these things when you did not have Bitcoin?” your answer will be “We used the US dollar”. Then she may ask, “Really? But isn’t that the currency of the United States?” after you say yes she may ask “And how did you keep track of the US dollar?” to which you will say “Well… mostly in Euros, sometimes in Yen, Swiss Francs or other currencies depending on what we were talking about”. She may think we were weird.
Why not another cryptocurrency instead of Bitcoin?
There are about 1,000 cryptocurrencies that have at least one transaction a day. So why Bitcoin and not any one of those other ones? Over 60 million people own Bitcoin and over 1 million people become new owners every month. The other 1,000 cryptocurrencies have less than 5 million owners combined, so Bitcoin will add more users in the next 5 months than those 1,000 cryptocurrencies added in their combined history. Bitcoin is moving over $1 billion a day which is also more than all the other cryptocurrencies combined.
The most important metric of all, though, is how much we can trust these platforms, or how sovereign they are. The measure of how sovereign these platforms are is the square of the computing power they have. If we use electricity consumption as a proxy for the computing power each of these platforms has, all of those 1,000 cryptocurrencies combined have less than 1% of the Bitcoin Blockchain’s processing (mining) power, so none of them is (yet) really sovereign, and in many cases their code is controlled by a person or a small group of people. New technologies may achieve sovereignty without relying on processing power and that may seriously challenge the Bitcoin Blockchain. But if those technologies do not get developed, or it takes too long, it may be difficult to unseat the Bitcoin Blockchain.
The Bitcoin Blockchain is an open protocol, not a company. The history of protocols is very different from the history of companies. Among companies there is a lot of change, disruption and churn (Microsoft-Apple, eBay-Amazon, Altavista-Google, MySpace-Facebook, etc.). Once a protocol gets established, however, it almost never changes. For example, we are using IP (Internet Protocol, or just “the Internet” colloquially) for almost all transport of data (until the late 90s Cisco routers used to route dozens of protocols; today they only route IP). We are using only one web protocol and only one email protocol. The email protocol, for example, is quite lousy. At the protocol level there is no way for me to know if you received my email, much less if you read it; there is no way for you to verify my identity when you receive my email; there is no way to handle spam; and many, many other things could be fixed at the protocol level. I am sure some people have already developed much better email protocols, but we never heard about them and most likely we never will: once a protocol gets established it becomes the only protocol for that use case and it is not possible to displace it with a better protocol. Right now it looks like the standard protocol for a sovereign platform will be the Bitcoin Blockchain.
Many interesting technologies and applications are being tested with other cryptocurrencies and other Blockchains and, if they are successful, they may be implemented on top of the Bitcoin Blockchain. It is not efficient to invest massive amounts of new hardware and electricity to replicate sovereignty when we already have a solid and robust sovereign Bitcoin Blockchain; it is more efficient to simply build on top of it. For example, the Bitcoin Blockchain is limited in that it can only process approximately 3,000 transactions every 10 minutes; you have to wait 10 minutes for a transaction to be recorded in the Blockchain and up to 1 hour if you want to make sure it is irreversible; and you have to pay anywhere from 5 cents to 50 cents in transaction fees for the miners to process your transaction. The Lightning Network takes advantage of the robustness of the Bitcoin Blockchain and works as a “Layer 2” solution on top of it, enabling thousands of transactions per second of as little as 1 satoshi ($0.00004), for free and in real time. Other early examples of Layer 2 solutions that work on top of the Bitcoin Blockchain include RSK, which enables the full functionality of Ethereum on top of the much more robust Bitcoin Blockchain, and Liquid, an open source wholesale settlement network developed by Blockstream that operates on top of the Bitcoin Blockchain. There are many more examples of technologies being developed to take advantage of the sovereignty and robustness of the Bitcoin Blockchain and enhance its capabilities by building on top of it.
How can Bitcoin fail?
Bitcoin can fail in many different ways. It could be taken over by a bad actor. It could be displaced by a better platform. It could be hacked. And Bitcoin can probably fail in many ways that we cannot imagine yet. Because Bitcoin does not have any intrinsic value, and because its value depends on a social consensus, which is a sort of collective delusion, in my opinion the most likely way in which Bitcoin could fail is a price panic. If we all decide at the same time that we think Bitcoin is worthless, then it will be worthless; it is a self-fulfilling prophecy. If the price of Bitcoin were to plummet to zero or near zero, even if the platform remained intact, its reputation would suffer immensely and it could take a generation to rebuild that credibility. This could happen if people buy amounts of Bitcoin they cannot afford to lose, for example if people invest their retirement funds or their kids' college funds in Bitcoin, and as the price goes down they are forced to sell, pushing the price further down and forcing others to sell. So, in my opinion, the biggest risk to Bitcoin is people investing amounts they cannot afford to lose.
Most of the capital invested in Bitcoin today seems to be capital that people can afford to lose. That is not because people are wise, or because regulators have been very effective, or because the industry has been prudent. The only reason most people today do not hold an amount of Bitcoin they cannot afford to lose is Bitcoin’s price volatility. Ironically, Bitcoin’s price volatility is the best insurance against Bitcoin’s biggest risk. If Bitcoin ever begins to be perceived as a safe asset before it has matured, and people begin to allocate capital they cannot afford to lose, we should be concerned. This happens to some degree during every Bitcoin price rally; fortunately, so far each rally has corrected without destroying Bitcoin, but one day that might not be the case.
After 10 years of Bitcoin working well without interruption, more concerning than a complete failure is a scenario in which Bitcoin does not fail but becomes irrelevant. Something similar happened to the BitTorrent protocol, which still exists but is less and less relevant as the real revolution in digital file sharing and entertainment happened through Dropbox, Spotify, Netflix, and many others. Similarly, there is a chance that Bitcoin does not fail but never becomes mainstream, that it is only used by a group of believers and fanatics and not much beyond that. That could happen because financial institutions, governments, and regulators manage to keep Bitcoin separate and ostracized from the rest of the financial world, like a non-convertible currency, but it could also happen even if financial institutions, governments, and regulators keep going on their current path of allowing Bitcoin to be fully connected to the financial world. If Bitcoin never becomes mainstream, bitcoins will still have a price, but most likely a lower one than today. In my (subjective) opinion the chance of this happening is 30%.
Bitcoin’s price action
Bitcoin launched in January 2009 but it did not have a price until July 2010, when it began to change hands informally at $0.05 per bitcoin. In November 2010 Bitcoin had its first price rally, which took the price to a peak of $0.39 and then “crashed” to $0.19. The price was at its peak of $0.39 only very briefly and the volume at prices near $0.39 was negligible. For most casual observers the rally simply took the price of Bitcoin from $0.05 to $0.19, an increase of 280%, but most of the commentary at the time focused on the Bitcoin “crash” of over 50%, from $0.39 to $0.19. This exact same story has repeated itself six times in Bitcoin’s 10-year history, and in between the rallies the price of Bitcoin has traded sideways or downward for months or years at a time. During most of Bitcoin’s 10-year history, the press has been commenting and worrying about Bitcoin’s latest “crash”. How can something that constantly crashes go from $0.05 to $4,000, you ask? If you want something to go from $0.05 to $4,000 and fool everybody into believing that it is failing, do it with as much volatility as possible.
The second Bitcoin price rally happened in February 2011 and took the price of Bitcoin over $1.00 for the first time before it “crashed” to $0.68. The third rally happened in August 2011 and took the price of Bitcoin to $29, to then “crash” to $2. The fourth rally happened in April 2013 and took the price to $230, to then “crash” to $66. The fifth rally happened in December 2013 and took the price to $1,147, to then “crash” to $177. The sixth (and currently last) rally happened in December 2017 and took the price of Bitcoin to $19,783, to then “crash” below $3,200 (and until this bear market is over we don’t know how low it may go).
The Bitcoin price rallies are the most important feature of how Bitcoin propagates: how people spread the word and how more people come to want to own it. It is a risky mechanism; so far it has worked well, but it could lead to a disaster one day. The Bitcoin price rallies are Bitcoin’s best moments, but they are also its most dangerous and vulnerable moments.
Every Bitcoin bear market is about working out the excesses of the rally. During the rally too many people buy too many bitcoins thinking that they will be able to sell them for a big gain very soon, and that usually does not happen. Imagine a fruit tree that has some good fruit and some rotten fruit. A Bitcoin bear market resembles a period in which the tree is shaken until all the rotten fruit has fallen to the ground. Every time the tree is shaken, some rotten fruit falls. The Bitcoin tree is shaken by the price going down and by the passage of time. The more the price goes down and the more time passes without another rally, the more people give up their original expectations: they sell, they adjust their exposure and their expectations. Eventually, no matter how much you shake the tree, there is no more fruit to fall, and the market may be getting ready for another rally.
If Bitcoin succeeds, it is likely that the price will go through another six of these rallies over the next 7 to 10 years. Anyone who tells you that they know what the price of Bitcoin will be next week, much less next year, is either ignorant or outright lying to you. It is not possible to know when the price will hit bottom or when the next rally will come, and the penalty for trying to time the bottom or the top and getting it wrong can be much higher than the money you were trying to save. If you decide to buy Bitcoin, simply decide how much money you can afford to lose (ideally less than 1% of your net worth), deploy it at market, all at once, and forget about it for 7 to 10 years. I have been giving this advice for 6 years and, by watching what people do with it, I can tell you that “forget about it for 7 to 10 years” is the most difficult part of the simple recipe I am proposing. This lack of discipline destroys a lot more value than you would anticipate. The price volatility rattles people and makes them trade. If the price goes down a lot they want to buy more to reduce their average cost; they buy more and now they have more than they can afford to lose, so they care even more about the price volatility. Even worse: when the price goes up 10 times they decide to sell to rebalance, because now Bitcoin represents too much of their net worth and it is too risky (it is hard to double your portfolio with a 1% exposure if you rebalance it every time it multiplies by 10). If you think this may happen to you, I suggest you invest in two buckets: one bucket that you will not trade for 7 to 10 years, and another bucket that you will trade as much as you want (but be responsible and make sure that both buckets combined add up to an amount you can afford to lose).
Why do I believe 1 Bitcoin may be worth $1 million in 7 to 10 years?
How much a Bitcoin may be worth if Bitcoin succeeds is pure speculation. Today Bitcoin is worth a total of ~ $70 billion (~ 17.5 million bitcoins in circulation x ~ $4,000 per Bitcoin). If Bitcoin ever becomes the world’s standard of value and settlement, it may have to be worth more than gold and less than the world’s narrow supply of money. All the gold that has ever been mined is worth ~ $7 trillion, and the world’s narrow supply of money is ~ $40 trillion. Dividing those amounts by the eventual supply of 21 million coins: if Bitcoin is ever worth as much as gold, each Bitcoin would be worth ~ $300,000, and if Bitcoin is ever worth as much as the world’s narrow supply of money, each Bitcoin would be worth ~ $2 million.
My preferred way of guessing how the price of Bitcoin may evolve is much more prosaic. I have noticed over time that Bitcoin’s total market value fluctuates around ~ $7,000 x the number of people who own bitcoins. If that relationship holds and 3 billion people ever own Bitcoin, it would be worth ~ $21 trillion (~ $7,000 x 3 billion), or roughly $1 million per Bitcoin ($21 trillion divided by 21 million coins).
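To make the arithmetic above explicit, here is a minimal sketch in Python. The dollar figures are the essay's own rough estimates, not market data, and the 21 million figure is Bitcoin's eventual maximum supply.

```python
# Back-of-envelope Bitcoin valuation scenarios from the essay above.
# All dollar figures are the essay's rough estimates, not market data.

TOTAL_COINS = 21_000_000          # eventual maximum supply of bitcoins

GOLD_VALUE = 7e12                 # ~ $7 trillion: all gold ever mined
NARROW_MONEY = 40e12              # ~ $40 trillion: world's narrow money supply

print(f"Worth as much as gold:         ~${GOLD_VALUE / TOTAL_COINS:,.0f} per bitcoin")
print(f"Worth as much as narrow money: ~${NARROW_MONEY / TOTAL_COINS:,.0f} per bitcoin")

# The author's heuristic: total market value ~ $7,000 x number of holders.
VALUE_PER_HOLDER = 7_000
holders = 3e9                     # 3 billion people
market_value = VALUE_PER_HOLDER * holders
print(f"3 billion holders: ~${market_value / 1e12:,.0f} trillion in total, "
      f"~${market_value / TOTAL_COINS:,.0f} per bitcoin")
```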
In closing
This essay is focused on making the case for a small allocation to Bitcoin and, therefore, it focuses on the possible financial gain to be had if Bitcoin succeeds. But if Bitcoin does for Money what the Internet did for information the prospect of unprecedented economic freedom is much more exciting than any possible financial gain.
I grew up in Patagonia, Argentina, where my parents are sheep ranchers. Growing up I saw my family lose their entire savings three times: the first time because of an enormous devaluation, the second time because of hyperinflation, and the last time because the government confiscated all bank deposits. It seemed like every time we were recovering, a new and different economic storm would wipe us out again. My memory of these events is not economic or financial but very emotional. I remember my parents fighting about money, I remember being scared, I remember everybody around us being scared and returning to desperate, almost animal-like behavior. I also remember thinking how unfair it was that these crises hit the poor the hardest. People who had enough money to get some US dollars protected themselves that way, people who had even more money and could afford to buy a house or apartment protected themselves that way, and people who had even more money and could have a bank account abroad protected themselves that way. But the poor could not do any of those things and got hit the hardest. When I saw the emergence of the Internet I was young and idealistic and I sincerely thought the Internet was going to democratize money and fix money forever. But it has been 30 years since the Internet was created, and while it has fixed many problems, increasing economic freedom is not one of them. I was about to give up hope for the Internet to fix this problem when I ran into Bitcoin by accident. At first I was very cynical, but the more I learned about it the more curious I became; after six months of studying and using Bitcoin I decided to dedicate the rest of my career, my capital and my reputation to helping Bitcoin succeed. Nothing would make me prouder than to be able to tell my grandkids that I was part of a very large community that helped Bitcoin succeed, and that because Bitcoin succeeded, billions of people can now safely send, receive and store any form of money they want as easily as they can send or store a picture. So that what I saw happen to my parents and countless others can never happen again.
https://git.dhimmel.com/bitcoin-whitepaper/
Satoshi Nakamoto, Oct 2008
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they’ll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model. Completely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes. The cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions, and there is a broader cost in the loss of ability to make non-reversible payments for non-reversible services. With the possibility of reversal, the need for trust spreads. Merchants must be wary of their customers, hassling them for more information than they would otherwise need. A certain percentage of fraud is accepted as unavoidable. These costs and payment uncertainties can be avoided in person by using physical currency, but no mechanism exists to make payments over a communications channel without a trusted party.
What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party. Transactions that are computationally impractical to reverse would protect sellers from fraud, and routine escrow mechanisms could easily be implemented to protect buyers. In this paper, we propose a solution to the double-spending problem using a peer-to-peer distributed timestamp server to generate computational proof of the chronological order of transactions. The system is secure as long as honest nodes collectively control more CPU power than any cooperating group of attacker nodes.
We define an electronic coin as a chain of digital signatures. Each owner transfers the coin to the next by digitally signing a hash of the previous transaction and the public key of the next owner and adding these to the end of the coin. A payee can verify the signatures to verify the chain of ownership.
The problem of course is the payee can’t verify that one of the owners did not double-spend the coin. A common solution is to introduce a trusted central authority, or mint, that checks every transaction for double spending. After each transaction, the coin must be returned to the mint to issue a new coin, and only coins issued directly from the mint are trusted not to be double-spent. The problem with this solution is that the fate of the entire money system depends on the company running the mint, with every transaction having to go through them, just like a bank.
For our timestamp network, we implement the proof-of-work by incrementing a nonce in the block until a value is found that gives the block’s hash the required zero bits. Once the CPU effort has been expended to make it satisfy the proof-of-work, the block cannot be changed without redoing the work. As later blocks are chained after it, the work to change the block would include redoing all the blocks after it.
The proof-of-work also solves the problem of determining representation in majority decision making. If the majority were based on one-IP-address-one-vote, it could be subverted by anyone able to allocate many IPs. Proof-of-work is essentially one-CPU-one-vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it. If a majority of CPU power is controlled by honest nodes, the honest chain will grow the fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of the block and all blocks after it and then catch up with and surpass the work of the honest nodes. We will show later that the probability of a slower attacker catching up diminishes exponentially as subsequent blocks are added.
To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they’re generated too fast, the difficulty increases.
The steps to run the network are as follows:
New transactions are broadcast to all nodes.
Each node collects new transactions into a block.
Each node works on finding a difficult proof-of-work for its block.
When a node finds a proof-of-work, it broadcasts the block to all nodes.
Nodes accept the block only if all transactions in it are valid and not already spent.
Nodes express their acceptance of the block by working on creating the next block in the chain, using the hash of the accepted block as the previous hash.
Nodes always consider the longest chain to be the correct one and will keep working on extending it. If two nodes broadcast different versions of the next block simultaneously, some nodes may receive one or the other first. In that case, they work on the first one they received, but save the other branch in case it becomes longer. The tie will be broken when the next proof-of-work is found and one branch becomes longer; the nodes that were working on the other branch will then switch to the longer one.
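As a toy illustration of the rule just described (a sketch only; real nodes compare accumulated proof-of-work, which the whitepaper approximates as chain length), fork resolution can be written in a couple of lines of Python:

```python
def resolve_fork(chains):
    """Nodes consider the longest chain to be the correct one and keep extending it."""
    return max(chains, key=len)

# Two competing branches after a simultaneous broadcast of different next blocks:
branch_a = ["genesis", "block1", "block2a"]
branch_b = ["genesis", "block1", "block2b", "block3"]  # this branch found the next proof-of-work first
print(resolve_fork([branch_a, branch_b]))              # nodes working on branch_a switch to branch_b
```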
New transaction broadcasts do not necessarily need to reach all nodes. As long as they reach many nodes, they will get into a block before long. Block broadcasts are also tolerant of dropped messages. If a node does not receive a block, it will request it when it receives the next block and realizes it missed one.
By convention, the first transaction in a block is a special transaction that starts a new coin owned by the creator of the block. This adds an incentive for nodes to support the network, and provides a way to initially distribute coins into circulation, since there is no central authority to issue them. The steady addition of a constant amount of new coins is analogous to gold miners expending resources to add gold to circulation. In our case, it is CPU time and electricity that is expended.
The incentive can also be funded with transaction fees. If the output value of a transaction is less than its input value, the difference is a transaction fee that is added to the incentive value of the block containing the transaction. Once a predetermined number of coins have entered circulation, the incentive can transition entirely to transaction fees and be completely inflation free.
The incentive may help encourage nodes to stay honest. If a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or using it to generate new coins. He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth.
A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year. With computer systems typically selling with 2GB of RAM as of 2008, and Moore’s Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory.
It is possible to verify payments without running a full network node. A user only needs to keep a copy of the block headers of the longest proof-of-work chain, which he can get by querying network nodes until he’s convinced he has the longest chain, and obtain the Merkle branch linking the transaction to the block it’s timestamped in. He can’t check the transaction for himself, but by linking it to a place in the chain, he can see that a network node has accepted it, and blocks added after it further confirm the network has accepted it.
As such, the verification is reliable as long as honest nodes control the network, but is more vulnerable if the network is overpowered by an attacker. While network nodes can verify transactions for themselves, the simplified method can be fooled by an attacker’s fabricated transactions for as long as the attacker can continue to overpower the network. One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user’s software to download the full block and alerted transactions to confirm the inconsistency. Businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification.
Although it would be possible to handle coins individually, it would be unwieldy to make a separate transaction for every cent in a transfer. To allow value to be split and combined, transactions contain multiple inputs and outputs. Normally there will be either a single input from a larger previous transaction or multiple inputs combining smaller amounts, and at most two outputs: one for the payment, and one returning the change, if any, back to the sender.
It should be noted that fan-out, where a transaction depends on several transactions, and those transactions depend on many more, is not a problem here. There is never the need to extract a complete standalone copy of a transaction’s history.
The traditional banking model achieves a level of privacy by limiting access to information to the parties involved and the trusted third party. The necessity to announce all transactions publicly precludes this method, but privacy can still be maintained by breaking the flow of information in another place: by keeping public keys anonymous. The public can see that someone is sending an amount to someone else, but without information linking the transaction to anyone. This is similar to the level of information released by stock exchanges, where the time and size of individual trades, the “tape”, is made public, but without telling who the parties were.
As an additional firewall, a new key pair should be used for each transaction to keep them from being linked to a common owner. Some linking is still unavoidable with multi-input transactions, which necessarily reveal that their inputs were owned by the same owner. The risk is that if the owner of a key is revealed, linking could reveal other transactions that belonged to the same owner.
We consider the scenario of an attacker trying to generate an alternate chain faster than the honest chain. Even if this is accomplished, it does not throw the system open to arbitrary changes, such as creating value out of thin air or taking money that never belonged to the attacker. Nodes are not going to accept an invalid transaction as payment, and honest nodes will never accept a block containing them. An attacker can only try to change one of his own transactions to take back money he recently spent.
The race between the honest chain and an attacker chain can be characterized as a Binomial Random Walk. The success event is the honest chain being extended by one block, increasing its lead by +1, and the failure event is the attacker’s chain being extended by one block, reducing the gap by -1.
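For reference, the whitepaper analyzes this race as a Gambler's Ruin problem: with p the probability an honest node finds the next block and q the probability the attacker finds it, the probability that the attacker ever catches up from z blocks behind is

$$ q_z = \begin{cases} 1 & \text{if } p \le q \\ (q/p)^z & \text{if } p > q \end{cases} $$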
We now consider how long the recipient of a new transaction needs to wait before being sufficiently certain the sender can’t change the transaction. We assume the sender is an attacker who wants to make the recipient believe he paid him for a while, then switch it to pay back to himself after some time has passed. The receiver will be alerted when that happens, but the sender hopes it will be too late.
The receiver generates a new key pair and gives the public key to the sender shortly before signing. This prevents the sender from preparing a chain of blocks ahead of time by working on it continuously until he is lucky enough to get far enough ahead, then executing the transaction at that moment. Once the transaction is sent, the dishonest sender starts working in secret on a parallel chain containing an alternate version of his transaction.
To get the probability the attacker could still catch up now, we multiply the Poisson density for each amount of progress he could have made by the probability he could catch up from that point:
Rearranging to avoid summing the infinite tail of the distribution…
Converting to C code…
Solving for P less than 0.1%…
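The elided calculation can be reconstructed from the published whitepaper: the attacker's possible progress is weighted by a Poisson density with mean lambda = z * q / p, and the sum is rearranged so that the infinite tail does not have to be summed. A Python port of the whitepaper's routine (the original is a short C function) would look roughly like this:

```python
from math import exp

def attacker_success_probability(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of the hash power ever
    catches up from z blocks behind (after the Bitcoin whitepaper's analysis)."""
    p = 1.0 - q
    lam = z * (q / p)            # expected attacker progress while z honest blocks are found
    total = 1.0
    poisson = exp(-lam)          # Poisson density for k = 0, built up iteratively below
    for k in range(z + 1):
        if k > 0:
            poisson *= lam / k
        total -= poisson * (1 - (q / p) ** (z - k))
    return total

# e.g. with q = 0.1, the probability drops below 0.1% once z >= 5 confirmations
for z in range(7):
    print(z, attacker_success_probability(0.1, z))
```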
We have proposed a system for electronic transactions without relying on trust. We started with the usual framework of coins made from digital signatures, which provides strong control of ownership, but is incomplete without a way to prevent double-spending. To solve this, we proposed a peer-to-peer network using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power. The network is robust in its unstructured simplicity. Nodes work all at once with little coordination. They do not need to be identified, since messages are not routed to any particular place and only need to be delivered on a best effort basis. Nodes can leave and rejoin the network at will, accepting the proof-of-work chain as proof of what happened while they were gone. They vote with their CPU power, expressing their acceptance of valid blocks by working on extending them and rejecting invalid blocks by refusing to work on them. Any needed rules and incentives can be enforced with this consensus mechanism.
https://www.matthuang.com/bitcoin_for_the_open_minded_skeptic
Matt Huang, May 2020
Bitcoin has grown from idea (2008), to working system (2009), to its first real-world use at <$0.01 per coin (2010), to a global currency valued at $8K+ per coin and $150B+ in aggregate (May 2020).
Although Bitcoin is empirically one of the best investments of the past decade, it still remains controversial. Is it a new form of money? A speculative bubble? Or a bit of both?
Investors have well-established frameworks for evaluating assets like equities, credit, and real estate. But a new monetary asset such as Bitcoin appears so infrequently that no clear framework exists.
This paper outlines a simple and intuitive framework for Bitcoin as a new monetary asset.
In the course of our work, we are often in the position of explaining Bitcoin to investors and institutions approaching it for the first time. Never before have we seen more interest in Bitcoin and its potential as a digital companion to gold.
Financial crises stress the limits of existing systems and can highlight the need for new ones. This was true during the financial crisis of 2008 (out of which Bitcoin was born), and it is perhaps more true today with the unprecedented levels of monetary and fiscal stimulus being pursued by governments worldwide.
There has been no shortage of writing about Bitcoin over the past 11 years. This paper does not claim any novel insight. Instead, it is a summary of the conversation we often have with investors seeking to understand Bitcoin for the first time.
“The two greatest inventions of the human mind are Writing and Money — the common language of intelligence and the common language of self-interest.”
—Mirabeau
Money is an old and complex idea. Historically, it has taken many forms: from decorative axes and cowry shells to precious metals and representative paper. The last major shift was arguably in the early 1970s with the end of the US gold standard and the beginning of the modern fiat currency system.
We can think of money as a competitive market like any other. Gold dominated for centuries not by accident but by possessing important features such as being scarce and unforgeable. Today, fiat currencies dominate largely through local monopoly power, but all monetary assets still compete globally, with gold, US Dollars, and Euros favored as reserve assets.
Like written language, money is a protocol standard with immense network effects. A new monetary asset can only emerge if it better fulfills the core functions of money and if it can overcome the adoption hurdle that any new money faces. We believe Bitcoin offers a compelling answer to both.
One of the primary functions of money is to be a store of value: a mechanism to transfer purchasing power across time and geography.
All successful money fulfills this function. If a monetary asset loses trust as a store of value, then savings quickly flow elsewhere, as seen in hyperinflationary economies like Venezuela.
Gold has been trusted as a store of value for millennia. Importantly, the supply of gold on Earth is scarce. Confidence in this scarcity rests in humanity's understanding of nature: that gold cannot yet be cost-effectively synthesized (despite alchemists' best efforts throughout history).
Gold also has many other desirable properties, such as being easy to recognize (no tarnishing), easy to divide, easy to measure (by weight), and easy to verify (through melting), so it is no surprise that gold replaced predecessors to become a global standard.
Paper currencies emerged to simplify the daily use of precious metals as a means of exchange (another core function of money). Although paper notes were initially linked to precious metals, today most paper currencies are free-floating and established by government fiat.
The US Dollar is the leading fiat currency and has been the global reserve currency for much of the last century (replacing the British sterling before it). In addition to being a trusted store of value, the US Dollar is the leading means of exchange and unit of account. A significant share of global trade is priced and settled in US Dollars, whether or not the United States is directly involved.
Confidence in the US Dollar rests on trust in the government (e.g., to wisely manage its monetary policy). There is great efficiency in placing such trust in a single institution, but there is also risk. Fiat currencies can lose credibility and be devalued through the actions of the government, who in times of crisis may face short-term pressures that outweigh concerns for long-term credibility. Countries like Venezuela offer an extreme precedent for currency value in the face of eroding trust: the currency becomes worthless.
Many investors, including central banks, own both gold and US Dollars (or US Dollar-denominated assets) because they offer complementary trade-offs. We can think of the US Dollar as a centralized monetary asset, which can be devalued by a single actor, and gold as a decentralized monetary asset, which cannot.
Bitcoin is a new decentralized monetary asset, akin to gold. It combines the scarce, money-like nature of gold with the digital transferability of modern currency. Although it remains relatively nascent, Bitcoin has great potential as a future store of value based on its intrinsic features.
As with any monetary asset, Bitcoin must be scarce, portable, fungible, divisible, durable, and broadly accepted in order to be useful. Bitcoin rates strongly across most of these dimensions, except for broad acceptability:
Scarcity: Bitcoin supply is scarce, and asymptotically approaches 21 million coins. Achieving scarcity in digital form was Bitcoin's great technical breakthrough (building on decades of computer science research).
Portability: Bitcoin is extremely portable, especially relative to gold. Arbitrary amounts of value can be held in a USB stick, or digitally transported across the globe in minutes.
Fungibility: Any two Bitcoins are practically interchangeable, although each Bitcoin has a distinct history on the public ledger.
Divisibility: Each Bitcoin can be divided into 100 million smaller units (called "satoshis").
Durability: Bitcoins are durable and do not degrade over time.
Broad Acceptability: Bitcoin’s primary weakness is that it is far less broadly accepted than gold or US Dollars, although it has made impressive strides over the past decade. We can think of broad acceptability along two dimensions, both of which are important: the % of people who trust and accept Bitcoin, and the % of wealth that trusts and accepts Bitcoin.
Beyond these classic monetary features, Bitcoin is also:
Digital: Digital money like Bitcoin is cheaper to store and easier to transfer than gold, which is physically cumbersome. Bitcoin is also instantly verifiable, whereas gold can require a slow and manual verification process.
Programmable: Bitcoin is programmable, which has subtle but far-reaching implications. Today Bitcoin scripting enables applications like escrow or micropayments. Over time we may be surprised by what can be built with Bitcoin (much as we were surprised by the Internet, another programmable substrate).
Decentralized and Censorship-Resistant: The rules of the Bitcoin network (such as its monetary policy) are governed by a decentralized peer-to-peer network, involving a disparate and global user base of consumers, investors, companies, developers, and miners. It is impractical (if not impossible) for a single actor to unilaterally influence the rules of the system. This affords Bitcoin holders a special kind of confidence: that Bitcoin cannot be devalued by arbitrary monetary policy decisions, and that they will always be able to hold and transfer their Bitcoin freely. This could be valuable not just to individuals and companies but also to governments whose foreign currency reserves may be subject to the whims of foreign entities.
Universal: Similar to physical bearer assets like US Dollar bills or gold, Bitcoin is a digital bearer asset that anyone can hold and transfer. The same is not true of digital US Dollars (which require a bank account that supports US Dollars) or digital exposure to gold (which requires a brokerage account).
A broadly accepted store of value with the above features would represent a significant improvement over gold, but Bitcoin still lacks broad acceptance and remains nascent as a store of value (as compared to gold's millennia of history and credibility). A better product is not enough—Bitcoin must have a go-to-market strategy to reach broad acceptance.
Since Bitcoin’s inception, many intelligent investors have observed that it appears to be a bubble. They are more right than they know.
If we define a bubble asset as one that is overvalued relative to intrinsic value, then we can think of all monetary assets as bubble assets. By definition, a store of value is an intermediate asset that people demand, not for its direct utility, but for its ability to be valuable in the future. This value is reflexive: people will believe in a store of value if they expect others to believe in it (who in turn should expect others to believe in it, and so on).
This phenomenon is distinct from other asset classes, which have utility-based demand, with speculation occurring around this underlying utility. For monetary assets, the utility is in the collective speculation itself.
As Nobel-laureate Robert Shiller observes: "Gold is a bubble, but it's always been a bubble. It has some industrial uses, but basically it's like a fad that's lasted thousands of years." This is not an argument against gold (or Bitcoin) as a valuable monetary asset, but an astute insight into the bubble-like, reflexive nature of money.
We can think of money as a bubble that never pops (or that hasn’t popped yet) and the value of fiat currency, gold, or Bitcoin as relying on collective belief. Other factors like a government's power, the industrial utility of gold, or the robustness of Bitcoin's codebase can help reinforce this belief, but belief is critical.
Such large amounts of value emerging from collective belief may seem circular and nonfundamental. However, there is real value in the social and economic coordination that monetary assets facilitate (much as there is real value in common language). Moreover, such collective belief cannot arise around any arbitrary asset—a successful monetary asset must compete to earn this belief based on intrinsic features. Having superior intrinsic features explains why gold is preferred to silver or fur pelts and Bitcoin is preferred to any number of Bitcoin copycats.
If Bitcoin succeeds in becoming a trusted store of value, then its end state is to be a bubble. Bubbles are also how Bitcoin gains broader acceptance.
Throughout Bitcoin's 11-year history, there have been at least four Bitcoin bubbles of note.
2011: From ~$1 (Apr 2011) to ~$31 (Jun 2011) to ~$2 (Nov 2011)
2013: From ~$13 (Jan 2013) to ~$266 (Apr 2013) to ~$65 (Jul 2013)
2013-2015: From ~$65 (Jul 2013) to ~$1242 (Nov 2013) to ~$200 (Jan 2015)
2017-2018: From ~$1000 (Apr 2017) to ~$19500 (Dec 2017) to ~$3500 (Dec 2018)
Each bubble has a familiar pattern. High conviction investors start buying when Bitcoin is boring and unloved. The resulting rise in Bitcoin price attracts media attention, which then attracts investors (or speculators), many with lower conviction and shorter time horizons. This drives the price of Bitcoin higher, which drives further attention and investor interest. This cycle repeats until demand exhausts and the bubble crashes.
Although painful for those involved, each bubble leads to broader awareness and motivates Bitcoin's underlying adoption, gradually expanding the base of long-term holders who believe in Bitcoin's potential as a future store of value. This dynamic is evident in the successively higher price floors that Bitcoin reaches during times of maximum disillusionment: ~$2 in 2011, ~$200 in 2015, and ~$3500 in 2018. Broader awareness also encourages the building of Bitcoin infrastructure by startups like Coinbase and incumbents like the CME and Fidelity, further improving Bitcoin's liquidity and utility as a monetary asset. Through successive bubbles, Bitcoin reaches greater levels of scale in users, transaction volumes, network security, and other fundamental metrics.
As Bitcoin becomes more broadly accepted, what will its future look like? Some wonder whether people will be earning salaries or making everyday payments in Bitcoin. While these behaviors may exist to some degree, Bitcoin seems unlikely to challenge the US Dollar as the leading means of exchange and unit of account (at least anytime soon). Instead, Bitcoin is likely to earn a place alongside gold as a sensible part of many investment portfolios. This has already begun with an early-adopter, tech-forward crowd, and we expect it to grow to include a broader set of investors and institutions over time. Eventually, central banks may come to view Bitcoin as a complement to their existing gold holdings.
Ultimately, monetary assets rise and fall on timescales that stretch beyond human lifespans, making them a challenge to forecast. There was a time before the US Dollar reigned when the reserve currency was British, or French, or Dutch, or further into ancient history, Greek or Roman. Similarly, there was a time before the adoption of gold when more primitive forms of money were dominant. The idea of a fiat currency like the US Dollar being untethered to gold is itself a recent phenomenon that seemed unthinkable half a century ago. In the future, it seems likely that the global monetary order could change in ways that would be unthinkable to us today, with digital currencies such as Bitcoin playing a significant role.
As a decentralized store of value, it is most natural to consider Bitcoin's market size relative to gold, whose aggregate value is estimated to be ~$9T (May 2020) between central bank reserves (17%), private investment holdings (22%), jewelry (47%), and other miscellaneous forms (14%). Some but not all of this value is addressable by Bitcoin.
Over time, the market demand for assets like gold and Bitcoin could expand to exceed ~$9T, especially given the prevailing direction of global monetary policy. According to the IMF, total international reserves reached ~$13T in 2019 between gold (11%), foreign currency reserves (86%), and IMF-related assets (3%). If foreign governments (some of whom already bristle at their dependence on US Dollar FX reserves) begin to adopt Bitcoin as a complement to existing gold holdings, the market size for Bitcoin could expand significantly.
Beyond complementing gold's investment demand, Bitcoin may also address broader store of value markets indirectly. Consider, for example, people who hold fiat currencies with eroding credibility such as the Argentine Peso or the Turkish Lira, but who may have difficulty accessing US Dollars or gold. Or consider various collectibles like art or gemstones, some of which are owned primarily as stores of value. Or consider the empty NYC apartment that is owned by a foreigner interested in storing value outside his or her native country. Bitcoin could plausibly address subsets of these behaviors more effectively.
Deferring a precise estimate of market size, we believe it is clear that Bitcoin has significant headroom if it continues to gain broader acceptance.
Although it has come a long way in 11 years, many risks remain for Bitcoin:
Crossing the Chasm: Bitcoin has gained credibility with early adopters, including some large institutional investors, but it remains niche relative to incumbent monetary assets like gold. There is risk that Bitcoin never achieves the broad acceptance that its proponents hope it will. Of course, therein also lies the opportunity. If Bitcoin were already a broadly accepted store of value, then it would likely be worth orders of magnitude more with relatively little remaining upside.
Volatility: Bitcoin has been (and continues to be) quite volatile relative to US Dollars. There is risk that this volatility limits adoption or prevents investors from considering Bitcoin as a credible store of value. For better or worse, this volatility may be inherent to the process of Bitcoin adoption as natural swings in investor confidence (as faced by any early-stage upstart) are reflected in Bitcoin prices. Bitcoin’s bubble-like adoption process exacerbates this effect. As Bitcoin matures and becomes more broadly accepted as a monetary asset akin to gold, investor confidence and Bitcoin prices should stabilize.
Regulation: Bitcoin is a new currency and payment rail that sits outside of existing systems, posing a potential challenge to existing regulatory frameworks. Similar to early Internet regulation, there is hope that governments pursue nuanced regulation(s) that allow innovative use-cases to prevail. However, there is risk that regulation is onerous and ultimately hinders broader Bitcoin adoption. One mitigating factor is that Bitcoin is a global, decentralized network like the Internet, which is difficult to control for any single government, although governments can plausibly limit access to Bitcoin in various ways.
Technical Risk: The Bitcoin codebase and network have been battle-tested for over a decade, but it continues to evolve and there remain some open questions about how the system might behave in the long run (for example, when the Bitcoin supply approaches its asymptote and miners must be compensated primarily with transaction fees rather than block rewards).
Competitive Risk: Other cryptocurrencies could compete with Bitcoin, as could digital fiat currencies sponsored by governments. Relative to other cryptocurrencies, Bitcoin has a strong first-mover advantage in acceptance, security, and credibility that will be difficult for competitors to overcome. Relative to digital fiat currencies, Bitcoin remains differentiated in its scarce, gold-like nature. Digital US Dollars or digital Renminbi would still be subject to local monetary policy decisions, although they have the benefit that they are currency units people already know and use.
Unknown Unknowns: We must acknowledge that a digital monetary asset such as Bitcoin has never existed before. We are in uncharted territory with more uncertainty than is typical.
Bitcoin is a new monetary asset that is climbing an adoption curve. Although it is not yet a broadly accepted store of value, Bitcoin has great potential as a future store of value based on its intrinsic features.
Since monetary assets do not arise frequently, Bitcoin is likely to challenge our ordinary intuitions, and it has stirred (understandable) controversy in the investment world.
Therein lies the opportunity, of course. We believe Bitcoin offers a compelling risk/reward profile for patient, long-term investors willing to spend the time to truly understand Bitcoin. We hope this paper provides a helpful starting point.
https://vas3k.com/blog/blockchain/
Today I want to tell you why blockchain appeared, how the cryptocurrency world is organized, and why, in terms of pure logic, it is one of the smartest systems of recent years. You’ll find out below.
I am far from the hype around Bitcoin or exchange-rate charts. For me, blockchain is just a piece of technology: new, strange, tricky, but it seems to be moving the world, unlike many other stories. Apparently, it will stick around for a long time.
I wrote this post as if I were explaining blockchain to my parents. Even my non-techie friends reading over my shoulder should be able to figure it out.
Here is my buddy Bill. He’ll help me to illustrate what I am about to say. And if he blows it, we’ll kick him out.
Blockchain was born as a constitutive part of the Bitcoin system, but its principles can be applied and modified independently. Anyone can set up a personal blockchain, even on a laptop.
Blockchain is a chain of blocks or, in other words, a linked list. Each entry in the list is linked to the previous one, and so on transitively back to the very first one. Think of it as a train where each car is coupled to the one before it. There is a worthwhile Russian article by Nikita Likhachev in which the same concept is spelled out for complete newcomers; my analogies are partially borrowed from there.
Let’s consider the example below.
Bill’s friends constantly drain him of money. Bill is kind and very forgetful. A week later, he no longer remembers who did not return the debt and hesitates to ask his friends to remind him. Therefore, one of those days he finally decides to get organized and writes it all down on his handy chalkboard.
From now on, Bill no longer forgets that Max has returned everything, while Bob’s debt is over $700 and keeps growing. On Saturday Bill invites Bob over for a drink. When Bill turns away to mix a nice drink, Bob wipes off the entry “Lent to Bob: $200” and fills in the empty line with “Bob brought back $500.”
Bill trusts his list. Therefore, he forgets about the debt and loses $700. Bill is disappointed. He decides to lock down his records.
Bill remembers hash functions: any piece of text can be reduced to a short fingerprint, and adding just a dot at the end of the sequence makes the resulting hash unrecognizable. That gives him an idea!
Bill applies the well-known SHA-256 hash function to each record on his chalkboard and scribbles the resulting hash right next to the corresponding record. Now he can sleep soundly knowing that his records have not been altered: when in doubt, he can always recompute the hashes and compare them with the ones written down.
But the EVIL RUSSIAN GENIUS IVAN is also skilled in SHA-256 and can change a record together with its hash, especially since the hash is scribbled on the board right next to the record itself.
That’s why, for better protection, Bill decides not only to hash each record itself but also to chain it to the hash of the previous entry. Now every entry depends on the ones before it: if you change a single dot, you have to recalculate the entire cascade of hashes below it.
Now Bill has a personal linked list.
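Here is a minimal sketch of Bill's chained board in Python (the record strings are invented for illustration): each entry's hash covers both its own text and the previous entry's hash, so tampering with any record breaks the verification of the whole chain.

```python
import hashlib

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def build_chain(records):
    """Chain each record to the hash of the previous entry, like Bill's board."""
    chain, prev_hash = [], ""
    for record in records:
        entry_hash = sha256(prev_hash + record)
        chain.append({"record": record, "hash": entry_hash})
        prev_hash = entry_hash
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; any tampered record breaks the cascade."""
    prev_hash = ""
    for entry in chain:
        if entry["hash"] != sha256(prev_hash + entry["record"]):
            return False
        prev_hash = entry["hash"]
    return True

board = build_chain(["Lent to Bob: $200", "Max returned $100", "Lent to Bob: $500"])
assert verify_chain(board)

board[0]["record"] = "Bob brought back $500"   # Ivan tampers with an old entry
assert not verify_chain(board)                 # the chain no longer checks out
```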
One day Ivan creeps in at night, changes one of the records and updates hashes for the entire list down to the very bottom. It takes him a great deal of effort, but Bill sleeps soundly and does not know what’s going on. In the morning, Bill discovers a perfectly correct list – all hashes match. But he still feels deceived. How else on earth could he protect himself from the nightmare Ivan?
Bill decides to make Ivan’s life even more complicated. Now, before adding a new entry to his list, Bill will solve a complex mathematical equation. Then he will chain up the solution to the final hash.
Bill is a brilliant math student, but even for him adding a record now takes about ten minutes. However, the time is worth it: if Ivan breaks in again, he will have to solve the equations for every transaction below the one he changes, and there might be dozens of them. This will make him think twice, since each record’s equation is unique and tied to the original content.
Checking the list, however, remains simple: first you compare the hashes, then you verify the solutions of the equations by simple substitution. If everything matches, the list has not been altered.
In reality, such equations do not work well: computers crack them too easily, and where would you store tons of unique equations? So the blockchain inventors came up with a more elegant task: find a number (a nonce) such that the final hash of the entire record starts with 10 zeros. The nonce is hard to find, but the result can be checked at a glance.
Try it yourself. Searching by hand for a hash starting with ten zeros is too exhausting, so the original post includes an interactive demo with an easier target: type any characters into a Nonce field until the hash of your input starts with just two zeros (00).
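Since that interactive demo cannot run here, a minimal Python sketch of the same game looks like this: brute-force a nonce until the hash of the record plus the nonce starts with the chosen number of zeros (two, as in the demo; Bitcoin demands a far harder target, encoded differently, but the idea is the same).

```python
import hashlib

def mine(record: str, difficulty: int = 2):
    """Try nonces until sha256(record + nonce) starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Lent to Bob: $500", difficulty=2)
print(f"nonce={nonce} hash={digest}")   # checking the answer is instant: hash once and look
```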
Now smart Bill verifies all the hashes and additionally(!) makes sure that each of them starts with the specified number of zeros. Nightmare Ivan, even with a powerful laptop, will not have the time or patience to recalculate every hash according to this rule.
This mechanism invented by Bill is a simple model of the blockchain. Its security is guaranteed by mathematics: there is no way to find a qualifying hash other than brute-force search, record by record. That search is called mining. Let’s take a closer look at how it works.
Our friends liked the idea of keeping a forgery-proof list of “who owes whom.” They also do not want to bother remembering who paid for whom at the bar and how much they still owe each other: everything is written on the board. They discussed the pros and cons and agreed that they now need to combine everything into one shared list.
But who can be entrusted to run such important bookkeeping? When it comes to money, trust becomes the primary criterion, and we would rather not trust strangers with our money. Our ancestors invented banking for this very purpose; later on it became credible, backed by licenses, laws, and central bank insurance.
Friends trust each other and can choose the most responsible person to do the bookkeeping. But what if we are dealing with strangers (a big city, a country, or the whole world, as in the case of Bitcoin)? Then no one can trust anyone.
So they came up with an alternative approach: everyone keeps a copy of the list. An attacker would not just have to rewrite one list; he would have to sneak into every house and rewrite every copy, and even then it might turn out that someone kept extra copies nobody knew about. This is called decentralization.
The downside of this approach is that, in order to make new entries, you have to stay in touch with all the other participants and constantly notify them of every change. However, if the participants are not humans but even-tempered calculating machines, bothering them is no problem at all.
There is no single point of trust in such a system, and hence no possibility of bribery or fraud. All network participants act according to one strict rule: no one trusts anyone, and everyone trusts only the information they hold themselves. This is the main law of any decentralized network.
When you buy lunch, you enter your debit card PIN, which allows the restaurant to ask the bank whether you have 5 bucks in your account. In other words, with your PIN you authorize a $5 transaction, which the bank then confirms or rejects.
Our records like “Lent to Bob: $500” are also transactions, but we have no bank to authorize the person initiating them. How can we verify that Bob did not quietly slip a new entry, “Max owes Bill $100,500,” into the list?
For this purpose, blockchain uses a mechanism of public and private keys, the same one IT people have long used for authentication in SSH.
Briefly, here is how this complicated yet beautiful math works: you generate a mathematically linked pair of keys on your computer, a public key and a private key. The private key is considered super-secret because it can decrypt anything encrypted with the public key. It also works the other way around: if you share the public key with your friends, they can use it to encrypt any message addressed to you so that only you, as the private key holder, can open and read it. In addition, with the public key anyone can verify that a piece of data was signed with your private key, without decrypting the data itself.
We live in the world of the decentralized Internet where no one can trust anyone. A transaction signed with your private key, together with your public key, is sent to a special place, the pool of unconfirmed transactions, so that any member of the network can verify that it was you who initiated it and not someone else attempting to steal your money.
This mechanism safeguards the openness and security of the network. In the real world, banks are usually responsible for keeping money safe; in a blockchain, that function is delegated to mathematics.
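To make the idea concrete, here is a minimal sketch of signing and verifying a transaction. It assumes the third-party python-ecdsa package and uses the secp256k1 curve that Bitcoin uses; real wallets add hashing, encoding, and address derivation on top of this.

```python
# Sketch only; assumes the third-party `ecdsa` package (pip install ecdsa).
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)   # kept secret by the owner
public_key = private_key.get_verifying_key()         # shared with the whole network

transaction = b"Bill pays Bob 0.05 BTC"
signature = private_key.sign(transaction)

# Anyone holding only the public key can check the signature,
# without being able to forge new transactions themselves.
assert public_key.verify(signature, transaction)
```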
Your public key is, in effect, the address of your crypto wallet, which means you can create a wallet for any cryptocurrency right from your own computer.
Ordinary users who do not want to deal with private keys can always use online wallet services, and convenient QR codes were invented so that long public keys do not have to be copied by hand.
As you can see, both Bill’s chalkboard and the blockchain consist of transaction history only; they do not keep track of the balance of each wallet. If they did, we would have to come up with additional protective measures.
The wallet holder’s identity is verified by the private key alone. But how would other network members know that I have enough money to buy?
Since we do not track balances, you must prove that you have the money. Therefore, a blockchain transaction includes not only your signature and the amount you want to spend, but also links to the previous transactions in which you received the corresponding money. That is, if you want to spend 400 dollars, you run through your entire history of income and expenses and attach to your transaction the incoming payments in which you were given 100 + 250 + 50 dollars, thereby proving that you have 400 dollars.
Every member of the network will double-check that you have not attached the same income twice and that you have not already spent the $300 that Max gave you last week.
In blockchain, these transaction-linked earnings are called inputs, and the recipients of the money are called outputs. Since the sum of the inputs rarely matches exactly the amount you want to transfer, one of the outputs is usually yourself. In other words, a blockchain transaction looks like: “I received 3 BTC and 2 BTC; I want to spend 4 BTC of that and send the remaining 1 BTC back to myself as change.”
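Purely as an illustration (the field names below are invented, not Bitcoin's real transaction format), the example above might be written down like this, with the leftover amount becoming the miner's fee discussed next:

```python
# Illustrative only: field names are invented, not Bitcoin's real serialization.
transaction = {
    "inputs": [                          # earlier transactions in which I received money
        {"txid": "aaa...", "amount": 3.0},
        {"txid": "bbb...", "amount": 2.0},
    ],
    "outputs": [
        {"to": "Bob", "amount": 4.0},    # the payment itself
        {"to": "me",  "amount": 0.9},    # change back to my own address
    ],
    "signature": "<signed with my private key>",
}

fee = (sum(i["amount"] for i in transaction["inputs"])
       - sum(o["amount"] for o in transaction["outputs"]))
print(f"miner's fee: {fee:.1f} BTC")     # 5.0 in minus 4.9 out leaves 0.1 BTC for the miner
```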
Looking a bit ahead, you can also attach a small fee to your transaction so that miners add it to a block more eagerly, as in the sketch above. In that case the miner gets some petty cash, and you get a little less change back. Mining is discussed in detail below.
The good thing about blockchain is that the inputs do not necessarily have to come from the same wallet. Nothing is checked but the key. If you know the private key of all inputs, you can easily attach them to your transaction and pay off with that money. As if you were paying in a supermarket with several cards at once.
However, if you lose your private key, if your hard drive dies or laptop gets stolen, your bitcoins will be locked out forever. Nobody can use them anymore as input for new transactions. This amount will be unavailable to the entire world forever as if you had burned a bundle of banknotes. There is no “bank” on the network where you could drop a complaint and get a refund for your lost crypto money. And if there were, then the “bank” would have to create a certain additional amount of new bitcoins.
I mentioned that transactions are added to a special pool of unconfirmed transactions. Why do we need this intermediate step at all if every transaction is already signed? Why not write them directly into the blockchain?
Because a signal from point A to point B always travels with some delay. Two transactions can take two different routes, and the one initiated earlier can arrive later because it took the longer path. That is how double-spending occurs: the same money is sent to two recipients at once, and neither of them even knows it. Paper bills don't have this problem.
For a decentralized network where no one can trust anyone, this problem is particularly acute. How do you make sure one transaction happened strictly before another? Ask the sender to attach a timestamp, right? But remember – you cannot trust anyone, not even the sender, and clocks on different computers always drift with no way to synchronize them perfectly. Each participant trusts only the copy of the blockchain stored on their own computer.
So how can you make sure that one transaction was earlier than the other?
The answer is simple: it is impossible. There is no way to confirm the time of a transaction in a decentralized network. And here comes the third important idea of the blockchain, invented by Satoshi: blocks.
Each working computer on the network selects whichever transactions it prefers from the common pool, usually starting with those offering the highest commission, and collects them until their total size reaches the agreed limit. In Bitcoin this block-size limit is 1 MB (2 MB after SegWit2x), and in Bitcoin Cash it is 8 MB.
In networks like Ethereum things are a little more complicated: the number of transactions per block depends on the computational complexity of the smart contracts included. But the idea is the same – there is a limit.
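As a rough sketch of that selection process (not any real client's mempool logic; the numbers are invented), a miner might fill a block greedily by fee density until the size limit is hit:

```python
# Toy block assembly: take pending transactions with the best fee-per-byte
# first, until the agreed size limit is reached.
MAX_BLOCK_BYTES = 1_000_000  # ~1 MB, as in pre-SegWit Bitcoin

mempool = [
    {"txid": "a1", "size": 250, "fee": 0.0010},
    {"txid": "b2", "size": 400, "fee": 0.0002},
    {"txid": "c3", "size": 300, "fee": 0.0007},
    # ...thousands more on a real node
]

block, used = [], 0
for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"], reverse=True):
    if used + tx["size"] <= MAX_BLOCK_BYTES:
        block.append(tx)
        used += tx["size"]
```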
The entire blockchain is, in fact, a list of blocks in which each one depends on the previous one, so any transaction in history can be traced by unwinding the chain all the way down to the very first record. This list weighs hundreds of gigabytes and is copied to every participating computer, although you don't need a copy simply to create transactions and transfer money. It is downloaded from the nearest computers on the network, much like downloading a TV series from torrents – except that a new episode comes out every 10 minutes.
After collecting transactions from the pool, your computer organizes them into the same kind of list Bill had on his board – but structured as a tree: transaction hashes are paired and hashed, the results are paired and hashed again, and so on until only one hash is left – the root of the tree, which goes into the block. I never found a definitive answer as to why a tree is necessary; I suspect it is simply faster. For details, see Merkle trees on the wiki.
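Here is a simplified Merkle-root computation in Python, assuming single SHA-256 for brevity (Bitcoin actually applies SHA-256 twice) and duplicating the last hash when a level has an odd number of entries:

```python
# Simplified Merkle root: hash the transactions, pair the hashes, hash the
# pairs, and repeat until a single root remains.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[bytes]) -> bytes:
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
```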
Since the blockchain is already downloaded, our computer knows exactly what its last block is. All it needs to do is put a link to that block into the header of the new one, hash it all, and announce to the other computers on the network: "look, I made a new block, let's add it to our blockchain."
The others check that the block is built according to the rules and that we did not sneak any bogus transactions into it, and then add it to their own chains. Now all incoming transactions are verified, the blockchain has grown by one block, and everything works fine, right?
Not really. Thousands of computers are working on the network at the same time, and as soon as each assembles a new block it rushes to announce, almost simultaneously with the others, that its block was built first. From the previous section we know that in a decentralized network it is impossible to prove who really came first.
So, to earn the right to add a block to the chain, a computer must solve a complicated problem that takes a noticeable amount of time.
Like in high school: when the whole class worked on a hard math problem, two students almost never arrived at the answer at exactly the same moment.
For a human, a hard task is something like planning a vacation getaway; for a machine, it is finding a specific number (the nonce) to append to the end of the block so that the SHA-256 hash of the entire block starts with 10 zeros. This is the problem that must be solved to add a block to the Bitcoin network. Requirements in other networks may vary.
And so we come to the concept of mining, which has become so popular in recent years.
Bitcoin mining is not some sacred mystery, and it has nothing to do with digging up new bitcoins somewhere on the Internet. Mining is just this: thousands of computers around the world buzzing away in basements, grinding through millions of numbers per second, trying to find a hash that starts with 10 (or even 16) zeros. They don't even need to be online to do it.
Video cards with their hundreds of parallel cores solve this problem faster than any CPU.
Why exactly 10 zeros? No special reason. It is simply what Satoshi proposed, because it is one of those tasks that always has a solution – yet one that cannot be found any faster than by a long, monotonous search through the options.
The difficulty of mining depends directly on the size of the network and its total computing power. If you create your own blockchain and run it at home on two laptops, the task should be simpler – for example, require a hash that starts with just one zero, or one where the sum of the even bits equals the sum of the odd ones.
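A toy proof-of-work loop along those lines might look like this; it measures difficulty in leading hexadecimal zeros, whereas Bitcoin compares the hash against a numeric target, but the brute-force idea is the same:

```python
# Toy proof of work: try nonces until the SHA-256 digest of the block header
# plus nonce starts with `difficulty` hex zeros.
import hashlib
from itertools import count

def mine(block_header: bytes, difficulty: int = 4) -> tuple[int, str]:
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(block_header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = mine(b"prev-hash|merkle-root|timestamp", difficulty=4)
print(nonce, digest)   # difficulty=4 finishes in seconds; each extra zero is ~16x slower
```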
A single computer would spend decades searching for a hash that starts with 10 zeros. But combine thousands of computers into a huge network, all searching simultaneously, and by simple probability the task gets solved on average every 10 minutes. That is exactly the interval at which new blocks are added to the blockchain: every 8–12 minutes someone on earth finds the required hash and earns the privilege of announcing the discovery, which neatly sidesteps the question of who was first.
The computer that finds the correct answer receives (as of 2017) 12.5 BTC – a reward that the bitcoin system generates out of thin air and that halves roughly every four years. Technically, each miner simply adds one extra transaction to their block: "create 12.5 BTC and send them to my wallet." When you hear that "the number of bitcoins in the world is limited to 21 million, and 16 million have already been mined", it is mostly these block rewards being referred to.
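The arithmetic behind the 21 million figure is easy to check: a reward that starts at 50 BTC and halves every 210,000 blocks sums, as a geometric series, to just under 21 million coins:

```python
# Rough total Bitcoin supply: 210,000 blocks per reward era, reward halving
# each era, stopping once the reward falls below one satoshi.
BLOCKS_PER_ERA = 210_000
reward = 50.0
total = 0.0
while reward >= 1e-8:            # 1 satoshi is the smallest unit
    total += BLOCKS_PER_ERA * reward
    reward /= 2
print(f"{total:,.0f} BTC")       # ~21,000,000
```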
Any blockchain exists only while its miners exist.
It is the miners who add incoming transactions to the blockchain. So if someone tells you they "will make a blockchain for ***", the first question they should answer is who will mine it and why. The most common correct answer is "everyone will mine, because we reward them with coins and their wallets will grow." But that does not apply to every project. If the Ministry of Health creates its own closed blockchain for medical personnel, who will mine it? Therapists on the weekends?
But what will motivate miners later, when the block rewards run out or become too small to matter?
According to the creator's plan, by then people will believe in bitcoin, and mining will start to pay for itself through the commissions attached to each transaction. In 2012 all commissions were zero and miners worked only for block rewards; today a zero-commission transaction can hang in the pool for hours, because there is competition and many people are willing to pay for speed.
So the essence of mining is solving meaningless tasks. Couldn't all this enormous computing power be redirected to something useful, like searching for a cure for cancer?
In principle, mining can be built around any computational problem. The task just has to be easy enough that network participants have a steady chance of finding the answer; otherwise confirming transactions would take an eternity. Imagine having to wait half an hour at the checkout every time for the bank to confirm your payment – nobody would use such a bank.
At the same time, the task has to be hard enough that all users don't find the answer immediately and simultaneously, because then they would announce many parallel blocks containing the same transactions. That, in turn, raises the probability of the "double spending" we talked about before – or, even worse, of the entire blockchain splitting into several branches in which no one can tell confirmed from unconfirmed transactions.
If the 12.5 BTC reward is paid only once every 10 minutes and only to the one who found the block, does that mean I would have to burn out my video cards for years hoping that one day I will win $40,000 (at the current rate)?
That is how bitcoin works today. But it was not always like that: the network used to be smaller, the difficulty lower, and so the chance of finding a new block's hash single-handedly was much higher – though bitcoins themselves were also far cheaper.
Today practically nobody mines bitcoin alone. Instead, participants join mining pools, where every member tries to find the right hash; if anyone in the group succeeds, the reward is split among the participants according to the computing power each contributed. So you grind away and collect a small share of the common haul every week.
On the other hand, solo mining is quite possible on some other networks. Until recently it was easy to mine Ethereum, where blocks are added every ten seconds or so; the reward per block is much smaller, but the chance of earning at least a little money is higher.
So are we doomed to keep burning through thousands of video cards, with no way out?
Some people are building blockchains on different mining principles. Today the second most popular concept is Proof-of-Stake (proof of ownership): the more coins a participant holds, the better their chance of adding the next block to the blockchain.
Anyone is free to invent other kinds of mining. As suggested above, all the computers on the network could, say, work together on cancer research – the catch is figuring out exactly how each of them is to contribute to the cause. What stops me from claiming I participated while my video card sat idle? How do you measure each member's contribution fairly? Work that out, and if you manage to launch your CancerCoin, be ready for the media at your doorstep.
Imagine that, despite all our probability theory, two miners still find the right answer at the same time and send two perfectly valid blocks across the network. These blocks are guaranteed to differ, because even if the miners miraculously picked the same transactions from the pool, built identical trees and guessed the same random number (nonce), their hashes would still be different – each of them puts their own wallet address in for the reward.
Now we have two valid blocks, and again the problem of who should be considered the winner. How will the network behave in this case?
The blockchain algorithm says that participants simply accept the first correct answer they receive and carry on from there. Both miners claim their rewards, and everyone else starts mining on top of whichever block they personally received first, discarding the other, equally correct copy. That creates two perfectly valid blockchains on the same network. A paradox!
This happens regularly, and probability theory comes to the rescue again. The network keeps running in this forked state until some miner finds the next block on top of one of the two chains. The moment that block lands, its chain becomes longer, and one of the blockchain's core agreements takes effect: under any circumstances, the longest chain of blocks is accepted as the only true one for the entire network.
The shorter chain, however correct, is rejected by every network member. Its transactions return to the pool (unless they were already confirmed in the other chain) and their processing starts from zero again. The miner of the orphaned block loses the reward, because that block no longer exists.
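A sketch of how a node might apply this rule (structures and names invented for illustration): keep the longest chain and push any transactions unique to the orphaned blocks back into the unconfirmed pool:

```python
# Toy longest-chain rule: pick the longest chain; transactions that only
# appear in the losing chains go back to the unconfirmed pool.
def choose_chain(chains: list[list[dict]], mempool: set[str]) -> list[dict]:
    winner = max(chains, key=len)
    confirmed = {tx["txid"] for block in winner for tx in block["txs"]}
    for chain in chains:
        if chain is winner:
            continue
        for block in chain:
            for tx in block["txs"]:
                if tx["txid"] not in confirmed:
                    mempool.add(tx["txid"])   # back to the unconfirmed pool
    return winner
```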
As the network grows, such coincidences move from "very unlikely" into the category of "well, it happens sometimes." Old-timers remember cases when a perfectly good chain of four blocks was discarded without regret.
Three rules were introduced to deal with this insecurity at the "tail" of the chain:
1. Mining rewards can be spent only after 20 further blocks have been confirmed on top of them; for bitcoin, that is about three hours.
2. If you receive bitcoins, you can use them as inputs in new transactions only after 1–5 further blocks.
3. Rules 1 and 2 exist only in the settings of each client; nobody enforces them. But the rule of the longest chain will still wipe out your transactions if you try to cheat the system by ignoring them.
Now when you’ve learned everything about mining, blockchain and the rule of the longest chain, you might have a question: would it be possible to outrun blockchain by building the longest chain on my own, and thereby legalizing all my previous fake transactions?
Suppose you have the most powerful computer on earth! Google and Amazon data centers merged at your disposal and aim collectively at calculating the longest blockchain on the network.
Since you cannot calculate several blocks of the chain in advance because each next block depends on the previous one you decide to count each block as fast as possible on your huge data centers, faster than all other participants in their joined efforts to increase the main blockchain. Would it be possible to outrun them? Probably yes.
If your computing power exceeds 50% of the whole network's, you can, sooner or later, build a longer chain faster than all the other members combined. This is (in theory) a way to deceive the blockchain by building a longer chain of transactions: the transactions of the "real" network would then be considered invalid, you would collect your pot of gold and lay yet another cornerstone in cryptocurrency history under the heading "a split of the blockchain." A split like that did happen once in Ethereum's history, due to a bug in the code.
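The math behind the 50% threshold comes from the Bitcoin whitepaper's gambler's-ruin estimate rather than from this article, but it makes the same point: with hash-power share q, an attacker who is z blocks behind catches up with probability (q/p)^z, which becomes a certainty once q passes one half:

```python
# Gambler's-ruin estimate from the Bitcoin whitepaper: probability that an
# attacker with hash-power share q ever catches up from z blocks behind.
def catch_up_probability(q: float, z: int) -> float:
    p = 1 - q                         # honest network's share
    return 1.0 if q >= p else (q / p) ** z

for q in (0.10, 0.30, 0.45, 0.51):
    print(q, round(catch_up_probability(q, z=6), 6))
# 0.10 -> ~0.000002, 0.30 -> ~0.006, 0.45 -> ~0.3, 0.51 -> 1.0
```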
But in reality no single data center can compare with the combined power of all the computers in the world. A billion and a half Chinese with ASICs, another half-billion Indians – that is a HUGE amount of computing power. No one can compete with that alone, not even Google.
It would be like running out the door and convincing every person in the street that a dollar is now worth one ruble, indoctrinating the entire world before the media catch on. If you pulled that off you could probably collapse the global economy. In theory it is possible, right? But in practice, for some reason [wink], nobody has ever gotten that far.
The whole blockchain concept rests on this improbability. The more miners participate in a network, the more secure and trustworthy it is. That is why, whenever another large mining farm in China shuts down, cryptocurrency prices slump: everyone worries that somewhere in the world an evil genius has already assembled a mining pool with a whopping ~49% of the network's capacity.
In fact, something like it has already happened: back in 2014 one of the mining pools temporarily became more powerful than the rest of the network. Fortunately, no manipulation was ever reported.
Blockchain is not one strictly defined set of algorithms. It is a robust model for building a forgery-proof decentralized network in which no one can trust anyone. I am fairly sure that while reading this text you kept compiling your own list of possible blockchain applications, and better or more socially responsible ways the idea could be used. If so, you understand blockchain – congratulations.
Plenty of people around the world have understood it too, and have set out to improve the idea or adapt it to specific needs. Cryptocurrencies, diverse as they have become, are not all the world wants from it. Below is a short list of ideas and projects that have gained some popularity by rethinking the blockchain concept.
https://vitalik.ca/general/2021/08/16/voting3.html
Vitalik Buterin, Aug 2021
There are two key problems inherent to such an environment that need to be solved:
Funding public goods: how do projects that are valuable to a wide and unselective group of people in the community, but which often do not have a business model (eg. layer-1 and layer-2 protocol research, client development, documentation...), get funded?
Protocol maintenance and upgrades: how are upgrades to the protocol, and regular maintenance and adjustment operations on parts of the protocol that are not long-term stable (eg. lists of safe assets, price oracle sources, multi-party computation keyholders), agreed upon?
But now, increasingly, that luck is running out, and challenges of coordinating protocol maintenance and upgrades and funding documentation, research and development while avoiding the risks of centralization are at the forefront.
Here is the situation in a chart:
Enter DAOs. A project that launches as a "pure" DAO from day 1 can achieve a combination of two properties that were previously impossible to combine: (i) sufficiency of developer funding, and (ii) credible neutrality of funding (the much-coveted "fair launch"). Instead of developer funding coming from a hardcoded list of receiving addresses, the decisions can be made by the DAO itself.
Of course, it's difficult to make a launch perfectly fair, and unfairness from information asymmetry can often be worse than unfairness from explicit premines (was Bitcoin really a fair launch considering how few people had a chance to even hear about it by the time 1/4 of the supply had already been handed out by the end of 2010?). But even still, in-protocol compensation for non-security public goods from day one seems like a potentially significant step forward toward getting sufficient and more credibly neutral developer funding.
Small groups of wealthy participants ("whales") are better at successfully executing decisions than large groups of small-holders. This is because of the tragedy of the commons among small-holders: each small-holder has only an insignificant influence on the outcome, and so they have little incentive to not be lazy and actually vote. Even if there are rewards for voting, there is little incentive to research and think carefully about what they are voting for.
Coin voting governance empowers coin holders and coin holder interests at the expense of other parts of the community: protocol communities are made up of diverse constituencies that have many different values, visions and goals. Coin voting, however, only gives power to one constituency (coin holders, and especially wealthy ones), and leads to over-valuing the goal of making the coin price go up even if that involves harmful rent extraction.
Conflict of interest issues: giving voting power to one constituency (coin holders), and especially over-empowering wealthy actors in that constituency, risks over-exposure to the conflicts-of-interest within that particular elite (eg. investment funds or holders that also hold tokens of other DeFi platforms that interact with the platform in question)
My voting delegation page in the Gitcoin DAO
Can crypto protocols be considered public goods if ownership is concentrated in the hands of a few whales? Colloquially, these market primitives are sometimes described as "public infrastructure," but if blockchains serve a "public" today, it is primarily one of decentralized finance. Fundamentally, these tokenholders share only one common object of concern: price.
The complaint is false; blockchains serve a public much richer and broader than DeFi token holders. But our coin-voting-driven governance systems are completely failing to capture that, and it seems difficult to make a governance system that captures that richness without a more fundamental change to the paradigm.
The problems get much worse once determined attackers trying to subvert the system enter the picture. The fundamental vulnerability of coin voting is simple to understand. A token in a protocol with coin voting is a bundle of two rights that are combined into a single asset: (i) some kind of economic interest in the protocol's revenue and (ii) the right to participate in governance. This combination is deliberate: the goal is to align power and responsibility. But in fact, these two rights are very easy to unbundle from each other. Imagine a simple wrapper contract that has these rules: if you deposit 1 XYZ into the contract, you get back 1 WXYZ. That WXYZ can be converted back into an XYZ at any time, plus in addition it accrues dividends. Where do the dividends come from? Well, while the XYZ coins are inside the wrapper contract, it's the wrapper contract that has the ability to use them however it wants in governance (making proposals, voting on proposals, etc). The wrapper contract simply auctions off this right every day, and distributes the profits among the original depositors.
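A sketch of that wrapper idea in Python rather than Solidity (class and method names are invented; a real contract would differ in many details):

```python
# Toy version of the wrapper contract described above: depositors wrap XYZ,
# the contract auctions off the combined voting power each day, and the
# auction proceeds are distributed back to depositors as dividends.
class WrapperContract:
    def __init__(self):
        self.deposits = {}           # holder -> amount of XYZ wrapped as WXYZ
        self.pending_dividends = {}  # holder -> accrued profit

    def deposit(self, holder: str, amount: float) -> None:
        self.deposits[holder] = self.deposits.get(holder, 0.0) + amount

    def run_daily_auction(self, winner: str, winning_bid: float) -> str:
        # The highest bidder buys today's right to vote with ALL wrapped XYZ...
        total = sum(self.deposits.values())
        for holder, amount in self.deposits.items():
            share = winning_bid * amount / total
            self.pending_dividends[holder] = self.pending_dividends.get(holder, 0.0) + share
        # ...so governance power and economic interest are now unbundled.
        return winner
```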
As an XYZ holder, is it in your interest to deposit your coins into the contract? If you are a very large holder, it might not be; you like the dividends, but you are scared of what a misaligned actor might do with the governance power you are selling them. But if you are a smaller holder, then it very much is. If the governance power auctioned by the wrapper contract gets bought up by an attacker, you personally only suffer a small fraction of the cost of the bad governance decisions that your token is contributing to, but you personally gain the full benefit of the dividend from the governance rights auction. This situation is a classic tragedy of the commons.
(The original post illustrates this with a payoff table contrasting each voter's two options: accept the attacker's bribe, or reject the bribe and vote their conscience.)
One natural critique of voter bribing fears is: are voters really going to be so immoral as to accept such obvious bribes? The average DAO token holder is an enthusiast, and it would be hard for them to feel good about so selfishly and blatantly selling out the project. But what this misses is that there are much more obfuscated ways to separate out profit sharing rights from governance rights, that don't require anything remotely as explicit as a wrapper contract.
Note that throughout this process, the borrower has no financial exposure to XYZ. That is, if they use their XYZ to vote for a governance decision that destroys the value of XYZ, they do not lose a penny as a result. The XYZ they are holding is XYZ that they have to eventually pay back into the CDP regardless, so they do not care if its value goes up or down. And so we have achieved unbundling: the borrower has governance power without economic interest, and the lender has economic interest without governance power.
At present, many blockchains and DAOs with coin voting have so far managed to avoid these attacks in their most severe forms. There are occasional signs of attempted bribes:
But despite all of these important issues, there have been much fewer examples of outright voter bribing, including obfuscated forms such as using financial markets, than simple economic reasoning would suggest. The natural question to ask is: why haven't more outright attacks happened yet?
My answer is that the "why not yet" relies on three contingent factors that are true today, but are likely to get less true over time:
1. Community spirit from having a tightly-knit community, where everyone feels a sense of camaraderie in a common tribe and mission.
2. High wealth concentration and coordination of token holders; large holders have higher ability to affect the outcome and have investments in long-term relationships with each other (both the "old boys clubs" of VCs, but also many other equally powerful but lower-profile groups of wealthy token holders), and this makes them much more difficult to bribe.
3. Immature financial markets in governance tokens: there are few ready-made ways to borrow or rent large amounts of voting power at reasonable rates, so the attacks described above remain cumbersome to mount.
When a small coordinated group of users holds over 50% of the coins, and both they and the rest are invested in a tightly-knit community, and there are few tokens being lent out at reasonable rates, all of the above bribing attacks may perhaps remain theoretical. But over time, (1) and (3) will inevitably become less true no matter what we do, and (2) must become less true if we want DAOs to become more fair. When those changes happen, will DAOs remain safe? And if coin voting cannot be sustainably resistant against attacks, then what can?
One possible mitigation to the above issues, and one that is to varying extents being tried already, is to put limits on what coin-driven governance can do. There are a few ways to do this:
Use on-chain governance only for applications, not base layers: Ethereum does this already, as the protocol itself is governed through off-chain governance, while DAOs and other apps on top of this are sometimes (but not always) governed through on-chain governance.
Be more fork-friendly: make it easier for users to quickly coordinate on and execute a fork. This makes the payoff of capturing governance smaller.
The Uniswap case is particularly interesting: it's an intended behavior that the on-chain governance funds teams, which may develop future versions of the Uniswap protocol, but it's up to users to opt-in to upgrading to those versions. This is a hybrid of on-chain and off-chain governance that leaves only a limited role for the on-chain side.
But limited governance is not an acceptable solution by itself; those areas where governance is needed the most (eg. funds distribution for public goods) are themselves among the most vulnerable to attack. Public goods funding is so vulnerable to attack because there is a very direct way for an attacker to profit from bad decisions: they can try to push through a bad decision that sends funds to themselves. Hence, we also need techniques to improve governance itself...
A second approach is to use forms of governance that are not coin-voting-driven. But if coins do not determine what weight an account has in governance, what does? There are two natural alternatives:
Proof of participation is less well-understood. The key challenge is that determining what counts as how much participation itself requires a quite robust governance structure. It's possible that the easiest solution involves bootstrapping the system with a hand-picked choice of 10-100 early contributors, and then decentralizing over time as the selected participants of round N determine participation criteria for round N+1. The possibility of a fork helps provide a path to recovery from, and an incentive against, governance going off the rails.
The third approach is to break the tragedy of the commons, by changing the rules of the vote itself. Coin voting fails because while voters are collectively accountable for their decisions (if everyone votes for a terrible decision, everyone's coins drop to zero), each voter is not individually accountable (if a terrible decision happens, those who supported it suffer no more than those who opposed it). Can we make a voting system that changes this dynamic, and makes voters individually, and not just collectively, responsible for their decisions?
Fork-friendliness is arguably a skin-in-the-game strategy, if forks are done in the way that Hive forked from Steem. In the case that a ruinous governance decision succeeds and can no longer be opposed inside the protocol, users can take it upon themselves to make a fork. Furthermore, in that fork, the coins that voted for the bad decision can be destroyed.
This sounds harsh, and perhaps it even feels like a violation of an implicit norm that the "immutability of the ledger" should remain sacrosanct when forking a coin. But the idea seems much more reasonable when seen from a different perspective. We keep the idea of a strong firewall where individual coin balances are expected to be inviolate, but only apply that protection to coins that do not participate in governance. If you participate in governance, even indirectly by putting your coins into a wrapper mechanism, then you may be held liable for the costs of your actions.
This creates individual responsibility: if an attack happens, and your coins vote for the attack, then your coins are destroyed. If your coins do not vote for the attack, your coins are safe. The responsibility propagates upward: if you put your coins into a wrapper contract and the wrapper contract votes for an attack, the wrapper contract's balance is wiped and so you lose your coins. If an attacker borrows XYZ from a defi lending platform, when the platform forks anyone who lent XYZ loses out (note that this makes lending the governance token in general very risky; this is an intended consequence).
But the above only works for guarding against decisions that are truly extreme. What about smaller-scale heists, which unfairly favor attackers manipulating the economics of the governance but not severely enough to be ruinous? And what about, in the absence of any attackers at all, simple laziness, and the fact that coin-voting governance has no selection pressure in favor of higher-quality opinions?
"Pure" futarchy has proven difficult to introduce, because in practice objective functions are very difficult to define (it's not just coin price that people want!), but various hybrid forms of futarchy may well work. Examples of hybrid futarchy include:
In the latter two cases, hybrid futarchy depends on some form of non-futarchy governance to measure against the objective function or serve as a dispute layer of last resort. However, this non-futarchy governance gains several advantages it would not have if used directly: (i) it activates later, so it has access to more information, (ii) it is used less frequently, so it can expend less effort, and (iii) each use of it has greater consequences, so it's more acceptable to just rely on forking to align incentives for this final layer.
There are also solutions that combine elements of the above techniques. Some possible examples:
Time delays plus elected-specialist governance: this is one possible solution to the ancient conundrum of how to make a crypto-collateralized stablecoin whose locked funds can exceed the value of the profit-taking token without risking governance capture. The stable coin uses a price oracle constructed from the median of values submitted by N (eg. N = 13) elected providers. Coin voting chooses the providers, but it can only cycle out one provider each week. If users notice that coin voting is bringing in untrustworthy price providers, they have N/2 weeks before the stablecoin breaks to switch to a different one.
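A small sketch of why the one-provider-per-week limit buys time, with invented provider names and prices: the feed is the median of N = 13 quotes, so a captured coin vote must replace more than half of them before it can move the price:

```python
# Toy median price oracle with weekly provider rotation. An attacker who
# controls coin voting can corrupt only one provider per week, so the median
# holds until more than half of the providers have been replaced.
from statistics import median

providers = {f"provider_{i}": 1.00 for i in range(13)}   # N = 13 honest quotes

def oracle_price(quotes: dict[str, float]) -> float:
    return median(quotes.values())

for week in range(1, 8):
    providers[f"provider_{week - 1}"] = 100.0            # one corrupted provider per week
    print(week, oracle_price(providers))                 # stays 1.0 through week 6, jumps at week 7
```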
But these are all only a few possible examples. There is much more that can be done in researching and developing non-coin-driven governance algorithms. The most important thing that can be done today is moving away from the idea that coin voting is the only legitimate form of governance decentralization. Coin voting is attractive because it feels credibly neutral: anyone can go and get some units of the governance token on Uniswap. In practice, however, coin voting may well only appear secure today precisely because of the imperfections in its neutrality (namely, large portions of the supply staying in the hands of a tightly-coordinated clique of insiders).
We should stay very wary of the idea that current forms of coin voting are "safe defaults". There is still much that remains to be seen about how they function under conditions of more economic stress and mature ecosystems and financial markets, and the time is now to start simultaneously experimenting with alternatives.
https://vijayboyapati.medium.com/the-bullish-case-for-bitcoin-6ecc8bdecc1
Vijay Boyapati, Mar 2018
For an investor the salient fact of the invention of Bitcoin is the creation of a new scarce digital good — bitcoins. Bitcoins are transferable digital tokens that are created on the Bitcoin network in a process known as “mining”. Bitcoin mining is roughly analogous to gold mining except that production follows a designed, predictable schedule. By design, only 21 million bitcoins will ever be mined and most of these already have been — approximately 16.8 million bitcoins have been mined at the time of writing. Every four years the number of bitcoins produced by mining halves and the production of new bitcoins will end completely by the year 2140.
Bitcoins are not backed by any physical commodity, nor are they guaranteed by any government or company, which raises the obvious question for a new bitcoin investor: why do they have any value at all? Unlike stocks, bonds, real-estate or even commodities such as oil and wheat, bitcoins cannot be valued using standard discounted cash flow analysis or by demand for their use in the production of higher order goods. Bitcoins fall into an entirely different category of goods, known as monetary goods, whose value is set game-theoretically. I.e., each market participant values the good based on their appraisal of whether and how much other participants will value it. To understand the game-theoretic nature of monetary goods, we need to explore the origins of money.
The primary and ultimate evolutionary function of collectibles was as a medium for storing and transferring wealth.
Collectibles served as a sort of “proto-money” by making trade possible between otherwise antagonistic tribes and by allowing wealth to be transferred between generations. Trade and transfer of collectibles were quite infrequent in paleolithic societies, and these goods served more as a “store of value” rather than the “medium of exchange” role that we largely recognize modern money to play. Szabo explains:
Compared to modern money, primitive money had a very low velocity — it might be transferred only a handful of times in an average individual’s lifetime. Nevertheless, a durable collectible, what today we would call an heirloom, could persist for many generations and added substantial value at each transfer — often making the transfer even possible at all.
Over the millennia, as human societies grew and trade routes developed, the stores of value that had emerged in individual societies came to compete against each other. Merchants and traders would face a choice of whether to save the proceeds of their trade in the store of value of their own society or the store of value of the society they were trading with, or some balance of both. The benefit of maintaining savings in a foreign store of value was the enhanced ability to complete trade in the associated foreign society. Merchants holding savings in a foreign store of value also had an incentive to encourage its adoption within their own society, as this would increase the purchasing power of their savings. The benefits of an imported store of value accrued not only to the merchants doing the importing, but also to the societies themselves. Two societies converging on a single store of value would see a substantial decrease in the cost of completing trade with each other and an attendant increase in trade-based wealth. Indeed, the 19th century was the first time when most of the world converged on a single store of value — gold — and this period saw the greatest explosion of trade in the history of the world. Of this halcyon period, Lord Keynes wrote:
What an extraordinary episode in the economic progress of man that age was … for any man of capacity or character at all exceeding the average, into the middle and upper classes, for whom life offered, at a low cost and with the least trouble, conveniences, comforts, and amenities beyond the compass of the richest and most powerful monarchs of other ages. The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep
When stores of value compete against each other, it is the specific attributes that make a good store of value that allows one to out-compete another at the margin and increase demand for it over time. While many goods have been used as stores of value or “proto-money”, certain attributes emerged that were particularly demanded and allowed goods with these attributes to out-compete others. An ideal store of value will be:
Durable: the good must not be perishable or easily destroyed. Thus wheat is not an ideal store of value
Portable: the good must be easy to transport and store, making it possible to secure it against loss or theft and allowing it to facilitate long-distance trade. A cow is thus less ideal than a gold bracelet.
Fungible: one specimen of the good should be interchangeable with another of equal quantity. Without fungibility, the coincidence of wants problem remains unsolved. Thus gold is better than diamonds, which are irregular in shape and quality.
Verifiable: the good must be easy to quickly identify and verify as authentic. Easy verification increases the confidence of its recipient in trade and increases the likelihood a trade will be consummated.
Divisible: the good must be easy to subdivide. While this attribute was less important in early societies where trade was infrequent, it became more important as trade flourished and the quantities exchanged became smaller and more precise.
Scarce: As Nick Szabo termed it, a monetary good must have “unforgeable costliness”. In other words, the good must not be abundant or easy to either obtain or produce in quantity. Scarcity is perhaps the most important attribute of a store of value as it taps into the innate human desire to collect that which is rare. It is the source of the original value of the store of value.
Established history: the longer the good is perceived to have been valuable by society, the greater its appeal as a store of value. A long-established store of value will be hard to displace by a new upstart except by force of conquest or if the arriviste is endowed with a significant advantage among the other attributes listed above.
Censorship-resistant: a new attribute, which has become increasingly important in our modern, digital society with pervasive surveillance, is censorship-resistance. That is, how difficult is it for an external party such as a corporation or state to prevent the owner of the good from keeping and using it. Goods that are censorship-resistant are ideal to those living under regimes that are trying to enforce capital controls or to outlaw various forms of peaceful trade.
The table below grades Bitcoin, gold and fiat money (such as dollars) against the attributes listed above and is followed by an explanation of each grade:
Bitcoins are the most portable store of value ever used by man. Private keys representing hundreds of millions of dollars can be stored on a tiny USB drive and easily carried anywhere. Furthermore, equally valuable sums can be transmitted between people on opposite ends of the earth near instantly. Fiat currencies, being fundamentally digital, are also highly portable. However, government regulations and capital controls mean that large transfers of value usually take days or may not be possible at all. Cash can be used to avoid capital controls, but then the risk of storage and cost of transportation become significant. Gold, being physical in form and incredibly dense, is by far the least portable. It is no wonder that the majority of bullion is never transported. When bullion is transferred between a buyer and a seller it is typically only the title to the gold that is transferred, not the physical bullion itself. Transmitting physical gold across large distances is costly, risky and time-consuming.
Gold provides the standard for fungibility. When melted down, an ounce of gold is essentially indistinguishable from any other ounce, and gold has always traded this way on the market. Fiat currencies, on the other hand, are only as fungible as the issuing institutions allow them to be. While it may be the case that a fiat banknote is usually treated like any other by merchants accepting them, there are instances where large-denomination notes have been treated differently to small ones. For instance, India’s government, in an attempt to stamp out India’s untaxed gray market, completely demonetized their 500 and 1000 rupee banknotes. The demonetization caused 500 and 1000 rupee notes to trade at a discount to their face value, making them no longer truly fungible with their lower denomination sibling notes. Bitcoins are fungible at the network level, meaning that every bitcoin, when transmitted, is treated the same on the Bitcoin network. However, because bitcoins are traceable on the blockchain, a particular bitcoin may become tainted by its use in illicit trade and merchants or exchanges may be compelled not to accept such tainted bitcoins. Without improvements to the privacy and anonymity of Bitcoin’s network protocol, bitcoins cannot be considered as fungible as gold.
Bitcoins can be divided down to a hundred millionth of a bitcoin and transmitted at such infinitesimal amounts (network fees can, however, make transmission of tiny amounts uneconomic). Fiat currencies are typically divisible down to pocket change, which has little purchasing power, making fiat divisible enough in practice. Gold, while physically divisible, becomes difficult to use when divided into small enough quantities that it could be useful for lower-value day-to-day trade.
If Bitcoin exists for 20 years, there will be near-universal confidence that it will be available forever, much as people believe the Internet is a permanent feature of the modern world.
Bitcoin excels across the majority of attributes listed above, allowing it to outcompete modern and ancient monetary goods at the margin and providing a strong incentive for its increasing adoption. In particular, the potent combination of censorship resistance and absolute scarcity has been a powerful motivator for wealthy investors to allocate a portion of their wealth to the nascent asset class.
Historically speaking … gold seems to have served, firstly, as a commodity valuable for ornamental purposes; secondly, as stored wealth; thirdly, as a medium of exchange; and, lastly, as a measure of value.
Using modern terminology, money always evolves in the following four stages:
Collectible: In the very first stage of its evolution, money will be demanded solely based on its peculiar properties, usually becoming a whimsy of its possessor. Shells, beads and gold were all collectibles before later transitioning to the more familiar roles of money.
Store of value: Once it is demanded by enough people for its peculiarities, money will be recognized as a means of keeping and storing value over time. As a good becomes more widely recognized as a suitable store of value, its purchasing power will rise as more people demand it for this purpose. The purchasing power of a store of value will eventually plateau when it is widely held and the influx of new people desiring it as a store of value dwindles.
Unit of account: When money is widely used as a medium of exchange, goods will be priced in terms of it. I.e., the exchange ratio against money will be available for most goods. It is a common misconception that bitcoin prices are available for many goods today. For example, while a cup of coffee might be available for purchase using bitcoins, the price listed is not a true bitcoin price; rather it is the dollar price desired by the merchant translated into bitcoin terms at the current USD/BTC market exchange rate. If the price of bitcoin were to drop in dollar terms, the number of bitcoins requested by the merchant would increase commensurately. Only when merchants are willing to accept bitcoins for payment without regard to the bitcoin exchange rate against fiat currencies can we truly think of Bitcoin as having become a unit of account.
Monetary goods that are not yet a unit of account may be thought of as being “partly monetized”. Today gold fills such a role, being a store of value but having been stripped of its medium of exchange and unit of account roles by government intervention. It is also possible that one good fills the medium of exchange role of money while another good fills the other roles. This is typically true in countries with dysfunctional states, such as Argentina or Zimbabwe. In his book Digital Gold, Nathaniel Popper writes:
In America, the dollar seamlessly serves the three functions of money: providing a medium of exchange, a unit for measuring the cost of goods, and an asset where value can be stored. In Argentina, on the other hand, while the peso was used as a medium of exchange — for daily purchases — no one used it as a store of value. Keeping savings in the peso was equivalent to throwing away money. So people exchanged any pesos they wanted to save for dollars, which kept their value better than the peso. Because the peso was so volatile, people usually remembered prices in dollars, which provided a more reliable unit of measure over time.
Bitcoin is currently transitioning from the first stage of monetization to the second stage. It will likely be several years before Bitcoin transitions from being an incipient store of value to being a true medium of exchange, and the path it takes to get there is still fraught with risk and uncertainty. It is striking to note that the same transition took many centuries for gold. No one alive has seen the real-time monetization of a good (as is taking place with Bitcoin), so there is precious little experience regarding the path this monetization will take.
In the process of being monetized, a monetary good will soar in purchasing power. Many have commented that the increase in purchasing power of Bitcoin creates the appearance of a “bubble”. While this term is often used disparagingly to suggest that Bitcoin is grossly overvalued, it is unintentionally apt. A characteristic that is common to all monetary goods is that their purchasing power is higher than can be justified by their use-value alone. Indeed, many historical monies had no use-value at all. The difference between the purchasing power of a monetary good and the exchange-value it could command for its inherent usefulness can be thought of as a “monetary premium”. As a monetary good transitions through the stages of monetization (listed in the section above), the monetary premium will increase. The premium does not, however, move in a straight, predictable line. A good X that was in the process of being monetized may be outcompeted by another good Y that is more suitable as money, and the monetary premium of X may drop or vanish entirely. The monetary premium of silver disappeared almost entirely in the late 19th century when governments across the world largely abandoned it as money in favor of gold.
the trouble with [the] bubble story, of course, is that [it] is consistent with any price path, and thus gives no explanation for a particular price path
The process of monetization is game-theoretic; every market participant attempts to anticipate the aggregate demand of other participants and thereby the future monetary premium. Because the monetary premium is unanchored to any inherent usefulness, market participants tend to default to past prices when determining whether a monetary good is cheap or expensive and whether to buy or sell it. The connection of current demand to past prices is known as “path dependence” and is perhaps the greatest source of confusion in understanding the price movements of monetary goods.
I bought [bitcoins] at like $2300 and had an immediate double on my hands. Then I started saying “I can’t buy more of it,” as it rose, even though that’s an anchored opinion based on nothing other than the price where I originally got it. Then, as it fell over the last week because of a Chinese crackdown on the exchanges, I started saying to myself, “Oh good, I hope it gets killed so I can buy more.”
The truth is that the notions of “cheap” and “expensive” are essentially meaningless in reference to monetary goods. The price of a monetary good is not a reflection of its cash flow or how useful it is but, rather, is a measure of how widely adopted it has become for the various roles of money.
You recognize this as a religion — a story we all tell each other and agree upon. Religion is the adoption curve we ought to be thinking about. It’s almost perfect — as soon as someone gets in, they tell everyone and go out evangelizing. Then their friends get in and they start evangelizing.
While the comparison to religion may give Bitcoin an aura of irrational faith, it is entirely rational for the individual owner to evangelize for a superior monetary good and for society as a whole to standardize on it. Money acts as the foundation for all trade and savings, so the adoption of a superior form of money has tremendous multiplicative benefits to wealth creation for all members of a society.
While there are no a priori rules about the path a monetary good will take as it is monetized, a curious pattern has emerged during the relatively brief history of Bitcoin’s monetization. Bitcoin’s price appears to follow a fractal pattern of increasing magnitude, where each iteration of the fractal matches the classic shape of a Gartner hype cycle.
Each Gartner hype cycle begins with a burst of enthusiasm for the new technology, and the price is bid up by the market participants who are “reachable” in that iteration. The earliest buyers in a Gartner hype cycle typically have a strong conviction about the transformative nature of the technology they are investing in. Eventually the market reaches a crescendo of enthusiasm as the supply of new participants who can be reached in the cycle is exhausted and the buying becomes dominated by speculators more interested in quick profits than the underlying technology.
Following the peak of the hype cycle, prices rapidly drop and the speculative fervor is replaced by despair, public derision and a sense that the technology was not transformative at all. Eventually the price bottoms and forms a plateau where the original investors who had strong conviction are joined by a new cohort who were able to withstand the pain of the crash and who appreciated the importance of the technology.
The plateau persists for a prolonged period of time and forms, as Casey calls it, a “stable, boring low”. During the plateau, public interest in the technology will dwindle but it will continue to be developed and the collection of strong believers will slowly grow. A new base is then set for the next iteration of the hype cycle as external observers recognize the technology is not going away and that investing in it may not be as risky as it seemed during the crash phase of the cycle. The next iteration of the hype cycle will bring in a much larger set of adopters and be far greater in magnitude.
Very few people participating in an iteration of a Gartner hype cycle will correctly anticipate how high prices will go in that cycle. Prices usually reach levels that would seem absurd to most investors at the earliest stages of the cycle. When the cycle ends, a popular cause is typically attributed to the crash by the media. While the stated cause (such as an exchange failure) may be a precipitating event, it is not the fundamental reason for the cycle to end. Gartner hype cycles end because of an exhaustion of market participants reachable in the cycle.
It is telling that gold followed the classic pattern of a Gartner hype cycle from the late 1970s to the early 2000s. One might speculate that the hype cycle is an inherent social dynamic to the process of monetization.
Since the inception of the first exchange traded price in 2010, the Bitcoin market has witnessed four major Gartner hype cycles. With hindsight we can precisely identify the price ranges of previous hype cycles in the Bitcoin market. We can also qualitatively identify the cohort of investors that were associated with each iteration of prior cycles.
$0–$1 (2009–March 2011): The first hype cycle in the Bitcoin market was dominated by cryptographers, computer scientists and cypherpunks who were already primed to understand the importance of Satoshi Nakamoto’s groundbreaking invention and who were pioneers in establishing that the Bitcoin protocol was free of technical flaws.
$1–$30 (March 2011–July 2011): The second cycle attracted both early adopters of new technology and a steady stream of ideologically motivated investors who were dazzled by the potential of a stateless money. Libertarians such as Roger Ver were attracted to Bitcoin for the anti-establishment activities that would become possible if the nascent technology became widely adopted. Wences Casares, a brilliant and well-connected serial entrepreneur, was also part of the second Bitcoin hype cycle and is known to have evangelized Bitcoin to some of the most prominent technologists and investors in Silicon Valley.
$250–$1100 (April 2013–December 2013): The third hype cycle saw the entrance of early retail and institutional investors who were willing to brave the horrendously complicated and risky liquidity channels from which bitcoins could be bought. The primary source of liquidity in the market during this period was the Japan-based MtGox exchange that was run by the notoriously incompetent and malfeasant Mark Karpeles, who later saw prison time for his role in the collapse of the exchange.
It is worth observing that the rise in Bitcoin’s price during the aforementioned hype cycles was largely correlated with an increase in liquidity and the ease with which investors could purchase bitcoins. In the first hype cycle, there were no exchanges available, and acquisition of bitcoins was primarily through mining or by direct exchange with someone who had already mined bitcoins. In the second hype cycle, rudimentary exchanges became available, but obtaining and securing bitcoins from these exchanges remained too complex for all but the most technologically savvy investors. Even in the third hype cycle, significant hurdles remained for investors transferring money to MtGox to acquire bitcoins. Banks were reluctant to deal with the exchange, and third party vendors who facilitated transfers were often incompetent, criminal, or both. Further, many who did manage to transfer money to MtGox ultimately faced loss of funds when the exchange was hacked and later closed.
It was only after the collapse of the MtGox exchange and a two-year lull in the market price of Bitcoin that mature and deep sources of liquidity were developed; examples include regulated exchanges such as GDAX and OTC brokers such as Cumberland mining. By the time the fourth hype cycle began in 2016 it was relatively easy for retail investors to buy bitcoins and secure them.
$1100–$19600? (2014–?):
At the time of writing, the Bitcoin market is undergoing its fourth major hype cycle. Participation in the current hype cycle has been dominated by what Michael Casey described as the “early majority” of retail and institutional investors.
As sources of liquidity have deepened and matured, major institutional investors now have the opportunity to participate through regulated futures markets. The availability of a regulated futures market paves the way for the creation of a Bitcoin ETF, which will then usher in the “late majority” and “laggards” in subsequent hype cycles.
Although it is impossible to predict the exact magnitude of the current hype cycle, it would be reasonable to conjecture that the cycle reaches its zenith in the range of $20,000 to $50,000. Much higher than this range and Bitcoin would command a significant fraction of gold’s entire market capitalization (gold and Bitcoin would have equivalent market capitalizations at a bitcoin price of approximately $380,000 at the time of writing). A significant fraction of gold’s market capitalization comes from central bank demand and it’s unlikely that central banks or nation states will participate in this particular hype cycle.
Bitcoin’s final Gartner hype cycle will begin when nation-states start accumulating it as a part of their foreign currency reserves. The market capitalization of Bitcoin is currently too small for it to be considered a viable addition to reserves for most countries. However, as private sector interest increases and the capitalization of Bitcoin approaches 1 trillion dollars it will become liquid enough for most states to enter the market. The entrance of the first state to officially add bitcoins to their reserves will likely trigger a stampede for others to do so. The states that are the earliest in adopting Bitcoin would see the largest benefit to their balance sheets if Bitcoin ultimately became a global reserve currency. Unfortunately, it will probably be the states with the strongest executive powers — dictatorships such as North Korea — that will move the fastest in accumulating bitcoins. The unwillingness to see such states improve their financial position and the inherently weak executive branches of the Western democracies will cause them to dither and be laggards in accumulating bitcoins for their reserves.
There is a great irony that the US is currently one of the nations most open in its regulatory position toward Bitcoin, while China and Russia are the most hostile. The US risks the greatest downside to its geopolitical position if Bitcoin were to supplant the dollar as the world’s reserve currency. In the 1960s, Charles de Gaulle criticized the “exorbitant privilege” the US enjoyed from the international monetary order it crafted with the Bretton Woods agreement of 1944. The Russian and Chinese governments have not yet awoken to the geo-strategic benefits of Bitcoin as a reserve currency and are currently preoccupied with the effects it may have on their internal markets. Like de Gaulle in the 1960s, who threatened to reestablish the classical gold standard in response to the US’s exorbitant privilege, the Chinese and Russians will, in time, come to see the benefits of a large reserve position in a non-sovereign store of value. With the largest concentration of Bitcoin mining power residing in China, the Chinese state already has a distinct advantage in its potential to add bitcoins to its reserves.
There is another danger, perhaps even more serious from the point of view of the central banks and regulators: bitcoin might not crash. If the speculative fervor in the cryptocurrency is merely the precursor to it being widely used as an alternative to the dollar, it will threaten the central banks’ monopoly on money.
In the coming years there will be a great struggle between entrepreneurs and innovators in Silicon Valley, who will attempt to keep Bitcoin free of state control, and the banking industry and central banks who will do everything in their power to regulate Bitcoin to prevent their industry and money-issuing powers from being disrupted.
A monetary good cannot transition to being a generally accepted medium of exchange (the standard economic definition of “money”) before it is widely valued, for the tautological reason that a good that is not valued will not be accepted in exchange. In the process of becoming widely valued, and hence a store of value, a monetary good will soar in purchasing power, creating an opportunity cost to relinquishing it for use in exchange. Only when the opportunity cost of relinquishing a store of value drops to a suitably low level can it transition to becoming a generally accepted medium of exchange.
More precisely, a monetary good will only be suitable as a medium of exchange when the sum of the opportunity cost and the transactional cost of using it in exchange drops below the cost of completing a trade without it.
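As a rough sketch of this condition (the cost figures below are invented for illustration and are not taken from the text), the trade-off can be written as a simple comparison:

    # Illustrative sketch only: all cost figures are assumed, not from the article.
    # A monetary good becomes workable as a medium of exchange once parting with it
    # (opportunity cost) plus the cost of transacting in it is cheaper than
    # completing the same trade without it.
    def viable_as_medium_of_exchange(opportunity_cost, transaction_cost, cost_of_trade_without_it):
        return opportunity_cost + transaction_cost < cost_of_trade_without_it

    # Barter economy: trading without a monetary good is very costly, so even a
    # rapidly appreciating good clears the bar.
    print(viable_as_medium_of_exchange(0.08, 0.02, 0.40))  # True

    # Developed economy: cheap payment alternatives exist, so the same appreciating
    # good does not (yet) clear the bar.
    print(viable_as_medium_of_exchange(0.08, 0.02, 0.05))  # False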
In a barter-based society, the transition of a store of value to a medium of exchange can occur even when the monetary good is increasing in purchasing power because the transactional costs of barter trade are extremely high. In a developed economy, where transactional costs are low, it is possible for a nascent and rapidly appreciating store of value, such as Bitcoin, to be used as a medium of exchange, albeit in a very limited scope. An example is the illicit drug market where buyers are willing to sacrifice the opportunity of holding bitcoins to minimize the substantial risk of purchasing the drugs using fiat currency.
There are, however, major institutional barriers to a nascent store of value becoming a generally accepted medium of exchange in a developed society. States use taxation as a powerful means to protect their sovereign money from being displaced by competing monetary goods. Not only does a sovereign money enjoy the advantage of a constant source of demand, by way of taxes being remittable only in it, but competing monetary goods are taxed whenever they are exchanged at an appreciated value. This latter kind of taxation creates significant friction to using a store of value as a medium of exchange.
The handicapping of market-based monetary goods is not an insurmountable barrier to their adoption as a generally accepted medium of exchange, however. If faith is lost in a sovereign money, its value can collapse in a process known as hyperinflation. When a sovereign money hyperinflates, its value first collapses against the most liquid goods in the society, such as gold or a foreign money like the US dollar, if they are available. When no liquid goods are available or their supply is limited, a hyperinflating money collapses against real goods, such as real estate and commodities. The archetypal image of a hyperinflation is a grocery store emptied of all its produce as consumers flee the rapidly diminishing value of their nation’s money.
Eventually, when faith is completely lost during a hyperinflation, a sovereign money will no longer be accepted by anyone, and the society will either devolve to barter or the monetary unit will be completely replaced as a medium of exchange. An example of this process was the replacement of the Zimbabwe dollar with the US dollar. The replacement of a sovereign money with a foreign one is made more difficult by the scarcity of the foreign money and the absence of foreign banking institutions to provide liquidity.
Much of this article has focused on the monetary nature of Bitcoin. With this foundation we can now address some of the most commonly held misconceptions about Bitcoin.
Bitcoin, like all market-based monetary goods, displays a monetary premium. The monetary premium is what gives rise to the common criticism that Bitcoin is a “bubble”. However, all monetary goods display a monetary premium. Indeed, it is this premium (the excess over the use-demand price) that is the defining characteristic of all monies. In other words, money is always and everywhere a bubble. Paradoxically, a monetary good is both a bubble and may be undervalued if it’s in the early stages of its adoption for use as money.
Bitcoin’s price volatility is a function of its nascency. In the first few years of its existence, Bitcoin behaved like a penny-stock, and any large buyer — such as the Winklevoss twins — could cause a large spike in its price. As adoption and liquidity have increased over the years, Bitcoin’s volatility has decreased commensurately. When Bitcoin achieves the market capitalization of gold, it will display a similar level of volatility. As Bitcoin surpasses the market capitalization of gold, its volatility will decrease to a level that will make it suitable as a widely used medium of exchange. As previously noted, the monetization of Bitcoin occurs in a series of Gartner hype cycles. Volatility is lowest during the plateau phase of the hype cycle, while it is highest during the peak and crash phases of the cycle. Each hype cycle has lower volatility than the previous ones because the liquidity of the market has increased.
A recent criticism of the Bitcoin network is that the increase in fees to transmit bitcoins makes it unsuitable as a payment system. However, the growth in fees is healthy and expected. Transaction fees are the cost required to pay bitcoin miners to secure the network by validating transactions. Miners can either be paid by transaction fees or by block rewards, which are an inflationary subsidy borne by current bitcoin owners.
Given Bitcoin’s fixed supply schedule — a monetary policy which makes it ideally suited as a store of value — block rewards will eventually decline to zero and the network must ultimately be secured with transaction fees. A network with “low” fees is a network with little security and prone to external censorship. Those touting the low fees of Bitcoin alternatives are unknowingly describing the weakness of these so-called “alt-coins”.
The specious root of the criticism of Bitcoin’s “high” transaction fees is the belief that Bitcoin should be a payment system first and a store of value later. As we have seen with the origins of money, this belief puts the cart before the horse. Only when Bitcoin has become a deeply established store of value will it become suitable as a medium of exchange. Further, once the opportunity cost of trading bitcoins is at a level at which it is suitable as a medium of exchange, most trades will not occur on the Bitcoin network itself but on “second layer” networks with much lower fees. Second layer networks, such as the Lightning network, provide the modern equivalent of the promissory notes that were used to transfer titles for gold in the 19th century. Promissory notes were used by banks because transferring the underlying bullion was far more costly than transferring the note that represented title to the gold. Unlike promissory notes, however, the Lightning network will allow the transfer of bitcoins at low cost while requiring little or no trust of third parties such as banks. The development of the Lightning network is a profoundly important technical innovation in Bitcoin’s history and its value will become apparent as it is developed and adopted in the coming years.
As an open-source software protocol, it has always been possible to copy Bitcoin’s software and imitate its network. Over the years, many imitators have been created, ranging from ersatz facsimiles, such as Litecoin, to complex variants like Ethereum that promise to allow arbitrarily complex contractual arrangements using a distributed computational system. A common investment criticism of Bitcoin is that it cannot maintain its value when competitors can be easily created that are able to incorporate the latest innovations and software features.
The fallacy in this argument is that the scores of Bitcoin competitors that have been created over the years lack the “network effect” of the first and dominant technology in the space. A network effect — the increased value of using Bitcoin simply because it is already the dominant network — is a feature in and of itself. For any technology that possesses a network effect, it is by far the most important feature.
The network effect for Bitcoin encompasses the liquidity of its market, the number of people who own it, the community of developers maintaining and improving upon its software, and its brand awareness. Large investors, including nation-states, will seek the most liquid market so that they can enter and exit the market quickly without affecting its price. Developers will flock to the dominant development community which has the highest-calibre talent, thereby reinforcing the strength of that community. And brand awareness is self-reinforcing, as would-be competitors to Bitcoin are always mentioned in the context of Bitcoin itself.
A trend that became popular in 2017 was not only to imitate Bitcoin’s software, but to copy the entire history of its past transactions (known as the blockchain). By copying Bitcoin’s blockchain up to a certain point and then splitting off into a new network, in a process known as “forking”, competitors to Bitcoin were able to solve the problem of distributing their token to a large user base.
The most significant fork of this kind occurred on August 1st, 2017 when a new network known as Bitcoin Cash (BCash) was created. An owner of N bitcoins before August 1st, 2017, would then own both N bitcoins and N BCash tokens. The small but vocal community of BCash proponents have tirelessly attempted to expropriate Bitcoin’s brand recognition, both through the naming of their new network and a campaign to convince neophytes in the Bitcoin market that Bcash is the “real” Bitcoin. These attempts have largely failed, and this failure is reflected in the market capitalizations of the two networks. However, for new investors, there remains an apparent risk that a competitor might clone Bitcoin and its blockchain and succeed in overtaking it in market capitalization, thus becoming the de facto Bitcoin.
An important rule can be gleaned from the major forks that have happened to both the Bitcoin and Ethereum networks. The majority of the market capitalization will settle on the network that retains the highest-calibre and most active developer community. For although Bitcoin can be viewed as a nascent money, it is also a computer network built on software that needs to be maintained and improved upon. Buying tokens on a network which has little or inexperienced developer support would be akin to buying a clone of Microsoft Windows that was not supported by Microsoft’s best developers. It is clear from the history of the forks that occurred in 2017 that the best and most experienced computer scientists and cryptographers are committed to developing for the original Bitcoin and not any of the growing legion of imitators that have been created from it.
Although the common criticisms of Bitcoin found in the media and economics profession are misplaced and based on a flawed understanding of money, there are real and significant risks to investing in Bitcoin. It would be prudent for a prospective Bitcoin investor to understand and weigh these risks before considering an investment in Bitcoin.
The Bitcoin protocol and the cryptographic primitives that it is built upon could be found to have a design flaw, or could be made insecure with the development of quantum computing. If a flaw is found in the protocol, or some new means of computation makes possible the breaking of the cryptography underpinning Bitcoin, the faith in Bitcoin may be severely compromised. The protocol risk was highest in the early years of Bitcoin’s development, when it was still unclear, even to seasoned cryptographers, that Satoshi Nakamoto had actually found a solution to the Byzantine Generals’ Problem. Concerns about serious flaws in the Bitcoin protocol have dissipated over the years, but given its technological nature, protocol risk will always remain for Bitcoin, if only as an outlier risk.
Jurisdictional arbitrage mitigates the risk that governments will shut down Bitcoin exchanges. Binance, a prominent exchange that started in China, moved to Japan after the Chinese government halted its operations in China. National governments are also wary of smothering a nascent industry that may prove as consequential as the Internet, thereby ceding a tremendous competitive advantage to other nations.
Only with a coordinated global shutdown of Bitcoin exchanges would the process of monetization be halted completely. The race is on for Bitcoin to become so widely adopted that a complete shutdown becomes as politically infeasible as a complete shutdown of the Internet. The possibility of such a shutdown is still real, however, and must be factored into the risks of investing in Bitcoin. As was discussed in the prior section on the entrance of nation-states, national governments are finally awakening to the threat that a non-sovereign, censorship-resistant, digital currency poses to their monetary policies. It is an open question whether they will act on this threat before Bitcoin becomes so entrenched that political action against it proves ineffectual.
The open and transparent nature of the Bitcoin blockchain makes it possible for states to mark certain bitcoins as being “tainted” by their use in proscribed activities. Although Bitcoin’s censorship resistance at the protocol level allows these bitcoins to be transmitted, if regulations were to appear that banned the use of such tainted bitcoins by exchanges or merchants, they could become largely worthless. Bitcoin would then lose one of the critical properties of a monetary good: fungibility.
Protecting Bitcoin’s fungibility will require improvements at the protocol level that enhance the privacy of transactions. While there are new developments in this regard, pioneered in digital currencies such as Monero and Zcash, there are major technological tradeoffs to be made between the efficiency and complexity of Bitcoin and its privacy. It remains an open question whether privacy-enhancing features can be added to Bitcoin in a way that doesn’t compromise its usefulness as money in other ways.
If you imagine it being used for some fraction of world commerce, then there’s only going to be 21 million coins for the whole world, so it would be worth much more per unit.
[I]magine that Bitcoin is successful and becomes the dominant payment system in use throughout the world. Then the total value of the currency should be equal to the total value of all the wealth in the world. Current estimates of total worldwide household wealth that I have found range from $100 trillion to $300 trillion. With 20 million coins, that gives each coin a value of about $10 million.
Even if Bitcoin were not to become a fully fledged global money and were simply to compete with gold as a non-sovereign store of value, it is currently massively undervalued. Mapping the market capitalization of the extant above-ground gold supply (approximately 8 trillion dollars) to a maximum Bitcoin supply of 21 million coins gives a value of approximately $380,000 per bitcoin. As we have seen in prior sections, for the attributes that make a monetary good suitable as a store of value, Bitcoin is superior to gold along every axis except for established history. As time passes and the Lindy effect takes hold, established history will no longer be a competitive advantage for gold. Thus, it is not unreasonable to expect that Bitcoin will approach, and perhaps surpass, gold’s market capitalization in the next decade. A caveat to this thesis is that a large fraction of gold’s capitalization comes from central banks holding it as a store of value. For Bitcoin to achieve or surpass gold’s capitalization, some participation by nation-states will be necessary. Whether the Western democracies will participate in the ownership of Bitcoin is unclear. It is more likely, and unfortunate, that tin-pot dictatorships and kleptocracies will be the first nations to enter the Bitcoin market.
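A quick back-of-the-envelope check of the figures quoted above (the inputs are simply the numbers stated in the text, rounded; this is arithmetic, not an independent estimate):

    # Reproducing the valuations quoted above.
    gold_market_cap = 8e12            # ~$8 trillion of above-ground gold
    btc_max_supply = 21_000_000       # maximum bitcoin supply
    print(gold_market_cap / btc_max_supply)      # ~381,000 dollars per bitcoin at gold parity

    household_wealth = 200e12         # midpoint of the $100-300 trillion range quoted earlier
    round_supply = 20_000_000         # the round supply figure used in the quote above
    print(household_wealth / round_supply)       # 10,000,000 dollars per coin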
If no nation-states participate in the Bitcoin market, there still remains a bullish case for Bitcoin. As a non-sovereign store of value used only by retail and institutional investors, Bitcoin is still early in its adoption curve — the so-called “early majority” are now entering the market while the late majority and laggards are still years away from entering. With broader participation from retail and especially institutional investors, a price level between $100,000 and $200,000 is feasible.
Owning bitcoins is one of the few asymmetric bets that people across the entire world can participate in. Much like a call option, an investor’s downside is limited to 1x, while their potential upside is still 100x or more. Bitcoin is the first truly global bubble whose size and scope is limited only by the desire of the world’s citizenry to protect their savings from the vagaries of government economic mismanagement. Indeed, Bitcoin rose like a phoenix from the ashes of the 2008 global financial catastrophe — a catastrophe that was precipitated by the policies of central banks like the Federal Reserve.
Beyond the financial case for Bitcoin, its rise as a non-sovereign store of value will have profound geopolitical consequences. A global, non-inflationary reserve currency will force nation-states to alter their primary funding mechanism from inflation to direct taxation, which is far less politically palatable. States will shrink in size commensurate to the political pain of transitioning to taxation as their exclusive means of funding. Furthermore, global trade will be settled in a manner that satisfies Charles de Gaulle’s aspiration that no nation should have privilege over any other:
We consider it necessary that international trade be established, as it was the case, before the great misfortunes of the World, on an indisputable monetary base, and one that does not bear the mark of any particular country.
50 years from now, that monetary base will be Bitcoin.
https://fehrsam.xyz/blog/ethereum-is-the-forefront-of-digital-currency
Fred Ehrsam, May 2016
My theory has been that the scripting language in Bitcoin — the piece of every Bitcoin transaction that lets you run a little software program along with it — is too restrictive.
It was, and still is, incredible that Bitcoin got off the ground and is alive after 7 years. It is the first network ever to allow anyone in the world to access a fundamentally open financial system through free software. It has ~$7bn in market cap and has never had a systemic issue which could not be fixed. To some this is already a great success.
However, we also stand here 7 years into Bitcoin with few apps and no “killer apps” beyond store of value and speculation. The scripting language in Bitcoin has barely expanded and remains very restrictive. While Bitcoin has become embroiled in debate over the block size — an important topic for the health of the network, but not something that should halt progress in a young and rapidly developing field — Ethereum is charting new territory, both intellectually and executionally.
Make no mistake — Ethereum would never have existed without Bitcoin as a forerunner. That said, I think Ethereum is ahead of Bitcoin in many ways and represents the bleeding edge of digital currency. I believe this for a few reasons:
Here’s an example of a script in Bitcoin:
And one in Ethereum’s Solidity:
Developers at Coinbase have written simple Ethereum apps in a day or two.
I cannot overemphasize enough how important this combination of full programming functionality and ease of use is. People are doing things in Ethereum that are not possible right now in Bitcoin. It has created a new generation of developers which never worked with Bitcoin but are interested in Ethereum.
Bitcoin could have this advanced functionality, but it would be through a series of other layers that work with the Bitcoin protocol that haven’t been created yet, while Ethereum is providing it out of the box.
Developer mindshare is the most important thing to have in digital currency. The only reason these networks (Bitcoin, Ethereum) and their tokens (bitcoin, ether) have value is because there is a future expectation that people will want to acquire those tokens to use the network. And developers create the applications which drive that demand. Without a reason to use the network, both the network and its currency are worth nothing.
Counterargument and caveats

Ethereum is young and it's prudent to highlight the risks:
Ethereum allows you to do more than you currently can in Bitcoin, and that brings increased regulatory risk. This is less of a systemic risk to Ethereum as a network, rather more of a risk to specific applications of Ethereum. A good example would be decentralized organizations (ex: the DAO) and regulation which would normally apply to a corporation.
Wait — why is this a contest? Are Bitcoin and Ethereum competitors or complementary? This remains to be seen. It’s possible Bitcoin remains the protocol that people are comfortable storing their value in because it is more stable and reliable. This would allow Ethereum to continue to take more risk by trying less tested advancements. In this scenario, Bitcoin is more of a settlement network while Ethereum is used to run decentralized applications (where most of the transaction volume occurs is up in the air). The two could be quite complementary.
What is very real, though, is the possibility that Ethereum blows past Bitcoin entirely. There is nothing that Bitcoin can do which Ethereum can’t. While Ethereum is less battle tested, it is moving faster, has better leadership, and has more developer mindshare. First mover advantage is challenging to overcome, but at current pace, it’s conceivable.
Taking a step back, it feels like the rate of change in digital currency is accelerating.
Digital currency is a unique field because of how ambitious the scope is: creating a better transaction network for the entire world (for currency, assets, our online identities, and many other things). Like the Internet itself, this is not one company selling its own proprietary product, it is a series of low-level protocols that will connect everyone someday. And, like the Internet, it will (and has) taken longer to develop, but the impact will be immense.
Fasten your seatbelts.

Thanks to Varun Srinivasan, Linda Xie, Nicholas Foley, Kristen Stone, Dan Romero, Maksim Stepanenko, Chris Dixon, and Brian Armstrong.
https://www.michael.com/en/resources/bitcoin-mining-and-the-environment
Michael Saylor
To Journalists, Investors, Regulators, & Anyone Else Interested in Bitcoin & the Environment,
Thanks for your inquiry. I thought I would share a few high level thoughts on Bitcoin Mining & the Environment.
1. Bitcoin Energy Utilization: Bitcoin runs on stranded, excess energy, generated at the edge of the grid, in places where there is no other demand, at times when no one else needs the electricity. Retail & commercial consumers of electricity in major population areas pay 5-10x more per kWh (10-20 cents per kWh) than bitcoin miners, who should be thought of as wholesale consumers of energy (normally budgeting 2-3 cents per kWh). The world produces more energy than it needs, and approximately a third of this energy is wasted. The last 15 basis points of energy power the entire Bitcoin Network - this is the least valued, cheapest margin of energy left after 99.85% of the energy in the world is allocated to other uses.
3. Bitcoin Value Creation & Energy Intensity: Approximately $4-5 billion in electricity is used to power & secure a network that is worth $420 billion as of today, and settles $12 billion per day ($4 trillion per year). The value of the output is 100x the cost of the energy input. This makes Bitcoin far less energy intensive than Google, Netflix, or Facebook, and 1-2 orders of magnitude less energy intensive than traditional 20th century industries like airlines, logistics, retail, hospitality, & agriculture.
4. Bitcoin vs. Other Cryptos: The only proven technique for creating a digital commodity is Proof of Work (bitcoin mining) deployed in a fair, equitable fashion (i.e. no pre-mine, no ICO, no controlling foundation, no primary software development team, no series of forced hard fork upgrades that materially change the monetary protocol). If we remove the dedicated hardware (SHA-256 ASICs) and the dedicated energy that powers those mining rigs, we are left with a network secured by proprietary software running on generic computers. That places all security & control of the network in the hands of a small group of software developers, who must create virtual machines doing virtual work with virtual energy in a virtual world to create virtual security. All attempts to date have resulted in a digital asset that meets the definition of an investment contract (i.e. digital security, not digital commodity). They all pass the Howey test and therefore look more like equities than commodities.
Regulators & legal experts have noted on many occasions that Proof of Stake networks are likely securities, not commodities, and we can expect them to be treated as such over time. PoS Crypto Securities may be appropriate for certain applications, but they are not suitable to serve as global, open, fair money or a global open settlement network. Therefore, it makes no sense to compare Proof of Stake networks to Bitcoin. The creation of a digital commodity without an issuer that serves as “digital gold” is an innovation (we have accomplished this only once in the history of the world with Bitcoin). The creation of a digital security or digital coupon on a shared database is utterly ordinary (it has been done 20,000 times in the crypto world, and 100,000+ times in the traditional world).
5. Bitcoin & Carbon Emissions: 99.92% of carbon emissions in the world are due to industrial uses of energy other than bitcoin mining. Bitcoin mining is neither the problem nor the solution to the challenge of reducing carbon emissions. It is in fact a rounding error and would hardly be noticed if it were not for the competitive guerrilla marketing activities of other crypto promoters & lobbyists that seek to focus negative attention on Proof of Work mining in order to distract regulators, politicians, & the general public from the inconvenient truth that Proof of Stake crypto assets are generally unregistered securities trading on unregulated exchanges to the detriment of the retail investing public.
7. Bitcoin & Global Energy: Bitcoin maximalists believe that Bitcoin is an instrument of economic empowerment for 8 billion people around the world. This is supported by the ability of a bitcoin miner to monetize any power source, anywhere, anytime, at any scale. Bitcoin mining can bring a clean, profitable and modern industry that generates hard currency to a remote location in the developing world, connected only via satellite link. All that is needed is some excess electricity generated from a waterfall, geothermal source, or miscellaneous excess energy deposit. Google, Netflix, and Apple won’t be setting up data centers in Central Africa that export services to their wealthy western clients anytime soon due to constraints on bandwidth, privacy, & requirements for consistent power flow, but bitcoin miners are not hampered by these constraints. They can utilize erratic power supplies with low bandwidth in remote locations and generate valuable bitcoin without prejudice, just as if they were in a suburb of NYC, LA, or SF. Even now, Bitcoin miners are everywhere and will continue to spread (through Africa, Asia, South America, etc.) wherever there is excess energy and anyone with aspirations for a better life. Bitcoin is an egalitarian financial asset offering financial inclusion to all, and bitcoin mining is an egalitarian technology industry offering commercial inclusion to anyone with the energy & engineering capability to operate a mining center.
Thank you for your interest, and best wishes with your article, research, legislation, or investments.
Michael Saylor
Executive Chairman
MicroStrategy
https://www.lynalden.com/misconceptions-about-bitcoin/
Lyn Alden, Nov 2020
This article addresses common questions, risks, and misconceptions about Bitcoin.
Although it has gained a lot since my original recommendation, I remain bullish on Bitcoin into 2021 as part of a diversified portfolio.
It's looking a bit like a break-out here, when talking in terms of months.
I initially covered Bitcoin in an article in autumn 2017, and was neutral-to-mildly-bearish for the intermediate term, and took no position.
The technology was well-conceived, but I had concerns about euphoric sentiment and market dilution. I neither claimed that it had to go lower, nor viewed it bullishly, and merely stepped aside to keep watching.
With a price tag of over $15,000/BTC today, Bitcoin is up over 120% from the initial price at my April pivot point, and is up over 60% from July, but I continue to be bullish through 2021. From there, I would expect a period of correction and consolidation, and I'll re-assess its forward prospects from that point.
Naturally, I've received many emails about Bitcoin over this summer and autumn. I've answered several of them via email, but figured I would summarize the most popular ones into a quick article on the subject. These are common misconceptions, risks, or questions, all of which make sense to ask, so I do my best here to address them as I see them.
If you haven't read it, I'd recommend reading my July Bitcoin article first.
Many people view Bitcoin as a bubble, which is understandable. Especially for folks that were looking at the linear chart in 2018 or 2019, Bitcoin looked like it hit a silly peak in late 2017 after a parabolic rise that would never be touched again.
This linear price chart goes from the beginning of 2016 to the beginning of 2019, and shows how it looked like a classic bubble:
Maybe it is a bubble. We'll see. However, it looks a lot more rational when you look at the long-term logarithmic chart, especially as it relates to Bitcoin's 4-year halving cycle.
Each dot in that chart represents the monthly bitcoin price, with the color based on how many months it has been since the prior halving. A halving refers to a pre-programmed point on the blockchain (every 210,000 blocks) when the supply rate of new bitcoins generated every 10 minutes gets cut in half; halvings occurred at the points where the blue dots turn into red dots.
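Here is a minimal sketch of the issuance schedule just described (simplified: the real protocol counts in integer satoshis, so the true cap is fractionally below 21 million):

    # The block subsidy starts at 50 BTC and halves every 210,000 blocks,
    # which is why total supply converges on roughly 21 million coins.
    subsidy = 50.0
    total_supply = 0.0
    schedule = []
    for era in range(33):                     # after ~33 halvings the subsidy is effectively zero
        total_supply += subsidy * 210_000
        schedule.append((era, subsidy))
        subsidy /= 2

    print(schedule[:4])          # [(0, 50.0), (1, 25.0), (2, 12.5), (3, 6.25)]
    print(round(total_supply))   # ~21,000,000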
The first cycle (the launch cycle) had a massive gain in percent terms from zero to over $20 per bitcoin at its peak. The second cycle, from the peak price in cycle 1 to the peak price in cycle 2, had an increase of over 50x, where Bitcoin first reached over $1,000. The third cycle from peak-to-peak had an increase of about 20x, where Bitcoin briefly touched about $20,000.
Since May 2020, we've been in the fourth cycle, and we'll see what happens over the next year. This is historically a very bullish phase for Bitcoin, as demand remains strong but new supply is very limited, with a big chunk of the existing supply held in strong hands.
The monthly chart is looking solid, with positive MACD, and a higher current price than any monthly close in history. Only on an intra-month basis, within December 2017, has it been higher than it is now:
Chart Source: StockCharts.com
The weekly chart shows how many times it became near-term overbought, and how many corrections it had, on its previous post-halving bullish run where it went up by 20x:
Chart Source: StockCharts.com
My job here is simply to find assets that are likely to do well over a lengthy period of time. For many of the questions/misconceptions discussed in this article, there are digital asset specialists that can answer them with more detail than I can. A downside of specialists, however, is that many of them (not all) tend to be perma-bulls on their chosen asset class.
This is true with many specialist gold investors, specialist stock investors, specialist Bitcoin investors, and so forth. How many gold newsletters suggested that you might want to take profits in gold around its multi-year peak in 2011? How many Bitcoin personalities suggested that Bitcoin was probably overbought in late 2017 and due for a multi-year correction?
I've had the pleasure of having conversations with some of the most knowledgeable Bitcoin specialists in the world; the ones that keep their outlooks measured and fact-based, with risks clearly indicated, rather than being constant promoters of their industry at any cost. Bitcoin's power comes in part from how enthusiastic its supporters are, but there is room for independent analysis on bullish potential and risk analysis as well.
And as someone who isn't in the digital asset industry myself, but who has a background that blends engineering and finance, which lends itself reasonably well to analyzing it, I approach Bitcoin like I approach any other asset class: with an acknowledgement of risks, rewards, bullish cycles, and bearish cycles. I continue to be bullish here.
If this fourth cycle plays out anywhere remotely close to the past three cycles since inception (which isn't guaranteed), Bitcoin's relative strength index could become quite extreme again in 2021. Here's a chart from PlanB about Bitcoin's historical monthly RSI during the bullish and bearish phases of its 4-year halving cycle:
For that reason, Bitcoin going from $6,900 to $15,000+ in seven months doesn't lead me to take profits yet. In other words, a monthly RSI of 70 doesn't cut it as "overbought" in Bitcoin terms, particularly this early after a halving event. I'll likely look into some rebalancing later in 2021, though.
Each investor has their own risk tolerance, conviction, knowledge, and financial goals. A key way to manage Bitcoin's volatility is to manage your position size, rather than try to trade it too frequently. If Bitcoin's price volatility keeps you up at night, your position is probably too big. If you have an appropriately-sized position, it's the type of asset to let run for a while, rather than to take profits as soon as it's slightly popular and doing well.
When it's at *extreme* sentiment, and/or its position has grown to a disproportionately large portion of your portfolio, it's likely time to consider rebalancing.
I approached this topic heavily in my autumn 2017 article, and again in my summer 2020 article.
To start with, digital assets can certainly have value. In simplistic terms, imagine a hypothetical massively multiplayer online game played by millions of people around the world. If there was a magical sword item introduced by the developer that was the strongest weapon in the game, and there were only a dozen of them released, and accounts that somehow got one could sell them to another account, you can bet that the price for that digital sword would be outrageous.
Bitcoin's utility is that it allows people to store value outside of any currency system in something with provably scarce units, and to transport that value around the world. Its founder, Satoshi Nakamoto, solved the double-spending problem and crafted a well-designed protocol that has scarce units that are tradeable in a stateless and decentralized way.
In terms of utility, try bringing $250,000 worth of gold through an international airport vs bringing $250,000 worth of bitcoins with you instead, via a small digital wallet, or via an app on your phone, or even just by remembering a 12-word seed phrase. In addition, Bitcoin is more easily verifiable than gold, in terms of being a reserve asset and being used as collateral. It's more frictionless to transfer than gold, and has a hard-capped supply. And I like gold too; I've been long it since 2018, and still am.
Bitcoin is a digital commodity, as Satoshi envisioned it:
As a thought experiment, imagine there was a base metal as scarce as gold but with the following properties:
– boring grey in colour
– not a good conductor of electricity
– not particularly strong, but not ductile or easily malleable either
– not useful for any practical or ornamental purpose

and one special, magical property:
– can be transported over a communications channel
If it somehow acquired any value at all for whatever reason, then anyone wanting to transfer wealth over a long distance could buy some, transmit it, and have the recipient sell it.
-Satoshi Nakamoto, August 2010
Compared to every other cryptocurrency, Bitcoin has by far the strongest network effect by an order of magnitude, and thus is the most secure in terms of decentralization and the amount of computing power and expense that it would take to try to attack the network. There are thousands of cryptocurrencies, but none of them have been able to rival Bitcoin in terms of market capitalization, decentralization, ubiquity, firm monetary policy, and network security combined.
Some other tokens present novel privacy advancements, or smart contracts which can allow for all sorts of technological disruption on other industries, but none of them are a major challenge to Bitcoin in terms of being an emergent store of value. Some of them can work well alongside Bitcoin, but not in place of Bitcoin.
Bitcoin is the best at what it does. And in a world of negative real rates within developed markets, and a host of currency failures in emerging markets, what it does has utility. The important question, therefore, is how much utility.
The pricing of that utility is best thought of in terms of the whole protocol, which is divided into 21 million bitcoins (each of which is divisible into 100 million sats), and combines the asset itself with the means of transmitting it and verifying it. The value of the protocol grows as more individuals and institutions use it to store and transmit and verify value, and can shrink if fewer folks use it.
The total market capitalization of gold is estimated to be over $10 trillion. Could Bitcoin reach 10% of that? 25%? Half? Parity? I don't know.
I'm focusing on one Bitcoin halving cycle at a time. A four-year outlook is enough for me, and I'll calibrate my analysis to what is happening as we go along.
In fact, Bitcoin's limited transaction throughput played a key role in the 2017 hard fork between Bitcoin and Bitcoin Cash. Proponents of Bitcoin Cash wanted to increase the block size, which would allow the network to process more transactions per unit of time.
However, with any payment protocol, there is a trade-off between security, decentralization, and speed. Which variables to maximize is a design choice; it's currently impossible to maximize all three.
Visa, for example, maximizes speed to handle countless transactions per minute, and has moderate security depending on how you measure it. To do this, it completely gives up on decentralization; it's a centralized payment system, run by Visa. And it of course relies on the underlying currency, which itself is centralized government fiat currency.
Bitcoin, on the other hand, maximizes security and decentralization, at the cost of speed. By keeping the block size small, it makes it possible for people all over the world to run their own full nodes, which can be used to verify the entire blockchain. Widespread node distribution (over 10,000 nodes) helps ensure decentralization and continual verification of the blockchain.
Bitcoin Cash potentially increases transaction throughput with bigger block sizes, but at the cost of lower security and less decentralization. In addition, it still doesn't come anywhere close to Visa in terms of transaction throughput, so it doesn't really maximize any variable.
Basically, the dispute between Bitcoin and Bitcoin Cash is whether Bitcoin should be both a settlement layer and a transaction layer (and thus not be perfect at either of those roles), or whether it should maximize itself as a settlement layer, and allow other networks to build on top of it to optimize for transaction speed and throughput.
The way to think about Bitcoin is that it is an ideal settlement layer. It combines a scarce currency/commodity with transmission and verification features, and has a huge amount of security backing it up from its high global hash rate. In fact, that's what makes Bitcoin vs Visa an inappropriate comparison; Visa is just a layer on top of deeper settlement layers, with merchant banks and other systems involved under the surface, whereas Bitcoin is foundational.
The global banking system has extremely bad scaling when you go down to the foundation. Wire transfers, for example, generally take days to settle. You don't pay for everyday things with wire transfers for that reason; they're mainly for big or important transactions.
However, the banking system builds additional layers of scalability onto those types of settlement layers, so we have things like paper checks, electronic checks, credit cards, Paypal, and so forth. Consumers can use these systems to perform a large number of smaller transactions, and the underlying banks settle with each other with more foundational, larger transactions less frequently. Each form of payment is a trade-off between speed and security; banks and institutions settle with each other with the most secure layers, while consumers use the speedier layers for everyday commerce.
Similarly, there are protocols like the Lightning Network and other smart contract concepts that are built on top of Bitcoin, which increase Bitcoin's scalability. Lightning can perform tons of quick transactions between counterparties, and reconcile them with Bitcoin's blockchain in one batch transaction. This reduces the fees and bandwidth limitations per small transaction.
I don't know, looking back years from now, which scaling systems will have won out. There's still a lot of development being done. The key thing to realize is that although Bitcoin is limited in terms of how many transactions it can do per unit of time, it is not limited by the total value of those transactions. The amount of value that Bitcoin can settle per unit of time is limitless, depending on its market cap and additional layers.
In other words, suppose that the Bitcoin network is limited to 250 transactions per minute, which is low. Those transactions could average $100 or $1 million, or any number. If they average $100 each, it means only $25,000 in transaction value is performed per minute. If they average $1 million each, it means $250 million in transaction value is performed per minute. If Bitcoin grows in use as a store of value, the transaction fees and inherent limitations prioritize the largest and most important transactions; the major settlement transactions.
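The same arithmetic in a couple of lines (the 250 transactions-per-minute figure is just the hypothetical limit used above):

    # Throughput caps the number of transactions, not the value they settle.
    tx_per_minute = 250
    for avg_value in (100, 1_000_000):
        settled = tx_per_minute * avg_value
        print(f"${avg_value:,} average -> ${settled:,} settled per minute")
    # $100 average       -> $25,000 settled per minute
    # $1,000,000 average -> $250,000,000 settled per minute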
Additional layers built on top of Bitcoin can do an arbitrary number of transactions per minute, and settle them with batches on the actual Bitcoin blockchain. This is similar to how consumer layers like Visa or Paypal can process an arbitrary number of transactions per minute, while the banks behind the scenes settle with larger transactions less frequently.
The market has already spoken about which technology it thinks is best, between Bitcoin and others like Bitcoin Cash. Ever since the 2017 hard fork, Bitcoin's market capitalization and hash rate and number of nodes have greatly outperformed Bitcoin Cash's. Watching this play out in 2017 was one of my initial risk assessments for the protocol, but three years later, that concern no longer exists.
The Bitcoin network currently uses as much energy as a small country. This naturally brings up environmental concerns, especially as it grows.
Similarly, gold mining uses a ton of energy. For each gold coin, a ton of money, energy, and time went into exploration for deposits, developing a mine, and then processing countless tons of rock with heavy equipment to get a few grams of gold per ton. Then, it has to be purified and minted into bars and coins, and transported.
It takes several tons of processed rock to get each 1-ounce gold coin, and thousands of tons of processed rock for each good delivery gold bar. The amount of energy that goes into a small unit of gold is immense.
In fact, that energy is what gives gold value, and what made it internationally recognized as money for thousands of years. Gold is basically concentrated energy, concentrated work, as a dense store of value that does not erode with time.
There's no limit to how many dollars, euros, or yen we can print, however. Banks multiply them all the time with a stroke of a keyboard. Likewise, industrial metals like iron are very common as well; we have no shortage of them. Gold, however, is very rare, and when found, it takes a ton of energy and time to get into pure form. And then we have to spend more energy transporting, securing, and verifying it from time to time.
However, the world does that anyway, because it derives value from it compared to the value that it had to put in to get it. Gold mining and refining requires energy, but in turn, central banks, institutions, investors, and consumers obtain a scarce store of value, or jewelry, or industrial applications from the rare metal.
Similarly, Bitcoin takes a lot of energy, but that's because it has so much computing power constantly securing its protocol, compared to countless other cryptocurrencies that are easy to attack or insufficiently decentralized.
Visa uses much less energy than Bitcoin, but it requires complete centralization and is built on top of an abundant fiat currency. Litecoin uses much less energy than Bitcoin as well, but it's easier for a well-capitalized group to attack.
The question then becomes whether that energy associated with Bitcoin is put to good use. Does Bitcoin justify its energy usage? Does it add enough value?
So far, the market says it does and I agree. A decentralized digital monetary system, separate from any sovereign entity, with a rules-based monetary policy and inherent scarcity, gives people around the world a choice, which some of them use to store value in, and/or use to transmit that value to others.
Those of us in developed markets that haven't experienced rapid inflation for decades may not see the need for it, but countless people in emerging markets, who have experienced many instances of severe inflation in their lifetimes, tend to get the concept more quickly.
Furthermore, a significant portion of the energy that Bitcoin uses would otherwise be wasted. Bitcoin miners seek out the absolute cheapest sources of electricity in the world, which usually means energy that was developed for one reason or another, but that doesn't currently have sufficient demand, and would therefore be wasted.
Examples of this include over-built hydroelectric dams in certain regions of China, or stranded oil and gas wells in North America. Bitcoin mining equipment is mobile, and thus can be put near wherever the cheapest source of energy is, to arbitrage it and give a purpose to that stranded energy production.
Bitcoin mining converts the output from those cheap stranded sources of energy into something that currently has monetary value.
Bitcoin is promoted as a store of value and medium of exchange, but it has a very volatile price history. This leads investors, again somewhat understandably, to say that it's not a good store of value or medium of exchange, and thus fails at the one thing that it's designed to do.
And they're kind of right. Bitcoin isn't the asset that you put money into for an emergency fund, or for a down payment on a house that you're saving up for 6 months from now. When you definitely need a certain amount of currency in a near-term time horizon, Bitcoin is not the asset of choice.
This is because it's an emerging store of value, roughly 12 years old now, and thus carries with it a significant degree of growth and speculation. Its market capitalization is growing over time, taking some market share from other stores of value, and growing into a meaningful asset class. We'll see if it continues to do so, or if it levels off somewhere and starts to stagnate.
For Bitcoin's market cap to grow from $25 million to $250 million to $2.5 billion to $25 billion to today's value of over $250 billion, it requires volatility, especially upward volatility (which, of course, comes with associated downside volatility).
As it grows larger, its volatility reduces over time. If Bitcoin becomes a $2.5 trillion asset class one day, with more widespread holding, its volatility would likely be lower than it is now.
Therefore, having a nonzero exposure to Bitcoin is basically a bet that Bitcoin's network effect and use case will continue to grow until it reaches some equilibrium where it has lower volatility and is more stable. For now, it has plenty of volatility, and it needs that volatility if it is to keep growing. Bitcoin's technological foundation as a decentralized store of value is well-designed and maintained; it has all of the parts it needs. It just needs to grow into what it can be, and we'll see if it does.
It's like if someone identifies a new element, and people begin discovering uses for that element, and it experiences a period of rapid growth and high price volatility, until it has been around for sufficient time that it eventually settles in to a normal volatility band.
While it remains as volatile as it is, investors can mitigate the risk by having an appropriate position size.
Another legitimate concern that folks have is that even if Bitcoin is successful, its success will prompt governments to ban it. Some governments already have. So, this falls more in the "risk" category than the "misconception" category.
There is precedent for this. The United States made it illegal for Americans to own gold from 1933 to 1975, other than in small amounts for jewelry and collectibles. In the land of the free, people could be sent to prison for owning coins and bars of a benign yellow metal, simply because it was seen as a threat to the monetary system.
This chart shows the interest rate of 10-year Treasury yields in blue. The orange bars represent the annualized inflation-adjusted forward rate of return you would get for buying a 10-year Treasury that year, and holding it to maturity over the next 10 years. The green square shows the period of time where owning gold was illegal.
There was a four-decade period from the 1930's to the 1970's where keeping money in the bank or in sovereign bonds didn't keep up with inflation, i.e. the orange bars were net negative. Savers' purchasing power went down if they held these paper assets.
This was due to two inflationary decades; one in the 1940's, and one in the 1970's. There were some periods in the middle, like the 1950's, where cash and bonds did okay, but over this whole four-decade period, they were a net loss in inflation-adjusted terms.
It's not too shocking, therefore, that one of the release valves for investors was banned during that specific period. Gold did great over that time, and held its purchasing power against currency debasement. The government considered it a matter of national security to "prevent hoarding" and basically force people into the paper assets that lost value, or into more economic assets like stocks and real estate.
This was back when the dollar was backed by gold, so the United States government wanted to own most of the gold, and limit citizens' abilities to acquire gold. No such backing exists today for gold or Bitcoin, and thus there is less incentive to try to ban it.
And, the gold ban was hard to enforce. There were rather few prosecutions over gold ownership, even though the penalties on paper were severe.
Bitcoin is secured by cryptography, and thus is not really able to be confiscated other than through legal demand. However, governments can ban exchanges and make it illegal to own, which would drive out institutional money and put Bitcoin into the black market.
It would be extremely difficult for major capital markets like the United States or Europe or Japan to ban it at this point. If, in the years ahead, Bitcoin's market capitalization reaches over $1 trillion, with more and more institutions holding exposure to it, it becomes harder and harder to ban.
Bitcoin was already an unusual asset that grew into the semi-mainstream from the bottom up, through retail adoption. Once the political donor class owns it as well, which they increasingly do, the game is basically over for banning it. Trying to ban it would be an attack on the balance sheets of corporations, funds, banks, and investors that own it, and would not be popular among millions of voters that own it.
I think regulatory hostility is still a risk to watch out for while the market capitalization is sub-$1 trillion. And the risk can be managed with an appropriate position size for your unique financial situation and goals.
The most frequent question I get about Bitcoin is simply where to buy bitcoins. Some people don't know how to start, and other people are familiar with the popular places to buy, but don't know which ones are ideal.
There's no one answer; it depends on your goals with it, and where you live in the world.
The first question to ask is whether you're a trader or a saver. Do you want to establish a long-term Bitcoin position, or buy some with a plan to sell it in a few months? Or maybe some of both?
The second question to ask yourself is whether you want to self-custody it with private keys and a hardware wallet or multi-signature solution, which has an upfront learning curve but is ultimately more secure, or if you want to have someone else custody it for you, which is simpler but involves counterparty risk.
Overall, having access to a crypto exchange, and having access to a dollar-cost averaging platform, along with a personal custody solution like a hardware wallet or a multi-signature solution, is a good combo.
For folks that are early in the learning curve, keeping it on an exchange or in custody storage is also fine, and as you learn more, you can choose to self-custody if it's right for your situation.
https://ethereum.org/en/whitepaper/
Vitalik Buterin, 2014
The mechanism behind proof-of-work was a breakthrough in the space because it simultaneously solved two problems. First, it provided a simple and moderately effective consensus algorithm, allowing nodes in the network to collectively agree on a set of canonical updates to the state of the Bitcoin ledger. Second, it provided a mechanism for allowing free entry into the consensus process, solving the political problem of deciding who gets to influence the consensus, while simultaneously preventing sybil attacks. It does this by substituting a formal barrier to participation, such as the requirement to be registered as a unique entity on a particular list, with an economic barrier - the weight of a single node in the consensus voting process is directly proportional to the computing power that the node brings. Since then, an alternative approach has been proposed called proof-of-stake, calculating the weight of a node as being proportional to its currency holdings and not computational resources; the discussion of the relative merits of the two approaches is beyond the scope of this paper but it should be noted that both approaches can be used to serve as the backbone of a cryptocurrency.
From a technical standpoint, the ledger of a cryptocurrency such as Bitcoin can be thought of as a state transition system, where there is a "state" consisting of the ownership status of all existing bitcoins and a "state transition function" that takes a state and a transaction and outputs a new state which is the result. In a standard banking system, for example, the state is a balance sheet, a transaction is a request to move $X from A to B, and the state transition function reduces the value in A's account by $X and increases the value in B's account by $X. If A's account has less than $X in the first place, the state transition function returns an error. Hence, one can formally define:
APPLY(S,TX) -> S' or ERROR
In the banking system defined above, applying a transaction that sends $X from Alice to Bob to a state in which Alice holds at least $X yields a new state with Alice's balance reduced by $X and Bob's increased by $X, while applying a transaction that tries to send more than Alice holds returns an error.
The state transition function APPLY(S,TX) -> S' for Bitcoin can be defined roughly as follows:
For each input in TX:
If the referenced UTXO is not in S, return an error.
If the provided signature does not match the owner of the UTXO, return an error.
If the sum of the denominations of all input UTXO is less than the sum of the denominations of all output UTXO, return an error.
Return S with all input UTXO removed and all output UTXO added.
The first half of the first step prevents transaction senders from spending coins that do not exist, the second half of the first step prevents transaction senders from spending other people's coins, and the second step enforces conservation of value. In order to use this for payment, the protocol is as follows. Suppose Alice wants to send 11.7 BTC to Bob. First, Alice will look for a set of available UTXO that she owns that totals up to at least 11.7 BTC. Realistically, Alice will not be able to get exactly 11.7 BTC; say that the smallest she can get is 6+4+2=12. She then creates a transaction with those three inputs and two outputs. The first output will be 11.7 BTC with Bob's address as its owner, and the second output will be the remaining 0.3 BTC "change", with the owner being Alice herself.
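As a rough illustration of the APPLY function just described, the logic can be modeled in Python; this is a simplified sketch, not Bitcoin's actual data structures, and signature checking is reduced to comparing owner names:

```python
# Simplified model of Bitcoin's state transition function APPLY(S, TX) -> S'.
# The "state" is a dict mapping UTXO id -> (owner, value); a transaction lists
# input UTXO ids with their claimed signers, plus the new outputs it creates.

def apply_tx(state, tx):
    new_state = dict(state)
    total_in = 0
    for utxo_id, signer in tx["inputs"]:
        if utxo_id not in new_state:
            raise ValueError("referenced UTXO is not in the state")  # spending non-existent coins
        owner, value = new_state[utxo_id]
        if signer != owner:
            raise ValueError("signature does not match UTXO owner")  # spending someone else's coins
        total_in += value
        del new_state[utxo_id]                                       # inputs are consumed
    total_out = sum(value for _, value in tx["outputs"])
    if total_in < total_out:
        raise ValueError("outputs exceed inputs")                    # conservation of value
    for i, (owner, value) in enumerate(tx["outputs"]):
        new_state[(tx["id"], i)] = (owner, value)                    # create the new UTXO
    return new_state

# Alice spends 6 + 4 + 2 = 12 BTC, sending 11.7 to Bob and 0.3 back to herself as change.
state = {"u1": ("alice", 6), "u2": ("alice", 4), "u3": ("alice", 2)}
tx = {"id": "tx1",
      "inputs": [("u1", "alice"), ("u2", "alice"), ("u3", "alice")],
      "outputs": [("bob", 11.7), ("alice", 0.3)]}
state = apply_tx(state, tx)
```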
If we had access to a trustworthy centralized service, this system would be trivial to implement; it could simply be coded exactly as described, using a centralized server's hard drive to keep track of the state. However, with Bitcoin we are trying to build a decentralized currency system, so we will need to combine the state transaction system with a consensus system in order to ensure that everyone agrees on the order of transactions. Bitcoin's decentralized consensus process requires nodes in the network to continuously attempt to produce packages of transactions called "blocks". The network is intended to produce roughly one block every ten minutes, with each block containing a timestamp, a nonce, a reference to (ie. hash of) the previous block and a list of all of the transactions that have taken place since the previous block. Over time, this creates a persistent, ever-growing, "blockchain" that constantly updates to represent the latest state of the Bitcoin ledger.
The algorithm for checking if a block is valid, expressed in this paradigm, is as follows:
Check if the previous block referenced by the block exists and is valid.
Check that the proof-of-work on the block is valid.
Let S[0] be the state at the end of the previous block.
Suppose TX is the block's transaction list with n transactions. For all i in 0...n-1, set S[i+1] = APPLY(S[i],TX[i]). If any application returns an error, exit and return false.
Return true, and register S[n] as the state at the end of this block.
Essentially, each transaction in the block must provide a valid state transition from what was the canonical state before the transaction was executed to some new state. Note that the state is not encoded in the block in any way; it is purely an abstraction to be remembered by the validating node and can only be (securely) computed for any block by starting from the genesis state and sequentially applying every transaction in every block. Additionally, note that the order in which the miner includes transactions into the block matters; if there are two transactions A and B in a block such that B spends a UTXO created by A, then the block will be valid if A comes before B but not otherwise.
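Expressed in the same simplified Python model (reusing apply_tx from above, with get_block, get_state and check_pow as assumed helper functions rather than real Bitcoin APIs), the check looks roughly like this:

```python
def validate_block(block, get_block, get_state, check_pow):
    parent = get_block(block["prev_hash"])
    if parent is None:
        return False                       # previous block must be known (its own validity is checked recursively)
    if not check_pow(block):
        return False                       # proof-of-work must meet the target
    state = get_state(parent)              # S[0]: the state at the end of the previous block
    for tx in block["txs"]:
        try:
            state = apply_tx(state, tx)    # S[i+1] = APPLY(S[i], TX[i])
        except ValueError:
            return False                   # any invalid transaction invalidates the whole block
    return True                            # the final state is registered as the state at the end of this block
```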
The one validity condition present in the above list that is not found in other systems is the requirement for "proof-of-work". The precise condition is that the double-SHA256 hash of every block, treated as a 256-bit number, must be less than a dynamically adjusted target, which as of the time of this writing is approximately 2^187. The purpose of this is to make block creation computationally "hard", thereby preventing sybil attackers from remaking the entire blockchain in their favor. Because SHA256 is designed to be a completely unpredictable pseudorandom function, the only way to create a valid block is simply trial and error, repeatedly incrementing the nonce and seeing if the new hash matches.
At the current target of ~2^187, the network must make an average of ~2^69 tries before a valid block is found; in general, the target is recalibrated by the network every 2016 blocks so that on average a new block is produced by some node in the network every ten minutes. In order to compensate miners for this computational work, the miner of every block is entitled to include a transaction giving themselves 25 BTC out of nowhere. Additionally, if any transaction has a higher total denomination in its inputs than in its outputs, the difference also goes to the miner as a "transaction fee". Incidentally, this is also the only mechanism by which BTC are issued; the genesis state contained no coins at all.
In order to better understand the purpose of mining, let us examine what happens in the event of a malicious attacker. Since Bitcoin's underlying cryptography is known to be secure, the attacker will target the one part of the Bitcoin system that is not protected by cryptography directly: the order of transactions. The attacker's strategy is simple:
Send 100 BTC to a merchant in exchange for some product (preferably a rapid-delivery digital good)
Wait for the delivery of the product
Produce another transaction sending the same 100 BTC to himself
Try to convince the network that his transaction to himself was the one that came first.
Once step (1) has taken place, after a few minutes some miner will include the transaction in a block, say block number 270000. After about one hour, five more blocks will have been added to the chain after that block, with each of those blocks indirectly pointing to the transaction and thus "confirming" it. At this point, the merchant will accept the payment as finalized and deliver the product; since we are assuming this is a digital good, delivery is instant. Now, the attacker creates another transaction sending the 100 BTC to himself. If the attacker simply releases it into the wild, the transaction will not be processed; miners will attempt to run APPLY(S,TX) and notice that TX consumes a UTXO which is no longer in the state. So instead, the attacker creates a "fork" of the blockchain, starting by mining another version of block 270000 pointing to the same block 269999 as a parent but with the new transaction in place of the old one. Because the block data is different, this requires redoing the proof-of-work. Furthermore, the attacker's new version of block 270000 has a different hash, so the original blocks 270001 to 270005 do not "point" to it; thus, the original chain and the attacker's new chain are completely separate. The rule is that in a fork the longest blockchain is taken to be the truth, and so legitimate miners will work on the 270005 chain while the attacker alone is working on the 270000 chain. In order for the attacker to make his blockchain the longest, he would need to have more computational power than the rest of the network combined in order to catch up (hence, "51% attack").
[Figure: Merkle tree proofs. Left: it suffices to present only a small number of nodes in a Merkle tree to give a proof of the validity of a branch. Right: any attempt to change any part of the Merkle tree will eventually lead to an inconsistency somewhere up the chain.]
An important scalability feature of Bitcoin is that the block is stored in a multi-level data structure. The "hash" of a block is actually only the hash of the block header, a roughly 200-byte piece of data that contains the timestamp, nonce, previous block hash and the root hash of a data structure called the Merkle tree storing all transactions in the block. A Merkle tree is a type of binary tree, composed of a set of nodes with a large number of leaf nodes at the bottom of the tree containing the underlying data, a set of intermediate nodes where each node is the hash of its two children, and finally a single root node, also formed from the hash of its two children, representing the "top" of the tree. The purpose of the Merkle tree is to allow the data in a block to be delivered piecemeal: a node can download only the header of a block from one source, the small part of the tree relevant to them from another source, and still be assured that all of the data is correct. The reason why this works is that hashes propagate upward: if a malicious user attempts to swap in a fake transaction into the bottom of a Merkle tree, this change will cause a change in the node above, and then a change in the node above that, finally changing the root of the tree and therefore the hash of the block, causing the protocol to register it as a completely different block (almost certainly with an invalid proof-of-work).
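As a toy illustration of why a Merkle branch proves inclusion (using single SHA-256 and ignoring Bitcoin's double-hashing and serialization details; the function names are illustrative):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    # Build the tree bottom-up; an odd node is paired with itself for simplicity.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_branch(leaf, branch, root):
    # branch is a list of (sibling_hash, sibling_is_right) pairs from leaf to root.
    node = h(leaf)
    for sibling, sibling_is_right in branch:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(leaves)
# A proof for tx1 consists of h(tx0) (its left sibling) and the hash of the right subtree.
branch = [(h(b"tx0"), False), (h(h(b"tx2") + h(b"tx3")), True)]
assert verify_branch(b"tx1", branch, root)   # swapping any leaf would change the root
```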
The Merkle tree protocol is arguably essential to long-term sustainability. A "full node" in the Bitcoin network, one that stores and processes the entirety of every block, takes up about 15 GB of disk space in the Bitcoin network as of April 2014, and is growing by over a gigabyte per month. Currently, this is viable for some desktop computers and not phones, and later on in the future only businesses and hobbyists will be able to participate. A protocol known as "simplified payment verification" (SPV) allows for another class of nodes to exist, called "light nodes", which download the block headers, verify the proof-of-work on the block headers, and then download only the "branches" associated with transactions that are relevant to them. This allows light nodes to determine with a strong guarantee of security what the status of any Bitcoin transaction, and their current balance, is while downloading only a very small portion of the entire blockchain.
Metacoins - the idea behind a metacoin is to have a protocol that lives on top of Bitcoin, using Bitcoin transactions to store metacoin transactions but having a different state transition function, APPLY'. Because the metacoin protocol cannot prevent invalid metacoin transactions from appearing in the Bitcoin blockchain, a rule is added that if APPLY'(S,TX) returns an error, the protocol defaults to APPLY'(S,TX) = S. This provides an easy mechanism for creating an arbitrary cryptocurrency protocol, potentially with advanced features that cannot be implemented inside of Bitcoin itself, but with a very low development cost since the complexities of mining and networking are already handled by the Bitcoin protocol. Metacoins have been used to implement some classes of financial contracts, name registration and decentralized exchange.
Thus, in general, there are two approaches toward building a consensus protocol: building an independent network, and building a protocol on top of Bitcoin. The former approach, while reasonably successful in the case of applications like Namecoin, is difficult to implement; each individual implementation needs to bootstrap an independent blockchain, as well as building and testing all of the necessary state transition and networking code. Additionally, we predict that the set of applications for decentralized consensus technology will follow a power law distribution where the vast majority of applications would be too small to warrant their own blockchain, and we note that there exist large classes of decentralized applications, particularly decentralized autonomous organizations, that need to interact with each other.
The Bitcoin-based approach, on the other hand, has the flaw that it does not inherit the simplified payment verification features of Bitcoin. SPV works for Bitcoin because it can use blockchain depth as a proxy for validity; at some point, once the ancestors of a transaction go far enough back, it is safe to say that they were legitimately part of the state. Blockchain-based meta-protocols, on the other hand, cannot force the blockchain not to include transactions that are not valid within the context of their own protocols. Hence, a fully secure SPV meta-protocol implementation would need to backward scan all the way to the beginning of the Bitcoin blockchain to determine whether or not certain transactions are valid. Currently, all "light" implementations of Bitcoin-based meta-protocols rely on a trusted server to provide the data, arguably a highly suboptimal result especially when one of the primary purposes of a cryptocurrency is to eliminate the need for trust.
Even without any extensions, the Bitcoin protocol actually does facilitate a weak version of a concept of "smart contracts". UTXO in Bitcoin can be owned not just by a public key, but also by a more complicated script expressed in a simple stack-based programming language. In this paradigm, a transaction spending that UTXO must provide data that satisfies the script. Indeed, even the basic public key ownership mechanism is implemented via a script: the script takes an elliptic curve signature as input, verifies it against the transaction and the address that owns the UTXO, and returns 1 if the verification is successful and 0 otherwise. Other, more complicated, scripts exist for various additional use cases. For example, one can construct a script that requires signatures from two out of a given three private keys to validate ("multisig"), a setup useful for corporate accounts, secure savings accounts and some merchant escrow situations. Scripts can also be used to pay bounties for solutions to computational problems, and one can even construct a script that says something like "this Bitcoin UTXO is yours if you can provide an SPV proof that you sent a Dogecoin transaction of this denomination to me", essentially allowing decentralized cross-cryptocurrency exchange.
However, the scripting language as implemented in Bitcoin has several important limitations:
Lack of Turing-completeness - that is to say, while there is a large subset of computation that the Bitcoin scripting language supports, it does not nearly support everything. The main category that is missing is loops. This is done to avoid infinite loops during transaction verification; theoretically it is a surmountable obstacle for script programmers, since any loop can be simulated by simply repeating the underlying code many times with an if statement, but it does lead to scripts that are very space-inefficient. For example, implementing an alternative elliptic curve signature algorithm would likely require 256 repeated multiplication rounds all individually included in the code.
Value-blindness - there is no way for a UTXO script to provide fine-grained control over the amount that can be withdrawn. For example, one powerful use case of an oracle contract would be a hedging contract, where A and B put in $1000 worth of BTC and after 30 days the script sends $1000 worth of BTC to A and the rest to B. This would require an oracle to determine the value of 1 BTC in USD, but even then it is a massive improvement in terms of trust and infrastructure requirement over the fully centralized solutions that are available now. However, because UTXO are all-or-nothing, the only way to achieve this is through the very inefficient hack of having many UTXO of varying denominations (eg. one UTXO of 2^k for every k up to 30) and having the oracle pick which UTXO to send to A and which to B.
Lack of state - UTXO can either be spent or unspent; there is no opportunity for multi-stage contracts or scripts which keep any other internal state beyond that. This makes it hard to make multi-stage options contracts, decentralized exchange offers or two-stage cryptographic commitment protocols (necessary for secure computational bounties). It also means that UTXO can only be used to build simple, one-off contracts and not more complex "stateful" contracts such as decentralized organizations, and makes meta-protocols difficult to implement. Binary state combined with value-blindness also mean that another important application, withdrawal limits, is impossible.
Blockchain-blindness - UTXO are blind to blockchain data such as the nonce, the timestamp and previous block hash. This severely limits applications in gambling, and several other categories, by depriving the scripting language of a potentially valuable source of randomness.
Thus, we see three approaches to building advanced applications on top of cryptocurrency: building a new blockchain, using scripting on top of Bitcoin, and building a meta-protocol on top of Bitcoin. Building a new blockchain allows for unlimited freedom in building a feature set, but at the cost of development time, bootstrapping effort and security. Using scripting is easy to implement and standardize, but is very limited in its capabilities, and meta-protocols, while easy, suffer from faults in scalability. With Ethereum, we intend to build an alternative framework that provides even larger gains in ease of development as well as even stronger light client properties, while at the same time allowing applications to share an economic environment and blockchain security.
The intent of Ethereum is to create an alternative protocol for building decentralized applications, providing a different set of tradeoffs that we believe will be very useful for a large class of decentralized applications, with particular emphasis on situations where rapid development time, security for small and rarely used applications, and the ability of different applications to very efficiently interact, are important. Ethereum does this by building what is essentially the ultimate abstract foundational layer: a blockchain with a built-in Turing-complete programming language, allowing anyone to write smart contracts and decentralized applications where they can create their own arbitrary rules for ownership, transaction formats and state transition functions. A bare-bones version of Namecoin can be written in two lines of code, and other protocols like currencies and reputation systems can be built in under twenty. Smart contracts, cryptographic "boxes" that contain value and only unlock it if certain conditions are met, can also be built on top of the platform, with vastly more power than that offered by Bitcoin scripting because of the added powers of Turing-completeness, value-awareness, blockchain-awareness and state.
In Ethereum, the state is made up of objects called "accounts", with each account having a 20-byte address and state transitions being direct transfers of value and information between accounts. An Ethereum account contains four fields:
The nonce, a counter used to make sure each transaction can only be processed once
The account's current ether balance
The account's contract code, if present
The account's storage (empty by default)
"Ether" is the main internal crypto-fuel of Ethereum, and is used to pay transaction fees. In general, there are two types of accounts: externally owned accounts, controlled by private keys, and contract accounts, controlled by their contract code. An externally owned account has no code, and one can send messages from an externally owned account by creating and signing a transaction; in a contract account, every time the contract account receives a message its code activates, allowing it to read and write to internal storage and send other messages or create contracts in turn.
Note that "contracts" in Ethereum should not be seen as something that should be "fulfilled" or "complied with"; rather, they are more like "autonomous agents" that live inside of the Ethereum execution environment, always executing a specific piece of code when "poked" by a message or transaction, and having direct control over their own ether balance and their own key/value store to keep track of persistent variables.
The term "transaction" is used in Ethereum to refer to the signed data package that stores a message to be sent from an externally owned account. Transactions contain:
The recipient of the message
A signature identifying the sender
The amount of ether to transfer from the sender to the recipient
An optional data field
A STARTGAS value, representing the maximum number of computational steps the transaction execution is allowed to take
A GASPRICE value, representing the fee the sender pays per computational step
The first three are standard fields expected in any cryptocurrency. The data field has no function by default, but the virtual machine has an opcode using which a contract can access the data; as an example use case, if a contract is functioning as an on-blockchain domain registration service, then it may wish to interpret the data being passed to it as containing two "fields", the first field being a domain to register and the second field being the IP address to register it to. The contract would read these values from the message data and appropriately place them in storage.
The STARTGAS and GASPRICE fields are crucial for Ethereum's anti-denial of service model. In order to prevent accidental or hostile infinite loops or other computational wastage in code, each transaction is required to set a limit to how many computational steps of code execution it can use. The fundamental unit of computation is "gas"; usually, a computational step costs 1 gas, but some operations cost higher amounts of gas because they are more computationally expensive, or increase the amount of data that must be stored as part of the state. There is also a fee of 5 gas for every byte in the transaction data. The intent of the fee system is to require an attacker to pay proportionately for every resource that they consume, including computation, bandwidth and storage; hence, any transaction that leads to the network consuming a greater amount of any of these resources must have a gas fee roughly proportional to the increment.
Contracts have the ability to send "messages" to other contracts. Messages are virtual objects that are never serialized and exist only in the Ethereum execution environment. A message contains:
The sender of the message (implicit)
The recipient of the message
The amount of ether to transfer alongside the message
An optional data field
A STARTGAS value
Essentially, a message is like a transaction, except it is produced by a contract and not an external actor. A message is produced when a contract currently executing code executes the CALL opcode, which produces and executes a message. Like a transaction, a message leads to the recipient account running its code. Thus, contracts can have relationships with other contracts in exactly the same way that external actors can.
Note that the gas allowance assigned by a transaction or contract applies to the total gas consumed by that transaction and all sub-executions. For example, if an external actor A sends a transaction to B with 1000 gas, and B consumes 600 gas before sending a message to C, and the internal execution of C consumes 300 gas before returning, then B can spend another 100 gas before running out of gas.
The Ethereum state transition function, APPLY(S,TX) -> S', can be defined as follows:
Check if the transaction is well-formed (ie. has the right number of values), the signature is valid, and the nonce matches the nonce in the sender's account. If not, return an error.
Calculate the transaction fee as STARTGAS * GASPRICE, and determine the sending address from the signature. Subtract the fee from the sender's account balance and increment the sender's nonce. If there is not enough balance to spend, return an error.
Initialize GAS = STARTGAS, and take off a certain quantity of gas per byte to pay for the bytes in the transaction.
Transfer the transaction value from the sender's account to the receiving account. If the receiving account does not yet exist, create it. If the receiving account is a contract, run the contract's code either to completion or until the execution runs out of gas.
If the value transfer failed because the sender did not have enough money, or the code execution ran out of gas, revert all state changes except the payment of the fees, and add the fees to the miner's account.
Otherwise, refund the fees for all remaining gas to the sender, and send the fees paid for gas consumed to the miner.
For example, suppose that the contract's code is a simple registry: it reads a key and a value from the incoming transaction data and, if nothing is stored under that key yet, stores the value there. Note that in reality the contract code is written in the low-level EVM code; this example is written in Serpent, one of our high-level languages, for clarity, and can be compiled down to EVM code. Suppose that the contract's storage starts off empty, and a transaction is sent with 10 ether value, 2000 gas, 0.001 ether gasprice, and 64 bytes of data, with bytes 0-31 representing the number 2 and bytes 32-63 representing the string CHARLIE. The process for the state transition function in this case is as follows:
Check that the transaction is valid and well formed.
Check that the transaction sender has at least 2000 * 0.001 = 2 ether. If so, subtract 2 ether from the sender's account.
Initialize gas = 2000; assuming the transaction is 170 bytes long and the byte-fee is 5, subtract 850 so that there is 1150 gas left.
Subtract 10 more ether from the sender's account, and add it to the contract's account.
Run the code. In this case, this is simple: it checks if the contract's storage at index 2 is used, notices that it is not, and so it sets the storage at index 2 to the value CHARLIE. Suppose this takes 187 gas, so the remaining amount of gas is 1150 - 187 = 963.
Add 963 * 0.001 = 0.963 ether back to the sender's account, and return the resulting state.
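To make the fee arithmetic above concrete, the walkthrough can be modeled in a few lines of Python; the registry behavior, the 5-gas byte fee and the assumed 187-gas execution cost are taken from the example, and all names here are illustrative:

```python
def run_registry_example():
    balances = {"sender": 20.0, "contract": 0.0}   # ether balances (illustrative starting values)
    storage = {}                                   # the contract's key/value storage

    value, startgas, gasprice = 10, 2000, 0.001
    tx_bytes, byte_fee = 170, 5
    key, val = 2, "CHARLIE"                        # decoded from the 64 bytes of transaction data

    balances["sender"] -= startgas * gasprice      # step 2: charge the maximum fee, 2 ether
    gas = startgas - tx_bytes * byte_fee           # step 3: 2000 - 850 = 1150 gas left

    balances["sender"] -= value                    # step 4: transfer 10 ether to the contract
    balances["contract"] += value

    if key not in storage:                         # step 5: run the registry logic
        storage[key] = val
    gas -= 187                                     # assumed execution cost -> 963 gas remain

    balances["sender"] += gas * gasprice           # step 6: refund 963 * 0.001 = 0.963 ether
    return balances, storage, gas

print(run_registry_example())
# the sender ends with ~8.963 ether, the contract holds 10 ether, and 963 gas remain
```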
If there was no contract at the receiving end of the transaction, then the total transaction fee would simply be equal to the provided GASPRICE multiplied by the length of the transaction in bytes, and the data sent alongside the transaction would be irrelevant.
Note that messages work equivalently to transactions in terms of reverts: if a message execution runs out of gas, then that message's execution, and all other executions triggered by that execution, revert, but parent executions do not need to revert. This means that it is "safe" for a contract to call another contract, as if A calls B with G gas then A's execution is guaranteed to lose at most G gas. Finally, note that there is an opcode, CREATE, that creates a contract; its execution mechanics are generally similar to CALL, with the exception that the output of the execution determines the code of a newly created contract.
The code in Ethereum contracts is written in a low-level, stack-based bytecode language, referred to as "Ethereum virtual machine code" or "EVM code". The code consists of a series of bytes, where each byte represents an operation. In general, code execution is an infinite loop that consists of repeatedly carrying out the operation at the current program counter (which begins at zero) and then incrementing the program counter by one, until the end of the code is reached or an error or STOP or RETURN instruction is detected. The operations have access to three types of space in which to store data:
The stack, a last-in-first-out container to which values can be pushed and popped
Memory, an infinitely expandable byte array
The contract's long-term storage, a key/value store. Unlike stack and memory, which reset after computation ends, storage persists for the long term.
The code can also access the value, sender and data of the incoming message, as well as block header data, and the code can also return a byte array of data as an output.
The formal execution model of EVM code is surprisingly simple. While the Ethereum virtual machine is running, its full computational state can be defined by the tuple (block_state, transaction, message, code, memory, stack, pc, gas), where block_state is the global state containing all accounts and includes balances and storage. At the start of every round of execution, the current instruction is found by taking the pc-th byte of code (or 0 if pc >= len(code)), and each instruction has its own definition in terms of how it affects the tuple. For example, ADD pops two items off the stack and pushes their sum, reduces gas by 1 and increments pc by 1, and SSTORE pops the top two items off the stack and inserts the second item into the contract's storage at the index specified by the first item. Although there are many ways to optimize Ethereum virtual machine execution via just-in-time compilation, a basic implementation of Ethereum can be done in a few hundred lines of code.
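A deliberately minimal interpreter loop in Python can illustrate how each instruction transforms the stack, storage, program counter and gas; the opcode numbering here is invented for the sketch and is not the real EVM encoding:

```python
STOP, PUSH, ADD, SSTORE = 0, 1, 2, 3   # toy opcode numbering, not the real EVM byte values

def run(code, gas):
    stack, storage, pc = [], {}, 0
    while pc < len(code) and gas > 0:
        op = code[pc]
        if op == STOP:
            break
        elif op == PUSH:                 # PUSH takes its operand from the next byte
            stack.append(code[pc + 1])
            pc += 1
        elif op == ADD:                  # pop two items, push their sum
            a, b = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == SSTORE:               # pop an index and a value, write value into storage
            index, value = stack.pop(), stack.pop()
            storage[index] = value
        gas -= 1                         # flat cost of 1 gas per step, for simplicity
        pc += 1
    return stack, storage, gas

# PUSH 7, PUSH 5, ADD, PUSH 0, SSTORE -> storage[0] = 12
print(run([PUSH, 7, PUSH, 5, ADD, PUSH, 0, SSTORE, STOP], gas=100))
```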
The Ethereum blockchain is in many ways similar to the Bitcoin blockchain, although it does have some differences. The main difference between Ethereum and Bitcoin with regard to the blockchain architecture is that, unlike Bitcoin, Ethereum blocks contain a copy of both the transaction list and the most recent state. Aside from that, two other values, the block number and the difficulty, are also stored in the block. The basic block validation algorithm in Ethereum is as follows:
Check if the previous block referenced exists and is valid.
Check that the timestamp of the block is greater than that of the referenced previous block and less than 15 minutes into the future
Check that the block number, difficulty, transaction root, uncle root and gas limit (various low-level Ethereum-specific concepts) are valid.
Check that the proof-of-work on the block is valid.
Let S[0] be the state at the end of the previous block.
Let TX be the block's transaction list, with n transactions. For all i in 0...n-1, set S[i+1] = APPLY(S[i],TX[i]). If any application returns an error, or if the total gas consumed in the block up until this point exceeds the GASLIMIT, return an error.
Let S_FINAL be S[n], but adding the block reward paid to the miner.
Check if the Merkle tree root of the state S_FINAL is equal to the final state root provided in the block header. If it is, the block is valid; otherwise, it is not valid.
The approach may seem highly inefficient at first glance, because it needs to store the entire state with each block, but in reality efficiency should be comparable to that of Bitcoin. The reason is that the state is stored in the tree structure, and after every block only a small part of the tree needs to be changed. Thus, in general, between two adjacent blocks the vast majority of the tree should be the same, and therefore the data can be stored once and referenced twice using pointers (ie. hashes of subtrees). A special kind of tree known as a "Patricia tree" is used to accomplish this, including a modification to the Merkle tree concept that allows for nodes to be inserted and deleted, and not just changed, efficiently. Additionally, because all of the state information is part of the last block, there is no need to store the entire blockchain history - a strategy which, if it could be applied to Bitcoin, can be calculated to provide 5-20x savings in space.
A commonly asked question is "where" contract code is executed, in terms of physical hardware. This has a simple answer: the process of executing contract code is part of the definition of the state transition function, which is part of the block validation algorithm, so if a transaction is added into block B the code execution spawned by that transaction will be executed by all nodes, now and in the future, that download and validate block B.
In general, there are three types of applications on top of Ethereum. The first category is financial applications, providing users with more powerful ways of managing and entering into contracts using their money. This includes sub-currencies, financial derivatives, hedging contracts, savings wallets, wills, and ultimately even some classes of full-scale employment contracts. The second category is semi-financial applications, where money is involved but there is also a heavy non-monetary side to what is being done; a perfect example is self-enforcing bounties for solutions to computational problems. Finally, there are applications such as online voting and decentralized governance that are not financial at all.
On-blockchain token systems have many applications ranging from sub-currencies representing assets such as USD or gold to company stocks, individual tokens representing smart property, secure unforgeable coupons, and even token systems with no ties to conventional value at all, used as point systems for incentivization. Token systems are surprisingly easy to implement in Ethereum. The key point to understand is that all a currency, or token system, fundamentally is, is a database with one operation: subtract X units from A and give X units to B, with the proviso that (1) A had at least X units before the transaction and (2) the transaction is approved by A. All that it takes to implement a token system is to implement this logic into a contract.
The basic code for implementing a token system in Serpent is only a few lines long: a send function that checks that the sender's balance is at least the requested amount and, if so, subtracts that amount from the sender's storage slot and adds it to the recipient's.
This is essentially a literal implementation of the "banking system" state transition function described further above in this document. A few extra lines of code need to be added to provide for the initial step of distributing the currency units in the first place and a few other edge cases, and ideally a function would be added to let other contracts query for the balance of an address. But that's all there is to it. Theoretically, Ethereum-based token systems acting as sub-currencies can potentially include another important feature that on-chain Bitcoin-based meta-currencies lack: the ability to pay transaction fees directly in that currency. The way this would be implemented is that the contract would maintain an ether balance with which it would refund ether used to pay fees to the sender, and it would refill this balance by collecting the internal currency units that it takes in fees and reselling them in a constant running auction. Users would thus need to "activate" their accounts with ether, but once the ether is there it would be reusable because the contract would refund it each time.
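The same rule can be modeled in ordinary Python for illustration (this is not the Serpent contract itself; balances here stands in for the contract's key/value storage, keyed by address):

```python
balances = {}   # stands in for the contract's storage: address -> units held

def send(sender, to, value):
    # Move `value` units from sender to recipient, provided the sender has enough units.
    if balances.get(sender, 0) >= value:
        balances[sender] = balances.get(sender, 0) - value
        balances[to] = balances.get(to, 0) + value
        return True
    return False

balances["alice"] = 100           # initial distribution (handled by extra code in practice)
send("alice", "bob", 30)          # balances: alice 70, bob 30
send("bob", "carol", 50)          # returns False: bob only has 30 units
```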
Financial derivatives are the most common application of a "smart contract", and one of the simplest to implement in code. The main challenge in implementing financial contracts is that the majority of them require reference to an external price ticker; for example, a very desirable application is a smart contract that hedges against the volatility of ether (or another cryptocurrency) with respect to the US dollar, but doing this requires the contract to know what the value of ETH/USD is. The simplest way to do this is through a "data feed" contract maintained by a specific party (eg. NASDAQ) designed so that that party has the ability to update the contract as needed, and providing an interface that allows other contracts to send a message to that contract and get back a response that provides the price.
Given that critical ingredient, the hedging contract would look as follows:
Wait for party A to input 1000 ether.
Wait for party B to input 1000 ether.
Record the USD value of 1000 ether, calculated by querying the data feed contract, in storage, say this is $x.
After 30 days, allow A or B to "reactivate" the contract in order to send $x worth of ether (calculated by querying the data feed contract again to get the new price) to A and the rest to B.
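As a rough sketch of that outline in Python (not actual Ethereum contract code; the data feed is a plain function here, the 30-day timing check is omitted, and the names and 1000-ether figures follow the outline above):

```python
class HedgingContract:
    # Rough model of the four-step hedge; price_feed() returns USD per ether.
    def __init__(self, price_feed):
        self.price_feed = price_feed
        self.deposits = {}            # party -> ether deposited
        self.locked_usd = None        # $x, the USD value of A's deposit at activation

    def deposit(self, party, ether):
        self.deposits[party] = ether
        if len(self.deposits) == 2:   # both A and B have put in their 1000 ether
            self.locked_usd = 1000 * self.price_feed()

    def settle(self):
        # After 30 days: send $x worth of ether at the new price to A, the rest to B.
        total_ether = sum(self.deposits.values())
        ether_to_a = min(self.locked_usd / self.price_feed(), total_ether)
        return {"A": ether_to_a, "B": total_ether - ether_to_a}

price = {"usd_per_eth": 10.0}
contract = HedgingContract(lambda: price["usd_per_eth"])
contract.deposit("A", 1000)
contract.deposit("B", 1000)
price["usd_per_eth"] = 8.0          # ether fell; A is still owed $10,000 worth of ether
print(contract.settle())            # A receives 1250 ether, B receives the remaining 750
```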
Such a contract would have significant potential in crypto-commerce. One of the main problems cited about cryptocurrency is the fact that it's volatile; although many users and merchants may want the security and convenience of dealing with cryptographic assets, they may not wish to face the prospect of losing 23% of the value of their funds in a single day. Up until now, the most commonly proposed solution has been issuer-backed assets; the idea is that an issuer creates a sub-currency in which they have the right to issue and revoke units, and provide one unit of the currency to anyone who provides them (offline) with one unit of a specified underlying asset (eg. gold, USD). The issuer then promises to provide one unit of the underlying asset to anyone who sends back one unit of the crypto-asset. This mechanism allows any non-cryptographic asset to be "uplifted" into a cryptographic asset, provided that the issuer can be trusted.
In practice, however, issuers are not always trustworthy, and in some cases the banking infrastructure is too weak, or too hostile, for such services to exist. Financial derivatives provide an alternative. Here, instead of a single issuer providing the funds to back up an asset, a decentralized market of speculators, betting that the price of a cryptographic reference asset (eg. ETH) will go up, plays that role. Unlike issuers, speculators have no option to default on their side of the bargain because the hedging contract holds their funds in escrow. Note that this approach is not fully decentralized, because a trusted source is still needed to provide the price ticker, although arguably even still this is a massive improvement in terms of reducing infrastructure requirements (unlike being an issuer, issuing a price feed requires no licenses and can likely be categorized as free speech) and reducing the potential for fraud.
The contract is very simple; all it is is a database inside the Ethereum network that can be added to, but not modified or removed from. Anyone can register a name with some value, and that registration then sticks forever. A more sophisticated name registration contract will also have a "function clause" allowing other contracts to query it, as well as a mechanism for the "owner" (ie. the first registerer) of a name to change the data or transfer ownership. One can even add reputation and web-of-trust functionality on top.
Over the past few years, there have emerged a number of popular online file storage startups, the most prominent being Dropbox, seeking to allow users to upload a backup of their hard drive and have the service store the backup and allow the user to access it in exchange for a monthly fee. However, at this point the file storage market is at times relatively inefficient; a cursory look at various existing solutions shows that, particularly at the "uncanny valley" 20-200 GB level at which neither free quotas nor enterprise-level discounts kick in, monthly prices for mainstream file storage are such that you are paying more than the cost of the entire hard drive in a single month. Ethereum contracts can allow for the development of a decentralized file storage ecosystem, where individual users can earn small quantities of money by renting out their own hard drives and unused space can be used to further drive down the costs of file storage.
The key underpinning piece of such a device would be what we have termed the "decentralized Dropbox contract". This contract works as follows. First, one splits the desired data up into blocks, encrypting each block for privacy, and builds a Merkle tree out of it. One then makes a contract with the rule that, every N blocks, the contract would pick a random index in the Merkle tree (using the previous block hash, accessible from contract code, as a source of randomness), and give X ether to the first entity to supply a transaction with a simplified payment verification-like proof of ownership of the block at that particular index in the tree. When a user wants to re-download their file, they can use a micropayment channel protocol (eg. pay 1 szabo per 32 kilobytes) to recover the file; the most fee-efficient approach is for the payer not to publish the transaction until the end, instead replacing the transaction with a slightly more lucrative one with the same nonce after every 32 kilobytes.
An important feature of the protocol is that, although it may seem like one is trusting many random nodes not to decide to forget the file, one can reduce that risk down to near-zero by splitting the file into many pieces via secret sharing, and watching the contracts to see each piece is still in some node's possession. If a contract is still paying out money, that provides a cryptographic proof that someone out there is still storing the file.
The general concept of a "decentralized autonomous organization" is that of a virtual entity that has a certain set of members or shareholders which, perhaps with a 67% majority, have the right to spend the entity's funds and modify its code. The members would collectively decide on how the organization should allocate its funds. Methods for allocating a DAO's funds could range from bounties and salaries to even more exotic mechanisms such as an internal currency to reward work. This essentially replicates the legal trappings of a traditional company or nonprofit but using only cryptographic blockchain technology for enforcement. So far much of the talk around DAOs has been around the "capitalist" model of a "decentralized autonomous corporation" (DAC) with dividend-receiving shareholders and tradable shares; an alternative, perhaps described as a "decentralized autonomous community", would have all members have an equal share in the decision making and require 67% of existing members to agree to add or remove a member. The requirement that one person can only have one membership would then need to be enforced collectively by the group.
A general outline for how to code a DAO is as follows. The simplest design is simply a piece of self-modifying code that changes if two thirds of members agree on a change. Although code is theoretically immutable, one can easily get around this and have de-facto mutability by having chunks of the code in separate contracts, and having the address of which contracts to call stored in the modifiable storage. In a simple implementation of such a DAO contract, there would be three transaction types, distinguished by the data provided in the transaction:
[0,i,K,V] to register a proposal with index i to change the address at storage index K to value V
[1,i] to register a vote in favor of proposal i
[2,i] to finalize proposal i if enough votes have been made
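A rough Python sketch of how those three transaction types could be processed against a fixed member list (the two-thirds threshold follows the description above; member management and the actual code-address indirection are omitted):

```python
class SimpleDAO:
    def __init__(self, members):
        self.members = set(members)
        self.storage = {}       # K -> V, e.g. the addresses of the contracts to call
        self.proposals = {}     # i -> (K, V)
        self.votes = {}         # i -> set of members voting in favor

    def handle(self, sender, data):
        if sender not in self.members:
            return
        if data[0] == 0:                      # [0, i, K, V]: register a proposal
            _, i, k, v = data
            self.proposals[i] = (k, v)
            self.votes[i] = set()
        elif data[0] == 1:                    # [1, i]: vote in favor of proposal i
            self.votes[data[1]].add(sender)
        elif data[0] == 2:                    # [2, i]: finalize if two thirds agree
            i = data[1]
            if len(self.votes[i]) * 3 >= len(self.members) * 2:
                k, v = self.proposals[i]
                self.storage[k] = v

dao = SimpleDAO(["a", "b", "c"])
dao.handle("a", [0, 1, "treasury_code", "0xNEW"])   # propose changing a storage slot
dao.handle("a", [1, 1])
dao.handle("b", [1, 1])                              # 2 of 3 members meet the two-thirds bar
dao.handle("c", [2, 1])
print(dao.storage)                                   # {'treasury_code': '0xNEW'}
```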
An alternative model is for a decentralized corporation, where any account can have zero or more shares, and two thirds of the shares are required to make a decision. A complete skeleton would involve asset management functionality, the ability to make an offer to buy or sell shares, and the ability to accept offers (preferably with an order-matching mechanism inside the contract). Delegation would also exist Liquid Democracy-style, generalizing the concept of a "board of directors".
1. Savings wallets. Suppose that Alice wants to keep her funds safe, but is worried that she will lose or someone will hack her private key. She puts ether into a contract with Bob, a bank, as follows:
Alice alone can withdraw a maximum of 1% of the funds per day.
Bob alone can withdraw a maximum of 1% of the funds per day, but Alice has the ability to make a transaction with her key shutting off this ability.
Alice and Bob together can withdraw anything.
Normally, 1% per day is enough for Alice, and if Alice wants to withdraw more she can contact Bob for help. If Alice's key gets hacked, she runs to Bob to move the funds to a new contract. If she loses her key, Bob will get the funds out eventually. If Bob turns out to be malicious, then she can turn off his ability to withdraw.
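A sketch of those withdrawal rules in Python; the 1% daily limit and the Alice/Bob roles follow the description, while the names and the day-tracking are illustrative simplifications:

```python
class SavingsWallet:
    DAILY_LIMIT = 0.01                      # each party alone may move at most 1% per day

    def __init__(self, funds):
        self.funds = funds
        self.withdrawn_today = {"alice": 0.0, "bob": 0.0}
        self.bob_enabled = True             # Alice can shut off Bob's solo access

    def new_day(self):
        self.withdrawn_today = {"alice": 0.0, "bob": 0.0}

    def withdraw(self, amount, signers):
        if set(signers) == {"alice", "bob"}:             # together they can withdraw anything
            ok = amount <= self.funds
        elif signers == ["bob"] and not self.bob_enabled:
            ok = False
        else:                                            # a single signer: capped at 1% per day
            signer = signers[0]
            cap = self.DAILY_LIMIT * self.funds
            ok = amount + self.withdrawn_today[signer] <= cap and amount <= self.funds
            if ok:
                self.withdrawn_today[signer] += amount
        if ok:
            self.funds -= amount
        return ok

wallet = SavingsWallet(1000.0)
wallet.withdraw(10.0, ["alice"])          # allowed: exactly 1% of the funds
wallet.withdraw(5.0, ["alice"])           # rejected: Alice's daily cap is already used up
wallet.withdraw(200.0, ["alice", "bob"])  # allowed: both keys together can move anything
```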
2. Crop insurance. One can easily make a financial derivatives contract but using a data feed of the weather instead of any price index. If a farmer in Iowa purchases a derivative that pays out inversely based on the precipitation in Iowa, then if there is a drought, the farmer will automatically receive money and if there is enough rain the farmer will be happy because their crops would do well. This can be expanded to natural disaster insurance generally.
4. Smart multisignature escrow. Bitcoin allows multisignature transaction contracts where, for example, three out of a given five keys can spend the funds. Ethereum allows for more granularity; for example, four out of five can spend everything, three out of five can spend up to 10% per day, and two out of five can spend up to 0.5% per day. Additionally, Ethereum multisig is asynchronous - two parties can register their signatures on the blockchain at different times and the last signature will automatically send the transaction.
5. Cloud computing. The EVM technology can also be used to create a verifiable computing environment, allowing users to ask others to carry out computations and then optionally ask for proofs that computations at certain randomly selected checkpoints were done correctly. This allows for the creation of a cloud computing market where any user can participate with their desktop, laptop or specialized server, and spot-checking together with security deposits can be used to ensure that the system is trustworthy (ie. nodes cannot profitably cheat). Such a system may not be suitable for all tasks, however; tasks that require a high level of inter-process communication, for example, cannot easily be done on a large cloud of nodes. Other tasks, however, are much easier to parallelize; projects like SETI@home, folding@home and genetic algorithms can easily be implemented on top of such a platform.
8. On-chain decentralized marketplaces, using the identity and reputation system as a base.
As described by Sompolinsky and Zohar, GHOST solves the first issue of network security loss by including stale blocks in the calculation of which chain is the "longest"; that is to say, not just the parent and further ancestors of a block, but also the stale descendants of the block's ancestor (in Ethereum jargon, "uncles") are added to the calculation of which block has the largest total proof-of-work backing it. To solve the second issue of centralization bias, we go beyond the protocol described by Sompolinsky and Zohar, and also provide block rewards to stales: a stale block receives 87.5% of its base reward, and the nephew that includes the stale block receives the remaining 12.5%. Transaction fees, however, are not awarded to uncles.
Ethereum implements a simplified version of GHOST which only goes down seven levels. Specifically, it is defined as follows:
A block must specify a parent, and it must specify 0 or more uncles
An uncle included in block B must have the following properties:
It must be a direct child of the kth generation ancestor of B, where 2 <= k <= 7.
It cannot be an ancestor of B
An uncle must be a valid block header, but does not need to be a previously verified or even valid block
An uncle must be different from all uncles included in previous blocks and all other uncles included in the same block (non-double-inclusion)
For every uncle U in block B, the miner of B gets an additional 3.125% added to its coinbase reward and the miner of U gets 93.75% of a standard coinbase reward.
This limited version of GHOST, with uncles includable only up to 7 generations, was used for two reasons. First, unlimited GHOST would include too many complications into the calculation of which uncles for a given block are valid. Second, unlimited GHOST with compensation as used in Ethereum removes the incentive for a miner to mine on the main chain and not the chain of a public attacker.
Because every transaction published into the blockchain imposes on the network the cost of needing to download and verify it, there is a need for some regulatory mechanism, typically involving transaction fees, to prevent abuse. The default approach, used in Bitcoin, is to have purely voluntary fees, relying on miners to act as the gatekeepers and set dynamic minimums. This approach has been received very favorably in the Bitcoin community particularly because it is "market-based", allowing supply and demand between miners and transaction senders determine the price. The problem with this line of reasoning is, however, that transaction processing is not a market; although it is intuitively attractive to construe transaction processing as a service that the miner is offering to the sender, in reality every transaction that a miner includes will need to be processed by every node in the network, so the vast majority of the cost of transaction processing is borne by third parties and not the miner that is making the decision of whether or not to include it. Hence, tragedy-of-the-commons problems are very likely to occur.
However, as it turns out this flaw in the market-based mechanism, when given a particular inaccurate simplifying assumption, magically cancels itself out. The argument is as follows. Suppose that:
A transaction leads to k operations, offering the reward kR to any miner that includes it where R is set by the sender and k and R are (roughly) visible to the miner beforehand.
An operation has a processing cost of C to any node (ie. all nodes have equal efficiency)
There are N mining nodes, each with exactly equal processing power (ie. 1/N of total)
No non-mining full nodes exist.
A miner would be willing to process a transaction if the expected reward is greater than the cost. Thus, the expected reward is kR/N since the miner has a 1/N chance of processing the next block, and the processing cost for the miner is simply kC. Hence, miners will include transactions where kR/N > kC, or R > NC. Note that R is the per-operation fee provided by the sender, and is thus a lower bound on the benefit that the sender derives from the transaction, and NC is the cost to the entire network together of processing an operation. Hence, miners have the incentive to include only those transactions for which the total utilitarian benefit exceeds the cost.
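Stated as code, the inclusion rule is a one-liner; the values below are arbitrary illustrations of the R > NC threshold under the assumptions above:

```python
def miner_includes(k, R, N, C):
    # Expected reward k*R/N (a 1/N chance of mining the block) vs. processing cost k*C.
    return k * R / N > k * C      # equivalent to R > N*C for any k > 0

print(miner_includes(k=100, R=5.0, N=10, C=0.4))   # True:  R = 5  > N*C = 4
print(miner_includes(k=100, R=3.0, N=10, C=0.4))   # False: R = 3 <= N*C = 4
```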
However, there are several important deviations from those assumptions in reality:
The miner does pay a higher cost to process the transaction than the other verifying nodes, since the extra verification time delays block propagation and thus increases the chance the block will become a stale.
There do exist nonmining full nodes.
The mining power distribution may end up radically inegalitarian in practice.
Speculators, political enemies and crazies whose utility function includes causing harm to the network do exist, and they can cleverly set up contracts where their cost is much lower than the cost paid by other verifying nodes.
BLK_LIMIT_FACTOR and EMA_FACTOR are constants that will be set to 65536 and 1.5 for the time being, but will likely be changed after further analysis.
There is another factor disincentivizing large block sizes in Bitcoin: blocks that are large will take longer to propagate, and thus have a higher probability of becoming stales. In Ethereum, highly gas-consuming blocks can also take longer to propagate both because they are physically larger and because they take longer to process the transaction state transitions to validate. This delay disincentive is a significant consideration in Bitcoin, but less so in Ethereum because of the GHOST protocol; hence, relying on regulated block limits provides a more stable baseline.
An important note is that the Ethereum virtual machine is Turing-complete; this means that EVM code can encode any computation that can be conceivably carried out, including infinite loops. EVM code allows looping in two ways. First, there is a JUMP instruction that allows the program to jump back to a previous spot in the code, and a JUMPI instruction to do conditional jumping, allowing for statements like while x < 27: x = x * 2. Second, contracts can call other contracts, potentially allowing for looping through recursion. This naturally leads to a problem: can malicious users essentially shut miners and full nodes down by forcing them to enter into an infinite loop? The issue arises because of a problem in computer science known as the halting problem: there is no way to tell, in the general case, whether or not a given program will ever halt.
As described in the state transition section, our solution works by requiring a transaction to set a maximum number of computational steps that it is allowed to take, and if execution takes longer computation is reverted but fees are still paid. Messages work in the same way. To show the motivation behind our solution, consider the following examples:
An attacker creates a contract which runs an infinite loop, and then sends a transaction activating that loop to the miner. The miner will process the transaction, running the infinite loop, and wait for it to run out of gas. Even though the execution runs out of gas and stops halfway through, the transaction is still valid and the miner still claims the fee from the attacker for each computational step.
An attacker creates a very long infinite loop with the intent of forcing the miner to keep computing for such a long time that by the time computation finishes a few more blocks will have come out and it will not be possible for the miner to include the transaction to claim the fee. However, the attacker will be required to submit a value for STARTGAS limiting the number of computational steps that execution can take, so the miner will know ahead of time that the computation will take an excessively large number of steps.
An attacker sees a contract with code of some form like send(A,contract.storage[A]); contract.storage[A] = 0, and sends a transaction with just enough gas to run the first step but not the second (ie. making a withdrawal but not letting the balance go down). The contract author does not need to worry about protecting against such attacks, because if execution stops halfway through the changes get reverted.
A financial contract works by taking the median of nine proprietary data feeds in order to minimize risk. An attacker takes over one of the data feeds, which is designed to be modifiable via the variable-address-call mechanism described in the section on DAOs, and converts it to run an infinite loop, thereby attempting to force any attempts to claim funds from the financial contract to run out of gas. However, the financial contract can set a gas limit on the message to prevent this problem.
The alternative to Turing-completeness is Turing-incompleteness, where JUMP
and JUMPI
do not exist and only one copy of each contract is allowed to exist in the call stack at any given time. With this system, the fee system described and the uncertainties around the effectiveness of our solution might not be necessary, as the cost of executing a contract would be bounded above by its size. Additionally, Turing-incompleteness is not even that big a limitation; out of all the contract examples we have conceived internally, so far only one required a loop, and even that loop could be removed by making 26 repetitions of a one-line piece of code. Given the serious implications of Turing-completeness, and the limited benefit, why not simply have a Turing-incomplete language? In reality, however, Turing-incompleteness is far from a neat solution to the problem. To see why, consider the following contracts:
Now, send a transaction to A. Thus, in 51 transactions, we have a contract that takes up 2^50 computational steps. Miners could try to detect such logic bombs ahead of time by maintaining a value alongside each contract specifying the maximum number of computational steps that it can take, and calculating this for contracts calling other contracts recursively, but that would require miners to forbid contracts that create other contracts (since the creation and execution of all 26 contracts above could easily be rolled into a single contract). Another problematic point is that the address field of a message is a variable, so in general it may not even be possible to tell which other contracts a given contract will call ahead of time. Hence, all in all, we have a surprising conclusion: Turing-completeness is surprisingly easy to manage, and the lack of Turing-completeness is equally surprisingly difficult to manage unless the exact same controls are in place - but in that case why not just let the protocol be Turing-complete?
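The logic bomb can be made concrete with a small recurrence. In the sketch below (illustrative names; the step-counting model is a simplification, not EVM accounting), contract i calls contract i-1 twice, so a single call to the top of a 50-deep chain fans out into on the order of 2^50 steps, even though each contract's own code is trivial.

from functools import lru_cache

@lru_cache(maxsize=None)
def steps(i):
    # Number of computational steps triggered by calling contract i,
    # where contract 0 does one unit of work and contract i > 0 calls
    # contract i-1 twice.
    if i == 0:
        return 1
    return 1 + 2 * steps(i - 1)

print(steps(50))           # 2**51 - 1 steps from one cheap-looking call
print(steps(50) >= 2**50)  # True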
The Ethereum network includes its own built-in currency, ether, which serves the dual purpose of providing a primary liquidity layer to allow for efficient exchange between various types of digital assets and, more importantly, of providing a mechanism for paying transaction fees. For convenience and to avoid future argument (see the current mBTC/uBTC/satoshi debate in Bitcoin), the denominations will be pre-labelled:
1: wei
10^12: szabo
10^15: finney
10^18: ether
This should be taken as an expanded version of the concept of "dollars" and "cents" or "BTC" and "satoshi". In the near future, we expect "ether" to be used for ordinary transactions, "finney" for microtransactions and "szabo" and "wei" for technical discussions around fees and protocol implementation; the remaining denominations may become useful later and should not be included in clients at this point.
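For illustration, the denominations listed above translate directly into constants; the helper below is a sketch for working with them, not part of any client.

# Denominations as listed above, expressed in wei (the smallest unit).
WEI    = 1
SZABO  = 10**12
FINNEY = 10**15
ETHER  = 10**18

def to_wei(amount, unit):
    # Convert an amount in a named denomination to wei.
    return int(amount * unit)

print(to_wei(1, ETHER))           # 1000000000000000000
print(to_wei(0.5, FINNEY))        # 500000000000000
print(to_wei(1, ETHER) // SZABO)  # 1000000 szabo per ether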
The issuance model will be as follows:
Ether will be released in a currency sale at the price of 1000-2000 ether per BTC, a mechanism intended to fund the Ethereum organization and pay for development that has been used with success by other platforms such as Mastercoin and NXT. Earlier buyers will benefit from larger discounts. The BTC received from the sale will be used entirely to pay salaries and bounties to developers and invested into various for-profit and non-profit projects in the Ethereum and cryptocurrency ecosystem.
0.099x the total amount sold (60102216 ETH) will be allocated to the organization to compensate early contributors and pay ETH-denominated expenses before the genesis block.
0.099x the total amount sold will be maintained as a long-term reserve.
0.26x the total amount sold will be allocated to miners per year forever after that point.
[Chart: Long-Term Supply Growth Rate (percent)]
Despite the linear currency issuance, the supply growth rate as a percentage still tends to zero over time, just as it does with Bitcoin.
The two main choices in the above model are (1) the existence and size of an endowment pool, and (2) the existence of a permanently growing linear supply, as opposed to a capped supply as in Bitcoin. The justification of the endowment pool is as follows. If the endowment pool did not exist, and the linear issuance reduced to 0.217x to provide the same inflation rate, then the total quantity of ether would be 16.5% less and so each unit would be 19.8% more valuable. Hence, in the equilibrium 19.8% more ether would be purchased in the sale, so each unit would once again be exactly as valuable as before. The organization would also then have 1.198x as much BTC, which can be considered to be split into two slices: the original BTC, and the additional 0.198x. Hence, this situation is exactly equivalent to the endowment, but with one important difference: the organization holds purely BTC, and so is not incentivized to support the value of the ether unit.
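The equivalence argument above can be checked numerically. The sketch below normalizes the amount sold in the sale to 1 and treats n as years since genesis (an illustrative simplification); the ratio between the two supply schedules is constant, which is where the 16.5% and 19.8% figures come from.

def supply_with_endowment(n):
    # 1.0 sold + 0.099 early contributors + 0.099 reserve + 0.26 per year
    return 1.0 + 0.099 + 0.099 + 0.26 * n

def supply_without_endowment(n):
    # no endowment pool, linear issuance reduced to 0.217 per year
    return 1.0 + 0.217 * n

for n in (1, 10, 100):
    w, wo = supply_with_endowment(n), supply_without_endowment(n)
    print(n, round(1 - wo / w, 3), round(w / wo - 1, 3))
    # prints 0.165 and 0.198 for every n: ~16.5% less supply,
    # so each unit ~19.8% more valuable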
The permanent linear supply growth model reduces the risk of what some see as excessive wealth concentration in Bitcoin, and gives individuals living in present and future eras a fair chance to acquire currency units, while at the same time retaining a strong incentive to obtain and hold ether because the "supply growth rate" as a percentage still tends to zero over time. We also theorize that because coins are always lost over time due to carelessness, death, etc, and coin loss can be modeled as a percentage of the total supply per year, that the total currency supply in circulation will in fact eventually stabilize at a value equal to the annual issuance divided by the loss rate (eg. at a loss rate of 1%, once the supply reaches 26X then 0.26X will be mined and 0.26X lost every year, creating an equilibrium).
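The loss-rate equilibrium can likewise be checked with a few lines; the 1% loss rate is the example figure used above, not a prediction.

X = 1.0             # units sold in the sale (normalization)
issuance = 0.26 * X
loss_rate = 0.01

supply = 1.198 * X  # supply at genesis (sale plus endowment allocations)
for year in range(2000):
    supply = supply + issuance - loss_rate * supply

print(round(supply, 2))                # approaches 26.0
print(round(issuance / loss_rate, 2))  # analytic equilibrium: 26.0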
Note that in the future, it is likely that Ethereum will switch to a proof-of-stake model for security, reducing the issuance requirement to somewhere between zero and 0.05X per year. In the event that the Ethereum organization loses funding or for any other reason disappears, we leave open a "social contract": anyone has the right to create a future candidate version of Ethereum, with the only condition being that the quantity of ether must be at most equal to 60102216 * (1.198 + 0.26 * n), where n is the number of years after the genesis block. Creators are free to crowd-sell or otherwise assign some or all of the difference between the PoS-driven supply expansion and the maximum allowable supply expansion to pay for development. Candidate upgrades that do not comply with the social contract may justifiably be forked into compliant versions.
The Bitcoin mining algorithm works by having miners compute SHA256 on slightly modified versions of the block header millions of times over and over again, until eventually one node comes up with a version whose hash is less than the target (currently around 2^192). However, this mining algorithm is vulnerable to two forms of centralization. First, the mining ecosystem has come to be dominated by ASICs (application-specific integrated circuits), computer chips designed for, and therefore thousands of times more efficient at, the specific task of Bitcoin mining. This means that Bitcoin mining is no longer a highly decentralized and egalitarian pursuit, requiring millions of dollars of capital to effectively participate in. Second, most Bitcoin miners do not actually perform block validation locally; instead, they rely on a centralized mining pool to provide the block headers. This problem is arguably worse: as of the time of this writing, the top three mining pools indirectly control roughly 50% of processing power in the Bitcoin network, although this is mitigated by the fact that miners can switch to other mining pools if a pool or coalition attempts a 51% attack.
The current intent at Ethereum is to use a mining algorithm where miners are required to fetch random data from the state, compute some randomly selected transactions from the last N blocks in the blockchain, and return the hash of the result. This has two important benefits. First, Ethereum contracts can include any kind of computation, so an Ethereum ASIC would essentially be an ASIC for general computation - ie. a better CPU. Second, mining requires access to the entire blockchain, forcing miners to store the entire blockchain and at least be capable of verifying every transaction. This removes the need for centralized mining pools; although mining pools can still serve the legitimate role of evening out the randomness of reward distribution, this function can be served equally well by peer-to-peer pools with no central control.
This model is untested, and there may be difficulties along the way in avoiding certain clever optimizations when using contract execution as a mining algorithm. However, one notably interesting feature of this algorithm is that it allows anyone to "poison the well", by introducing a large number of contracts into the blockchain specifically designed to stymie certain ASICs. The economic incentives exist for ASIC manufacturers to use such a trick to attack each other. Thus, the solution that we are developing is ultimately an adaptive economic human solution rather than purely a technical one.
One common concern about Ethereum is the issue of scalability. Like Bitcoin, Ethereum suffers from the flaw that every transaction needs to be processed by every node in the network. With Bitcoin, the size of the current blockchain rests at about 15 GB, growing by about 1 MB per hour. If the Bitcoin network were to process Visa's 2000 transactions per second, it would grow by 1 MB per three seconds (1 GB per hour, 8 TB per year). Ethereum is likely to suffer a similar growth pattern, worsened by the fact that there will be many applications on top of the Ethereum blockchain instead of just a currency as is the case with Bitcoin, but ameliorated by the fact that Ethereum full nodes need to store just the state instead of the entire blockchain history.
In the near term, Ethereum will use two additional strategies to cope with this problem. First, because of the blockchain-based mining algorithms, at least every miner will be forced to be a full node, creating a lower bound on the number of full nodes. Second and more importantly, however, we will include an intermediate state tree root in the blockchain after processing each transaction. Even if block validation is centralized, as long as one honest verifying node exists, the centralization problem can be circumvented via a verification protocol. If a miner publishes an invalid block, that block must either be badly formatted, or the state S[n] is incorrect. Since S[0] is known to be correct, there must be some first state S[i] that is incorrect where S[i-1] is correct. The verifying node would provide the index i, along with a "proof of invalidity" consisting of the subset of Patricia tree nodes needed to process APPLY(S[i-1],TX[i]) -> S[i]. Nodes would be able to use those nodes to run that part of the computation, and see that the S[i] generated does not match the S[i] provided.
Another, more sophisticated, attack would involve malicious miners publishing incomplete blocks, so that the full information does not even exist to determine whether or not blocks are valid. The solution to this is a challenge-response protocol: verification nodes issue "challenges" in the form of target transaction indices, and upon receiving a challenge a light node treats the block as untrusted until another node, whether the miner or another verifier, provides a subset of Patricia tree nodes as a proof of validity.
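The fraud-proof search described above can be illustrated as follows. The sketch uses a toy balance-map state and a plain SHA-256 of the state in place of a Patricia tree root; apply, root and find_invalid_index are illustrative names, not Ethereum APIs. The point is only the search for the first index i at which the claimed state root diverges from recomputation.

import hashlib

def apply(state, tx):
    # Toy state transition: state is a dict of balances, tx = (frm, to, amt).
    frm, to, amt = tx
    new = dict(state)
    new[frm] = new.get(frm, 0) - amt
    new[to] = new.get(to, 0) + amt
    return new

def root(state):
    # Stand-in for a Patricia tree root: a hash of the sorted state.
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def find_invalid_index(initial_state, txs, claimed_roots):
    # Return (i, recomputed_state) for the first claimed root that does not
    # match recomputation, or None if the block checks out.
    state = initial_state
    for i, tx in enumerate(txs, start=1):
        state = apply(state, tx)
        if root(state) != claimed_roots[i]:
            return i, state   # i plus the data needed to re-check the step
    return None

genesis = {"alice": 10, "bob": 0}
txs = [("alice", "bob", 3), ("bob", "alice", 1)]
honest = [root(genesis)]
s = genesis
for tx in txs:
    s = apply(s, tx)
    honest.append(root(s))
bad = honest[:2] + ["deadbeef"]               # miner lies about the final root
print(find_invalid_index(genesis, txs, bad)[0])  # -> 2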
The Ethereum protocol was originally conceived as an upgraded version of a cryptocurrency, providing advanced features such as on-blockchain escrow, withdrawal limits, financial contracts, gambling markets and the like via a highly generalized programming language. The Ethereum protocol would not "support" any of the applications directly, but the existence of a Turing-complete programming language means that arbitrary contracts can theoretically be created for any transaction type or application. What is more interesting about Ethereum, however, is that the Ethereum protocol moves far beyond just currency. Protocols around decentralized file storage, decentralized computation and decentralized prediction markets, among dozens of other such concepts, have the potential to substantially increase the efficiency of the computational industry, and provide a massive boost to other peer-to-peer protocols by adding for the first time an economic layer. Finally, there is also a substantial array of applications that have nothing to do with money at all.
The concept of an arbitrary state transition function as implemented by the Ethereum protocol provides for a platform with unique potential; rather than being a closed-ended, single-purpose protocol intended for a specific array of applications in data storage, gambling or finance, Ethereum is open-ended by design, and we believe that it is extremely well-suited to serving as a foundational layer for a very large number of both financial and non-financial protocols in the years to come.
A sophisticated reader may notice that in fact a Bitcoin address is the hash of the elliptic curve public key, and not the public key itself. However, it is in fact perfectly legitimate cryptographic terminology to refer to the pubkey hash as a public key itself. This is because Bitcoin's cryptography can be considered to be a custom digital signature algorithm, where the public key consists of the hash of the ECC pubkey, the signature consists of the ECC pubkey concatenated with the ECC signature, and the verification algorithm involves checking the ECC pubkey in the signature against the ECC pubkey hash provided as a public key and then verifying the ECC signature against the ECC pubkey.
Technically, the median of the 11 previous blocks.
https://queue.acm.org/detail.cfm?id=3136559
Arvind Narayanan and Jeremy Clark, Aug 2017
If you've read about bitcoin in the press and have some familiarity with academic research in the field of cryptography, you might reasonably come away with the following impression: Several decades' worth of research on digital cash, beginning with David Chaum,10,12 did not lead to commercial success because it required a centralized, banklike server controlling the system, and no banks wanted to sign on. Along came bitcoin, a radically different proposal for a decentralized cryptocurrency that didn't need the banks, and digital cash finally succeeded. Its inventor, the mysterious Satoshi Nakamoto, was an academic outsider, and bitcoin bears no resemblance to earlier academic proposals.
This article challenges that view by showing that nearly all of the technical components of bitcoin originated in the academic literature of the 1980s and '90s (see figure 1). This is not to diminish Nakamoto's achievement but to point out that he stood on the shoulders of giants. Indeed, by tracing the origins of the ideas in bitcoin, we can zero in on Nakamoto's true leap of insight—the specific, complex way in which the underlying components are put together. This helps explain why bitcoin took so long to be invented. Readers already familiar with how bitcoin works may gain a deeper understanding from this historical presentation. (For an introduction, see Bitcoin and Cryptocurrency Technologies by Arvind Narayanan et al.36) Bitcoin's intellectual history also serves as a case study demonstrating the relationships among academia, outside researchers, and practitioners, and offers lessons on how these groups can benefit from one another.
If you have a secure ledger, the process to leverage it into a digital payment system is straightforward. For example, if Alice sends Bob $100 by PayPal, then PayPal debits $100 from Alice's account and credits $100 to Bob's account. This is also roughly what happens in traditional banking, although the absence of a single ledger shared between banks complicates things.
This idea of a ledger is the starting point for understanding bitcoin. It is a place to record all transactions that happen in the system, and it is open to and trusted by all system participants. Bitcoin converts this system for recording payments into a currency. Whereas in banking, an account balance represents cash that can be demanded from the bank, what does a unit of bitcoin represent? For now, assume that what is being transacted holds value inherently.
How can you build a ledger for use in an environment like the Internet where participants may not trust each other? Let's start with the easy part: the choice of data structure. There are a few desirable properties. The ledger should be immutable or, more precisely, append only: you should be able to add new transactions but not remove, modify, or reorder existing ones. There should also be a way to obtain a succinct cryptographic digest of the state of the ledger at any time. A digest is a short string that makes it possible to avoid storing the entire ledger, knowing that if the ledger were tampered with in any way, the resulting digest would change, and thus the tampering would be detected. The reason for these properties is that unlike a regular data structure that's stored on a single machine, the ledger is a global data structure collectively maintained by a mutually untrusting set of participants. This contrasts with another approach to decentralizing digital ledgers,7,13,21 in which many participants maintain local ledgers and it is up to the user querying this set of ledgers to resolve any conflicts.
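A minimal sketch of the digest property: hash each new entry together with the previous digest, so the latest digest commits to the entire history and any tampering changes it. This is a simplification of the block-and-Merkle-tree structure described below, not bitcoin's actual format.

import hashlib

def extend(digest, entry):
    # Fold a new entry into the running digest of the ledger.
    return hashlib.sha256(digest + entry.encode()).digest()

ledger = ["alice->bob:5", "bob->carol:2", "carol->alice:1"]
digest = b""
for entry in ledger:
    digest = extend(digest, entry)

tampered = ["alice->bob:50"] + ledger[1:]   # modify an early entry
t_digest = b""
for entry in tampered:
    t_digest = extend(t_digest, entry)

print(digest.hex() != t_digest.hex())   # True: tampering is detectable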
Linked timestamping
Bitcoin's ledger data structure is borrowed, with minimal modifications, from a series of papers by Stuart Haber and Scott Stornetta written between 1990 and 1997 (their 1991 paper had another co-author, Dave Bayer).5,22,23 We know this because Nakamoto says so in his bitcoin white paper.34 Haber and Stornetta's work addressed the problem of document timestamping—they aimed to build a "digital notary" service. For patents, business contracts, and other documents, one may want to establish that the document was created at a certain point in time, and no later. Their notion of document is quite general and could be any type of data. They do mention, in passing, financial transactions as a potential application, but it wasn't their focus.
In a simplified version of Haber and Stornetta's proposal, documents are constantly being created and broadcast. The creator of each document asserts a time of creation and signs the document, its timestamp, and the previously broadcast document. This previous document has signed its own predecessor, so the documents form a long chain with pointers backwards in time. An outside user cannot alter a timestamped message since it is signed by the creator, and the creator cannot alter the message without also altering the entire chain of messages that follows. Thus, if you are given a single item in the chain by a trusted source (e.g., another user or a specialized timestamping service), the entire chain up to that point is locked in, immutable, and temporally ordered. Further, if you assume that the system rejects documents with incorrect creation times, you can be reasonably assured that documents are at least as old as they claim to be. At any rate, bitcoin borrows only the data structure from Haber and Stornetta's work and reengineers its security properties with the addition of the proof-of-work scheme described later in this article.
In their follow-up papers, Haber and Stornetta introduced other ideas that make this data structure more effective and efficient (some of which were hinted at in their first paper). First, links between documents can be created using hashes rather than signatures; hashes are simpler and faster to compute. Such links are called hash pointers. Second, instead of threading documents individually—which might be inefficient if many documents are created at approximately the same time—they can be grouped into batches or blocks, with documents in each block having essentially the same timestamp. Third, within each block, documents can be linked together with a binary tree of hash pointers, called a Merkle tree, rather than a linear chain. Incidentally, Josh Benaloh and Michael de Mare independently introduced all three of these ideas in 1991,6 soon after Haber and Stornetta's first paper.
Merkle trees
Bitcoin uses essentially the data structure in Haber and Stornetta's 1991 and 1997 papers, shown in simplified form in figure 2 (Nakamoto was presumably unaware of Benaloh and de Mare's work). Of course, in bitcoin, transactions take the place of documents. In each block's Merkle tree, the leaf nodes are transactions, and each internal node essentially consists of two pointers. This data structure has two important properties. First, the hash of the latest block acts as a digest. A change to any of the transactions (leaf nodes) will necessitate changes propagating all the way to the root of the block, and the roots of all following blocks. Thus, if you know the latest hash, you can download the rest of the ledger from an untrusted source and verify that it hasn't changed. A similar argument establishes another important property of the data structure—that is, someone can efficiently prove to you that a particular transaction is included in the ledger. This user would have to send you only a small number of nodes in that transaction's block (this is the point of the Merkle tree), as well as a small amount of information for every following block. The ability to efficiently prove inclusion of transactions is highly desirable for performance and scalability.
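A compact sketch of the inclusion-proof idea described above: the prover sends O(log n) sibling hashes, and the verifier recomputes the path to the root. The construction below (duplicating the last node at odd-sized levels) is one common convention, chosen for brevity rather than to match Bitcoin's exact rules.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    # Return (root, proof) where proof is a list of (sibling_hash, is_right)
    # pairs for the leaf at `index`.
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate last node if odd
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3", b"tx4"]
root, proof = merkle_root_and_proof(txs, 2)
print(verify(b"tx2", proof, root))   # True
print(verify(b"txX", proof, root))   # False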
Merkle trees, by the way, are named for Ralph Merkle, a pioneer of asymmetric cryptography who proposed the idea in his 1980 paper.33 His intended application was to produce a digest for a public directory of digital certificates. When a website, for example, presents you with a certificate, it could also present a short proof that the certificate appears in the global directory. You could efficiently verify the proof as long as you know the root hash of the Merkle tree of the certificates in the directory. This idea is ancient by cryptographic standards, but its power has been appreciated only of late. It is at the core of the recently implemented Certificate Transparency system.30 A 2015 paper proposes CONIKS, which applies the idea to directories of public keys for end-to-end encrypted emails.32 Efficient verification of parts of the global state is one of the key functionalities provided by the ledger in Ethereum, a new cryptocurrency.
Bitcoin may be the most well-known real-world instantiation of Haber and Stornetta's data structures, but it is not the first. At least two companies—Surety starting in the mid-'90s and Guardtime starting in 2007—offer document timestamping services. An interesting twist present in both of these services is an idea mentioned by Bayer, Haber, and Stornetta,5 which is to publish Merkle roots periodically in a newspaper by taking out an ad. Figure 3 shows a Merkle root published by Guardtime.
Byzantine fault tolerance
Of course, the requirements for an Internet currency without a central authority are more stringent. A distributed ledger will inevitably have forks, which means that some nodes will think block A is the latest block, while other nodes will think it is block B. This could be because of an adversary trying to disrupt the ledger's operation or simply because of network latency, resulting in blocks occasionally being generated near-simultaneously by different nodes unaware of each other's blocks. Linked timestamping alone is not enough to resolve forks, as was shown by Mike Just in 1998.26
A different research field, fault-tolerant distributed computing, has studied this problem, where it goes by different names, including state replication. A solution to this problem is one that enables a set of nodes to apply the same state transitions in the same order—typically, the precise order does not matter, only that all nodes are consistent. For a digital currency, the state to be replicated is the set of balances, and transactions are state transitions. Early solutions, including Paxos, proposed by Turing Award winner Leslie Lamport in 1989,28,29 consider state replication when communication channels are unreliable and when a minority of nodes may exhibit certain "realistic" faults, such as going offline forever or rebooting and sending outdated messages from when it first went offline. A prolific literature followed with more adverse settings and efficiency tradeoffs.
A related line of work studied the situation where the network is mostly reliable (messages are delivered with bounded delay), but where the definition of "fault" was expanded to handle any deviation from the protocol. Such Byzantine faults include both naturally occurring faults as well as maliciously crafted behaviors. They were first studied in a paper also by Lamport, cowritten with Robert Shostak and Marshall Pease, as early as 1982.27 Much later, in 1999, a landmark paper by Miguel Castro and Barbara Liskov introduced PBFT (practical Byzantine fault tolerance), which accommodated both Byzantine faults and an unreliable network.8 Compared with linked timestamping, the fault-tolerance literature is enormous and includes hundreds of variants and optimizations of Paxos, PBFT, and other seminal protocols.
In his original white paper, Nakamoto does not cite this literature or use its language. He uses some concepts, referring to his protocol as a consensus mechanism and considering faults both in the form of attackers, as well as nodes joining and leaving the network. This is in contrast to his explicit reliance on the literature in linked timestamping (and proof of work, discussed next). When asked in a mailing-list discussion about bitcoin's relation to the Byzantine Generals' Problem (a thought experiment requiring BFT to solve), Nakamoto asserts that the proof-of-work chain solves this problem.35
In the following years, other academics have studied Nakamoto consensus from the perspective of distributed systems. This is still a work in progress. Some show that bitcoin's properties are quite weak,43 while others argue that the BFT perspective doesn't do justice to bitcoin's consistency properties.40 Another approach is to define variants of well-studied properties and prove that bitcoin satisfies them.19 Recently these definitions were substantially sharpened to provide a more standard consistency definition that holds under more realistic assumptions about message delivery.37 All of this work, however, makes assumptions about "honest," i.e., protocol-compliant, behavior among a subset of participants, whereas Nakamoto suggests that honest behavior need not be blindly assumed, because it is incentivized. A richer analysis of Nakamoto consensus accounting for the role of incentives doesn't fit cleanly into past models of fault-tolerant systems.
Virtually all fault-tolerant systems assume that a strict majority or supermajority (e.g., more than half or two-thirds) of nodes in the system are both honest and reliable. In an open peer-to-peer network, there is no registration of nodes, and they freely join and leave. Thus an adversary can create enough Sybils, or sockpuppet nodes, to overcome the consensus guarantees of the system. The Sybil attack was formalized in 2002 by John Douceur,14 who turned to a cryptographic construction called proof of work to mitigate it.
The origins
To understand proof of work, let's turn to its origins. The first proposal that would be called proof of work today was created in 1992 by Cynthia Dwork and Moni Naor.15 Their goal was to deter spam. Note that spam, Sybil attacks, and denial of service are all roughly similar problems in which the adversary amplifies its influence in the network compared to regular users; proof of work is applicable as a defense against all three. In Dwork and Naor's design, email recipients would process only those emails that were accompanied by proof that the sender had performed a moderate amount of computational work—hence, "proof of work." Computing the proof would take perhaps a few seconds on a regular computer. Thus, it would pose no difficulty for regular users, but a spammer wishing to send a million emails would require several weeks, using equivalent hardware.
Note that the proof-of-work instance (also called a puzzle) has to be specific to the email, as well as to the recipient. Otherwise, a spammer would be able to send multiple messages to the same recipient (or the same message to multiple recipients) for the cost of one message to one recipient. The second crucial property is that it should pose minimal computational burden on the recipient; puzzle solutions should be trivial to verify, regardless of how hard they are to compute. Additionally, Dwork and Naor considered functions with a trapdoor, a secret known to a central authority that would allow the authority to solve the puzzles without doing the work. One possible application of a trapdoor would be for the authority to approve posting to mailing lists without incurring a cost. Dwork and Naor's proposal consisted of three candidate puzzles meeting their properties, and it kicked off a whole research field, to which we'll return.
Hashcash
A very similar idea called hashcash was independently invented in 1997 by Adam Back, a postdoctoral researcher at the time who was part of the cypherpunk community. Cypherpunks were activists who opposed the power of governments and centralized institutions, and sought to create social and political change through cryptography. Back was practically oriented: he released hashcash first as software,2 and five years later in 2002 released an Internet draft (a standardization document) and a paper.4
Hashcash is much simpler than Dwork and Naor's idea: it has no trapdoor and no central authority, and it uses only hash functions instead of digital signatures. It is based on a simple principle: a hash function behaves as a random function for some practical purposes, which means that the only way to find an input that hashes to a particular output is to try various inputs until one produces the desired output. Further, the only way to find an input that hashes into an arbitrary set of outputs is again to try hashing different inputs one by one. So, if I challenged you to find an input whose (binary) hash value begins with 10 zeros, you would have to try numerous inputs, and you would find that each output had a 1/2^10 chance of beginning with 10 zeros, which means that you would have to try on the order of 2^10 inputs, or approximately 1,000 hash computations.
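Hashcash-style proof of work is easy to sketch: find a nonce such that the hash of the challenge plus the nonce begins with a required number of zero bits. The code below is an illustration (the challenge string and encoding are arbitrary), not Back's original implementation; at difficulty 10 it takes on the order of a thousand hash computations, matching the estimate above, while verification is a single hash.

import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def solve(challenge: str, difficulty: int) -> int:
    # Try nonces until the hash has enough leading zero bits.
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty

nonce = solve("mail-to:bob@example.com", difficulty=10)       # ~1,000 attempts
print(nonce, verify("mail-to:bob@example.com", nonce, 10))    # ..., True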
As the name suggests, in hashcash Back viewed proof of work as a form of cash. On his web page he positioned it as an alternative to David Chaum's DigiCash, which was a system that issued untraceable digital cash from a bank to a user.3 He even made compromises to the technical design to make it appear more cashlike. Later, Back made comments suggesting that bitcoin was a straightforward extension of hashcash. Hashcash is simply not cash, however, because it has no protection against double spending. Hashcash tokens cannot be exchanged among peers.
Meanwhile, in the academic scene, researchers found many applications for proof of work besides spam, such as preventing denial-of-service attacks,25 ensuring the integrity of web analytics,17 and rate-limiting password guessing online.38 Incidentally, the term proof of work was coined only in 1999 in a paper by Markus Jakobsson and Ari Juels, which also includes a nice survey of the work up until that point.24 It is worth noting that these researchers seem to have been unaware of hashcash but independently started to converge on hash-based proof of work, which was introduced in papers by Eran Gabber et al.18 and by Juels and Brainard.25 (Many of the terms used throughout this paragraph didn't become standard terminology until long after the papers in question were published.)
Proof of work and digital cash: A catch-22
You may know that proof of work did not succeed in its original application as an anti-spam measure. One possible reason is the dramatic difference in the puzzle-solving speed of different devices. That means spammers will be able to make a small investment in custom hardware to increase their spam rate by orders of magnitude. In economics, the natural response to an asymmetry in the cost of production is trade—that is, a market for proof-of-work solutions. But this presents a catch-22, because that would require a working digital currency. Indeed, the lack of such a currency is a major part of the motivation for proof of work in the first place. One crude solution to this problem is to declare puzzle solutions to be cash, as hashcash tries to do.
More coherent approaches to treating puzzle solutions as cash are found in two essays that preceded bitcoin, describing ideas called b-money13 and bit gold42 respectively. These proposals offer timestamping services that sign off on the creation (through proof of work) of money, and once money is created, they sign off on transfers. If disagreement about the ledger occurs among the servers or nodes, however, there isn't a clear way to resolve it. Letting the majority decide seems to be implicit in both authors' writings, but because of the Sybil problem, these mechanisms aren't very secure, unless there is a gatekeeper who controls entry into the network or Sybil resistance is itself achieved with proof of work.
Understanding all these predecessors that contain pieces of bitcoin's design leads to an appreciation of the true genius of Nakamoto's innovation. In bitcoin, for the first time, puzzle solutions don't constitute cash by themselves. Instead, they are merely used to secure the ledger. Solving proof of work is performed by specialized entities called miners (although Nakamoto underestimated just how specialized mining would become).
Miners are constantly in a race with each other to find the next puzzle solution; each miner solves a slightly different variant of the puzzle so that the chance of success is proportional to the fraction of global mining power that the miner controls. A miner who solves a puzzle gets to contribute the next batch, or block, of transactions to the ledger, which is based on linked timestamping. In exchange for the service of maintaining the ledger, a miner who contributes a block is rewarded with newly minted units of the currency. With high likelihood, if a miner contributes an invalid transaction or block, it will be rejected by the majority of other miners who contribute the following blocks, and this will also invalidate the block reward for the bad block. In this way, because of the monetary incentives, miners ensure each other's compliance with the protocol.
Bitcoin neatly avoids the double-spending problem plaguing proof-of-work-as-cash schemes because it eschews puzzle solutions themselves having value. In fact, puzzle solutions are twice decoupled from economic value: the amount of work required to produce a block is a floating parameter (proportional to the global mining power), and further, the number of bitcoins issued per block is not fixed either. The block reward (which is how new bitcoins are minted) is set to halve every four years (in 2017, the reward is 12.5 bitcoins/block, down from 50 bitcoins/block). Bitcoin incorporates an additional reward scheme—namely, senders of transactions paying miners for the service of including the transaction in their blocks. It is expected that the market will determine transaction fees and miners' rewards.
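The reward schedule mentioned above can be written down directly. The sketch below uses Bitcoin's actual halving interval of 210,000 blocks (roughly four years), a detail not spelled out in the text; it reproduces the 12.5 BTC/block figure for 2017 and the roughly 21 million BTC cap implied by the geometric series.

def block_reward(height, initial=50.0, halving_interval=210_000):
    # Reward halves once per halving interval, starting at 50 BTC.
    return initial / (2 ** (height // halving_interval))

for era in range(5):
    print(era, block_reward(era * 210_000))
# 0 50.0, 1 25.0, 2 12.5 (the 2017 figure cited above), 3 6.25, 4 3.125

total = sum(210_000 * (50 / 2 ** era) for era in range(34))
print(round(total))   # about 21,000,000 BTC ever issued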
Nakamoto's genius, then, wasn't any of the individual components of bitcoin, but rather the intricate way in which they fit together to breathe life into the system. The timestamping and Byzantine agreement researchers didn't hit upon the idea of incentivizing nodes to be honest, nor, until 2005, of using proof of work to do away with identities. Conversely, the authors of hashcash, b-money, and bit gold didn't incorporate the idea of a consensus algorithm to prevent double spending. In bitcoin, a secure ledger is necessary to prevent double spending and thus ensure that the currency has value. A valuable currency is necessary to reward miners. In turn, strength of mining power is necessary to secure the ledger. Without it, an adversary could amass more than 50 percent of the global mining power and thereby be able to generate blocks faster than the rest of the network, double-spend transactions, and effectively rewrite history, overrunning the system. Thus, bitcoin is bootstrapped, with a circular dependence among these three components. Nakamoto's challenge was not just the design, but also convincing the initial community of users and miners to take a leap together into the unknown—back when a pizza cost 10,000 bitcoins and the network's mining power was less than a trillionth of what it is today.
Public keys as identities
This article began with the understanding that a secure ledger makes creating digital currency straightforward. Let's revisit this claim. When Alice wishes to pay Bob, she broadcasts the transaction to all bitcoin nodes. A transaction is simply a string: a statement encoding Alice's wish to pay Bob some value, signed by her. The eventual inclusion of this signed statement into the ledger by miners is what makes the transaction real. Note that this doesn't require Bob's participation in any way. But let's focus on what's not in the transaction: conspicuously absent are Alice and Bob's identities; instead, the transaction contains only their respective public keys. This is an important concept in bitcoin: public keys are the only kinds of identities in the system. Transactions transfer value from and to public keys, which are called addresses.
In order to "speak for" an identity, you must know the corresponding secret key. You can create a new identity at any time by generating a new key pair, with no central authority or registry. You don't need to obtain a user name or inform others that you have picked a particular name. This is the notion of decentralized identity management. Bitcoin doesn't specify how Alice tells Bob what her pseudonym is—that is external to the system.
Although radically different from most other payment systems today, these ideas are quite old, dating back to David Chaum, the father of digital cash. In fact, Chaum also made seminal contributions to anonymity networks, and it is in this context that he invented this idea. In his 1981 paper, "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms,"9 he states: "A digital pseudonym is a public key used to verify signatures made by the anonymous holder of the corresponding private key."
Now, having message recipients be known only by a public key presents an obvious problem: there is no way to route the message to the right computer. This leads to a massive inefficiency in Chaum's proposal, which can be traded off against the level of anonymity but not eliminated. Bitcoin is similarly exceedingly inefficient compared with centralized payment systems: the ledger containing every transaction is maintained by every node in the system. Bitcoin incurs this inefficiency for security reasons anyway, and thus achieves pseudonymity (i.e., public keys as identities) "for free." Chaum took these ideas much further in a 1985 paper,11 where he presents a vision of privacy-preserving e-commerce based on pervasive pseudonyms, as well as "blind signatures," the key technical idea behind his digital cash.
The public-keys-as-identities idea is also seen in b-money and bit gold, the two precursor essays to bitcoin discussed earlier. However, much of the work that built on Chaum's foundation, as well as Chaum's own later work on ecash, moved away from this idea. The cypherpunks were keenly interested in privacy-preserving communication and commerce, and they embraced pseudonyms, which they called nyms. But to them, nyms weren't mere cryptographic identities (i.e., public keys), but rather, usually email addresses that were linked to public keys. Similarly, Ian Goldberg's dissertation, which became the basis of much future work on anonymous communication, recognizes Chaum's idea but suggests that nyms should be human-memorable nicknames with certificates to bind them.20 Thus Bitcoin proved to be the most successful instantiation of Chaum's idea.
So far, this article has not addressed the blockchain, which, if you believe the hype, is bitcoin's main invention. It might come as a surprise to you that Nakamoto doesn't mention that term at all. In fact, the term blockchain has no standard technical definition but is a loose umbrella term used by various parties to refer to systems that bear varying levels of resemblance to bitcoin and its ledger.
Discussing example applications that benefit from a blockchain will help clarify the different uses of the term. First, consider a database backend for transactions among a consortium of banks, where transactions are netted at the end of each day and accounts are settled by the central bank. Such a system has a small number of well-identified parties, so Nakamoto consensus would be overkill. An on-blockchain currency is not needed either, as the accounts are denominated in traditional currency. Linked timestamping, on the other hand, would clearly be useful, at least to ensure a consistent global ordering of transactions in the face of network latency. State replication would also be useful: a bank would know that its local copy of the data is identical to what the central bank will use to settle its account. This frees banks from the expensive reconciliation process they must currently perform.
Second, consider an asset-management application such as a registry of documents that tracks ownership of financial securities, or real estate, or any other asset. Using a blockchain would increase interoperability and decrease barriers to entry. We want a secure, global registry of documents, and ideally one that allows public participation. This is essentially what the timestamping services of the 1990s and 2000s sought to provide. Public blockchains offer a particularly effective way to achieve this today (the data itself may be stored off-chain, with only the metadata stored on-chain). Other applications also benefit from a timestamping or "public bulletin board" abstraction, most notably electronic voting.
Let's build on the asset-management example. Suppose you want to execute trades of assets via the blockchain, and not merely record them there. This is possible if the asset is issued digitally on the blockchain itself, and if the blockchain supports smart contracts. In this instance, smart contracts solve the "fair exchange" problem of ensuring that payment is made if and only if the asset is transferred. More generally, smart contracts can encode complex business logic, provided that all necessary input data (assets, their prices, and so on) are represented on the blockchain.
This mapping of blockchain properties to applications allows us not only to appreciate their potential, but also to inject a much-needed dose of skepticism. First, many proposed applications of blockchains, especially in banking, don't use Nakamoto consensus. Rather, they use the ledger data structure and Byzantine agreement, which, as shown, date to the '90s. This belies the claim that blockchains are a new and revolutionary technology. Instead, the buzz around blockchains has helped banks initiate collective action to deploy shared-ledger technology, like the parable of "stone soup." Bitcoin has also served as a highly visible proof of concept that the decentralized ledger works, and the Bitcoin Core project has provided a convenient code base that can be adapted as necessary.
Second, blockchains are frequently presented as more secure than traditional registries—a misleading claim. To see why, the overall stability of the system or platform must be separated from endpoint security—that is, the security of users and devices. True, the systemic risk of blockchains may be lower than that of many centralized institutions, but the endpoint-security risk of blockchains is far worse than the corresponding risk of traditional institutions. Blockchain transactions are near-instant, irreversible, and, in public blockchains, anonymous by design. With a blockchain-based stock registry, if a user (or broker or agent) loses control of his or her private keys—which takes nothing more than losing a phone or getting malware on a computer—the user loses his or her assets. The extraordinary history of bitcoin hacks, thefts, and scams doesn't inspire much confidence—according to one estimate, at least six percent of bitcoins in circulation have been stolen at least once.39
The history described here offers rich (and complementary) lessons for practitioners and academics. Practitioners should be skeptical of claims of revolutionary technology. As shown here, most of the ideas in bitcoin that have generated excitement in the enterprise, such as distributed ledgers and Byzantine agreement, actually date back 20 years or more. Recognize that your problem may not require any breakthroughs—there may be long-forgotten solutions in research papers.
Academia seems to have the opposite problem, at least in this instance: a resistance to radical, extrinsic ideas. The bitcoin white paper, despite the pedigree of many of its ideas, was more novel than most academic research. Moreover, Nakamoto didn't care for academic peer review and didn't fully connect it to its history. As a result, academics essentially ignored bitcoin for several years. Many academic communities informally argued that Bitcoin couldn't work, based on theoretical models or experiences with past systems, despite the fact that it was working in practice.
We've seen repeatedly that ideas in the research literature can be gradually forgotten or lie unappreciated, especially if they are ahead of their time, even in popular areas of research. Both practitioners and academics would do well to revisit old ideas to glean insights for present systems. Bitcoin was unusual and successful not because it was on the cutting edge of research on any of its components, but because it combined old ideas from many previously unrelated fields. This is not easy to do, as it requires bridging disparate terminology, assumptions, etc., but it is a valuable blueprint for innovation.
Practitioners would benefit from being able to identify overhyped technology. Some indicators of hype: difficulty identifying the technical innovation; difficulty pinning down the meaning of supposedly technical terms, because of companies eager to attach their own products to the bandwagon; difficulty identifying the problem that is being solved; and finally, claims of technology solving social problems or creating economic/political upheaval.
In contrast, academia has difficulty selling its inventions. For example, it's unfortunate that the original proof-of-work researchers get no credit for bitcoin, possibly because the work wasn't well known outside academic circles. Activities such as releasing code and working with practitioners are not adequately rewarded in academia. In fact, the original branch of the academic proof-of-work literature continues today without acknowledging the existence of bitcoin! Engaging with the real world not only helps get credit, but will also reduce reinvention and is a source of fresh ideas.
Sybil-resistant networks
In his paper on Sybil attacks, John Douceur proposed that all nodes participating in a BFT protocol be required to solve hashcash puzzles. If a node were masquerading as N nodes, it would be unable to solve N puzzles in time, and the fake identities would be purged. A malicious node, however, could still obtain a moderate advantage over an honest node that claimed only a single identity. A follow-up paper in 2005 suggested1 that honest nodes should instead mimic the behavior of malicious nodes and claim as many virtual identities as they computationally can afford to claim. With these virtual identities executing a BFT protocol, the assumption "At most a fraction f of nodes are faulty" can be replaced with the assumption "The fraction of total computational power controlled by faulty nodes is at most f." Thus, it is no longer necessary to validate identities, and open peer-to-peer networks can run a BFT protocol. Bitcoin uses exactly this idea. But Nakamoto asks a further question: What motivates nodes to perform computationally expensive proof of work? The answer requires a further leap: digital currency.
Smart contracts
A smart contract takes the idea of putting data in a secure ledger and extends it to computation. In other words, it is a consensus protocol for the correct execution of a publicly specified program. Users can invoke functions in these smart-contract programs, subject to any restrictions specified by the program, and the function code is executed in tandem by the miners. Users can trust the output without having to redo the computation and can write their own programs to act on the output of other programs. Smart contracts are especially powerful when combined with a cryptocurrency platform, because the programs in question can handle money—own it, transfer it, destroy it, and, in some cases, even print it.
Bitcoin implements a restrictive programming language for smart contracts. A "standard" transaction (i.e., one that moves currency from one address to another) is specified as a short script in this language. Ethereum offers a more permissive and powerful language.
The idea of smart contracts was proposed by Nick Szabo in 1994,41 and so named because he saw them as analogs of legal contracts, but with automated enforcement. (This view has been critiqued by Karen Levy31 and Ed Felten.16) Presciently, Szabo presented smart contracts as extensions of digital-cash protocols and recognized that Byzantine agreement and digital signatures (among others) could be used as building blocks. The success of cryptocurrencies has made smart contracts practical, and research on the topic has bloomed as well. For example, programming languages researchers have adapted their methods and tools to automatically discover bugs in smart contracts and to write verifiably correct ones.
Permissioned blockchains
While this article has emphasized that private or permissioned blockchains omit most of bitcoin's innovations, this isn't meant to diminish the interesting work happening in this space. A permissioned blockchain places restrictions on who can join the network, write transactions, or mine blocks. In particular, if miners are restricted to a list of trustworthy participants, the proof of work can be dropped in favor of a more traditional BFT approach. Thus, much of the research is a rebirth of BFT that asks questions such as: Can we use hash trees to simplify consensus algorithms? What if the network can fail only in certain ways?
Further, there are important considerations around identity and public-key infrastructure, access control, and confidentiality of the data stored on the blockchain. These issues largely don't arise in public blockchain settings, nor are they studied in the traditional BFT literature.
Finally, there is also the engineering work of scaling blockchains for high throughput and adapting them to various applications such as supply-chain management and financial technology.
Acknowledgements
The authors are grateful to Adam Back, Andrew Miller, Edward Felten, Harry Kalodner, Ian Goldberg, Ian Grigg, Joseph Bonneau, Malte Möser, Mike Just, Neha Narula, Steven Goldfeder, and Stuart Haber for valuable feedback on a draft.
References
10. Chaum, D. 1983. Blind signatures for untraceable payments. Advances in Cryptology: 199-203.
43. Wattenhofer, R. 2016. The Science of the Blockchain. Inverted Forest Publishing.
44. Rivest, R. L., Shamir, A. 1996. PayWord and MicroMint: Two simple micropayment schemes. International Workshop on Security Protocols.
Arvind Narayanan is an assistant professor of computer science at Princeton. He leads the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. Narayanan also leads a research team investigating the security, anonymity, and stability of cryptocurrencies, as well as novel applications of blockchains. He co-created a massive open online course, and a textbook on bitcoin and cryptocurrency technologies. His doctoral research showed the fundamental limits of de-identification, for which he received the Privacy Enhancing Technologies Award. Narayanan is an affiliated faculty member at the Center for Information Technology Policy at Princeton and an affiliate scholar at Stanford Law School's Center for Internet and Society. You can follow him on Twitter at @random_walker.
Jeremy Clark is an assistant professor at the Concordia Institute for Information Systems Engineering. He obtained his Ph.D. from the University of Waterloo, where his gold medal dissertation was on designing and deploying secure voting systems, including Scantegrity—the first cryptographically verifiable system used in a public-sector election. He wrote one of the earliest academic papers on bitcoin, completed several research projects in the area, and contributed to the first textbook. Beyond research, he has worked with several municipalities on voting technology and testified to the Canadian Senate on bitcoin. You can follow him on Twitter at @PulpSpy.
https://www.paradigm.xyz/2021/04/on-staking-pools-and-staking-derivatives
Paradigm, Apr 2021
The transition from Proof of Work (PoW) to Proof of Stake (PoS) is Ethereum’s most anticipated milestone since its inception. Instead of using the energy-costly PoW to extend the blockchain, PoS allows users to stake their ETH and operate block-producing nodes called validators.
Ethereum’s PoS protocol does not provide stakers with some of the functionality they have come to expect in other PoS implementations like Cosmos, Tezos, and Polkadot. The rationale behind that is to incentivize decentralization, but we posit that the market will always step in to make staking more efficient and convenient. So it is important to ensure that the solution that has the most private benefit to stakers also leads to a healthy systemic outcome for Ethereum as a whole.
In this post, we explore the problems that ETH stakers experience today. We then show how staking pools and staking derivatives solve these problems for stakers while, counterintuitively, also increasing the effective security of the network.
The validator public key: Before depositing, the user generates a keypair for their validator. The private key is used to sign on blocks, whereas the public key serves as their unique identifier.
The withdrawal credentials for the deposited 32 ETH: Once withdrawals are enabled, the principal (32 ETH) and staking rewards can only be withdrawn to this address.
Critically, the public key and withdrawal credentials do not need to be controlled by the same entity.
The user is then expected to operate an ETH2 validator node and sign on blocks when it’s their turn, or get penalized for not following the protocol.
The efficiency and convenience of a staking protocol can be broken down into the following properties, along with their Ethereum implementation:
These properties represent significant hurdles for stakers. All else equal, they would prefer to be able to stake any amount of ETH, delegate the operation of their infrastructure, and withdraw their staked ETH instantly. If possible, they would also like to use their staked ETH in other applications, as has become standard procedure in decentralized finance.
Below, we discuss
how staking pools solve delegation and the minimum stake requirement; and
how staking derivatives—issued by these staking pools—address the long lockup and allow stakers to unlock liquidity on their staked ETH.
On its face, a staking pool works similarly to a mining pool in PoW, but due to the nature of PoS it can offer additional benefits to its customers:
By pooling ETH together, stakers can bypass the 32 ETH minimum requirement. This allows smaller stakers to participate in PoS.
Instead of having each user operate their own validator(s), the pool handles the operational aspect of staking. Some may also insure customers against protocol penalties like slashing.
The pool can maintain a reserve of liquid ETH to satisfy demand for immediate withdrawal, similar to how a bank would. This eliminates the withdrawal period, assuming that not all customers want to withdraw at the same time.
Finally, the pool can offer a token that represents the staked ETH which can be used in other applications. This point is so important that we dedicate a full chapter to its discussion further below.
Staking pools can either be centralized or decentralized, each with their own set of tradeoffs.
In the centralized case, the pool is typically operated by an exchange. The exchange simply needs to:
Allow users to opt into staking, in return for staking rewards.
Run the validators using the customer’s ETH.
Since the exchange does the staking, the user does not need to run any infrastructure. Offering instant liquidity is very easy for them as well, since they already have large liquid ETH reserves. Given how valuable customer acquisition and liquidity is to the exchange business, they can offer this service at no additional cost to the user.
Lido, a decentralized staking pool, illustrates the alternative. From the user’s perspective, things are very straightforward: they deposit ETH into an Ethereum smart contract and receive stETH as a receipt. The stETH token’s balance adjusts over time to reflect the distribution of staking rewards that accrue to the contract. That means 1 stETH will always represent 1 ETH staked.
From Lido’s perspective, each time 32 ETH is buffered on the Ethereum smart contract, the DAO selects a new validator from a governance-controlled registry. It then calls the deposit contract, assigning the 32 ETH to that validator’s public key, and uses the LidoDAO’s withdrawal credentials.
There are two questions that need to be answered here:
We have already established that stETH is a claim on staked ETH and any rewards accruing in the smart contract. This is also called a staking derivative.
Staking derivatives will have a major impact on the entire Ethereum ecosystem, including ETH stakers, regular ETH holders, the competition between pools, and even Ethereum itself.
Stakers: The main benefit for stakers is rehypothecation, which allows them to stake while simultaneously using the principal in other applications, similar to how Uniswap’s LP tokens can be used as collateral across DeFi. This greatly lowers the opportunity cost of staking.
Competition between pools: The existence of stETH grants its pool an important network effect. This network effect creates a strong incentive to stake with the market leader, which indicates that ETH staking derivatives could follow a power-law or winner-take-all distribution due to the liquidity moat and network effects associated with them. As a result, it is possible that stETH will replace ETH in many use cases, and potentially even replace ETH altogether.
Ethereum: There exists a popular argument that staking derivatives lower the security of PoS because they separate block production from staking and slashing. This is also known as a principal-agent problem, and can lead to scenarios where the block producers may not be incentivized to follow the protocol since they have nothing at stake.
However, this argument has to be weighed against the benefits: If staking derivatives lower the cost of staking, they could lead to far more (or even all) ETH being staked. Note that this is a perfect example of a virtuous cycle: the more liquid stETH becomes, the lower the opportunity cost of staking, which leads to more ETH being staked, which in turn further deepens the liquidity of stETH, and so on.
Without staking derivatives, we might expect 15-30% of ETH to be staked. However, with staking derivatives, this number could be as high as 80-100%, because there is no additional cost to staking compared to non-staking.
To show why this leads to higher economic security, consider the following attack scenarios:
If 20% of all ETH is staked, and an attacker wanted to acquire 66% of all stake (a critical threshold to corrupt the chain), they would have to buy 40% of all ETH in the open market.
If 60% of ETH were staked, but the stETH is liquid, then the attacker would have to buy 66% of all stETH, which also comes down to 40% of all ETH. Note that this has additional steps, where the attacker would first have to redeem the stETH to remove the honest validators and then re-stake their ETH.
Above 60% staked, the share of all ETH the attacker would have to buy is now higher than 40% and only increases from there.
If 100% of ETH are staked, then the attacker would need 66% of all stETH to get to the same threshold.
We can conclude that if staking derivatives can increase the number of ETH staked above 60%, they would strictly increase Ethereum’s economic security instead of decreasing it.
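To make the arithmetic behind these scenarios concrete, here is a small Python sketch. It is a simplification that ignores redemption queues, validator churn, and the price impact of buying that much ETH.

```python
TARGET = 0.66  # share of total stake an attacker needs to corrupt the chain

def attack_cost_without_derivatives(staked: float) -> float:
    # Staked ETH is locked, so the attacker must buy unstaked ETH and stake it:
    # x / (staked + x) = TARGET  =>  x = TARGET * staked / (1 - TARGET)
    return TARGET * staked / (1 - TARGET)

def attack_cost_with_liquid_steth(staked: float) -> float:
    # The existing stake itself is purchasable, so controlling TARGET of the
    # stake means buying TARGET of the staked ETH (and re-staking it).
    return TARGET * staked

print(f"{attack_cost_without_derivatives(0.20):.0%}")  # ~39% of all ETH at 20% staked
for staked in (0.60, 0.80, 1.00):
    print(f"{staked:.0%} staked -> ~{attack_cost_with_liquid_steth(staked):.0%} of all ETH")
# 60% -> ~40%, 80% -> ~53%, 100% -> ~66%: above 60% staked, the attack only gets more expensive.
```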
Decentralization is often seen as an invisible benefit that comes at a higher price, and as a result users are often not willing to pay for it (see e.g. Binance Smart Chain vs Ethereum debate). This line of thinking does not apply to decentralized staking pools, because they have three critical advantages over their centralized counterparts.
They are more socially scalable: One metric that matters for PoS security is how much of the stake is controlled by a single entity. For exchanges, that number might be capped at 15-30%; beyond that, there might be social concerns about power centralization in the Ethereum ecosystem. A decentralized staking pool can control any share of the network, as long as each individual validator in the DAO is not too big and as long as the withdrawal credentials cannot be changed or voted on. We have to emphasize how important it is that the decentralized staking pool has, by that point, shed all of its governance functionality: neither fees, nor withdrawal addresses, nor the validator registry can be allowed to be changed by human inputs.
Their staking derivative is trustless: A large exchange like Coinbase or Binance can only issue a custodial token, whose adoption is necessarily capped as—all else equal—users strictly prefer a trustless token over a trusted one. This causes centralized pools to miss out on the staking derivative’s network effect. One could point out that with WBTC, a centralized token was able to win the market for tokenized BTC. However, we posit that this is only because BTC on Ethereum can’t be tokenized in a way that is both trustless and capital-efficient, whereas for staked ETH that is possible.
They have fewer restrictions around MEV Extraction: Institutional staking pools (e.g. exchanges) may have social and reputational constraints that prevent them from extracting certain forms of MEV. This allows smaller staking firms and decentralized pools without these constraints to provide higher returns for their stakers. This could turn the aforementioned decentralization premium for using a decentralized staking pool into a decentralization discount.
These benefits are so large that the leader in pooled staking will likely be a decentralized / non-custodial staking pool. If said pool is sufficiently governance-minimized, it could possibly win the entire market without causing any systemic risk for Ethereum.
Staking pools and their staking derivatives are subject to similar market realities as MEV extraction, in the sense that their existence is inevitable. As long as there is a private benefit to creating and using them, they will exist and flourish. However, if the right solution wins and is sufficiently adopted, it can lead to systemic benefits for Ethereum as well.
Due to stETH’s vast network effect and the fact that decentralized pools can be both non-custodial and possibly earn more revenue from MEV, we see it as likely that a single such decentralized pool can win the whole market.
As a result, we should be focused on making sure a non-custodial and robust version of stETH wins the market instead of a centralized one, to ensure a good systemic outcome.
Disclaimer:
Paradigm owns LDO tokens
https://medium.com/@oneofthemanymatts/getting-up-to-speed-on-ethereum-63ed28821bbe
Aug 2017
You should read this blog post if:
You’re a professional software engineer
You want to have a deep working understanding of Ethereum and the related ecosystem.
Prerequisites:
You’re a professional software engineer. Seriously, I mean, nontrivial amounts of real-world software engineering experience. Understanding systems/architecture/math is a learning efficiency multiplier.
Like a week of your free time. I told you we’re skipping the nonsense.
Note: you don’t need to read the whitepapers, but you need to understand them. But it turns out that reading them is the fastest way to understand them. 😉
I recommend reading the entirety of this post, absorbing the contents at a high level, and then diving into each link individually, over the course of a few days.
You’ll be surprised at how little of this technology is magic (read: none of it), despite the severe case of buzzword-itis the ecosystem has. Everyone is building off of the shoulders of giants; if you make a living as a professional software engineer you have the ability to understand all of these projects and technologies at a deep level. Just don’t get distracted by the bullshit.
A “Smart Contract” is literally just some code (seriously, that’s it) that’s executed in a distributed environment like the Ethereum platform. The platform on which it’s executed gives this piece of code some properties such as: immutability, deterministic operation, distributed & verifiable state, etc. The state managed by this code is stored, immutably, on a blockchain and anyone can double check that the current state is correct by replaying all of the transactions from the beginning of the chain.
In Ethereum, contracts are given an address to uniquely identify them (it’s a hash of the creator’s address and how many transactions they’ve sent before). Clients can then interface with this address by sending it ether, calling functions, querying the distributed state that it manages, etc.
Smart Contracts, again, are just some code with some distributed state managed by a blockchain. For example, the multi-sig wallet you’re using to receive/send ETH is just a smart contract with a fancy UI on it.
This concept is pretty powerful, and I’m sure you’ve already read all about it. If you haven’t, browse your favorite mildly-technical news source (hey Medium!) and you’ll be inundated with people telling you how much potential there is. Some buzzwords: asset/rights management, decentralized autonomous organizations (DAOs), identity, social networking, etc.
The “price” of gas (i.e. “how much $$$ does adding two numbers together on this distributed computer cost me”) is determined by the market, similar to Bitcoin’s transaction fees. If you pay a higher gas price, nodes will prioritize your transactions for that sweet sweet profit.
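As a rough illustration of how fees fall out of that market (gas used times the gas price you bid), here is some back-of-the-envelope Python; the gas price below is a made-up example, not a live number.

```python
# Fee = gas used x gas price, the simple model described above.
# 21,000 is the gas consumed by a plain ETH transfer; the gas price is a made-up bid.
GWEI_PER_ETH = 10**9

simple_transfer_gas = 21_000
gas_price_gwei = 20            # higher bids get picked up by miners sooner

fee_eth = simple_transfer_gas * gas_price_gwei / GWEI_PER_ETH
print(f"{fee_eth} ETH")        # 0.00042 ETH for a simple transfer at 20 gwei
```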
In general, it’s way more expensive to compute and store things on Ethereum than it would be to do it in a traditional environment, but Ethereum gives your code all those nice properties we discussed above, which can be just as valuable.
In general, reading state is free and writing state costs gas. Here’s a more in-depth overview of the concept of gas:
A Distributed App is an app where the “server side” is just one or more well-known Smart Contracts that exist on the Ethereum network.
A Distributed App doesn’t need to store all of its state and perform all of its computation on the blockchain, which would be pretty expensive for end-users, but a distributed app does eventually store trusted state on the Ethereum blockchain, which anyone can then read. Many distributed apps also use technologies like IPFS and Golem (discussed later) to handle computation and storage off of the Ethereum blockchain, but in an equally decentralized manner.
Also, we’ve finally agreed on the proper capitalization of “dApp” via twitter poll (so you know it’s statistically significant), so let’s all just write it as “dApp” then, yeah?
These distributed applications are usually accompanied by some sort of user-friendly frontend because nobody wants to manually send transactions to/from contracts using a cli or manually craft requests with hashes and opcodes oh my.
A dApp Client is literally just any “client” or “frontend” as you’d normally use the term in programming, except this client interfaces with the Ethereum blockchain (perhaps in addition to other services) somehow. These clients are frequently written in JavaScript because we haven’t yet finished converting everything in the world to NodeJS.
More seriously, most dApp clients are written primarily in JavaScript because it can be run in a web browser and everyone’s already got one of those. They’re also frequently written in Go due to the superior state of existing tooling. It’s a viciously positive cycle of improvement, which means unless you really know what you’re doing, you get to choose between JavaScript and Go (and to a certain extent, Rust) for interfacing with the Ethereum blockchain and the protocols being developed on top of it.
* So there’s a little bit of confusion/discussion around the exact terminology/definition of “Distributed App”: is it just the smart contract(s)? Is it the entire backend of a system which, at some point, interfaces with the Ethereum platform to store trust? Or maybe it includes the client code, too, and the user interface so the whole bundle of stuff is called a “dApp”?
* I’ve gone ahead and defined it as “the backend of a system that interfaces with the Ethereum blockchain”. This is different enough from “Smart Contract” to warrant its own concept, and it also implies (correctly) that anyone can create a client to interface with a distributed app.
A dApp Browser is exactly what it says on the tin; it’s an application (a normal one that we’re all familiar with) that makes using dApp clients (usually javascript that interfaces with an Ethereum node to communicate with smart contracts) easier.
The primary purposes of a dApp browser are to
Provide a connection to an Ethereum node (either to one hosted locally or remotely) and an easy way to change that connection to point to a different node (which might be connected to a different network),
And provide an account interface (“wallet”) for the user so that they can easily interface with these dApps.
Your node needs to know which blockchain to download and which peers to talk to; see below for a discussion of the different networks available.
If you’re distributing a dApp client to users, you don’t necessarily need to provide access to an Ethereum node as well; dApp Browsers provide the connection to any client that needs it.
So you know how we can write code (a “Smart Contract”) that stores some state on a blockchain? What if, in that state, we stored a map of ethereum addresses to an integer. And called that integer your balance. Your balance of what? Let’s just call them “tokens”. Oh.
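In plain Python, the core idea is nothing more than this (a toy, in-memory sketch, not an actual ERC20 contract):

```python
# A token ledger is just a mapping from address to balance plus a transfer rule.
# On Ethereum this state lives inside a contract; here it's an ordinary dict.
balances = {"0xalice": 100, "0xbob": 0}

def transfer(sender: str, recipient: str, amount: int) -> None:
    # The real contract enforces this check on-chain; here it's just an assert.
    assert balances.get(sender, 0) >= amount, "insufficient balance"
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount

transfer("0xalice", "0xbob", 40)
print(balances)  # {'0xalice': 60, '0xbob': 40}
```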
Due to some issues with the ERC20 spec (namely that it was made quickly, has small security concerns, and requires two transactions for payments), we now have more proposed token standards.
* there had been some confusion around ERC223 vs ERC23, but they are the same concept; the ERC number is 223, so this standard should be referred to as ERC223.
You interface with your smart contract (aka, executing methods and reading state) by connecting to an Ethereum node and executing functions over a JSON RPC API. There are a variety of Ethereum node clients that perform this in a developer-friendly way. Both geth and parity provide consoles/browsers for interfacing with contracts.
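For a feel of what that JSON RPC traffic looks like under the hood, here is a small sketch using Python's requests library; it assumes a node is listening on the default local port (8545) and uses two standard RPC methods, eth_blockNumber and eth_getBalance.

```python
import requests  # third-party package; `pip install requests`

# Minimal raw JSON-RPC calls against a local node (geth and parity default to port 8545).
# Higher-level libraries like web3.js wrap exactly these kinds of requests.
NODE_URL = "http://localhost:8545"

def rpc(method, params=None):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}
    return requests.post(NODE_URL, json=payload).json()["result"]

block_number = int(rpc("eth_blockNumber"), 16)  # results come back hex-encoded
balance_wei = int(rpc("eth_getBalance", ["0x0000000000000000000000000000000000000000", "latest"]), 16)
print(block_number, balance_wei / 10**18)       # balance converted from wei to ETH
```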
When you “deploy” a smart contract, all you’re really doing is sending a transaction to the 0-address (0x0) with the contract bytecode as an argument.
Here’s more information about transactions:
Once you start writing smart contracts, you end up doing a lot of the same things over and over again; compiling your source code to bytecode and an abi, deploying it to the network, testing the deployed contract, etc. You probably also want a nice way to play around with new ideas.
Frameworks like Truffle, Embark, Populus, and Perigord standardize and automate a lot of the minutiae. They provide a nice developer experience for writing contracts, deploying contracts, and incredibly importantly, testing contracts.
This post has a lot of good information and uses truffle to deploy and interface with a contract.
Of note, you can also use NPM to provide smart contract code for distribution, which a lot of people do, notably Open Zeppelin.
Mainnet — the main Ethereum network, and generally the default in whatever client or browser you’re using.
An Ethereum account is a private key and address pair. They basically just store Ether, and they don’t cost gas to create (because they exist automatically with every possible address). All transactions on the Ethereum network originate from an account; contracts don’t have the ability to initiate a transaction.
Now that we’ve correctly defined the two, prepare to see people confusing the two terms everywhere and labelling anything that sends/receives Ether as a Wallet and calling anything and everything an account.
The code that is your Smart Contract runs on every full node in the network within the EVM (Ethereum Virtual Machine). It’s your standard virtual machine that executes some bytecode, except the vm is specifically isolated from the network, the filesystem, processes, etc. Nobody wants to write bytecode, so we’ve got some higher level languages that compile down to EVM bytecode.
Here’s what Solidity looks like:
LLL looks something like:
If you’re learning, you probably don’t want to write anything in LLL.
There are a bunch of other high-level languages in various states of usability and development and more will undoubtedly be developed. In order to reach wide adoption, though, a language and compiler must be thoroughly vetted and tested, which will take time.
Once a smart contract is deployed to Ethereum, it is immutable and exists forever. If you write a bug, you don’t get to take down the broken version; you can only fix-forward*.
Because so many engineers developing for Ethereum and other smart contract platforms are coming from web development, this concept might be new and crazy.
There are a variety of different security-related traps your code could fall into, both at a language level and from a high-level logic perspective. When you deploy a production smart contract that handles real money, you need to be 100% confident in the operation of the contract.
* there’s a selfdestruct() function that a contract can use to remove itself from the network.
It uses the protocol shh, which is pretty cute. There’s surprisingly little documentation and adoption of this protocol.
Whether or not whisper is under active development is up for discussion.
This is an organization (like, a group of human beings) where, instead of using legal documents to enforce operation, they use a bunch of smart contracts. Your group of human beings then uses these contracts to do all the normal stuff an organization does, like vote on things and determine what color all the local co-op houses should be painted.
A side effect of this is that decision making, governance, and what color the houses should be painted is immutably stored on the blockchain (in the state of those contracts). Cool stuff.
While this is a new protocol, there is an http gateway and a filesystem adaptor, meaning you can fetch IPFS content via http and mount the entire world wide filesystem on your local disk at /ipfs. IPFS also provides a naming system called IPNS (InterPlanetary Name Space), which allows for mutable state (recall that everything in IPFS is immutable). You can even use DNS TXT records to direct your IPNS client, allowing you to generate human-friendly links to data.
There’s also some public backlash about how FileCoin is being implemented and distributed:
Liquidity of tokens is a relatively large problem in the crypto ecosystem. Trading between users requires satisfying both your desire to buy and the other party’s desire to sell (or vice versa).
The value of your arbitrary token managed by the Ethereum blockchain can fluctuate pretty arbitrarily; this sucks if you’re trying to use that token for some sort of actual real-world process (like going to sleep and hoping it’s still worth something). Stablecoins are considered a Big Deal™ for the field of cryptoeconomics.
“What if my Smart Contract needs an external piece of information, like the weather in New York? I’d have to use a decentralized oracle protocol to ask a bunch of people the weather (expensive, slow), but if I wrote a service that provides this information from a central place, I’d be betraying the core idea of a decentralized application.”
Their integration is pretty powerful; you can fetch URLs, parse JSON and XPATHs, query WolframAlpha, etc.
Zeppelin is a technology firm doing some really awesome, professional stuff in the space. Honestly, they’re doing a lot of things and it’s tough to keep track of it all.
There are some other neat planned aspects of zeppelin_os, like the scheduler (async execution of contract functions, since by default contracts don’t do anything until interacted with), a marketplace protocol, and off-chain developer experience tools. You can learn more about them via the whitepaper.
Brave and the Basic Attention Token was started by Brendan Eich, who originally created JavaScript and then co-founded Mozilla.
Obviously this document is going to be out of date in t-minus 2 hours, so if there’s a protocol/platform/technology/team that you’re really excited about, let me know via a comment and I’ll consider documenting it.
The purpose of this document is to create building blocks of understanding rooted in real life. None of that “key and locker” lossy metaphor nonsense. If you’ve found it to be helpful, give it a 💚 (wait no, a 👏). If you found that it did the opposite, let me know via a comment. Or on twitter dot com or whatever.
It also attempts to remove most/all of the buzzwords and high-level magic that so many people want you to believe that this technology is.
We need a way for the payee to know that the previous owners did not sign any earlier transactions. For our purposes, the earliest transaction is the one that counts, so we don’t care about later attempts to double-spend. The only way to confirm the absence of a transaction is to be aware of all transactions. In the mint based model, the mint was aware of all transactions and decided which arrived first. To accomplish this without a trusted party, transactions must be publicly announced [1], and we need a system for participants to agree on a single history of the order in which they were received. The payee needs proof that at the time of each transaction, the majority of nodes agreed it was the first received.
The solution we propose begins with a timestamp server. A timestamp server works by taking a hash of a block of items to be timestamped and widely publishing the hash, such as in a newspaper or Usenet post [2-5]. The timestamp proves that the data must have existed at the time, obviously, in order to get into the hash. Each timestamp includes the previous timestamp in its hash, forming a chain, with each additional timestamp reinforcing the ones before it.
To implement a distributed timestamp server on a peer-to-peer basis, we will need to use a proof-of-work system similar to Adam Back’s Hashcash [6], rather than newspaper or Usenet posts. The proof-of-work involves scanning for a value that when hashed, such as with SHA-256, the hash begins with a number of zero bits. The average work required is exponential in the number of zero bits required and can be verified by executing a single hash.
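The scanning loop described here can be sketched in a few lines of Python (a toy illustration, not Bitcoin's actual block-header hashing):

```python
import hashlib

# Toy proof-of-work: find a nonce such that SHA-256(data + nonce) starts with
# `zero_bits` zero bits. Expected work doubles with each extra zero bit required,
# while verification is a single hash.
def proof_of_work(data: bytes, zero_bits: int) -> int:
    target = 1 << (256 - zero_bits)  # the hash, read as an integer, must be below this
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(proof_of_work(b"example block", zero_bits=16))
```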
Once the latest transaction in a coin is buried under enough blocks, the spent transactions before it can be discarded to save disk space. To facilitate this without breaking the block’s hash, transactions are hashed in a Merkle Tree [7][2][5], with only the root included in the block’s hash. Old blocks can then be compacted by stubbing off branches of the tree. The interior hashes do not need to be stored.
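A Merkle root can be computed with a short helper like the one below; it is a simplified sketch (Bitcoin itself uses double SHA-256 and slightly different padding rules), but it shows the pairwise-hashing construction that makes pruning possible.

```python
import hashlib

# Transactions are hashed pairwise up to a single root, so a block commits to
# all of them while spent ones can later be pruned away.
def merkle_root(transactions):
    level = [hashlib.sha256(tx).digest() for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # pad odd levels by repeating the last hash
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
```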
The probability of an attacker catching up from a given deficit is analogous to a Gambler’s Ruin problem. Suppose a gambler with unlimited credit starts at a deficit and plays potentially an infinite number of trials to try to reach breakeven. We can calculate the probability he ever reaches breakeven, or that an attacker ever catches up with the honest chain, as follows [8]:

p = probability an honest node finds the next block
q = probability the attacker finds the next block
qz = probability the attacker will ever catch up from z blocks behind

qz = 1 if p ≤ q, and qz = (q/p)^z if p > q
Given our assumption that p > q, the probability drops exponentially as the number of blocks the attacker has to catch up with increases. With the odds against him, if he doesn’t make a lucky lunge forward early on, his chances become vanishingly small as he falls further behind.
The recipient waits until the transaction has been added to a block and z blocks have been linked after it. He doesn’t know the exact amount of progress the attacker has made, but assuming the honest blocks took the average expected time per block, the attacker’s potential progress will be a Poisson distribution with expected value λ = z(q/p).
Running some results, we can see the probability drop off exponentially with z.
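The calculation behind those numbers can be reproduced directly; the original paper works it out in C, and the following is a Python port of the same formula.

```python
from math import exp, factorial

# Probability that an attacker with hash-power share q ever overtakes a
# transaction buried under z confirmations.
def attacker_success(q: float, z: int) -> float:
    p = 1.0 - q
    lam = z * (q / p)                       # expected attacker progress (Poisson mean)
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

for z in (0, 5, 10):
    print(z, attacker_success(q=0.1, z=z))  # drops off exponentially with z
```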
1. b-money. Wei Dai (1998-11-01)
2. Design of a secure timestamping service with minimal trust requirements. Henri Massias, Xavier Serret-Avila, Jean-Jacques Quisquater. 20th Symposium on Information Theory in the Benelux (1999-05)
3. How to time-stamp a digital document. Stuart Haber, W. Scott Stornetta. Journal of Cryptology (1991)
4. Improving the Efficiency and Reliability of Digital Time-Stamping. Dave Bayer, Stuart Haber, W. Scott Stornetta. Sequences II (1993)
5. Secure names for bit-strings. Stuart Haber, W. Scott Stornetta. Proceedings of the 4th ACM Conference on Computer and Communications Security - CCS ’97 (1997)
6. Hashcash - A Denial of Service Counter-Measure. Adam Back (2002-08-01)
7. Protocols for Public Key Cryptosystems. Ralph C. Merkle. 1980 IEEE Symposium on Security and Privacy (1980-04)
8. An Introduction to Probability Theory and its Applications. William Feller. John Wiley & Sons (1957)
Originally published as a report for institutional investors:
The concept of blockchain was first introduced by Satoshi Nakamoto in his article. In as little as eight pages, the author explained the entire system of the bitcoin cryptocurrency based on the blockchain algorithm.
Last year Bill learned about cryptography in a programming course. He still remembers that any string can be turned into an unrecognizable set of characters – a hash. Changing any single character in the original string would completely change the resulting hash.
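Python's standard library makes that property easy to see for yourself:

```python
import hashlib

# Any string collapses to a fixed-length fingerprint, and changing a single
# character produces a completely different fingerprint.
print(hashlib.sha256(b"Hello, blockchain").hexdigest())
print(hashlib.sha256(b"Hello, blockchaim").hexdigest())  # one character changed
```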
The tree structure makes it possible to delete unnecessary (spent) transactions from the block. Let’s say there are two transactions joined by a hash, and one or both are no longer needed – everything they held has been given away by other transactions – so these old ones can be deleted and only their hash kept; as a result, nothing in the structure breaks. See “7. Reclaiming Disk Space” in Satoshi’s article.
Yes, but there are some ideas. I described classic mining, called Proof-of-Work, here: each machine proves that it worked for the benefit of the entire network by solving meaningless problems with a given probability.
Special thanks to Karl Floersch, Dan Robinson and Tina Zhen for feedback and review. See also , , and for earlier thinking on similar topics.
One of the important trends in the blockchain space over the past year is the transition from focusing on decentralized finance (DeFi) to also thinking about decentralized governance (DeGov). While the year 2020 is widely, and with much justification, hailed as the year of DeFi, over the year since then the growing complexity and capability of DeFi projects that make up this trend has led to growing interest in decentralized governance to handle that complexity. There are examples inside of Ethereum: many of the leading DeFi projects have launched, or even started with, some kind of DAO. But it's also true outside of Ethereum, with arguments over infrastructure funding in Bitcoin Cash and much more.
The rising popularity of formalized decentralized governance of some form is undeniable, and there are important reasons why people are interested in it. But it is also important to keep in mind the risks of such schemes, as the recent hostile takeover of Steem and subsequent mass exodus to Hive makes clear. I would further argue that these trends are unavoidable. Decentralized governance in some contexts is both necessary and dangerous, for reasons that I will get into in this post. How can we get the benefits of DeGov while minimizing the risks? I will argue for one key part of the answer: we need to move beyond coin voting as it exists in its present form.
Ever since the Declaration of the Independence of Cyberspace in 1996, there has been a key unresolved contradiction in what can be called cypherpunk ideology. On the one hand, cypherpunk values are all about using cryptography to minimize coercion, and maximize the efficiency and reach of the main non-coercive coordination mechanism available at the time: private property and markets. On the other hand, the economic logic of private property and markets is optimized for activities that can be broken down into repeated one-to-one interactions, and the infosphere, where art, documentation, science and code are produced and consumed through irreducibly one-to-many interactions, is the exact opposite of that.
Early blockchain projects largely ignored both of these challenges, pretending that the only public good that mattered was network security, which could be achieved with a single algorithm set in stone forever and paid for with fixed proof of work rewards. This state of affairs in funding was possible at first because of extreme Bitcoin price rises from 2010-13, then the one-time ICO boom and the simultaneous second crypto bubble of 2014-17, all of which made the ecosystem wealthy enough to temporarily paper over the large market inefficiencies. Long-term governance of public resources was similarly ignored: Bitcoin took the path of extreme minimization, focusing on providing a fixed-supply currency and ensuring support for layer-2 payment systems like Lightning and nothing else; Ethereum continued developing mostly harmoniously (with one significant exception) because of the strong legitimacy of its pre-existing roadmap (basically: "proof of stake and sharding"); and sophisticated application-layer projects that required anything more did not yet exist.
It is worth stepping back and seeing the absurdity of the present situation. Daily mining issuance rewards from Ethereum are about 13,500 ETH, or about $40m. Transaction fees are similarly high; the portion that is not burned continues to be around 1,500 ETH (~$4.5m) per day. So there are many billions of dollars per year going to fund network security. Now, what is the budget of the Ethereum Foundation? About $30-60 million per year. There are non-EF actors (eg. Consensys) contributing to development, but they are not much larger. The situation in Bitcoin is similar, with perhaps even less funding going into non-security public goods.
Within the Ethereum ecosystem, one can make a case that this disparity does not matter too much; tens of millions of dollars per year is "enough" to do the needed R&D and adding more funds would not necessarily improve things, and so the risks to the platform from instituting in-protocol developer funding exceed the benefits. But in many smaller ecosystems, both ecosystems within Ethereum and entirely separate blockchains like BCH and Zcash, the same debate is brewing, and at those smaller scales the imbalance makes a big difference.
In addition to public goods funding, the other equally important problem requiring governance is protocol maintenance and upgrades. While I advocate trying to minimize all non-automated parameter adjustment, there are times where governance is unavoidable. Price oracle inputs must come from somewhere, and occasionally that somewhere needs to change. Until a protocol "ossifies" into its final form, improvements have to be coordinated somehow. Sometimes, a protocol's community might think that they are ready to ossify, but then the world throws a curveball that requires a complete and controversial restructuring. What happens if the US dollar collapses, and RAI has to scramble to create and maintain their own decentralized CPI index for their stablecoin to remain stable and relevant? Here too, DeGov is necessary, and so avoiding it outright is not a viable solution.
One important distinction is whether or not off-chain governance is possible. I have for a long time been wherever possible. And indeed, for base-layer blockchains, off-chain governance absolutely is possible. But for application-layer projects, and especially defi projects, we run into the problem that application-layer smart contract systems often directly control external assets, and that control cannot be forked away. If Tezos's on-chain governance gets captured by an attacker, the community can hard-fork away without any losses beyond (admittedly high) coordination costs. If MakerDAO's on-chain governance gets captured by an attacker, the community can absolutely spin up a new MakerDAO, but they will lose all the ETH and other assets that are stuck in the existing MakerDAO CDPs. Hence, while off-chain governance is a good solution for base layers and some application-layer projects, many application-layer projects, particularly DeFi, will inevitably require formalized on-chain governance of some form.
However, all current instantiations of decentralized governance come with great risks. To followers of my writing, this discussion will not be new; the risks are much the same as those that I have talked about before. There are two primary types of issues with coin voting that I worry about: (i) inequalities and incentive misalignments even in the absence of attackers, and (ii) outright attacks through various forms of (often obfuscated) vote buying. To the former, there have already been many proposed mitigations (eg. delegation), and there will be more. But the latter is a much more dangerous elephant in the room to which I see no solution within the current coin voting paradigm.
The problems with coin voting even without explicit attackers are increasingly well-understood (eg. see ), and mostly fall into a few buckets:
There is one major type of strategy being attempted for solving the first problem (and therefore also mitigating the third problem): delegation. Smallholders don't have to personally judge each decision; instead, they can delegate to community members that they trust. This is an honorable and worthy experiment; we shall see how well delegation can mitigate the problem.
The problem of coin holder centrism, on the other hand, is significantly more challenging: coin holder centrism is inherently baked into a system where coin holder votes are the only input. The mis-perception that coin holder centrism is an intended goal, and not a bug, is already causing confusion and harm; one (broadly excellent) piece complains:
Suppose that an attacker makes a decision that corrupts the DAO to the attacker's benefit. The harm per participant from the decision succeeding is D, and the chance that a single vote tilts the outcome is p. Suppose an attacker makes a bribe of B. The game chart looks like this:

Accept the attacker's bribe: your expected payoff is B - D * p
Reject the bribe and vote your conscience: your expected payoff is 0

If B > D * p, you are inclined to accept the bribe, but as long as B < D, accepting the bribe is collectively harmful. So if p < 1 (usually, p is far below 1), there is an opportunity for an attacker to bribe users to adopt a net-negative decision, compensating each user far less than the harm they suffer.
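Plugging in some made-up numbers makes the misalignment obvious (D, p and B as defined above; the values are purely illustrative):

```python
# Toy numbers for the bribery game: individually rational to accept the bribe,
# collectively a net loss for the voters.
D = 1000.0   # harm to each participant if the corrupt proposal passes
p = 0.001    # chance your single vote tips the outcome
B = 5.0      # bribe offered to each voter

print(B > p * D)   # True  -> accepting beats your expected loss from voting yes
print(B < D)       # True  -> yet the bribe is far smaller than the harm everyone suffers
```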
The simplest example is borrowing from a defi lending platform (eg. ). Someone who already holds ETH can lock up their ETH in a CDP ("collateralized debt position") in one of these platforms, and once they do that the CDP contract allows them to borrow an amount of XYZ up to eg. half the value of the ETH that they put in. They can then do whatever they want with this XYZ. To recover their ETH, they would eventually need to pay back the XYZ that they borrowed, plus interest.
There are also centralized mechanisms for separating profit sharing rights from governance rights. Most notably, when users deposit their coins on a (centralized) exchange, the exchange holds full custody of those coins, and the exchange has the ability to use those coins to vote. This is not mere theory; there is evidence of exchanges using their users' coins in several DPoS systems. The most notable recent example is the hostile takeover of Steem, where exchanges used their customers' coins to vote for some proposals that helped to cement a takeover of the Steem network that the bulk of the community strongly opposed. The situation was only resolved through an outright mass exodus, where a large portion of the community moved to a different chain called Hive.
Some DAO protocols are using timelock techniques to limit these attacks, requiring users to lock their coins and make them immovable for some period of time in order to vote. These techniques can limit buy-then-vote-then-sell attacks in the short term, but ultimately they can be circumvented by users holding and voting with their coins through a contract that issues a wrapped version of the token (or, more trivially, a centralized exchange). As far as security mechanisms go, timelocks are more like a paywall on a newspaper website than they are like a lock and key.
Immature financial markets in governance tokens: ready-made tools for making wrapper tokens exist but are not widely used, bribing contracts exist but are similarly immature, and liquidity in lending markets is low.
Limit governance to fixed parameter choices: Uniswap does this, as it only allows governance to affect (i) token distribution and (ii) a 0.05% fee in the Uniswap exchange. Another great example is , where governance has control over fewer and fewer features over time.
Add time delays: a governance decision made at time T only takes effect at eg. T + 90 days. This allows users and applications that consider the decision unacceptable to move to another application (possibly a fork). Compound has a time delay in its governance, but in principle the delay can (and eventually should) be much longer.
Proof of personhood systems: systems that verify that accounts correspond to unique individual humans, so that governance can assign one vote per human. See for a review of some techniques being developed, and and for two attempts to implement this.
Proof of participation: systems that attest to the fact that some account corresponds to a person that has participated in some event, passed some educational training, or performed some useful work in the ecosystem. See for one attempt to implement this.
There are also hybrid possibilities: one example is quadratic voting, which makes the power of a single voter proportional to the square root of the economic resources that they commit to a decision. Preventing people from gaming the system by splitting their resources across many identities requires proof of personhood, and the still-existent financial component allows participants to credibly signal how strongly they care about an issue, as well as how strongly they care about the ecosystem. Gitcoin quadratic funding is a form of quadratic voting, and quadratic voting DAOs are being built.
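The square-root rule is easy to see with a toy calculation (a sketch of the idea only; real systems combine it with the identity and anti-collusion checks discussed next):

```python
from math import sqrt

# Quadratic voting: influence grows with the square root of committed resources,
# so 100x the coins buys only 10x the voting power.
commitments = {"whale": 10_000, "mid": 100, "small": 1}
power = {who: sqrt(tokens) for who, tokens in commitments.items()}
print(power)  # {'whale': 100.0, 'mid': 10.0, 'small': 1.0}

# Quadratic funding applies the same math to donations: a project's weight scales
# with (sum of square roots of contributions)^2, so many small donors outweigh one large one.
def qf_weight(contributions):
    return sum(sqrt(c) for c in contributions) ** 2

print(qf_weight([1] * 100), qf_weight([100]))  # 10000.0 vs 100.0
```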
Proof of personhood and proof of participation both require some form of anti-collusion (see and ) to ensure that the non-money resource being used to measure voting power remains non-financial, and does not itself end up inside of smart contracts that sell the governance power to the highest bidder.
The most popular solution to these kinds of issues is futarchy, introduced by Robin Hanson in the early 2000s. Votes become bets: to vote in favor of a proposal, you make a bet that the proposal will lead to a good outcome, and to vote against the proposal, you make a bet that the proposal will lead to a poor outcome. Futarchy introduces individual responsibility for obvious reasons: if you make good bets, you get more coins, and if you make bad bets you lose your coins.
Votes as buy orders: see . Voting in favor of a proposal requires making an enforceable buy order to buy additional tokens at a price somewhat lower than the token's current price. This ensures that if a terrible decision succeeds, those who support it may be forced to buy their opponents out, but it also ensures that in more "normal" decisions coin holders have more slack to decide according to non-price criteria if they so wish.
Retroactive public goods funding: see . Public goods are funded by some voting mechanism retroactively, after they have already achieved a result. Users can buy project tokens to fund their project while signaling confidence in it; buyers of project tokens get a share of the reward if that project is deemed to have achieved a desired goal.
Escalation games: see and . Value-alignment on lower-level decisions is incentivized by the possibility to appeal to a higher-effort but higher-accuracy higher-level process; voters whose votes agree with the ultimate decision are rewarded.
Futarchy + anti-collusion = reputation: Users vote with "reputation", a token that cannot be transferred. Users gain more reputation if their decisions lead to desired results, and lose reputation if their decisions lead to undesired results. See for an article advocating for a reputation-based scheme.
Loosely-coupled (advisory) coin votes: a coin vote does not directly implement a proposed change; instead, it simply exists to make its outcome public, to build legitimacy for off-chain governance to implement that change. This can provide the benefits of coin votes, with fewer risks, as the legitimacy of a coin vote drops off automatically if evidence emerges that the coin vote was bribed or otherwise manipulated.
Never in the history of the world had it been possible to transfer value between distant peoples without relying on a trusted intermediary, such as a bank or government. In 2008 Satoshi Nakamoto, whose identity is still unknown, published a solution to a long-standing problem of computer science known as the Byzantine Generals’ Problem. Nakamoto’s solution and the system he built from it — Bitcoin — allowed, for the first time ever, value to be quickly transferred, at great distance, in a completely trustless way. The ramifications of the creation of Bitcoin are so profound for both economics and computer science that Nakamoto should rightly be the first person to qualify for both a Nobel prize in Economics and the Turing award.
In the earliest human societies, trade between groups of people occurred through barter. The incredible inefficiencies inherent to barter trade drastically limited the scale and geographical scope at which trade could occur. A major disadvantage with barter based trade is the double coincidence of wants problem. An apple grower may desire trade with a fisherman, for example, but if the fisherman does not desire apples at the same moment, the trade will not take place. Over time humans evolved a desire to hold certain collectible items for their rarity and symbolic value (examples include shells, animal teeth and flint). Indeed, as Nick Szabo argues in his brilliant essay on the origins of money, the human desire for collectibles provided a distinct evolutionary advantage for early man over his nearest biological competitors, Homo neanderthalensis.
Early man faced an important game-theoretic dilemma when deciding which collectibles to gather or create: which objects would be desired by other humans? By correctly anticipating which objects might be demanded for their collectible value, a tremendous benefit was conferred on the possessor in their ability to complete trade and to acquire wealth. Some Native American tribes, such as the Narragansetts, specialized in the manufacture of otherwise useless collectibles simply for their value in trade. It is worth noting that the earlier the anticipation of future demand for a collectible good, the greater the advantage conferred to its possessor; it can be acquired more cheaply than when it is widely demanded and its trade value appreciates as the population which demands it expands. Furthermore, acquiring a good in hopes that it will be demanded as a future store of value hastens its adoption for that very purpose. This seeming circularity is actually a feedback loop that drives societies to quickly converge on a single store of value. In game-theoretic terms, this is known as a “Nash Equilibrium”. Achieving a Nash Equilibrium for a store of value is a major boon to any society, as it greatly facilitates trade and the division of labor, paving the way for the advent of civilization.
Gold is the undisputed King of durability. The vast majority of gold that has ever been mined or minted, including the gold of the Pharaohs, remains extant today and will likely be available a thousand years hence. Gold coins that were used as money in antiquity still maintain significant value today. Fiat currency and bitcoins are fundamentally digital records that may take physical form (such as paper bills). Thus it is not their physical manifestation whose durability should be considered (since a tattered dollar bill may be exchanged for a new one), but the durability of the institution that issues them. In the case of fiat currencies, many governments have come and gone over the centuries, and their currencies disappeared with them. The Papiermark, Rentenmark and Reichsmark of the Weimar Republic no longer have value because the institution that issued them no longer exists. If history is a guide, it would be folly to consider fiat currencies durable in the long term — the US dollar and British Pound are relative anomalies in this regard. Bitcoins, having no issuing authority, may be considered durable so long as the network that secures them remains in place. Given that Bitcoin is still in its infancy, it is too early to draw strong conclusions about its durability. However, there are encouraging signs that, despite prominent instances of nation-states attempting to regulate Bitcoin and years of attacks by hackers, the network has continued to function, displaying a remarkable degree of “anti-fragility”.
For most intents and purposes, both fiat currencies and gold are fairly easy to verify for authenticity. However, despite providing features on their banknotes to prevent counterfeiting, nation-states and their citizens still face the potential to be duped by counterfeit bills. Gold is also not immune from being counterfeited. Sophisticated criminals have used gold-plated tungsten as a way of fooling gold investors into paying for false gold. Bitcoins, on the other hand, can be verified with mathematical certainty. Using cryptographic signatures, the owner of a bitcoin can publicly prove she owns the bitcoins she says she does.
The attribute that most clearly distinguishes Bitcoin from fiat currencies and gold is its predetermined scarcity. By design, at most 21 million bitcoins can ever be created. This gives the owner of bitcoins a known percentage of the total possible supply. For instance, an owner of 10 bitcoins would know that at most 2.1 million people on earth (less than 0.03% of the world’s population) could ever have as many bitcoins as they had. Gold, while remaining quite scarce through history, is not immune to increases in supply. If it were ever the case that a new method of mining or acquiring gold became economic, the supply of gold could rise dramatically (examples include or ). Finally, fiat currencies, while only a relatively recent invention of history, have proven to be prone to constant increases in supply. Nation-states have shown a persistent proclivity to inflate their money supply to solve short-term political problems. The inflationary tendencies of governments across the world leave the owner of a fiat currency with the likelihood that their savings will diminish in value over time.
No monetary good has a history as long and storied as gold, which has been valued for as long as human civilization has existed. Coins minted in the distant days of antiquity still maintain significant value today. The same cannot be said of fiat currencies, which are a relatively recent anomaly of history. From their inception, fiat currencies have had a near-universal tendency toward eventual worthlessness. The use of inflation as an insidious means of invisibly taxing a citizenry has been a temptation that few states in history have been able to resist. If the 20th century, in which fiat monies came to dominate the global monetary order, established any economic truth, it is that fiat money cannot be trusted to maintain its value over the long or even medium term. Bitcoin, despite its short existence, has weathered enough trials in the market that there is a high likelihood it will not vanish as a valued asset any time soon. Furthermore, the Lindy effect suggests that the longer Bitcoin remains in existence the greater society’s confidence that it will continue to exist long into the future. In other words, the societal trust of a new monetary good is asymptotic in nature, as is illustrated in the graph below:
One of the most significant sources of early demand for bitcoins was their use in the illicit drug trade. Many subsequently surmised, mistakenly, that the primary demand for bitcoins was due to their ostensible anonymity. Bitcoin, however, is far from an anonymous currency; every transaction transmitted on the Bitcoin network is forever recorded on a public blockchain. The historical record of transactions allows for later forensic analysis to identify the source of a flow of funds. It was such an analysis that led to the apprehending of a perpetrator of the infamous MtGox heist. While it is true that a sufficiently careful and diligent person can conceal their identity when using Bitcoin, this is not why Bitcoin was so popular for trading drugs. The key attribute that makes Bitcoin valuable for proscribed activities is that it is “permissionless” at the network level. When bitcoins are transmitted on the Bitcoin network, there is no human intervention deciding whether the transaction should be allowed. As a distributed peer-to-peer network, Bitcoin is, by its very nature, designed to be censorship-resistant. This is in stark contrast to the fiat banking system, where states regulate banks and the other gatekeepers of money transmission to report and prevent outlawed uses of monetary goods. A classic example of regulated money transmission is capital controls. A wealthy millionaire, for instance, may find it very hard to transfer their wealth to a new domicile if they wish to flee an oppressive regime. Although gold is not issued by states, its physical nature makes it difficult to transmit at distance, making it far more susceptible to state regulation than Bitcoin. India’s restrictions on the import of gold are an example of such regulation.
There is an obsession in modern monetary economics with the medium of exchange role of money. In the 20th century, states have monopolized the issuance of money and continually undermined its use as a store of value, creating a false belief that money is primarily defined as a medium of exchange. Many have criticized Bitcoin as being an unsuitable money because its price has been too volatile to be suitable as a medium of exchange. This puts the cart before the horse, however. Money has always evolved in stages, with the store of value role preceding the medium of exchange role. One of the fathers of marginalist economics, William Stanley Jevons, observed:
Medium of exchange: When money is fully established as a store of value, its purchasing power will stabilize. Having stabilized in purchasing power, the opportunity cost of using money to complete trades will diminish to a level where it is suitable for use as a medium of exchange. In the earliest days of Bitcoin, many people did not appreciate the huge opportunity cost of using bitcoins as a medium of exchange, rather than as an incipient store of value. The of a man trading 10,000 bitcoins (worth approximately $94 million at the time of this article’s writing) for two pizzas illustrates this confusion.
Even in the absence of exogenous factors such as government intervention or competition from other monetary goods, the monetary premium for a new money will not follow a predictable path. Economist Larry White observed:
When the purchasing power of a monetary good increases with increasing adoption, market expectations of what constitutes “cheap” and “expensive” shift accordingly. Similarly, when the price of a monetary good crashes, expectations can switch to a general belief that prior prices were “irrational” or overly inflated. The path dependence of money is illustrated by the words of well-known Wall Street fund manager Josh Brown:
Further complicating the path-dependent nature of money is the fact that market participants do not merely act as dispassionate observers, trying to buy or sell in anticipation of future movements of the monetary premium, but also act as active evangelizers. Since there is no objectively correct monetary premium, proselytizing the superior attributes of a monetary good is more effective than for regular goods, whose value is ultimately anchored to cash flow or use-demand. The religious fervor of participants in the Bitcoin market can be observed in various online forums where owners actively promote the benefits of Bitcoin and the wealth that can be made by investing in it. One observer of the Bitcoin market notes:
In his article on , Michael Casey posits that the expanding Gartner hype cycles represent phases of a standard S-curve of adoption that was followed by many transformative technologies as they became commonly used in society.
The US prides itself as a nation of innovators, with Silicon Valley being a crown jewel of the US economy. Thus far, Silicon Valley has largely dominated the conversation toward regulators on the position they should take vis-à-vis Bitcoin. However, the banking industry and the US Federal Reserve are finally having their first inkling of the existential threat Bitcoin poses to US monetary policy if it were to become a global reserve currency. The Wall Street Journal, known to be a mouthpiece for the Federal Reserve, commented on the threat Bitcoin poses to US monetary policy:
The ability to easily transmit bitcoins across borders and absence of a need for a banking system make Bitcoin an ideal monetary good to acquire for those afflicted by hyperinflation. In the coming years, as fiat monies continue to follow their historical trend toward eventual worthlessness, Bitcoin will become an increasingly popular choice for global savings to flee to. When a nation’s money is abandoned and replaced by Bitcoin, Bitcoin will have transitioned from being a store of value in that society to a generally accepted medium of exchange. Daniel Krawisz coined the term “hyperbitcoinization” to describe this process.
Being decentralized in design, Bitcoin has shown a remarkable degree of resilience in the face of numerous attempts by various governments to regulate it or shut it down. However, the exchanges where bitcoins are traded for fiat currencies are highly centralized and susceptible to regulation and closure. Without these exchanges and the willingness of the banking system to do business with them, the process of monetization of Bitcoin would be severely stunted, if not halted completely. While there are alternative sources of liquidity for Bitcoin, such as over-the-counter brokers and decentralized markets for buying and selling Bitcoins (like ), the critical process of price discovery happens on the most liquid exchanges, which are all centralized.
Bitcoin is an incipient money that is transitioning from the collectible stage of monetization to becoming a store of value. As a non-sovereign monetary good, it is possible that at some stage in the future Bitcoin will become a global money much like gold during the classical gold standard of the 19th century. The adoption of Bitcoin as global money is precisely the bullish case for Bitcoin, and was articulated by Satoshi Nakamoto as early as 2010 in an email exchange with Mike Hearn:
This case was made even more trenchantly by the brilliant cryptographer Hal Finney, the recipient of the first bitcoins sent by Nakamoto, shortly after the launch of Bitcoin:
We have sat here for the last 3 years seeing only infrastructure apps like wallets and exchanges emerge on top of Bitcoin. Why is that?
Enter Ethereum. Ethereum has taken what was a four function calculator of a programming language in Bitcoin and turned it into a full fledged computer. We now stand only 9 months out from the beginning of the Ethereum network and the level of app development is already faster than Bitcoin’s. We are finally getting rapid iteration at the app layer. In one early example, people have designed a decentralized organization (The DAO) — a company whose heart is code and peripheral operations are run by humans, rather than the other way around — that has raised funds in the largest crowdfunding ever.
To be clear, I don’t think this needs to be a contest between Bitcoin vs. Ethereum, and we plan to strongly support both. I think this is about advancing the technology as much as we can. There is a significant amount of overlap between the two, however, so the comparison is valuable and the potential for competition is real.
How did we get here? First, some history. When the Bitcoin whitepaper emerged in 2008 it was completely revolutionary. The number of concepts that had to come together in just the right way — computer science, cryptography, and economic incentives — was astonishing. When the actual Bitcoin network launched in 2009, no one knew about it, and many of those who did thought it would surely fail. Just to make sure the thing worked, the scripting language in Bitcoin was intentionally extremely restrictive. “Scripting language” is a fancy way of saying an easy to work with programming language (in fact, Bitcoin doesn’t exactly have a scripting language, it uses a stack with script operators — more on that later). The scripting language in Bitcoin is important because it is what makes Bitcoin “programmable money”. Within each Bitcoin transaction is the ability to write a little program. For example, you can write a little program in a Bitcoin transaction that says “this transaction isn’t valid unless it’s June 15th, 2016 or later”. This is very powerful because you can move money automatically with computer code, and everyone can see the rules by which that money moves and know those rules will be followed.
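As a rough illustration of that example (conceptual only; a real Bitcoin transaction expresses this with Script and the nLockTime field, not application code), the rule amounts to:

```python
from datetime import datetime, timezone

# Conceptual sketch of a time-locked spending rule: the transaction only becomes
# valid once a chosen date has passed. Bitcoin enforces this on-chain, not in Python.
UNLOCK_AT = datetime(2016, 6, 15, tzinfo=timezone.utc)

def transaction_is_valid(now: datetime) -> bool:
    return now >= UNLOCK_AT

print(transaction_is_valid(datetime(2016, 6, 1, tzinfo=timezone.utc)))   # False
print(transaction_is_valid(datetime(2016, 7, 1, tzinfo=timezone.utc)))   # True
```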
Ethereum’s programming languages lets you do much more than Bitcoin’s As mentioned above, Bitcoin’s scripting language is intentionally restrictive. You might liken it to programming with an advanced graphing calculator — functionality is limited. As a result, you can only do basic things. It is also hard to understand and use. Rather than most modern programming languages where the code is almost readable like a sentence, . As a result, it took Mike Hearn, a talented ex-Google developer, a whopping 8 months to write .
In contrast, Ethereum’s programming languages ( for those who like Javascript, for those who like Python) let you do pretty much anything an advanced programming language would let you do. This is why they are said to be “”. Equally important, they are easy to use. It is simple for any developer to pick it up and quickly write their first app.
Beyond the radical difference in scripting languages, developer tools are much better in Ethereum. Bitcoin has never had a set of developer tools that caught on much, and they are sorely needed given how much harder it is to work with Bitcoin out of the box. Ethereum has made life as a developer much easier, with dedicated resources for devs and its own development environment, amongst others.
Ethereum has a more robust developer community
The developer community in Bitcoin feels fairly dormant. Bitcoin never really made it past the stage of simple wallets and exchanges. The most notable thing to be released recently is an implementation of the Lightning Network (a way of making transactions, especially microtransactions, more efficient). This is an additional protocol layer, not an application, however, and could be used by both Bitcoin and Ethereum.
In contrast, Ethereum’s developer community feels vibrant and growing. Most importantly, entirely new things are being tried on Ethereum. While most are experiments or at the moment, that developers from around the world which is rapidly expanding.
Ethereum’s core development team is healthy while Bitcoin’s is dysfunctional , the creator of Ethereum, has shown early promise as the leader of an open source project. He seems both comfortable as a community and technical leader. As an example, when we added Ethereum to , our exchange.
In contrast, Bitcoin has had a leadership vacuum since Gavin Andresen stepped aside after other core developers did not get on board with his (in my opinion rational and convincing) proposal to increase the block size. The "core developers" as they now stand are also relatively fragmented.
Beyond a leadership vacuum, Bitcoin's "leadership" is unclear and often toxic. Greg Maxwell, a technical leader among the core developers, recently disparaged other core developers who were working with miners on a block size compromise. A second discussion board, /r/btc, needed to form on reddit because of censorship on the original /r/bitcoin. The content on the Bitcoin discussion boards feels like squabbling while the Ethereum community is talking about relevant issues and new ideas. In summary, Ethereum leadership (and as a result its community) is moving forward while things need to get worse before they can get better in Bitcoin.
Ethereum has a growth mindset while Bitcoin has a false sense of accomplishment
The general mindset of the two communities feels different as well. Many in Bitcoin seem to have a false sense of "we've got this really valuable network we need to protect!". In my opinion that view is wrong and dangerous. Bitcoin is still tiny compared to the major financial networks of the world at ~$200m/day in transaction volume (Visa does $18 billion/day, SWIFT wires $5 trillion/day) and ~10 million users. And while some usage metrics seem to be increasing at a healthy pace, the actual user base is not growing much.
Meanwhile, the core development team in Ethereum is focused. This is evident from their published research and roadmap discussions. When I started reading them, it was everything I found myself thinking about for the present and future of Bitcoin but didn't see being discussed much: proof of stake, scaling, and other forward-looking topics. These are very ambitious ideas and some won't work. But some probably will work, and they will be important; moving to proof of stake and eliminating physical mining is one of the most promising.
Ethereum is making faster and more consistent technical progress on the core protocol
In Bitcoin, we have mostly been stuck on the block size debate for the last year and a half. Some minor improvements have been made (CheckLockTimeVerify, to enable the time locking functionality mentioned earlier), and others are in development but not yet live (Segregated Witness, to make the network more efficient). None of these changes have sparked much in the way of application development yet.
Meanwhile, beyond the more robust programming language, Ethereum is making advancements that are core to even basic transactions. Its mining design allows for much quicker blocks, and thus quicker transaction confirmation times — seconds on Ethereum compared to 10 minutes on Bitcoin (not an apples to apples comparison, but the larger point holds). This is largely due to the concept of miners getting paid for the work they put in whether or not they are the first to solve the next block (uncle rewards, in the spirit of the GHOST protocol). While this system has its own tradeoffs, it's meaningful forward progress towards quicker transaction confirmations.
Ethereum has been able to take more risk with new features because it has had less to lose. Most of Ethereum's history has occurred while it has held value in the hundreds of millions of dollars, while Bitcoin holds value in the billions. As Ethereum continues to grow, it may not be able to "move fast and break things" in the same way. In practice I think this mostly comes down to the quality of the core development team — if they continue to make progress and build trust with the community, execution can still be rapid, as other large open source projects have shown.
Ethereum hasn’t gone through a governance crisis. Vitalik acknowledged this at an . Like any project that has success, it’s inevitable to hit bumps as peoples’ vested interests get bigger.
There is a greater security risk with Ethereum. Having a more robust programming language creates a greater surface area for things to go wrong. Bitcoin has been battle tested for 7 years. Ethereum has been live for 9 months and now stores a substantial amount of value. While there hasn't been a major issue yet, it is possible there are issues people are not yet aware of. This probability goes down with each passing day. People will definitely create buggy, exploitable applications on Ethereum. That won't be a failure of the core Ethereum protocol, though, much like the failure of Mt. Gox was not an error in the Bitcoin protocol.
Ethereum plans to move from proof of work to proof of stake. This would be a huge breakthrough if it works, as it would eliminate the need for proof of work and all of the hardware and electricity use that goes with it, but it also presents a large risk. I believe this risk is manageable because the transition would be gradual and heavily tested.
Scaling the network is harder when it supports mini programs in addition to basic transaction processing. This was the biggest question I had when I started to read about the idea in 2014. While there is no silver bullet here, I think some combination of solutions will be developed over time, as they are with any evolving technology. Some possibilities for Ethereum are sharding, computing power and networks naturally getting faster over time, and the economics of the Ethereum blockchain only running the most important things as a forcing function. There is a decent argument (best articulated by Gavin Andresen) that it's better to keep the base transaction layer dumb for scaling reasons, with advanced logic in higher layers. It's possible we come full circle and end up back there, but this isn't how interesting things are being created at the moment, because it's harder to 1) create and 2) get decent adoption of multiple layers in the stack than it is to have it all out of the box in Ethereum.
What does all this mean? It's all good news for digital currency. Ethereum is pushing the envelope and I am more excited than ever. Competition and new ideas create better outcomes for everyone. Even if Ethereum goes up in flames, our collective knowledge in digital currency will have leveled up significantly. I have not given up on Bitcoin, and it's hard to argue with a network that has been so resilient. I plan on supporting both. We'll probably support other things that haven't been invented yet in the future. At the end of the day, I have no allegiance to any particular network; I just want whatever brings the most benefit to the world.
2. Bitcoin vs. Other Industries: Bitcoin mining is the most efficient, cleanest industrial use of electricity, and is improving its energy efficiency at the fastest rate across any major industry. Our metrics show ~59.5% of energy for bitcoin mining comes from sustainable sources and energy efficiency improved 46% YoY. No other industry comes close (consider planes, trains, automobiles, healthcare, banking, construction, precious metals, etc.). The bitcoin network keeps getting more energy efficient because of the relentless improvement in the semiconductors (SHA-256 ASICs) that power the bitcoin mining centers, combined with the halving of bitcoin mining rewards every four years that is built into the protocol. This results in a consistent 18-36% improvement year after year in energy efficiency. More details on this are included in the full report.
6. Bitcoin & Environmental Benefits: There is an increasing awareness that Bitcoin is quite beneficial to the environment because it can be deployed to monetize stranded natural gas or methane gas energy sources. Methane gas emissions curtailment is particularly compelling, and impressive papers have been written on this subject. It has also become clear that energy grids that rely primarily on sustainable power sources like wind, hydro, & solar can be unreliable at times due to lack of water, sunlight, or wind. In this case, they need to be paired with a large, flexible electricity consumer like a bitcoin miner in order to develop grid resilience & finance the buildout of additional capacity necessary to responsibly power major industrial/population centers. The recent major Bitcoin energy curtailment on the ERCOT grid in Texas is an example of the benefits of bitcoin mining to sustainable power providers. No other industrial energy consumer is so well suited to monetize excess power as well as curtail flexibly during periods of energy shortfall & production volatility.
However, I turned bullish on Bitcoin in April 2020 in my research services, including Stock Waves, at about $6,900/BTC and went long. It had indeed underperformed many other asset classes from autumn 2017 into spring 2020, but from that point, a variety of factors turned strongly in its favor. I then wrote an article about it in July, when it was at $9,200/BTC, further elaborating on why I am bullish on Bitcoin.
That July article received a lot of press, and the CEO of MicroStrategy, the first publicly-traded company to put part of its cash position into Bitcoin, stated that he sent that article among other key resources to his board of directors as part of his team education process. It's written with institutional readers in mind, in other words, in addition to retail investors.
A common criticism of Bitcoin is that the number of transactions that the network can handle per 10 minutes is very low compared to, say, Visa datacenters. This limits Bitcoin's ability to be used for everyday transactions, such as buying coffee.
Here's the problem. Bitcoin has over $250 billion in market capitalization. Two publicly-traded companies, MicroStrategy and Square, already own it, as do a variety of private companies and investment funds. Big investors like Cathie Wood, Paul Tudor Jones, and Stanley Druckenmiller own it. Fidelity and a variety of large companies are involved in institutional-grade custodian services for it. Paypal is getting involved. Federally-regulated U.S. banks are now permitted to custody crypto assets. The IRS treats it like a commodity for tax purposes. That's a lot of mainstream momentum.
Bitcoin is accessible through some publicly-traded funds, like the Grayscale Bitcoin Trust, which I am long. However, funds like these trade at a premium to NAV and rely on counterparties. A fund like that can be useful as part of a diversified portfolio in an IRA, due to tax advantages, but outside of that it isn't the best way to establish a core position.
Bitcoin is also available on exchanges, where it can then be sent to a private hardware wallet or elsewhere. I don't have a strong view on which exchanges are the best. However, be careful about platforms that don't let you withdraw your Bitcoin, like Robinhood. I personally bought my core position through an exchange in April when I turned bullish, and transferred a lot of it to personal custody. There are also many dollar-cost averaging platforms that investors can use, which keep costs down and cater to savers instead of traders.
This introductory paper was originally published in 2014 by Vitalik Buterin, the founder of Ethereum, before the project's launch in 2015. It's worth noting that Ethereum, like many community-driven, open-source software projects, has evolved since its initial inception.
While several years old, this paper is maintained because it continues to serve as a useful reference and an accurate representation of Ethereum and its vision. To learn about the latest developments of Ethereum, and how changes to the protocol are made, we recommend the project's current documentation.
Satoshi Nakamoto's development of Bitcoin in 2009 has often been hailed as a radical development in money and currency, being the first example of a digital asset which simultaneously has no backing or "intrinsic value" and no centralized issuer or controller. However, another, arguably more important, part of the Bitcoin experiment is the underlying blockchain technology as a tool of distributed consensus, and attention is rapidly starting to shift to this other aspect of Bitcoin. Commonly cited alternative applications of blockchain technology include using on-blockchain digital assets to represent custom currencies and financial instruments ("colored coins"), the ownership of an underlying physical device ("smart property"), non-fungible assets such as domain names ("Namecoin"), as well as more complex applications involving having digital assets being directly controlled by a piece of code implementing arbitrary rules ("smart contracts") or even blockchain-based "decentralized autonomous organizations" (DAOs). What Ethereum intends to provide is a blockchain with a built-in fully fledged Turing-complete programming language that can be used to create "contracts" that can be used to encode arbitrary state transition functions, allowing users to create any of the systems described above, as well as many others that we have not yet imagined, simply by writing up the logic in a few lines of code.
The concept of decentralized digital currency, as well as alternative applications like property registries, has been around for decades. The anonymous e-cash protocols of the 1980s and the 1990s, mostly reliant on a cryptographic primitive known as Chaumian blinding, provided a currency with a high degree of privacy, but the protocols largely failed to gain traction because of their reliance on a centralized intermediary. In 1998, Wei Dai's b-money became the first proposal to introduce the idea of creating money through solving computational puzzles as well as decentralized consensus, but the proposal was scant on details as to how decentralized consensus could actually be implemented. In 2005, Hal Finney introduced a concept of "reusable proofs of work", a system which uses ideas from b-money together with Adam Back's computationally difficult Hashcash puzzles to create a concept for a cryptocurrency, but once again fell short of the ideal by relying on trusted computing as a backend. In 2009, a decentralized currency was for the first time implemented in practice by Satoshi Nakamoto, combining established primitives for managing ownership through public key cryptography with a consensus algorithm for keeping track of who owns coins, known as "proof-of-work".
The "state" in Bitcoin is the collection of all coins (technically, "unspent transaction outputs" or UTXO) that have been minted and not yet spent, with each UTXO having a denomination and an owner (defined by a 20-byte address which is essentially a cryptographic public key). A transaction contains one or more inputs, with each input containing a reference to an existing UTXO and a cryptographic signature produced by the private key associated with the owner's address, and one or more outputs, with each output containing a new UTXO to be added to the state.
Check that the timestamp of the block is greater than that of the previous block and less than 2 hours into the future
The idea of taking the underlying blockchain idea and applying it to other concepts also has a long history. In 2005, Nick Szabo came out with the concept of "secure property titles", a document describing how "new advances in replicated database technology" will allow for a blockchain-based system for storing a registry of who owns what land, creating an elaborate framework including concepts such as homesteading, adverse possession and Georgian land tax. However, there was unfortunately no effective replicated database system available at the time, and so the protocol was never implemented in practice. After 2009, however, once Bitcoin's decentralized consensus was developed a number of alternative applications rapidly began to emerge.
Namecoin - created in 2010, is best described as a decentralized name registration database. In decentralized protocols like Tor, Bitcoin and BitMessage, there needs to be some way of identifying accounts so that other people can interact with them, but in all existing solutions the only kind of identifier available is a pseudorandom hash like 1LW79wp5ZBqaHW1jL5TCiBCrhQYtHagUWy. Ideally, one would like to be able to have an account with a name like "george". However, the problem is that if one person can create an account named "george" then someone else can use the same process to register "george" for themselves as well and impersonate them. The only solution is a first-to-file paradigm, where the first registerer succeeds and the second fails - a problem perfectly suited for the Bitcoin consensus protocol. Namecoin is the oldest, and most successful, implementation of a name registration system using such an idea.
Colored coins - the purpose of colored coins is to serve as a protocol to allow people to create their own digital currencies - or, in the important trivial case of a currency with one unit, digital tokens, on the Bitcoin blockchain. In the colored coins protocol, one "issues" a new currency by publicly assigning a color to a specific Bitcoin UTXO, and the protocol recursively defines the color of other UTXO to be the same as the color of the inputs that the transaction creating them spent (some special rules apply in the case of mixed-color inputs). This allows users to maintain wallets containing only UTXO of a specific color and send them around much like regular bitcoins, backtracking through the blockchain to determine the color of any UTXO that they receive.
The earliest alternative cryptocurrency of all, Namecoin, attempted to use a Bitcoin-like blockchain to provide a name registration system, where users can register their names in a public database alongside other data. The major cited use case is for a DNS system, mapping domain names like "bitcoin.org" (or, in Namecoin's case, "bitcoin.bit") to an IP address. Other use cases include email authentication and potentially more advanced reputation systems. Here is the basic contract to provide a Namecoin-like name registration system on Ethereum:
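The original example is written for Ethereum's contract language; as a minimal sketch of the same first-to-file logic in Python (with a plain dict standing in for contract storage):

```python
# Sketch of the name-registration logic described above: the first registrant
# of a key wins, later attempts are ignored. A dict stands in for on-chain
# contract storage; this is the logic only, not an Ethereum contract.

storage = {}

def register(key, value):
    if key not in storage:        # first-to-file: only unclaimed names can be taken
        storage[key] = value

register("bitcoin.bit", "1.2.3.4")
register("bitcoin.bit", "5.6.7.8")   # ignored; name already taken
print(storage["bitcoin.bit"])        # -> 1.2.3.4
```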
The contract would then have clauses for each of these. It would maintain a record of all open storage changes, along with a list of who voted for them. It would also have a list of all members. When any storage change gets to two thirds of members voting for it, a finalizing transaction could execute the change. A more sophisticated skeleton would also have built-in voting ability for features like sending a transaction, adding members and removing members, and may even provide for Liquid Democracy-style vote delegation (ie. anyone can assign someone to vote for them, and assignment is transitive so if A assigns B and B assigns C then C determines A's vote). This design would allow the DAO to grow organically as a decentralized community, allowing people to eventually delegate the task of filtering out who is a member to specialists, although unlike in the "current system" specialists can easily pop in and out of existence over time as individual community members change their alignments.
3. A decentralized data feed. For financial contracts for difference, it may actually be possible to decentralize the data feed via a protocol called "SchellingCoin". SchellingCoin basically works as follows: N parties all put into the system the value of a given datum (eg. the ETH/USD price), the values are sorted, and everyone between the 25th and 75th percentile gets one token as a reward. Everyone has the incentive to provide the answer that everyone else will provide, and the only value that a large number of players can realistically agree on is the obvious default: the truth. This creates a decentralized protocol that can theoretically provide any number of values, including the ETH/USD price, the temperature in Berlin or even the result of a particular hard computation.
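A rough Python sketch of that reward rule follows; the percentile boundary handling is simplified and the reported values are made up:

```python
# Sketch of the SchellingCoin reward rule described above: everyone submits a
# value, and submissions between the 25th and 75th percentile earn a token.

def schelling_rewards(submissions):
    """submissions: dict of party -> reported value. Returns rewarded parties."""
    ordered = sorted(submissions.items(), key=lambda kv: kv[1])
    n = len(ordered)
    lo, hi = n // 4, (3 * n) // 4
    return {party for party, _ in ordered[lo:hi]}

reports = {"a": 101.0, "b": 100.5, "c": 100.7, "d": 250.0,
           "e": 100.6, "f": 99.9, "g": 100.4, "h": 100.8}
print(schelling_rewards(reports))  # outliers like "d" fall outside the middle band
```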
6. Peer-to-peer gambling. Any number of peer-to-peer gambling protocols, such as Frank Stajano and Richard Clayton's Cyberdice, can be implemented on the Ethereum blockchain. The simplest gambling protocol is actually simply a contract for difference on the next block hash, and more advanced protocols can be built up from there, creating gambling services with near-zero fees that have no ability to cheat.
7. Prediction markets. Provided an oracle or SchellingCoin, prediction markets are also easy to implement, and prediction markets together with SchellingCoin may prove to be the first mainstream application of futarchy as a governance protocol for decentralized organizations.
The "Greedy Heaviest Observed Subtree" (GHOST) protocol is an innovation first introduced by Yonatan Sompolinsky and Aviv Zohar in . The motivation behind GHOST is that blockchains with fast confirmation times currently suffer from reduced security due to a high stale rate - because blocks take a certain time to propagate through the network, if miner A mines a block and then miner B happens to mine another block before miner A's block propagates to B, miner B's block will end up wasted and will not contribute to network security. Furthermore, there is a centralization issue: if miner A is a mining pool with 30% hashpower and B has 10% hashpower, A will have a risk of producing a stale block 70% of the time (since the other 30% of the time A produced the last block and so will get mining data immediately) whereas B will have a risk of producing a stale block 90% of the time. Thus, if the block interval is short enough for the stale rate to be high, A will be substantially more efficient simply by virtue of its size. With these two effects combined, blockchains which produce blocks quickly are very likely to lead to one mining pool having a large enough percentage of the network hashpower to have de facto control over the mining process.
(1) provides a tendency for the miner to include fewer transactions, and (2) increases NC; hence, these two effects at least partially cancel each other out. (3) and (4) are the major issue; to solve them we simply institute a floating cap: no block can have more operations than BLK_LIMIT_FACTOR times the long-term exponential moving average. Specifically:
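The exact expression did not survive extraction; as a sketch of the rule from the published whitepaper (constants and formula per that source, so treat them as indicative), in Python:

```python
import math

# Floating-cap rule: each block's operation limit is an exponential moving
# average that tracks BLK_LIMIT_FACTOR times recent usage.
BLK_LIMIT_FACTOR = 65536
EMA_FACTOR = 1.5

def next_op_limit(parent_op_limit: int, parent_op_count: int) -> int:
    return math.floor((parent_op_limit * (EMA_FACTOR - 1)
                       + math.floor(parent_op_count * BLK_LIMIT_FACTOR)) / EMA_FACTOR)
```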
The problem with such a large blockchain size is centralization risk. If the blockchain size increases to, say, 100 TB, then the likely scenario would be that only a very small number of large businesses would run full nodes, with all regular users using light SPV nodes. In such a situation, there arises the potential concern that the full nodes could band together and all agree to cheat in some profitable fashion (eg. change the block reward, give themselves BTC). Light nodes would have no way of detecting this immediately. Of course, at least one honest full node would likely exist, and after a few hours information about the fraud would trickle out through channels like Reddit, but at that point it would be too late: it would be up to the ordinary users to organize an effort to blacklist the given blocks, a massive and likely infeasible coordination problem on a similar scale as that of pulling off a successful 51% attack. In the case of Bitcoin, this is currently a problem, but there exists a blockchain modification which will alleviate this issue.
Internally, 2 and "CHARLIE" are both numbers, with the latter being in big-endian base 256 representation. Numbers can be at least 0 and at most 2^256 - 1.
1. Aspnes, J., et al. 2005. Exposing computationally challenged Byzantine imposters. Yale University Department of Computer Science.
2. Back, A. 1997. A partial hash collision based postage scheme.
3. Back, A. 2001. Hash cash.
4. Back, A. 2002. Hashcash—a denial of service counter measure.
5. Bayer, D., Haber, S., Stornetta, W. S. Improving the efficiency and reliability of digital time-stamping. Proceedings of Sequences 1991.
6. Benaloh, J., de Mare, M. 1991. Efficient broadcast timestamping.
7. Boyle, T. F. 1997. GLT and GLR: Component architecture for general ledgers.
8. Castro, M., Liskov, B. 1999. Practical Byzantine fault tolerance. Proceedings of the Third Symposium on Operating Systems Design and Implementation.
9. Chaum, D. 1981. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM 24(2): 84-90.
11. Chaum, D. 1985. Security without identification: transaction systems to make Big Brother obsolete. Communications of the ACM 28(10): 1030-1044.
12. Chaum, D., et al. 1988. Untraceable electronic cash. Advances in Cryptology: 319-327.
13. Dai, W. 1998. b-money.
14. Douceur, J. R. 2002. The Sybil attack.
15. Dwork, C., Naor, M. 1992. Pricing via processing or combatting junk mail.
16. Felten, E. 2017. Smart contracts: neither smart nor contracts? Freedom to Tinker.
17. Franklin, M. K., Malkhi, D. 1997. Auditable metering and lightweight security.
18. Gabber, E., et al. 1998. Curbing junk e-mail via secure classification.
19. Garay, J. A., et al. 2015. The bitcoin backbone protocol: analysis and applications. Advances in Cryptology: 281-310.
20. Goldberg, I. 2000. A pseudonymous communications infrastructure for the Internet. Ph.D. dissertation, University of California Berkeley.
21. Grigg, I. 2005. Triple entry accounting.
22. Haber, S., Stornetta, W. S. 1991. How to timestamp a digital document. Journal of Cryptology 3(2): 99-111.
23. Haber, S., Stornetta, W. S. 1997. Secure names for bit-strings. In Proceedings of the 4th ACM Conference on Computer and Communications Security: 28-35.
24. Jakobsson, M., Juels, A. 1999. Proofs of work and bread pudding protocols.
25. Juels, A., Brainard, J. 1999. Client puzzles: a cryptographic countermeasure against connection completion attacks. Proceedings of Networks and Distributed Security Systems: 151-165.
26. Just, M. 1998. Some timestamping protocol failures.
27. Lamport, L., et al. 1982. The Byzantine Generals Problem. ACM Transactions on Programming Languages and Systems 4(3): 382-401.
28. Lamport, L. 1989. The part-time parliament. Digital Equipment Corporation.
29. Lamport, L. 2001. Paxos made simple.
30. Laurie, B. 2014. Certificate Transparency. acmqueue 12(8).
31. Levy, K. E. C. 2017. Book-smart, not street-smart: blockchain-based smart contracts and the social workings of law. Engaging Science, Technology, and Society 3: 1-15.
32. Melara, M., et al. 2015. CONIKS: bringing key transparency to end users. Proceedings of the 24th Usenix Security Symposium.
33. Merkle, R. C. 1980. Protocols for public key cryptosystems. IEEE Symposium on Security and Privacy.
34. Nakamoto, S. 2008. Bitcoin: a peer-to-peer electronic cash system.
35. Nakamoto, S. 2008. Re: Bitcoin P2P e-cash paper.
36. Narayanan, A., et al. 2016. Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton University Press.
37. Pass, R., et al. 2017. Analysis of the blockchain protocol in asynchronous networks. Annual International Conference on the Theory and Applications of Cryptographic Techniques.
38. Pinkas, B., Sander, T. 2002. Securing passwords against dictionary attacks. Proceedings of the Ninth ACM Conference on Computer and Communications Security: 161-170.
39. Reuters. 2014. Mind your wallet: why the underworld loves bitcoin.
40. Sirer, E. G. 2016. Bitcoin guarantees strong, not eventual, consistency. Hacking, Distributed.
41. Szabo, N. 1994. Smart contracts.
42. Szabo, N. 2008. Bit gold. Unenumerated.
The first step towards PoS in Ethereum was launching a standalone network that can come to consensus, called the Beacon Chain. In return for providing security to this system, stakers are rewarded with new ETH from inflation. In the future, the Beacon Chain and Ethereum as we know it will merge, allowing stakers to also earn the transaction fees and Miner-Extractable Value (MEV) that currently go to PoW miners.
To solo stake in Ethereum, a user has to deposit 32 ETH to the deposit contract, along with specifying two key parameters: a validator public key and the withdrawal credentials.
Any big exchange can trivially implement a staking pool. In fact, many already offer Beacon Chain staking.
Now that we have established the differences between solo and pooled staking, as well as how centralized staking pools work, we will explore the architecture of a decentralized staking pool, using Lido as an example.
How are the withdrawal credentials managed? The withdrawal credentials are currently an ETH2 BLS key controlled by a multi-sig. This is not optimal, but also not a risk while withdrawals from the Beacon Chain are not enabled. By the time stakers can withdraw, Lido will have transitioned to an ETH1 smart contract as the withdrawal credential instead of a multi-sig. After that point, 1 stETH will be trustlessly redeemable for 1 ETH, assuming the smart contract has no administrative functionalities over the funds.
Who are the validators and how do they get into the registry? Validators are professional node operators like p2p.org, Chorus One, or stakefish that have to be approved by governance. Each validator has a maximum stake that they can own, which is also voted on by governance.
Non-staking ETH holders: If stETH can be used as collateral to borrow ETH, it can unlock demand to borrow ETH to use it in leveraged staking. This would push up the rates for supplying ETH, ultimately benefiting all ETH holders with higher interest rates.
1 - We also refer the reader to Chitra and Evans' work on the tensions which manifest between these two forces.
Acknowledgments: Thanks to everyone who provided valuable discussions and reviews.
You understand the concept of a blockchain and how Bitcoin uses it to create a trustless digital currency. No? Go watch an introductory video or two first; hint, you should probably watch the long one.
You understand the basic concept of a Merkle tree and how it can be used to quickly verify information correctness.
Ethereum is a distributed computer; each node in the network executes some bytecode (hint: Smart Contracts), and then stores the resulting state in a blockchain. Due to the properties of the blockchain representing application state, this results in "applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference".
Obviously an acceptable explanation won't fit into a paragraph. Go ahead and read the Ethereum whitepaper. Or one of the other trillion "how do I Ethereum/Blockchain/Smart Contract" posts on the internet. Or watch one of the many introductory videos.
Smart Contracts (again, just blobs of code) are executed by every single full node in the network, which is a lot of redundancy (good), but this costs a lot of energy and time (bad). Because it costs money to perform computations, what you pay is directly tied to which computations your code performs. To say that another way, each low level opcode in the EVM costs a certain amount of "gas" to perform. The word "gas" is totally arbitrary; it's just an abstract label given to the cost of performing computation. There's also a network-enforced gas limit, to solve the halting problem; i.e., you can't write a program that never ends, because you'd run out of gas and the computation would be rejected by the network.
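A toy illustration of that metering model, with made-up opcode prices (real EVM gas costs differ and change over time):

```python
# Toy gas metering: every low-level step has a cost, and execution is rejected
# once the gas limit is exhausted. Opcode prices here are invented.
GAS_COST = {"ADD": 3, "MUL": 5, "SLOAD": 200, "SSTORE": 20000}

def execute(opcodes, gas_limit):
    gas_used = 0
    for op in opcodes:
        gas_used += GAS_COST[op]
        if gas_used > gas_limit:
            raise RuntimeError("out of gas")   # the network rejects the computation
    return gas_used

print(execute(["SLOAD", "ADD", "MUL", "SSTORE"], gas_limit=30_000))  # 20208
# execute(["SSTORE"] * 10, gas_limit=30_000) would raise "out of gas"
```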
The ethereum organization on github has a wiki that has some references and examples. Make sure you check the activity on the file you're looking at, though, because this kind of information gets stale very quickly.
Mist is the official Ethereum dApp browser. It's really just a pretty web UI for interfacing with an Ethereum node and sending transactions to/from Smart Contracts (dApps!).
is a mobile browser with a unique approach to the UX of using dApps.
is Coinbase’s foray into building an Ethereum wallet and browser. It bets big on the “wechat” + “chatbot” vibes.
MetaMask is a Chrome Extension that turns Chrome into a "dApps Browser". Its core feature is that it injects web3, a JavaScript Ethereum client library, into every page, allowing dApps to connect to MetaMask's hosted Ethereum nodes. The Chrome extension allows you to manage wallets and connect to the different Ethereum networks available.
Parity is an Ethereum client (as well as a full-node implementation) that integrates with your web browser, turning it into a dApp browser.
Mostly everything you know about Bitcoin nodes applies here. Nodes store a copy of the blockchain and optionally execute all of the transactions to confirm the resulting state. Run yourself a full node or light client with geth (first-party, Go) or Parity (third-party, Rust).
You should probably go ahead and run all of these node clients with docker and some sort of persistent storage. If you don't feel like running a node on your own, you can use a third party like Infura. There is also a way to run a local node for testing and development, discussed later.
Yeah, all of the “tokens” that you hear about are just numbers in a distributed hash table with an API (aka protocol) to add and subtract. (that’s 38 lines of code, half of it comments and whitespace).
Go ahead and read the crowdsale example; you'll see that it's just a contract (Crowdsale) that interfaces with another contract (MyToken) which is just like the basic token contract described above. It's not magic.
People are using tokens for a variety of uses, and you'll quickly see that imagination has no limits. Tokens are frequently used to incentivize interaction with a protocol, prove ownership of assets, prove voting rights, etc.
Everyone started defining their own protocol for interfacing with their Token contract and that got old pretty quickly, so some people got together and standardized one: ERC20. It just says "hey, support these function signatures and we'll all have a much better time".
ERC721 defines a non-fungible token standard. Non-fungibility means that each token is not equal to another; one "token" can be worth more or less than another and have unique properties. The best example of this is a digital collectible like CryptoKitties.
* there was also a competing proposal, but its ideas have been merged via community consensus into ERC721
If you’d like to programmatically interface with contracts, there are various Ethereum client implementations. For JavaScript, , , and are popular. For golang, the abigen
executable in provides go packages for interfacing with contracts. At the end of the day, though, it’s just a standard JSON RPC API, so you can always write your own adaptor for the language of your choice if once isn’t available. Some client libraries provide convenience functions as well, beyond simple function execution.
Run a local ethereum node for testing and development with Ganache (previously called testrpc).
Truffle (written in Node) has the most developer adoption and seems to have the most active development; follow the guide to get up to speed.
Embark (Node) has similar but different ideas for how developers should structure projects.
There's also a framework written in Go that is very similar to Truffle.
Populus (Python) is an actively developed Python framework that satisfies the same niche.
When you first start playing around with contracts, you should avoid using a framework until you understand the value it provides, much in the same way you shouldn't start learning how to write HTML with rails new. The easiest thing to do at first is use Remix to play around with the language and ideas.
Sharing is caring, so there's a package registry for contracts: ETHPM. Using ETHPM, you can inherit from or link to other well-known contracts and libraries, reducing code duplication and ideally providing a strong foundation of good contracts for future development.
See the ETHPM documentation for more information and background.
Ropsten — The primary Ethereum testnet using Proof of Work. This network, because of low computation volume, is easy to DDoS, split, and otherwise mess with. It was recently revived and is usable again after being temporarily abandoned after a spam attack. It consistently has a very low gas limit, for no good reason, so many contracts will fail to deploy on this network.
Kovan — A parity-client only testnet using Proof of Authority, which advertises immunity to spam attacks and a consistent 4 second block time.
Rinkeby — A geth-client only testnet using Clique consensus, which is therefore more resilient to malicious actors despite the low computation volume. (my personal favorite)
You can also run your own private Ethereum network. The go-ethereum team built puppeth to configure a full network, complete with custom bootnodes, genesis block, and consensus rules, which is what powers the Rinkeby network. You can also run your own infrastructure, perhaps using hosted cloud services. But you probably won't need to run a private network any time soon.
A wallet is one of two things: it can be 1) a fancy interface for creating and sending transactions using your account (i.e. MyEtherWallet) or 2) just a smart contract (which, again, is just some code) that sends and receives ether; it's not a native concept. Wallets can come in many flavors like multisignature, paper, etc.
Solidity is the first-party language for describing smart contracts. It's the most popular and therefore has the most examples, references, and tutorials. You should probably learn this one unless you know what you're doing.
Play around with Solidity in Remix, a web-based playground for Solidity contracts.
LLL, which stands for Low-Level Lisp-Like Language, is exactly what it sounds like on the tin. While LLL isn't advertised as the primary language supported by the Ethereum team, it still receives updates and is hosted in the Solidity repository.
Serpent is a high-level Python-esque language that compiles to EVM bytecode. It has been effectively deprecated due to the numerous critical-severity bugs found by Zeppelin. For a similar approach to the language, see Vyper.
Vyper is also python-inspired and developed with a focus on security, simplicity, and no-surprises. It is still in development.
Bamboo is a language designed to represent a smart contract as a finite state machine; your contract is a function of state and transaction and produces a new state. It is still in development.
You can decompile or disassemble Ethereum smart contract bytecode with a handful of community tools. This is a constantly evolving area, and new tools are being developed pretty quickly; the relevant repos will contain more information.
ConsenSys has a beautiful repository of smart contract best practices that you should understand at a deep level.
Before you deploy a smart contract that'll handle real cash money, you should probably test it thoroughly and have it pentested. If you're handling RealMoney™, you should have your code professionally audited.
Whisper is a messaging system built into Ethereum. It allows dApps to publish small amounts of information for the purpose of communicating with each other in non-real-time.
Although it hasn’t been updated in a while, .
You’ve probably heard about the “” and how big of a deal it was. The exploit occurred due to a now highly-recognized re-entrancy bug; a malicious contract could cause a non-infinite, recursive call to the contract, causing it to update internal state incorrectly. In the case of the DAO contract, this meant lots of money being sent to someone that should have only gotten small amounts of money. The community voted as a super majority to hard-fork the network and restore funds to the DAO contributors. This resulted in the birth of , where no funds were returned; the code that was written is the law of the land, and cannot be reverted.
Aragon is also tackling the challenge of designing a company that operates according to smart contract logic, with a focus on creating a company, rather than an organization, that can accept investment, handle accounting, pay employees, distribute equity, and generally do business as we know it today. They're also building a beautiful dApp client to make using their protocol as easy as possible.
Check out the project's whitepaper for a better understanding of how it operates.
Swarm is a decentralized storage network being developed within the Ethereum ecosystem as a first-party project. Read a comparison of Swarm and IPFS, but the gist is that they're basically the same except they have slightly different philosophies and use different protocols under the hood.
IPFS (InterPlanetary File System) is a protocol (among other things) for distributing files. Think of it as a filesystem using the ideas behind bittorrent and git where data is content-addressable and immutable. IPFS stores information using a data model called IPLD, which you can learn about via the links below.
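The core idea of content addressing can be shown in a few lines: the "name" of a piece of data is derived from its hash, so identical bytes always resolve to the same address and any change produces a new one. Real IPFS uses multihashes and CIDs; this Python sketch only captures the principle:

```python
import hashlib

# Content addressing in the spirit of IPFS: data is named by its hash.
def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

store = {}

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data          # the same bytes always land at the same address
    return addr

addr = put(b"hello world")
assert store[addr] == b"hello world"
print(addr)
```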
is Protocol Labs’ effort to create a distributed market for storage on IPFS; aka, an incentive layer for providing storage to the network. The FileCoin consensus protocol (mostly) does away with wasteful Proof of Work and uses Proof of Replication and Proof of SpaceTime (yes, really) to ensure that a piece of data is replicated a certain number of times and is stored for a specific amount of time.
You should read the whitepapers and specs behind these projects.
While FileCoin isn’t yet released, you can use the existing IPFS storage network to host the html/css/js of your dApp client as well as use it as a database using something like .
Augur is a decentralized prediction market (well, two markets) that lets users wager on real-world event outcomes. On one side, you have the prediction market where users trade tokens to indicate belief in a specific outcome; once the outcome is realized, the winning tokens have full value. To facilitate this, you have the decentralized oracle protocol, which creates a market for providing knowledge of real-world events.
Gnosis is also a decentralized prediction market with a lot of the same ideas and concepts as Augur; comparisons between the two are worth reading.
Golem is a distributed market for computing power in the same way that IPFS + FileCoin creates a distributed market for storage.
Skip the marketing talk and read the whitepaper for a better understanding.
The 0x project is creating a protocol for trading tokens and a dApp that uses that protocol. Developers can build an exchange (technically a "relayer") on top of their distributed application (aka smart contract collection), and users don't have to worry about trusting your app to settle trades; settlement is handled on the blockchain. The 0x protocol is designed to use off-chain third parties (those "relayers") to broadcast trades and manage order books (so orders can be created/updated/deleted without sending a slow & costly transaction to Ethereum) but use Ethereum for settlement.
They’ve started by implementing (previously 0x OTC), a dApp that uses their protocol to transfer tokens directly between users. You can . They launched the contracts to the mainnet and are working with the community to build out Relayers.
Skip the buzzwords and read the whitepaper.
ConsenSys is working on a similar protocol, but one focused on communicating intents (rather than signed promises to trade) over "indexers" and then having the order be brokered p2p.
Bancor is a protocol (and a set of smart contracts implementing that protocol) that lets you create a token that 1) prices itself based on orders and 2) provides instant liquidity by holding another token (like Ether or any ERC20) in reserve as collateral.
The Maker project is a DAO that is managing the Dai stablecoin. Dai was recently released and is currently stable, having kept its value during two notably turbulent periods. There are various other stablecoins being attempted, namely Tether, Fragments, and Basecoin.
Oraclize attempts to solve this issue by 1) delivering the data to your smart contract from an external source and 2) providing a proof that the information is from that source and unchanged. So if you trust random.org, you can use Oraclize to provide your smart contract with a random number.
BTC Relay functions as an oracle for transactions on the Bitcoin blockchain, meaning you can write smart contracts on Ethereum that respond to events on the Bitcoin blockchain. For example, you can let people pay for your service in BTC and a smart contract can verify that the payment has gone through and then perform your service.
They manage OpenZeppelin, a set of vetted smart contract best practices that you can inherit from and use in your own dApps. Check out their repository for a great learning resource. Honestly you should probably just read every contract in there.
They’re taking that code reusability concept a step forward and creating . Ignore the term “OS”; it’s not an operating system in a classic sense. zeppelin_os is a collection of features, tools, and services that aim to provide a solid developer experience while maximizing smart contract security.
One part of the zeppelin_os is the "zeppelin_os Kernel", which is not a kernel but is actually a set of well-known smart contracts that act as shared on-chain libraries. They're deployed once and can be independently upgraded in case of security patches. Because you're including less code in the contract itself, deployment costs less gas and developers cut down on code repetition.
* sidenote: I’m really not a fan of people adopting these already well-defined terms (like OS and Kernel) and using them to describe things that aren’t what we already understand to be an OS and a Kernel. It just adds more confusion to an already ridiculously definition-saturated space. Call your project what it is and don’t fall into the that requires blog posts like this to accurately describe what you’re working on.
ENS (the Ethereum Name Service) is a decentralized registry of human-readable names to addresses. Plus one for a descriptive project name. Various projects integrate with it, allowing you to pay .eth addresses or otherwise use it as a convenient lookup tool. You can also create DNS records that resolve to your .eth address.
The Basic Attention Token (BAT) is an attempt to decentralize digital advertising by monitoring "user attention" and distributing value between publishers, advertisers, and users, cutting out the middlemen.
uPort and others are tackling the problem of a decentralized identity system.
district0x is a higher level approach to decentralized markets and communities. At its core, it's a bunch of smart contracts governing how people post listings, search and filter listings, manage reputation within the community, manage payments, etc. It can be used to build many kinds of marketplaces.
ConsenSys (like, the word consensus, but as a company name; it's pretty clever but I was pronouncing it as "con-SEn-SIS" for the longest time, so don't make that mistake) is a "venture production studio". They're an (honestly surprisingly huge) umbrella group that's sponsoring development of a bunch of projects and infrastructure across the ecosystem. Of note, they sponsor truffle, Infura, MetaMask, Gnosis, and uPort.
Zeppelin was mentioned above regarding OpenZeppelin and zeppelin_os. They also perform smart contract audits and do consulting work.
Protocol Labs is an impressive group of people working on IPFS, FileCoin, libp2p and IPLD, among other projects.
There's also a great overview of the landscape around Tokens, ICOs, and VCs worth reading.
| | At launch | After 1 year | After 5 years |
|---|---|---|---|
| Currency units | 1.198X | 1.458X | 2.498X |
| Purchasers | 83.5% | 68.6% | 40.0% |
| Reserve spent pre-sale | 8.26% | 6.79% | 3.96% |
| Reserve used post-sale | 8.26% | 6.79% | 3.96% |
| Miners | 0% | 17.8% | 52.0% |
Minimum amount needed to stake: This determines the barrier to entry.
A minimum of 32 ETH is required, and people can only stake in 32 ETH multiples.
Delegation: Can stakers outsource the work of running the physical validator node, or do they have to do it themselves? If delegation is impossible, hardware and bandwidth requirements can prevent some people from staking.
There is no in-protocol way to delegate stake to other validators.
Lockup: How long does it take to withdraw staked funds? Longer lockups tend to increase the security of the protocol, but also make it less attractive for stakers due to lower flexibility and higher opportunity costs.
Returns: How much do stakers earn over time? The higher the returns, the more people will stake, leading to higher security.
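As a purely illustrative sketch of how returns and participation interact, suppose a fixed annual issuance were split pro rata among all staked ETH; the real Beacon Chain follows its own issuance curve, and the figure used below is invented:

```python
# Purely illustrative: with a fixed annual issuance split pro rata among all
# staked ETH, per-staker returns fall as participation rises. The issuance
# figure is invented and does not reflect actual protocol parameters.
ANNUAL_ISSUANCE_ETH = 500_000  # assumed for illustration only

def staking_apr(total_eth_staked: float) -> float:
    return ANNUAL_ISSUANCE_ETH / total_eth_staked

for staked in (1_000_000, 5_000_000, 10_000_000):
    print(f"{staked:>10,} ETH staked -> {staking_apr(staked):.1%} APR")
```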
https://insights.deribit.com/market-research/making-sense-of-rollups-part-2-dispute-resolution-on-arbitrum-and-optimism/
Deribit Insights, Jul 2021
I want to note my implicit bias at the outset. I recently participated in Offchain Labs’ latest fundraising round, as did Mechanism Capital. While feigning objectivity is futile, my hope is that this piece helps the reader to grasp some of the crucial differences between the two projects, biased though I may be.
Something similar is true within each category of rollups. While Arbitrum and Optimism, the leading Optimistic Rollups, share much in common, it’s not just tribal loyalties that separate the two. In particular, the differences in their respective approaches to dispute resolution produce some important performance tradeoffs. These are tradeoffs that merit discussion, given that both platforms aim to offer full scaling functionality for Ethereum over the coming months.
First, some brief historical background about each project is in order. As it happens, both have somewhat distinctive origin stories.
Recall that Optimistic Rollups take an “innocent until proven guilty” approach to transaction validity. Optimistic Rollups process transactions and feed the result back to Ethereum for final inclusion in the base chain. A dispute period ensures that anyone monitoring the state of the rollup can submit a challenge if the rollup sequencer has processed transactions invalidly. This challenge immediately triggers a dispute resolution process. The difference between Arbitrum and Optimism is how the dispute resolution process works—including how much it costs and how long it takes.
The simplest way to describe the difference is that Optimism’s dispute resolution relies more heavily on the Ethereum Virtual Machine (EVM) than Arbitrum’s. When someone submits a challenge on Optimism, the entire transaction in question is run through the EVM. By contrast, Arbitrum uses an offchain dispute resolution process to whittle down the dispute to a single step within a transaction. The protocol then sends this single step assertion, as opposed to the entire transaction, to the EVM for final verification. Conceptually, Optimism’s dispute resolution process is thus considerably simpler than Arbitrum’s.
For its part, Arbitrum uses a recursive bisection algorithm for the offchain component of its dispute resolution process. This sounds complex, but in reality, the algorithm simply forces the “asserter” (the party that processed the transaction) and the “challenger” (the party submitting a challenge) to go back and forth to narrow down the point of dispute, in the manner illustrated by the diagram below. Interestingly enough, this back-and-forth resolution process via recursive bisection was part of the initial Arbitrum concept from 2015.
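A simplified sketch of the bisection idea follows, with a trivial stand-in "VM" where each step adds one to the state. This shows the general shape of the back-and-forth, not Arbitrum's actual protocol messages:

```python
# Dispute-by-bisection, simplified. The asserter publishes claimed states
# after each step; the challenger disagrees with the final state. They halve
# the disputed range until a single step remains, and only that one step is
# re-executed by the on-chain referee (the EVM).

def execute_steps(state, n, step):
    for _ in range(n):
        state = step(state)
    return state

def resolve_dispute(asserted_trace, step):
    lo, hi = 0, len(asserted_trace) - 1           # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Off-chain: the challenger recomputes from the last agreed state and
        # keeps whichever half contains the first disagreement.
        if execute_steps(asserted_trace[lo], mid - lo, step) == asserted_trace[mid]:
            lo = mid
        else:
            hi = mid
    # On-chain: the referee re-executes just one step.
    return step(asserted_trace[lo]) == asserted_trace[hi]

step = lambda s: s + 1                             # trivial stand-in "VM"
honest = list(range(17))                           # correct states after steps 0..16
fraudulent = honest[:9] + [s + 100 for s in honest[9:]]   # cheat from step 9 onward
print(resolve_dispute(fraudulent, step))           # False -> assertion is invalid
```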
Optimism’s approach to dispute resolution—i.e. running entire transactions through the EVM—is not just conceptually simpler: it’s quicker too. There aren’t “multiple rounds” of back and forth, as there are in Arbitrum’s process. In fact, for this reason, Optimism’s rollups are often called “single round” whereas Arbitrum’s are “multi round.” Practically speaking, this means that in the case of a disputed transaction, final confirmation on Ethereum is delayed in Arbitrum’s case longer than it is in Optimism’s case. As we explored in Part One of this series, the speed of dispute resolution is important because it determines how long it takes for users to withdraw their tokens from the rollup back to Ethereum.
It would seem that the fundamental tradeoff between the two dispute resolution designs is simply one of speed versus onchain cost. But in reality, this framing is somewhat naive, since disputes are rarely expected to arise for two reasons:
Transaction processors on both Arbitrum and Optimism are economically disincentivized from processing transactions fraudulently. They are forced to put up stakes/bonds beforehand, which are then slashed in the case of a fraudulent transaction.
Parties monitoring the state of the rollup are disincentivized from submitting errant fraud proofs—in Optimism, because the challenger has to pay the onchain gas fee of the fraud proof, and in Arbitrum, because the challenger has to put up a bond that it forfeits in the case of a lost dispute.
So why exactly does the structure of the dispute resolution process matter if disputes are expected to be few and far between?
Even though disputes will be infrequent, rollups must be designed such that a dispute could arise at any time. Consequently, the design of the “disputed” case influences the structure of the prevailing “happy” (i.e. non-disputed) case.
Because Optimism must be able to run each transaction through the EVM in the case of a dispute, it cannot process transactions that surpass the Ethereum gas limit, as these could not be properly verified onchain. Arbitrum, by contrast, can execute transactions that are arbitrarily large, even if they exceed Ethereum’s gas limit, since transactions are never run wholesale through the EVM but are first broken into tiny “step assertions.”
It’s unclear how much of a practical constraint the gas limit on Optimism will pose for applications. But another, perhaps more important implication of the difference in dispute resolution design is that Arbitrum can save gas by checkpointing onchain (updating the “state root”) less frequently. More specifically, Arbitrum can assign a large amount of offchain computation to one update, since that state root update could theoretically include (miniscule) single-step fraud proofs of all of the transactions contained therein. Optimism, on the other hand, must checkpoint onchain after every transaction, significantly raising its onchain footprint.
In sum, Arbitrum should be more gas efficient than Optimism—and therefore cheaper for users—not only in the rare case of a dispute, but also in the predominant “happy” case.
One final point on these differing dispute resolution processes is worth discussing: namely, how resistant each design is to potential attacks. Above, we touched upon the economic incentives that discourage spamming attacks. More specifically, validators on both Optimism and Arbitrum are disincentivized from submitting unwarranted challenges.
But what about the case of a malicious attacker who does not mind bearing the economic cost of spamming the rollup? In other words, what happens if a person or entity is so committed to slowing down an Optimistic Rollup’s progress that they’re willing to do so even if it means repeatedly paying for bogus challenges?
As mentioned above, Optimism’s dispute resolution process is much simpler and quicker than Arbitrum’s, since it simply feeds disputed transactions through the EVM. This speed is to Optimism’s advantage here, since disputes can be resolved quickly and will not prevent future progress of the rollup chain.
In addition to the design of the dispute resolution process, there are other significant differences between Arbitrum and Optimism, especially
their codebase architectures, and
their approaches to Miner Extractable Value (MEV)
Ultimately, stepping back from the protocol-level nuances—important though they are—what will differentiate these two heavyweights is the “soft” stuff as well: bootstrapping strategies, incentive design, and community ethos, to name just a few. Indeed, if they are to succeed in the long run, Optimistic Rollups will have to become worlds unto their own, not simply appendages of Ethereum. Scaling is thus less of an arms race than it is a multi-front war. It might have one winner; it might have multiple. It might rage for years; it might have an end in sight sooner rather than later. It will certainly be consequential for crypto’s future.
Nothing in this piece constitutes investment advice.
[1] If the transaction uses contracts that have never been part of a fraud proof before, those will all need to be re-deployed on Ethereum, which is very gas-intensive. Thanks to transmissions11 for clarifying this point.
[2] “DA” stands for “disputable assertion,” which is the technical term for a processed transaction that is subject to potential challenge.
https://www.theblock.co/post/114225/layer-1-platforms-a-framework-for-comparison
The Block, Aug 2021
From explaining consensus algorithms in plain English to opining on the blockchain energy consumption debate to analyzing on-chain metrics, we cover a lot of ground in this report. Data, and lots of it, guides the way.
Six years after the inception of Ethereum, we have seen the revolutionary impact of general purpose blockchains on full display: first in the financial sphere, with the rise of decentralized finance (DeFi), and second in the cultural sphere, with the explosion of activity around non-fungible tokens (NFTs).
But anyone who has been closely following knows that we have only scratched the surface of discovering what experiences general purpose blockchains are capable of delivering.
And while the term “Ethereum killer” has fallen out of favor, the pace of development in the Layer 1 platform arena has only accelerated over the past years. Dozens of platforms have emerged.
Some are seeking to offer an easily adoptable alternative to Ethereum and challenge its status as the de-facto choice for launching decentralized applications. Others are focused on giving developers the highest level of flexibility in building their own blockchains and creating cross-chain communication protocols.
With each passing year, a “one blockchain to rule them all” outcome fades further and further into the rearview. But analyzing these different platforms remains challenging.
They are surrounded by technical jargon. Digestible comparisons amongst them are few and far between. Yet they already compose a significant portion of the “investable crypto landscape” and are poised to support ecosystems orders of magnitude larger than what we have seen to date. Analyzing them will be an important task for years to come.
Analyzing smart contract platforms outside of the context of Ethereum is difficult. Analyzing Ethereum outside of the context of Bitcoin is equally difficult. So, this report starts with a brief introduction to Bitcoin before diving into the current state of Ethereum.
https://medium.com/dragonfly-research/the-amm-test-a-no-bs-look-at-l1-performance-4c8c2129d581
Haseeb Qureshi, Mar 2022
Multichain is now a reality. Ethereum’s lack of scalability has caused a mass migration to a new generation of L1s. Most of these L1s use the EVM (the Ethereum Virtual Machine), which makes them compatible with Ethereum wallets and developer tools. But Solana has completely rebuilt its stack from the ground up. Solana claims to be the fastest blockchain in existence. So it begs the question: Just how much faster is Solana than the EVM chains?
Most benchmarks released by L1s themselves measure TPS of simple value transfers — i.e., transferring coins from one account to another. Simple transfers are extremely cheap and thus produce big numbers, and everyone loves big numbers. But no blockchain is actually bottlenecked on transfers like this, and this kind of activity doesn’t reflect real-world usage patterns. Furthermore, many of these numbers are generated on devnets or testnets rather than on mainnet. We don’t care about what someone’s software can do in the abstract: we care about what is possible on current mainnets.
Okay, fine. So how should we actually measure L1 performance?
That’s a tricky question, because performance has multiple dimensions.
But let’s say we want to ignore decentralization and purely focus on performance. Well, benchmarking blockchain performance is notoriously hard because most new chains have very poor data visibility.
7 years in, Ethereum performance is highly studied and very well-understood. But as you start exploring newer chains, most of them have much less tooling, poor observability, and are constantly evolving. By the time you read this, these benchmarks will probably be out of date.
Furthermore, benchmarking is always arbitrary and riddled with pitfalls. The best you can do is pick a benchmark that measures something valuable, and then qualify your results as carefully as you can. That’s what we’ll be attempting to do here.
But what do we even mean by performance? There are two aspects to performance: throughput and latency.
You can visualize blockchain performance like water flowing through a pipe. The transactions are the water — you want lots of transactions flowing through the pipe at once. But the length of the pipe is what determines its latency — if it takes a long time for a transaction to get confirmed, even if lots of transactions can get confirmed at once, that’s not ideal.
Latency can be subdivided into block time (how long between blocks) and time to finality (how long until a block definitely won’t be reverted). Block time and time to finality are easy to measure.
But to actually measure throughput you need a standard unit of measure. Throughput of what?
We chose this benchmark because 1) it’s simple and easy to measure, 2) every blockchain has a Uniswap V2-style AMM live in production, 3) it’s typical of common smart contract usage patterns.
For most chains that have a gas model, this back-of-the-envelope exercise should be straightforward. First, find the block gas limit and the block time to derive the gas/sec throughput of the chain; next, find a Uniswap v2-style AMM and pick a SwapETHforTokens-equivalent transaction; lastly, divide the first number by the second to arrive at how many tx/sec the chain would achieve if its blocks were stuffed full of identical AMM trades.
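As a minimal sketch, the arithmetic looks like this (the chain parameters below are placeholders, not measured values for any particular network):

```python
def amm_swaps_per_second(block_gas_limit, block_time_s, gas_per_swap):
    """Max Uniswap-v2-style swaps/sec if blocks were stuffed with swaps."""
    gas_per_second = block_gas_limit / block_time_s
    return gas_per_second / gas_per_swap

# Hypothetical chain: 30M gas per block, 13s block time, ~100k gas per swap.
print(amm_swaps_per_second(30_000_000, 13, 100_000))  # ~23 swaps/sec
```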
Note: this is not a perfect benchmark! It’s idiosyncratic, it doesn’t account for parallelizable transactions (since Uniswap trades on the same pool must be linearized), and it’s not representative of every usage pattern. But smart contract usage is always power-law distributed, and the most used Dapps tend to be AMMs, so within a suite of benchmarks, we believe this is illustrative in getting a holistic view of performance.
So without further ado, let’s go down the list.
We will be using a similar calculation for other EVM chains on the list.
(Note: we are ignoring rollups with this methodology since all smart contract L1s are capable of adding rollups.)
Time to finality: There are two notions of finality on Polygon
Time to finality: ~1.75s after the block is produced
Avalanche is relatively hard to compare due to its block production mechanism being so different from Ethereum and the PoS chains. For Avalanche, there’s a large spread between what it can perform at maximum throughput and what it performs at average throughput. (Chains like Ethereum that have implemented EIP-1559 are bounded by 2x of their average throughput.)
This concludes the benchmarking of the EVM blockchains — the blockchains whose virtual machine is modeled on Ethereum’s. Since all EVM chains use the same gas model, we can look at gas/sec as a benchmark for throughput. The solid bars denote target throughput, and hollow bars represent the limit.
Gas/sec for EVM chains
You can imagine that Binance Smart Chain is what happens when you run the EVM at its absolute limit. If you want to get higher performance out of smart contracts, you’ll have to move away from the EVM entirely.
Here’s how we calculated this number. This one’s a doozy.
We first wanted to find a “gas limit” equivalent for Solana. You can’t find any number like that on block explorers. We started by asking some Solana developers we knew, but nobody seemed to know definitively if there even was such a limit. So we rolled up our sleeves and went on a trip to find out for ourselves.
Second, only a limited number of CUs are writable to a single account in a single block. This limit exists to prevent too many transactions from writing to the same account, which would reduce a block's parallelism – though this is exactly what happens during mass congestion, such as during a popular IDO, when all transactions are competing to use a single contract.
This number seems lower than expected! For us to trust this number, we’d want to verify this approach empirically.
We spammed the Orca SOL-ORCA trading pair on the devnet to see how many Orca swaps we could land in a single block, and then extrapolated to the max throughput.
The highest number we managed to hit was 184 swaps in a single block. Assuming a block time of 380 ms, this gives us 484.21 swaps/second on the devnet. (Note that block times are not exact, so there is some jitter in these numbers. If you average across the 3 blocks where we landed the most transactions, it looks more like 381 swaps/second, which seems more reasonable). This seems to confirm that our analytical approach was correct (~10–15% delta), which therefore implies Solana’s mainnet can likely perform about 273 swaps/second on an AMM.
The AMM test: Uniswap v2 style swaps/sec performance
So what’s the upshot of all this?
First, don’t take this as gospel. Do the math yourself.
Second, remember that all these blockchains are moving targets. They’re continually being optimized, and the technology is evolving rapidly, while any benchmark is a moment-in-time snapshot. We’d love to see more independent organizations creating standardized benchmarks, but this is our best attempt.
Third, notice that the spread in performance between these blockchains is not as big as advertised. The performance difference between Ethereum and the very best chain is about 10–25x, not 100x or 1000x. Nobody is getting that great performance out of linearized VM transactions; that will require a lot more work and optimization.
Fifth, users aren’t that sensitive to performance considerations on non-Ethereum L1s right now. They care a lot more about the overall strength of an ecosystem, good UX, and low fees. These blockchains are not currently competing on performance because none of them are actually being used to capacity except during rare spikes, such as during IDOs or market meltdowns.
We expect that all of the major L1s will improve in their performance over time, as the dev teams spend more and more time tuning the performance across typical usage patterns. It should be no surprise that in their early days, each of these blockchains is poorly optimized!
But overall I come away with this impression: Ethereum is the MS-DOS of smart contract operating systems, and the current generation of blockchains takes us into the Windows 95 era.
Next-generation blockchains represent a marked improvement, but there’s much further to go from here to get to mainstream adoption.
https://limechain.tech/blog/optimistic-rollups-vs-zk-rollups/
Dimitar Bogdanov, Jul 2021
Ethereum is arguably the most influential blockchain project ever and the one that has contributed the most to the development and evolution of the DLT space. Of course, we cannot talk about influential blockchain projects without mentioning Bitcoin, which kicked off the whole thing. But between initial coin offerings, DeFi and this year’s NFT boom, Ethereum has basically been the driving force behind all major blockchain and crypto trends of the past few years. Not to mention that the vast majority of crypto tokens today started their life on Ethereum. So, it’s not an exaggeration to say that Ethereum has for years been shaping the blockchain and crypto landscape.
Rollups are one of the most promising categories of Layer 2 solutions. These solutions move transaction computations off-chain, but store transaction data to the Ethereum chain, which means that rollups are secured by Layer 1.
All this is accomplished via smart contracts whose primary function is to bundle, or ‘roll up’, transaction data and move it off chain for processing. This data is handled by network participants typically referred to as sequencers or validators, who then submit batches of highly compressed transaction data back to the main chain. Those batches contain the minimum information needed to verify whether the transactions are valid.
Because rollups move computation off-chain but still submit (highly compressed) data to the Ethereum mainnet, they can produce gains in scalability without creating the data availability issues that sometimes affect other Layer 2 solutions. Some rollups also come with the option of off-chain data availability (where no data is actually posted on Ethereum), which can lead to significant gains in throughput, but at the cost of reduced security.
The method of verification is the key distinction between the two types of rollups – zero knowledge (ZK) rollups and optimistic rollups. ZK rollups generate cryptographic proofs that can be used to prove the validity of transactions. Each batch of transactions has its own ‘validity proof’ which is submitted to the main chain.
In contrast, optimistic rollups assume that all transactions are valid and submit batches without performing any computation whatsoever, which can lead to significant improvements in scalability. However, they include a challenge period during which anyone can dispute the legitimacy of the data contained in a batch. If a fraudulent transaction is detected, the rollup executes a so-called fraud proof and runs the correct transaction computation using the data available on Layer 1. To ensure that they are incentivized to process only legitimate transaction data, sequencers are required to stake ETH. If they perform their duties diligently they receive staking rewards, but if a sequencer submits a fraudulent transaction to the main Ethereum chain, their stake is slashed.
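As a rough sketch of that incentive structure (not any live rollup's actual contract; the bond size and challenge window below are invented):

```python
CHALLENGE_PERIOD_BLOCKS = 45_000          # assumed: roughly one week of blocks

class Sequencer:
    def __init__(self, bond_eth):
        self.bond_eth = bond_eth          # stake that can be slashed

def submit_batch(sequencer, batch):
    # The chain cannot tell at submission time whether the batch is valid;
    # it records it optimistically and opens a challenge window.
    return {"sequencer": sequencer, "batch": batch, "window": CHALLENGE_PERIOD_BLOCKS}

def resolve_challenge(pending, recomputed_correctly):
    """Called only if someone disputes the batch and L1 re-runs the computation."""
    if not recomputed_correctly:          # fraud proven on Layer 1
        slashed, pending["sequencer"].bond_eth = pending["sequencer"].bond_eth, 0
        return f"batch reverted, {slashed} ETH slashed"
    return "challenge failed, batch stands"

pending = submit_batch(Sequencer(bond_eth=10), batch=["tx1", "tx2"])
print(resolve_challenge(pending, recomputed_correctly=False))
```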
One of the biggest strengths of optimistic rollups stems from the fact that they do not perform computation by default, which can lead to significant scalability gains – estimates suggest that optimistic rollups can offer up to 10-100x improvements in scalability. On the downside, the need to have a challenge period means that withdrawal periods are significantly longer than those of ZK rollups.
Another big advantage of optimistic rollups is that they are capable of executing smart contracts, whereas ZK rollups are mostly limited to simple transactions.
So let’s examine each in greater detail.
In a typical rollup fashion, Optimism uses a smart contract to relay transaction data from the main Ethereum chain to a Layer 2 network, where a sequencer can bundle multiple transactions into a batch and then submit that batch back to the main chain via a single transaction. Sequencers perform these duties optimistically under the assumption that all transactions are valid. The system has a one-week period during which that assumption can be challenged. If any discrepancies are found, the rollup generates a fraud proof. In order for such a proof to be generated, the whole Layer 2 transaction is executed on Layer 1. The advantage of this approach is that it enables very fast proof generation.
Optimism tries to stick as close as possible to the Ethereum ecosystem. It uses a modified Geth for its Layer 2 node and it has a Solidity compiler. However, it does not support any EVM languages apart from Solidity.
Currently the protocol does not have a native token and uses ETH for payments.
The Arbitrum project was set to be the main challenger to Optimism, but following the latter’s launch delay, it scored an early lead in the optimistic rollup race. Arbitrum launched on the Ethereum mainnet on May 28.
As mentioned above, Arbitrum is very similar to Optimism, with the main difference between the two projects being the way they generate fraud proofs. Unlike Optimism, which executes the whole Layer 2 transaction, Arbitrum takes a multi-round approach where it executes small chunks of the L2 transaction until it finds a discrepancy. This approach has the benefit of enabling higher transaction capacity. On the downside, generating a fraud proof this way typically takes a week – and can take up to two weeks in some cases – much longer than with the method used by Optimism.
On the compatibility side, Arbitrum supports all EVM languages, including Yul, Vyper and Solidity, among others. However, it uses a custom L2 node. Like Optimism, Arbitrum uses ETH for payments.
Whereas optimistic rollups assume that everyone acts in good faith, ZK rollups seek to ensure that that’s actually the case. The rollup moves bundles of transactions to Layer 2 and generates a validity proof for every bundle. The validity proofs are then submitted to Layer 1 to serve as proxies for their corresponding bundles. This method results in significant data size reduction and in turn lowers the time and gas cost for validating a block. Further optimization is possible with some neat tricks: for example, accounts can be represented as indexes instead of addresses, which greatly reduces transaction size.
One drawback of ZK rollups is that generating a validity proof is a complex and time consuming process. Another drawback is the aforementioned inability to execute smart contracts, although there are some exceptions, as we’ll see below.
On the other hand, ZK rollups do not require a challenge period, as the validity proof has already verified the legitimacy of transaction data. That’s why ZK rollups allow for very fast withdrawal times. So while ZK rollups are typically not good for general purpose applications, they are great for exchanges and other apps that require simple payments.
There are a number of promising projects that are currently populating the ZK rollup corner of the Ethereum ecosystem. Here are some of the most promising ones:
So what exactly is Hermez? Well, it is a ZK rollup that generates cryptographic proofs called SNARKs (succinct non-interactive arguments of knowledge). It is developed by Iden3, the team behind the popular Circom and SnarkJS libraries. According to Iden3, Hermez can scale Ethereum to 2,000 transactions per second.
The Hermez Network relies on coordinators for processing batches to the Hermez rollup and generating validity proofs for those batches. Coordinators are selected via an auction process that sees registered network nodes placing bids to become the next coordinator. The winning bidder gets to process as many batches as they can during a single ‘slot’ that lasts 40 Ethereum blocks or approximately 10 minutes.
Currently, bids are paid in Hermez’s proprietary token, HEZ. However, this is about to change, as following the Polygon deal, HEZ will cease to exist and will be replaced by Polygon’s token Matic. The exact date of the change is not yet determined, but Hermez has announced plans to publish a smart contract that will allow HEZ holders to swap their HEZ for Matic tokens at a rate of 3.5 Matic per HEZ.
Recently, Hermez also launched an atomic transaction feature, which enables cheap token swaps on the network. Also recently, the Hermez team announced that they are working on a zero-knowledge Ethereum Virtual Machine (zkEVM) aimed at achieving full opcode compatibility. This means that Hermez, or to be more precise, Polygon Hermez, will be able to support smart contracts.
zkSync also supports token swaps and NFT (non-fungible token) minting. Earlier this year, the platform launched its zkEVM in alpha, which allows it to execute smart contracts. zkSync supports most Ethereum opcodes.
A big part of the vision for zkSync 2.0 is an off-chain data availability solution dubbed zkPorter. The solution is meant to complement the rollup component of zkSync 2.0, meaning that rollup contracts and accounts will be able to interact with zkPorter accounts and vice versa. Off-chain data availability in zkPorter will be secured by so-called guardians, who stake zkSync tokens and sign blocks to confirm data availability in zkPorter accounts. With their stakes on the line, guardians are motivated to ensure that there are no data availability failures. What’s more, Matter Labs claim that zkSync’s proof of stake is significantly more secure than PoS in alternative scaling solutions like sidechains, because guardians cannot steal funds.
The Loopring moniker stems from one of the protocol’s most interesting features – order rings. An order ring is a circular trading system that contains up to 16 individual orders. So whereas a buy order typically has to be matched by an opposing sell order, and vice versa, orders in an order ring do not need direct matches to be executed. This system can result in better liquidity, price discovery and other benefits.
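A heavily simplified sketch of that idea: each order in the ring sells the token that the next order buys, and the ring can settle when the limit prices around the loop are mutually satisfiable. The tokens, rates, and settlement condition below are illustrative only; the real Loopring matching and fee logic are considerably more involved.

```python
def ring_can_settle(orders):
    """orders: list of (sell_token, buy_token, limit_rate), where limit_rate is
    the maximum amount of sell_token the order will pay per unit of buy_token."""
    # The loop must close: each order's buy token is the next order's sell token.
    for i, (_, buy, _) in enumerate(orders):
        if buy != orders[(i + 1) % len(orders)][0]:
            return False
    product = 1.0
    for _, _, limit_rate in orders:
        product *= limit_rate
    # If the limit prices multiply to >= 1, fill prices can be chosen so that
    # every order in the loop is filled at or better than its limit.
    return product >= 1.0

ring = [("DAI", "WETH", 2100),      # pay up to 2100 DAI per WETH
        ("WETH", "LRC", 0.0003),    # pay up to 0.0003 WETH per LRC
        ("LRC", "DAI", 1.8)]        # pay up to 1.8 LRC per DAI
print(ring_can_settle(ring))        # True: 2100 * 0.0003 * 1.8 ≈ 1.13 >= 1
```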
Another difference between these two ZKP types is that while SNARKs are based on elliptic curve cryptography, STARKs rely on hash functions, which offers certain benefits, quantum resistance being among them.
On the downside, STARKs have a significantly bigger proof size and, because of this, are way more expensive to verify.
Nevertheless, the StarkEX protocol has already been utilized to power some interesting projects, including the DeversiFi DEX and the recently launched NFT minting and trading platform Immutable X.
“While there is little debate that scaling Ethereum is necessary to continue supporting the rapid growth of the network, this issue is sometimes framed as a competition between Ethereum 2.0, Layer 2 services, and “Ethereum killer” Layer 1s,” Andreessen Horowitz said in the announcement of their $25-million Series A investment in Optimism. “One lesson from internet history is that when you give developers a powerful new computing platform, they create applications at such a rapid rate that demand consistently outpaces supply. We believe the same will be true for Ethereum, and therefore that the answer to scaling is “all of the above,” including Ethereum 2.0, bridged Layer 1s, and Layer 2 solutions.”
And when it comes to Layer 2 solutions, rollups are up there with the best of them and might even be a bit better than the alternatives in some respects. It is no coincidence that Ethereum’s creator, Vitalik Buterin, is quite fond of rollups and sees them as a natural fit for PoS and sharding – the two main components of the Ethereum 2.0 project.
https://gourmetcrypto.substack.com/p/layer-2-for-beginners
Ali Atiia, Mar 202
Misinformation campaigns in the crypto space increase dramatically during bull cycles. Many sidechain projects misleadingly present themselves as layer-2 scaling solutions. This article explains what makes a chain an L2, for absolute beginners.
Back to the new shiny chain that may or may not be an L2, let's call this chain Macau. You want to "move" your 100 Dai from Ethereum to Macau because you want to buy something over there, or trade at cheaper gas prices, or maybe you just want to do something to feel something.
So how do you move your 100 Dai from Ethereum to Macau? Obviously you send an email to Vitalik's Masternode HQ and ask him to move it…. No, you actually transfer ownership of your 100 Dai to another contract on the Ethereum blockchain, which is typically referred to as the "deposit" contract (think of it like the deposit window at a casino).
Step 1/4: you send 100 Dai to Macau's deposit contract on the Ethereum blockchain.
Macau’s miners/validators detect your deposit because they constantly watch the Ethereum blockchain, particularly that deposit contract, and one of them says to the others: "hey guys! guys! we have a new victim, quick! look busy, ahem". She then says to you: "welcome chad, glad you could join us, here’s 100 synthetic Dai for you to play with on our chain, freshly minted in this new Macau block that I just mined/validated".
Step 2/4: you are issued 100 IOU notes on Macau (think casino chips) representing a claim on the real 100 Dai that is locked up in the deposit contract on Ethereum. We call these notes synthetic Dai, or sDai for short.
You being "on Macau" practically means you are on some website, which has some Javascript that is communicating with the Macau p2p network. Exactly like you are "on Ethereum" when you are e.g. on Aave’s website to borrow or on Uniswap’s website to trade: the Javascript on these frontends packages a borrow/swap transaction for you, feeds it to your Metamask, you sob for 5 minutes after seeing the gas fee, then proceed to click "Confirm" to sign and broadcast your transaction. You're familiar with this workflow.
It's the same thing with Macau. In fact, it may even be the same exact workflow if Macau is a fork of Ethereum, like Binance's BSC or Avalanche's C-Chain, because you can use Metamask with both, without the need for a specialized wallet to sign packaged transactions. This is because the address format and the cryptographic signature scheme are the same in Ethereum/BSC etc.
Step 3/4: do stuff with the 100 sDai on Macau, e.g. trade, farm, gamble, invest etc.
Say you played poker and turned your 100 sDai to 200 sDai. The +100 sDai you gained came from other people who also came to Macau to gamble (and so they too had previously locked real Dai on Macau's deposit contract on Ethereum).
We came to the critical moment (FOCUS 👏):
You want to collect your gains and go back home, i.e. Ethereum, because you are a chad who values high security and deep liquidity, or maybe you are a masochist who has a thing for $1k+ gas fees. If you can unlock your 200 Dai, and ONLY that, from Macau's deposit contract (again, it's on the Ethereum blockchain) financially independently anytime anywhere 'round the wurrrrld and no one can stop you .. then Macau is an L2 🎉🍾 🔒.
If Macau's validators can in theory prevent you from unlocking and withdrawing your 200 Dai or steal it outright (by withdrawing it to themselves), then Macau is NOT an L2, but rather is a sidechain ☠️.
If YOU can in theory unlock and withdraw more than you are entitled to, say 300 Dai, then Macau is also NOT an L2.
Step 4/4: exit Macau and unlock whatever funds you are entitled to on the deposit contract on Ethereum….if you can!
When it comes to scaling solutions, it always boils down to: “who controls the exits?”
So how could the deposit contract on Ethereum be made smart enough to prevent you, other Macau users, and Macau's miners/validators/operators from cheating?
As you can imagine this is not a trivial thing to do, because it requires that the contract is smart enough to know who on Macau owes what to whom and when: while it's true you won 100 sDai in a poker game on Macau an hour ago, and as such are entitled to withdraw an additional 100 real Dai on Ethereum, you may have since lost it in a subsequent game! Therefore, the contract must be able to determine the truth, the whole truth, and nothing but the truth about the latest state of Macau.
Early approaches like state channels and plasma tried to do exactly this: coding up fraud signalling and dispute resolution logic into the deposit/withdraw contract on L1. However, they both put burdensome responsibilities on the users, such as:
A user must be “live” at all times watching the L1 contracts on Ethereum in order to challenge/inhibit/punish malicious withdrawal attempts that threaten her assets.
In rollups, all parties involved are kept honest by the power of math (ZKRU) or cryptoeconomic guarantees (ORU), and users can always use the data on L1 to safely exit their funds should the rollup operator disappear or start messing around: spamming, censoring, or (in the case of optimistic rollups) committing fraud. This is all enshrined in the rollup contract on L1 Ethereum, and so the only thing users need to trust is faithful execution of these contracts by the L1 network (the same trust assumptions as around any other L1 contract, like MakerDao MCD or Aave etc).
Done and done 🤝.
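A toy way to see the "who controls the exits" distinction in code (the classes and balances below are hypothetical, not real bridge contracts): in the sidechain case the operator can block or redirect a withdrawal, while in the rollup case the L1 contract only checks the user's claim against state data already posted to L1.

```python
class SidechainExit:
    def __init__(self, operator_approves):
        self.operator_approves = operator_approves   # the operator's goodwill

    def withdraw(self, user, amount):
        if not self.operator_approves(user, amount):
            raise PermissionError("operator can censor or steal the exit")
        return f"{user} withdraws {amount} Dai"

class RollupExit:
    def __init__(self, balances_posted_to_l1):
        self.balances = balances_posted_to_l1        # data already on L1

    def withdraw(self, user, amount):
        # No operator in the loop: contract logic plus L1 data decide the exit.
        if self.balances.get(user, 0) < amount:
            raise ValueError("claim exceeds what the L1 data entitles you to")
        self.balances[user] -= amount
        return f"{user} withdraws {amount} Dai, no permission needed"

print(RollupExit({"chad": 200}).withdraw("chad", 200))
```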
Note 1: Other layer-1 chains like NEAR, Polkadot, or CosmosHub could indeed be rollups relative to Ethereum, they just need to make a bridge that adheres to the rollup design pattern, and post the necessary data to Ethereum, just like any other rollup would.
Note 2: In the case of ZK Rollups, fraud cannot even be committed, thanks to validity proofs attesting to the correctness of the rollup state update, which get validated on L1 at every update. However, data must still be posted on-chain so that if the rollup operator disappears, users can use that data to submit a withdrawal request themselves directly to the deposit contract.
“What about muh Lightning Network?” Lightning is L2 only in theory. In practice, normal users will most certainly trust a third party to keep watch (see discussion above on state channels), which means Lightning is not an L2 in practice. Most people certainly do not have the time and/or skill to spin up a lambda function on AWS (with fail-safe backup) to keep watching their money on L1.
Rollups are the only layer-2 scaling solution with the assurance that (a) you can’t get robbed while you’re asleep, and (b) no one can prevent you exiting what you are entitled to, because the physical funds and exit mechanisms are under the control of L1 Ethereum chain.
Back to sidechains:
You could literally spin up an Ethereum sidechain in an afternoon: you just need a basic smart-wallet-like contract on Ethereum where people deposit their funds, a fork of Geth (just pick a new chain ID for your sidechain and rebuild)....and voila, you are basically done ... um, well, not quite .. you still need to hire shill armies, graphic designers to create a glitzy website, etc...but plenty of VCs are happy to take care of all that for you, they have massive bot farms ready for deployment.
But it boils down to a simple question: who controls the exits?
With rollups, exits are under the control and protection of mighty EVM of L1 Ethereum.
Take away messages:
Currently any claim of >2k tps of a chain selling itself as a scalability solution probably means it’s a sidechain and the user is making otherwise undisclosed trust assumptions.
Rollups may provide 10k+ tps after Eth2 data shards go live, they’re data-hungry.
Rollups are the only layer-2 scalability solutions without additional trust and/or liveness assumption on the user.
Sidechains refuse to die because they can be spun up in an hour, typically in order to raise money and dump a token on retail.
When aping into another chain, examine the exits and the trust assumptions you must make to (a) remain safe while on it (b) exit your funds safely. There are typically mountains of marketing fluff and nonsensical jargon designed to obscure these security tradeoffs.
Other L1 chains can be rollups relative to Ethereum, they just need to adhere to the rollup design pattern and post the necessary data onto Ethereum.
Layer-2 without liveness assumption can’t be built on Bitcoin because it lacks the necessary programming primitives and state plumbing to enshrine the necessary protections on L1.
Thanks for reading and have a nice day! 👋
https://vitalik.ca/general/2021/01/05/rollup.html
Vitalik Buterin, Jan 21
Second, you can change the way that you use the blockchain. Instead of putting all activity on the blockchain directly, users perform the bulk of their activity off-chain in a "layer 2" protocol. There is a smart contract on-chain, which only has two tasks: processing deposits and withdrawals, and verifying proofs that everything happening off-chain is following the rules. There are multiple ways to do these proofs, but they all share the property that verifying the proofs on-chain is much cheaper than doing the original computation off-chain.
Imagine that Alice is offering an internet connection to Bob, in exchange for Bob paying her $0.001 per megabyte. Instead of making a transaction for each payment, Alice and Bob use the following layer-2 scheme.
First, Bob puts $1 (or some ETH or stablecoin equivalent) into a smart contract. To make his first payment to Alice, Bob signs a "ticket" (an off-chain message), that simply says "$0.001", and sends it to Alice. To make his second payment, Bob would sign another ticket that says "$0.002", and send it to Alice. And so on and so forth for as many payments as needed. When Alice and Bob are done transacting, Alice can publish the highest-value ticket to chain, wrapped in another signature from herself. The smart contract verifies Alice and Bob's signatures, pays Alice the amount on Bob's ticket and returns the rest to Bob. If Alice is unwilling to close the channel (due to malice or technical failure), Bob can initiate a withdrawal period (eg. 7 days); if Alice does not provide a ticket within that time, then Bob gets all his money back.
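Here is a minimal sketch of that ticket flow, with an HMAC standing in for real digital signatures and hard-coded keys for both parties; it only illustrates the bookkeeping, not an actual channel contract.

```python
import hashlib
import hmac

def sign(secret: bytes, message: str) -> bytes:
    return hmac.new(secret, message.encode(), hashlib.sha256).digest()

def verify(secret: bytes, message: str, sig: bytes) -> bool:
    return hmac.compare_digest(sign(secret, message), sig)

BOB_KEY, ALICE_KEY = b"bob-secret", b"alice-secret"   # stand-ins for keypairs
DEPOSIT = 1.00                     # Bob locks $1 in the channel contract

# Off-chain: Bob signs a ticket for each new cumulative amount owed to Alice.
tickets = [(f"${i / 1000:.3f}", sign(BOB_KEY, f"${i / 1000:.3f}"))
           for i in range(1, 301)]                    # 300 micro-payments

# On-chain settlement: Alice wraps the highest-value ticket in her own signature.
best_msg, bob_sig = tickets[-1]
alice_sig = sign(ALICE_KEY, best_msg)
assert verify(BOB_KEY, best_msg, bob_sig) and verify(ALICE_KEY, best_msg, alice_sig)

amount_to_alice = float(best_msg.strip("$"))
print(amount_to_alice, DEPOSIT - amount_to_alice)     # 0.3 paid out, 0.7 refunded
```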
This technique is powerful: it can be adjusted to handle bidirectional payments, smart contract relationships (eg. Alice and Bob making a financial contract inside the channel), and composition (if Alice and Bob have an open channel and so do Bob and Charlie, Alice can trustlessly interact with Charlie). But there are limits to what channels can do. Channels cannot be used to send funds off-chain to people who are not yet participants. Channels cannot be used to represent objects that do not have a clear logical owner (eg. Uniswap). And channels, especially if used to do things more complex than simple recurring payments, require a large amount of capital to be locked up.
To deposit an asset, a user sends it to the smart contract managing the Plasma chain. The Plasma chain assigns that asset a new unique ID (eg. 537). Each Plasma chain has an operator (this could be a centralized actor, or a multisig, or something more complex like PoS or DPoS). Every interval (this could be 15 seconds, or an hour, or anything in between), the operator generates a "batch" consisting of all of the Plasma transactions they have received off-chain. They generate a Merkle tree, where at each index X in the tree, there is a transaction transferring asset ID X if such a transaction exists, and otherwise that leaf is zero. They publish the Merkle root of this tree to chain. They also send the Merkle branch of each index X to the current owner of that asset. To withdraw an asset, a user publishes the Merkle branch of the most recent transaction sending the asset to them. The contract starts a challenge period, during which anyone can try to use other Merkle branches to invalidate the exit by proving that either (i) the sender did not own the asset at the time they sent it, or (ii) they sent the asset to someone else at some later point in time. If no one proves that the exit is fraudulent for (eg.) 7 days, the user can withdraw the asset.
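A small sketch of that per-interval commitment (hashes via SHA-256; the tree size and transaction bytes are made up): leaf i holds the transaction moving asset ID i or zero, only the Merkle root goes on-chain, and the new owner keeps the branch they later need in order to exit.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Return all levels of a Merkle tree (len(leaves) must be a power of 2)."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def branch(levels, index):
    """Sibling hashes needed to prove leaf `index` against the root."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])      # sibling at this level
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# 8 asset slots; only asset ID 5 moved during this interval.
leaves = [b"\x00"] * 8
leaves[5] = b"transfer asset 5 to Bob"
levels = build_tree(leaves)
root = levels[-1][0]                        # the only thing posted on-chain
proof = branch(levels, 5)                   # sent to Bob, the new owner
assert verify(leaves[5], 5, proof, root)    # Bob can later use this to exit
```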
Plasma provides stronger properties than channels: you can send assets to participants who were never part of the system, and the capital requirements are much lower. But it comes at a cost: channels require no data whatsoever to go on chain during "normal operation", but Plasma requires each chain to publish one hash at regular intervals. Additionally, Plasma transfers are not instant: you have to wait for the interval to end and for the block to be published.
Additionally, Plasma and channels share a key weakness in common: the game theory behind why they are secure relies on the idea that each object controlled by both systems has some logical "owner". If that owner does not care about their asset, then an "invalid" outcome involving that asset may result. This is okay for many applications, but it is a deal breaker for many others (eg. Uniswap). Even systems where the state of an object can be changed without the owner's consent (eg. account-based systems, where you can increase someone's balance without their consent) do not work well with Plasma. This all means that a large amount of "application-specific reasoning" is required in any realistic plasma or channels deployment, and it is not possible to make a plasma or channel system that just simulates the full ethereum environment (or "the EVM"). To get around this problem, we get to... rollups.
The fact that data is on-chain is key (note: putting data "on IPFS" does not work, because IPFS does not provide consensus on whether or not any given piece of data is available; the data must go on a blockchain). Putting data on-chain and having consensus on that fact allows anyone to locally process all the operations in the rollup if they wish to, allowing them to detect fraud, initiate withdrawals, or personally start producing transaction batches. The lack of data availability issues means that a malicious or offline operator can do even less harm (eg. they cannot cause a 1 week delay), opening up a much larger design space for who has the right to publish batches and making rollups vastly easier to reason about. And most importantly, the lack of data availability issues means that there is no longer any need to map assets to owners, leading to the key reason why the Ethereum community is so much more excited about rollups than previous forms of layer 2 scaling: rollups are fully general-purpose, and one can even run an EVM inside a rollup, allowing existing Ethereum applications to migrate to rollups with almost no need to write any new code.
There is a smart contract on-chain which maintains a state root: the Merkle root of the state of the rollup (meaning, the account balances, contract code, etc, that are "inside" the rollup).
Anyone can publish a batch, a collection of transactions in a highly compressed form together with the previous state root and the new state root (the Merkle root after processing the transactions). The contract checks that the previous state root in the batch matches its current state root; if it does, it switches the state root to the new state root.
To support depositing and withdrawing, we add the ability to have transactions whose input or output is "outside" the rollup state. If a batch has inputs from the outside, the transaction submitting the batch needs to also transfer these assets to the rollup contract. If a batch has outputs to the outside, then upon processing the batch the smart contract initiates those withdrawals.
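In code, the on-chain bookkeeping can be sketched roughly like this (a toy stand-in, with a hash of the whole state playing the role of the Merkle state root; how the new root is checked for correctness is exactly the question addressed next):

```python
import hashlib

def state_root_of(state: dict) -> str:
    # Stand-in for a real Merkle root over the balances/code "inside" the rollup.
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

class RollupContract:
    def __init__(self, genesis_state):
        self.state_root = state_root_of(genesis_state)

    def submit_batch(self, prev_root, new_root, compressed_txs):
        # The contract never executes compressed_txs itself; it only checks that
        # the batch builds on the current root, then adopts the new one.
        if prev_root != self.state_root:
            raise ValueError("batch does not build on the current state root")
        self.state_root = new_root

# A sequencer applies the transactions off-chain and posts the resulting root.
state = {"alice": 100, "bob": 0}
contract = RollupContract(state)
old_root = state_root_of(state)
state["alice"], state["bob"] = 40, 60            # apply txs off-chain
contract.submit_batch(old_root, state_root_of(state), b"...compressed txs...")
print(contract.state_root == state_root_of(state))  # True
```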
And that's it! Except for one major detail: how do we know that the post-state roots in the batches are correct? If someone can submit a batch with any post-state root with no consequences, they could just transfer all the coins inside the rollup to themselves. This question is key because there are two very different families of solutions to the problem, and these two families of solutions lead to the two flavors of rollups.
The two types of rollups are:
Optimistic rollups, which use fraud proofs: the rollup contract keeps track of its entire history of state roots and the hash of each batch. If anyone discovers that one batch had an incorrect post-state root, they can publish a proof to chain, proving that the batch was computed incorrectly. The contract verifies the proof, and reverts that batch and all batches after it.
ZK rollups, which use validity proofs: every batch includes a cryptographic proof called a ZK-SNARK, which proves that the post-state root is the correct result of executing the batch. No matter how large the computation, the proof can be verified very quickly on-chain.
There are complex tradeoffs between the two flavors of rollups:
Fixed gas cost per batch
Optimistic rollups: ~40,000 (a lightweight transaction that mainly just changes the value of the state root)
ZK rollups: ~500,000 (verification of a ZK-SNARK is quite computationally intensive)
Withdrawal period
Optimistic rollups: ~1 week (withdrawals need to be delayed to give time for someone to publish a fraud proof and cancel the withdrawal if it is fraudulent)
ZK rollups: Very fast (just wait for the next batch)
Complexity of technology
Optimistic rollups: Low
ZK rollups: High (ZK-SNARKs are very new and mathematically complex technology)
Generalizability
Optimistic rollups: Easier (general-purpose EVM rollups are already close to mainnet)
ZK rollups: Harder (proving general-purpose EVM execution with a ZK-SNARK is much harder, though work on it is underway)
Per-transaction on-chain gas costs
Optimistic rollups: Higher
ZK rollups: Lower (if data in a transaction is only used to verify, and not to cause state changes, then this data can be left out, whereas in an optimistic rollup it would need to be published in case it needs to be checked in a fraud proof)
Off-chain computation costs
Optimistic rollups: Lower (though there is more need for many full nodes to redo the computation)
ZK rollups: Higher (ZK-SNARK proving especially for general-purpose computation can be expensive, potentially many thousands of times more expensive than running the computation directly)
In general, my own view is that in the short term, optimistic rollups are likely to win out for general-purpose EVM computation and ZK rollups are likely to win out for simple payments, exchange and other application-specific use cases, but in the medium to long term ZK rollups will win out in all use cases as ZK-SNARK technology improves.
The security of an optimistic rollup depends on the idea that if someone publishes an invalid batch into the rollup, anyone else who was keeping up with the chain and detected the fraud can publish a fraud proof, proving to the contract that that batch is invalid and should be reverted.
It is guaranteed that if a batch was constructed incorrectly, and all previous batches were constructed correctly, then it is possible to create a fraud proof showing that the batch was constructed incorrectly. Note the claim about previous batches: if there was more than one invalid batch published to the rollup, then it is best to try to prove the earliest one invalid. It is also, of course, guaranteed that if a batch was constructed correctly, then it is never possible to create a fraud proof showing that the batch is invalid.
A simple Ethereum transaction (to send ETH) takes ~110 bytes. An ETH transfer on a rollup, however, takes only ~12 bytes:
Field-by-field comparison (bytes in an Ethereum L1 transaction vs bytes in a rollup transaction):
Nonce: ~3 on L1, 0 in the rollup
Gasprice: ~8 on L1, 0-0.5 in the rollup
Gas: 3 on L1, 0-0.5 in the rollup
To: 21 on L1, 4 in the rollup
Value: ~9 on L1, ~3 in the rollup
Signature: ~68 on L1 (2 + 33 + 33), ~0.5 in the rollup
From: 0 on L1 (recovered from the signature), 4 in the rollup
Total: ~112 on L1, ~12 in the rollup
Part of this is simply superior encoding: Ethereum's RLP wastes 1 byte per value on the length of each value. But there are also some very clever compression tricks going on (a toy byte-packing sketch follows the list below):
Nonce: the purpose of this parameter is to prevent replays. If the current nonce of an account is 5, the next transaction from that account must have nonce 5, but once the transaction is processed the nonce in the account will be incremented to 6 so the transaction cannot be processed again. In the rollup, we can omit the nonce entirely, because we just recover the nonce from the pre-state; if someone tries replaying a transaction with an earlier nonce, the signature would fail to verify, as the signature would be checked against data that contains the new higher nonce.
Gasprice: we can allow users to pay with a fixed range of gasprices, eg. a choice of 16 consecutive powers of two. Alternatively, we could just have a fixed fee level in each batch, or even move gas payment outside the rollup protocol entirely and have transactors pay batch creators for inclusion through a channel.
Gas: we could similarly restrict the total gas to a choice of consecutive powers of two. Alternatively, we could just have a gas limit only at the batch level.
To: we can replace the 20-byte address with an index (eg. if an address is the 4527th address added to the tree, we just use the index 4527 to refer to it. We would add a subtree to the state to store the mapping of indices to addresses).
Value: we can store value in scientific notation. In most cases, transfers only need 1-3 significant digits.
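A toy byte-packing sketch of such a compressed transfer (the field sizes are illustrative, not any production rollup's wire format): a 4-byte sender index, a 4-byte recipient index, a value in mantissa-plus-exponent form, and a 1-byte fee choice come to exactly 12 bytes.

```python
import struct

def encode_transfer(from_idx, to_idx, mantissa, exponent, fee_choice):
    """Pack a transfer as: 4B sender index, 4B recipient index,
    2B mantissa, 1B exponent, 1B fee choice = 12 bytes total."""
    assert mantissa < 2**16 and exponent < 2**8 and fee_choice < 2**8
    return struct.pack(">IIHBB", from_idx, to_idx, mantissa, exponent, fee_choice)

def decode_transfer(blob):
    from_idx, to_idx, mantissa, exponent, fee_choice = struct.unpack(">IIHBB", blob)
    return from_idx, to_idx, mantissa * 10**exponent, fee_choice

tx = encode_transfer(4527, 812, 35, 17, 3)   # send 3.5e18 wei from #4527 to #812
print(len(tx))                               # 12 bytes, vs ~112 on L1
print(decode_transfer(tx))
```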
One important compression trick that is specific to ZK rollups is that if a part of a transaction is only used for verification, and is not relevant to computing the state update, then that part can be left off-chain. This cannot be done in an optimistic rollup because that data would still need to be included on-chain in case it needs to be later checked in a fraud proof, whereas in a ZK rollup the SNARK proving correctness of the batch already proves that any data needed for verification was provided. An important example of this is privacy-preserving rollups: in an optimistic rollup the ~500 byte ZK-SNARK used for privacy in each transaction needs to go on chain, whereas in a ZK rollup the ZK-SNARK covering the entire batch already leaves no doubt that the "inner" ZK-SNARKs are valid.
These compression tricks are key to the scalability of rollups; without them, rollups would be perhaps only a ~10x improvement on the scalability of the base chain (though there are some specific computation-heavy applications where even simple rollups are powerful), whereas with compression tricks the scaling factor can go over 100x for almost all applications.
There are a number of schools of thought for who can submit a batch in an optimistic or ZK rollup. Generally, everyone agrees that in order to be able to submit a batch, a user must put down a large deposit; if that user ever submits a fraudulent batch (eg. with an invalid state root), that deposit would be part burned and part given as a reward to the fraud prover. But beyond that, there are many possibilities:
Total anarchy: anyone can submit a batch at any time. This is the simplest approach, but it has some important drawbacks. Particularly, there is a risk that multiple participants will generate and attempt to submit batches in parallel, and only one of those batches can be successfully included. This leads to a large amount of wasted effort in generating proofs and/or wasted gas in publishing batches to chain.
Centralized sequencer: there is a single actor, the sequencer, who can submit batches (with an exception for withdrawals: the usual technique is that a user can first submit a withdrawal request, and then if the sequencer does not process that withdrawal in the next batch, then the user can submit a single-operation batch themselves). This is the most "efficient", but it is reliant on a central actor for liveness.
Random selection from PoS set: anyone can deposit ETH (or perhaps the rollup's own protocol token) into the rollup contract, and the sequencer of each batch is randomly selected from one of the depositors, with the probability of being selected being proportional to the amount deposited. The main drawback of this technique is that it leads to large amounts of needless capital lockup.
DPoS voting: there is a single sequencer selected with an auction but if they perform poorly token holders can vote to kick them out and hold a new auction to replace them.
Split batching and state root provision
Some of the rollups being currently developed are using a "split batch" paradigm, where the action of submitting a batch of layer-2 transactions and the action of submitting a state root are done separately. This has some key advantages:
You can allow many sequencers in parallel to publish batches in order to improve censorship resistance, without worrying that some batches will be invalid because some other batch got included first.
If a state root is fraudulent, you don't need to revert the entire batch; you can revert just the state root, and wait for someone to provide a new state root for the same batch. This gives transaction senders a better guarantee that their transactions will not be reverted.
So all in all, there is a fairly complex zoo of techniques that are trying to balance between complicated tradeoffs involving efficiency, simplicity, censorship resistance and other goals. It's still too early to say which combination of these ideas works best; time will tell.
Here's a chart for some other example use cases:
ETH transfer: 12 bytes in the rollup, 21,000 gas on L1, ~105x max scalability gain
ERC20 transfer: 16 bytes in the rollup (4 more bytes to specify which token), ~50,000 gas on L1, ~187x max scalability gain
Uniswap trade: ~14 bytes in the rollup (4 bytes sender + 4 bytes recipient + 3 bytes value + 1 byte max price + 1 byte misc), ~100,000 gas on L1, ~428x max scalability gain
Privacy-preserving withdrawal (Optimistic rollup): 296 bytes in the rollup (4 bytes index of root + 32 bytes nullifier + 4 bytes recipient + 256 bytes ZK-SNARK proof), ~77x max scalability gain
Privacy-preserving withdrawal (ZK rollup): 40 bytes in the rollup (4 bytes index of root + 32 bytes nullifier + 4 bytes recipient), ~570x max scalability gain
Max scalability gain is calculated as (L1 gas cost) / (bytes in rollup * 16) * 12 million / 12.5 million.
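Plugging the chart's numbers into that formula reproduces the gains for the rows whose L1 gas costs are given above:

```python
def max_gain(l1_gas_cost, rollup_bytes):
    # (L1 gas cost) / (bytes in rollup * 16) * 12 million / 12.5 million
    return l1_gas_cost / (rollup_bytes * 16) * 12_000_000 / 12_500_000

print(int(max_gain(21_000, 12)))    # ETH transfer   -> 105
print(int(max_gain(50_000, 16)))    # ERC20 transfer -> 187
print(int(max_gain(100_000, 14)))   # Uniswap trade  -> 428
```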
Now, it is worth keeping in mind that these figures are overly optimistic for a few reasons. Most importantly, a block would almost never just contain one batch, at the very least because there are and will be multiple rollups. Second, deposits and withdrawals will continue to exist. Third, in the short term usage will be low, and so fixed costs will dominate. But even with these factors taken into account, scalability gains of over 100x are expected to be the norm.
While the basic concept of a rollup is now well understood, we are quite certain that rollups are fundamentally feasible and secure, and multiple rollups have already been deployed to mainnet. But there are still many areas of rollup design that have not been well explored, and quite a few challenges in fully bringing large parts of the Ethereum ecosystem onto rollups to take advantage of their scalability. Some key challenges include:
User and ecosystem onboarding - not many applications use rollups, rollups are unfamiliar to users, and few wallets have started integrating rollups. Merchants and charities do not yet accept them for payments.
Cross-rollup transactions - efficiently moving assets and data (eg. oracle outputs) from one rollup into another without incurring the expense of going through the base layer.
Auditing incentives - how to maximize the chance that at least one honest node actually will be fully verifying an optimistic rollup so they can publish a fraud proof if something goes wrong? For small-scale rollups (up to a few hundred TPS) this is not a significant issue and one can simply rely on altruism, but for larger-scale rollups more explicit reasoning about this is needed.
Exploring the design space in between plasma and rollups - are there techniques that put some state-update-relevant data on chain but not all of it, and is there anything useful that could come out of that?
Maximizing security of pre-confirmations - many rollups provide a notion of "pre-confirmation" for faster UX, where the sequencer immediately provides a promise that a transaction will be included in the next batch, and the sequencer's deposit is destroyed if they break their word. But the economic security of this scheme is limited, because of the possibility of making many promises to very many actors at the same time. Can this mechanism be improved?
Improving speed of response to absent sequencers - if the sequencer of a rollup suddenly goes offline, it would be valuable to recover from that situation maximally quickly and cheaply, either quickly and cheaply mass-exiting to a different rollup or replacing the sequencer.
Efficient ZK-VM - generating a ZK-SNARK proof that general-purpose EVM code (or some different VM that existing smart contracts can be compiled to) has been executed correctly and has a given result.
Rollups are a powerful new layer-2 scaling paradigm, and are expected to be a cornerstone of Ethereum scaling in the short and medium-term future (and possibly long-term as well). They have seen a large amount of excitement from the Ethereum community because unlike previous attempts at layer-2 scaling, they can support general-purpose EVM code, allowing existing applications to easily migrate over. They do this by making a key compromise: not trying to go fully off-chain, but instead leaving a small amount of data per transaction on-chain.
https://vas3k.com/blog/ethereum/
Ethereum is the second most popular blockchain project in the world and arguably the most interesting from a technical point of view.
Bitcoin, the starting point, was not just a system of financial transactions, but a demonstration of a new way of organizing a network, where security is guaranteed not by "middlemen" and "customer agreements" but by pure mathematics. That's how the world found out about blockchain – an append-only list of data protected from any external modification by mathematics.
Ethereum took the blockchain idea as its starting point and applied it to a much wider class of applications. It became possible to guarantee not only the validity of financial transactions but of literally any conditions and agreements, and to automate the creation of such conditions.
In everyday life, we always make agreements based on the "if… then" principle, regarding everything, not only finance. "If I help you write an article then you let me play your PlayStation", "If I get fit by the summer then I get myself a holiday in Hawaii" etc.
The main problem with agreements is that nobody can really guarantee their fulfillment – you help a friend with the article, but he refuses to share his PlayStation; you spend your time playing PlayStation instead of going to the gym, but book the flight to Hawaii anyway – breaking an agreement with yourself is pure pleasure.
To avoid these problems business uses contracts: trained people fill the papers with special words, cover them with stamps and signatures so that there is something to bring to the court, where agents on salary punish the violator using another special paper. At the end of the performance, everybody gets together and walks laughing all the way to the tax office.
It looks like a ridiculous game where I must trust someone only because he seems to be honest. All the same – agents, fees, arguments, and promises – which looks just like the kind of problem blockchain technology was built to solve.
That’s what authors of Ethereum were thinking of, and so appeared smart contracts.
Let's start with a very common situation: our old friend Alex is going to visit another city and rent an apartment there. Some web browsing and he finds the following ad – a girl named Kate offers an apartment in the very center for $300. Kate and Alex don't know each other, so they can't trust each other. Kate is afraid that Alex could change his mind at the very last moment. Alex knows very well about the situation where the nickname Kate is used by a swindler who has no apartment at all but is good at getting away with other people's money.
There are two traditional ways to get the things sorted:
1. Alex and Kate sign a long agreement, with ID details, TOS, fees, penalties and other scary things. Finally, Alex agrees to pay $100 in advance and hopes that Kate is a real girl.
2. They find an agent who agrees to take on all the responsibility but wants a 50% commission. Alex and Kate feel secure but have much lighter pockets.
It is not the best solution. It would be much better to have a system where both could set up a strict scheme of logical rules and conditions that doesn't rely on a piece of paper, like this:
In our scheme, the prepayment is a bit of extra caution exclusively for Kate’s peace of mind. The algorithmic agreement guarantees the fulfillment of all the conditions laid down in it.
Alex cannot suddenly take his money from the virtual wallet. Kate can’t slip Alex with the wrong code or apartment address as then she doesn’t get anything.
This set of conditions is the simplest one-time smart contract. In the Ethereum network, instead of lawyers and paper document, its execution is guaranteed by the blockchain – transparent and protected from forgery. When Alex creates a transaction in the blockchain “here are my $100 of prepayment” with the condition “to return back if Kate deceives me” – nobody can change this logic.
With such a system Kate simply has no need to generate a new contract for every Alex. And neither does anyone else, as it's possible to describe the logic of a typical rental agreement once, for everyone – money is deposited to the wallet, the door combination is generated, and if the apartment is available for the selected dates – voilà! This contract will be kept in the system forever, and it's possible to add discounts, seasonal rate changes, or changes of conditions at Kate's direct request.
It’s even possible to upload code to GitHub and to create a universal contract for safe rental of any apartment by any Kate. When Alex transfers money to such contract he can feel safe that logic of contract won’t change.
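As a rough illustration of that logic in ordinary Python rather than an actual smart contract language (the price, door code, and availability condition below are made up):

```python
class RentalEscrow:
    def __init__(self, landlord, price, door_code):
        self.landlord, self.price, self.door_code = landlord, price, door_code
        self.deposits = {}                 # tenant -> escrowed amount

    def deposit(self, tenant, amount):
        if amount < self.price:
            raise ValueError("deposit the full price to book")
        self.deposits[tenant] = amount     # funds are locked, not paid out yet

    def check_in(self, tenant, dates_available):
        # If the apartment really is free for the dates, the tenant gets the
        # door code and the landlord gets paid; otherwise a full refund.
        escrowed = self.deposits.pop(tenant)
        if not dates_available:
            return {"refund": escrowed}
        return {"door_code": self.door_code, "paid_to": self.landlord}

escrow = RentalEscrow("Kate", 300, "4-2-7-1")
escrow.deposit("Alex", 300)
print(escrow.check_in("Alex", dates_available=True))
```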
The vital point is that while smart contract coding languages are simplified to the very extent, they’re Turing complete. It means that it’s possible to implement any programmable logic.
They have variables, functions, conditions, loops and even some imitation of classes and inheritance.
In theory, a smart contract can be written for any algorithm, which looks pretty much like a brave new world without obscure agreements and paper documents. But with some limitations.
It is difficult to understand all Ethereum at once. In contrast to Bitcoin, many things are incrementally dependent on each other. In particular smart contracts have a lot of restrictions that are related to the features of Ethereum blockchain that guarantees the execution of these smart contracts.
Let’s figure out smart contracts first. It is hard to understand the altered Ethereum blockchain and other things straight away. I’ll tell you later, I promise.
Technically it is better to think of smart contracts not as signing a contract but as executing pieces of code. As a matter of fact, the contract is just a simple code with a result fixed forever in the blockchain.
A contract can be executed by making any network transaction to its address just as a function which returns result or error.
I read few articles where authors hated a “simple code” term. As they say that it is a sort of “smart code that gives guarantees and blah blah blah.” Do not listen to them. This is poured into your ears by marketing experts who want to look smart and sign you to their ICO service. For techies, this is a simple code executed inside of the virtual machine. No magic.
However, a smart contract cannot be written in your favorite programming language. And that’s why:
1. Every operation in the contract must be able to roll back all of its changes at any time, as if they never existed. When someone calls a smart contract function, all the miners in the network try to execute the code of that function at the same time in order to include its result in the new block. But only one miner will add the block, while all the others will have to forget the changes. This is discussed in more detail in the Mining section.
So if you add two numbers on a computer, you can easily discard and forget the result. But if you make an HTTP request, that operation is irreversible. Worse, every miner would perform this HTTP request on his own computer, which for the target server amounts to a DDoS.
2. Execution of every contract operation – a condition check or a function call – is not free. The one who calls the contract must pay. In our case, both Alex and Kate will pay a few pennies to the miners for their calls.
This is done to avoid endless loops and overly complex computations. After all, the code is executed on the miners’ computers, which could simply hang and be unable to continue mining.
For this, Ethereum uses so-called Gas – a small piece of Ether (ETH), the network’s internal currency. Gas pays for the use of the miners’ CPUs, but only the one who finds the block actually gets the money – he includes it as his commission.
Each operation inside the virtual machine has its own “price”. Roughly speaking: if it costs 1 cent to execute 1 line, then to execute 15 lines you need to attach 15 cents to the transaction. You don’t have to memorize the prices when creating a smart contract – everything is calculated automatically.
So smart contract code only has access to data and calls within the Ethereum blockchain. You can call a function of another smart contract, but you cannot read a file from disk or go out to the Internet to check the dollar exchange rate.
Anyone who wants to call a function of a smart contract is required to send a little money (Gas) along. Usually this amount is minimal, and you can earn it simply by turning on mining in the Ethereum Wallet application for a couple of minutes.
Each line of code spends the Gas attached to the transaction. If the Gas suddenly runs out, execution is terminated and the transaction is canceled. If the code executes successfully and some Gas still remains, it is returned to the sender as unused. Everything is fair.
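Here is a tiny Python sketch of that accounting, just to fix the idea. The opcode names and prices are invented for the example and do not match real EVM costs.

```python
# A toy illustration of how attached Gas is spent, refunded or exhausted.
# Opcode names and prices are made up; real EVM prices are different.

OP_COST = {"ADD": 3, "SSTORE": 100, "CALL": 40}

class OutOfGas(Exception):
    pass

def execute(ops, gas_limit):
    gas_left = gas_limit
    for op in ops:
        cost = OP_COST[op]
        if cost > gas_left:
            # Execution stops, the state changes are rolled back,
            # and the miner keeps all the attached Gas anyway.
            raise OutOfGas(f"ran out of gas on {op}")
        gas_left -= cost
    # Success: the unused remainder is returned to the sender.
    return gas_limit - gas_left, gas_left   # (gas used, gas refunded)

used, refunded = execute(["ADD", "SSTORE", "ADD"], gas_limit=200)
print(used, refunded)   # 106 94
```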
In fact, the basics of smart contracts were laid down back in Bitcoin’s blockchain. Remember that every miner has to verify the signature of each transaction to make sure the sender isn’t trying to pay with other people’s money?
If you really wanted to, a rental smart contract could even be built on Bitcoin, but its Script language has neither loops nor recursion, which deprives it of Turing completeness, while Ethereum has everything needed plus a whole virtual machine on top.
So far we have only had wallets, transactions, and blocks. A smart contract is essentially another kind of wallet – here it is called an account.
While an ordinary wallet is controlled by a pair of public and private keys, a smart contract is identified by a hash of its own code. Changing even one character in a smart contract (even a comment in the code) creates a different smart contract, so they are guaranteed to be unique.
Smart contracts are created once and for all. Blockchain remembers everything and nothing can be modified.
A traditional user wallet is called an “externally owned account” here, while a deployed smart contract is a “contract account”. In the following text they are called “wallet” and “contract” for brevity.
Communication with both types of accounts is possible only through transactions.
A transaction to a user’s wallet is a transfer of funds – a complete analog of Bitcoin. The transfer includes the amount of ETH sent and the address of the recipient.
A transaction to a contract is a call to one of its methods, and is therefore usually called a “message”. In addition to the amount and the contract address, it includes the call parameters and Gas for code execution.
A transaction without a recipient is the creation of a smart contract. Such a transaction must carry the compiled bytecode of the contract and Gas for executing the contract-creation code (the constructor, in OOP terms).
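A rough sketch of these three transaction shapes in Python, purely for illustration: the field names only loosely mirror Ethereum’s real transaction format, and the example addresses and payloads are made up.

```python
# A toy model of the three kinds of transactions described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    sender: str
    to: Optional[str]       # None means "contract creation"
    value: int              # amount sent, in Wei
    data: bytes             # call parameters or contract bytecode
    gas_limit: int
    gas_price: int

# 1. A plain transfer between user wallets
transfer = Transaction("alex", "kate", value=10**18, data=b"",
                       gas_limit=21_000, gas_price=1)

# 2. A "message": a call to a contract method with encoded parameters
message = Transaction("alex", "0xRentalContract", value=100,
                      data=b"rent(2025-01-01)", gas_limit=80_000, gas_price=1)

# 3. Contract creation: no recipient, data carries the compiled constructor bytecode
creation = Transaction("kate", None, value=0, data=b"<compiled bytecode>",
                       gas_limit=500_000, gas_price=1)
```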
An important feature: timers that fire at a certain time are impossible in contracts. A contract can only be triggered by a transaction, which is always initiated by a live person. A contract does not work “in the background”; once it has been called, it may in turn call another contract.
Coming back to our example, this means that Alex has to request the refund of his prepayment from the contract himself.
In my previous post I explained in simple words that a blockchain consists of a long chain of all changes – its history. To calculate the current balance of a wallet you need to walk through it and add everything up: I received 5 BTC, paid 3 BTC, then received another 4 BTC, so the total balance is 5 − 3 + 4 = 6 BTC – this is the current state of the wallet.
The Ethereum white paper spends a good deal of space explaining why the chain of changes and the chain of states are essentially the same thing.
The state is a snapshot of all the changes at a certain moment.
History and state are not different entities but two ways of understanding the same thing. Technically, even Bitcoin wallets internally turn history into state to keep things simple.
It’s a bit like the war between functional and imperative programming: two fundamentally different approaches to describing the same thing – the logic of the problem (I hope this analogy makes it clearer).
Understanding the blockchain as a history of states simplifies the picture a lot. You no longer have to scan the complete history looking for unspent transactions in order to show a balance – you can simply look at the current state of the network.
Ethereum is a transactional state machine: a set of current wallet balances and contract data that is changed by creating new transactions.
The creators of Ethereum modified the classic blockchain by adding one important feature – the storage, which can loosely be described as a single GitHub repository downloaded along with the blockchain.
Each transaction writes its changes to this repository (very much like git commits). When Alex calls Kate’s contract, the transaction “Alex called Kate’s contract” is added to the blockchain, and “the contract balance is now 100 from Alex / Alex’s balance is N − 100” is “committed” to the storage. That is the new state after the changes.
Inside the storage, the Merkle tree we talked about before is implemented. Here it is slightly modified to optimize for size and is called a Patricia tree, but the idea is the same – hash neighbors until you get one main hash. This main hash, the hash of the tree root, is always included in each new block.
It turns out that knowing one single block we can know the state of the whole system at this point by reading the root of the storage tree from the block.
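To see why a single root hash pins down the whole state, here is a minimal Python sketch of pairwise hashing up to a root. It uses plain SHA-256 and a flat list of accounts for simplicity; the real structure is a Merkle Patricia trie over key–value pairs, which additionally supports cheap updates and proofs.

```python
# A toy Merkle root over account states: hash neighbours pairwise until one hash remains.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

state = [b"alex:900", b"kate:100", b"rental_contract:0"]
print(merkle_root(state).hex())             # changes completely if any single balance changes
```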
This does not mean that a new Ethereum node doesn’t have to download the entire blockchain and check the validity of the whole chain of blocks. Decentralization means that no one can be trusted, so we must check whether somebody has sent us a corrupted block. But once that check is complete, all the extra information can be removed.
The storage is implemented so that each new block writes only the new changes to it; the old entries stay in place and are simply linked to. This saves a lot of space.
With two kinds of accounts that can both accept transactions, confusion is easy. For convenience, transactions between user wallets are called transfers, while transactions that call contracts are called messages.
Technically, these are still the same objects.
For the common man, the transaction in Bitcoin consists of five basic elements:
Ethereum transactions inherit this logic but slightly change the composition of transactions. We no longer need to collect inputs to prove the availability of funds. Everyone already knows the current state, and therefore the balance of all wallets. If a user tries to transfer money that he does not have in the current state – this transaction will simply be rejected by the miners as erroneous.
Here is the list of the main components of the transaction in Ethereum:
It is the combination of two of these values – Gas Limit and Gas Price – that is now responsible for the fee, instead of an explicit amount of BTC as in Bitcoin. The Gas attached to the transaction pays the miners for their computation, as if you paid them directly.
Gas Limit is the amount of Gas units you attach to pay for executing the lines of code in a smart contract. The price of each operation is fixed in these units and is the same on all machines. To put it simply: comparing two variables might cost 10 Gas, and creating a new transaction 100. Writing new data into the system’s storage is paid for with Gas too, and calling a smart contract at all also costs a fixed amount, because loading its bytecode into the virtual machine is an operation as well. Some operations, though, are intentionally made free – for example, cleaning up temporary data (roughly speaking, a destructor), which is done to motivate contract authors to clear garbage out of the global storage.
The execution of the contract may not consume all of the attached Gas; in that case the unused balance is simply returned to the sender. The opposite may also happen: if there is not enough Gas, execution breaks off. The miner still receives all of the attached Gas as payment for work done for nothing, and you gain the valuable experience that it is better to attach more Gas next time.
Gas Price is the price of one unit of Gas; it makes each operation more or less expensive. By boosting the Gas Price above its market value you can motivate the miners to process your transaction faster – in effect you are saying, “I’ll pay 10% above market for every line of code.”
Sometimes an increase in the Gas Price is genuinely needed. Remember the ICO from Brave that raised $36 million in 24 seconds? That is less than two blocks, so buyers paying the market Gas Price might simply not manage to catch the train within those two blocks – miners would have set their transactions aside “for later” as too cheap, while the sale was already closed. If Gas Limit is measured simply in “pieces”, Gas Price is expressed in real currency. The price of Gas is always negligible compared to the main currency, so these insignificant “pennies” have names of their own:
1 Wei – the minimum unit of calculation in the system
10^12 Wei = 1 Szabo
10^15 Wei = 1 Finney
10^18 Wei = 1 Ether (the famous Ether, or ETH)
Like any financial system, Ethereum operates only with integers (the problems with float and double are known to every IT student). In other words, Ethereum processes ETH with an accuracy of 18 decimal places – much finer than BTC with its 8.
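In code that simply means keeping every balance as an integer number of Wei and treating the larger units as display conveniences, along these lines:

```python
# All balances are integers counted in Wei; the larger units are just powers of ten.
WEI    = 1
SZABO  = 10**12
FINNEY = 10**15
ETHER  = 10**18     # 1 ETH

balance_wei = 2_500_000_000_000_000_000   # stored as an integer, never as a float
print(balance_wei / ETHER)                # 2.5  (floating point only for display)
print(balance_wei // FINNEY)              # 2500 Finney, still exact integer math
```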
A not-quite-obvious detail: to read data from a contract you don’t need to pay Gas or even make a transaction. For example, to read your balance from an existing contract, you can simply download the current version of the blockchain and look up the value of the desired variable in the global storage. This feature is built into Ethereum clients.
A quick refresher: the blockchain is always built in blocks. Miners collect transactions from the pool and try to compose them into a new block to include in the common chain. To do this, they look for a random number to add to the block so that it satisfies certain global difficulty rules.
In Ethereum, the blocks are slightly different from the classic Bitcoin.
The hash of the entire storage is always added to the block – as if you hashed all the data on your hard drive. Such a snapshot reflects the exact state of the system at that moment: all contract data and account balances. In Ethereum it is simply the root of the Merkle tree mentioned above.
The storage hash becomes one of the most important components of the block. It is included in every block found and is verified by all network members, so nobody can “forge” the state of the system. The Merkle tree is precisely what allows this to be done quickly: you only need to check the changed parts of the tree, since the hashes of the others stay the same.
Imagine you sent a request to execute smart contract code and it returned an error (the ICO is over, for example). Or you didn’t attach enough Gas and execution was interrupted.
In both of these cases the transaction will be considered successful from the point of view of the block – the code was executed, the Gas was transferred, and the state was changed. However, for the sender this transaction should be marked as unsuccessful. But where? For this, Ethereum introduces one more entity:
In addition to the list of transactions, the execution result of each of them is added to the block via receipts.
Now if I attached 1000 Gas but the code execution took only 800, the receipt states that I actually spent 800; 200 is returned to me, while the transaction still shows the 1000 that was sent.
When building a new block, miners include not only a reference to its parent but also a list of references to blocks whose parent is equal to the parent of the parent of the current block. They are called “uncles”.
Time to shout “Nooo!”
It’s easier to imagine them as your real uncles and aunts – such a block could have been my parent if I had been mined in the neighboring chain (born into the neighboring family).
Their purpose will be covered later, in the section about mining and the GHOST algorithm.
As a result, the block in Ethereum looks like this:
Quick reminder: miners are a multitude of computers around the world. They simultaneously build a new block out of transactions and then try to add this block to the blockchain.
If miners simply announced their blocks to the network at the same time, it would turn into a race with an unpredictable outcome in which it is impossible to determine a winner. Therefore each miner must solve a complex problem with an easily verifiable answer, which is written into the found block. The first to find the answer and announce the block receives a reward of 3 ETH.
The difficulty of the task is set automatically by the network. In Bitcoin the difficulty is deliberately huge, so that on average a block is found once every 10 minutes. In Ethereum new blocks appear roughly every 15 seconds.
Well, that was a big simplification. For a human it’s easier to grasp the phrase “starts with 10 zeros” than “has a value below the difficulty target”. In fact, those zeros appear in block hashes precisely because the difficulty in Bitcoin is set so high that it can only be met by a hash with a bunch of zeros at the beginning.
The block hash, which is just a number, must be less than a certain preset number. That is what guarantees the 10 minutes it takes the entire network to find a new block. Fifteen seconds in Ethereum is nothing, so the difficulty there is set such that you won’t see zeros at the beginning of block hashes.
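The “hash below a target number” rule is easy to show in a few lines of Python. This toy loop uses SHA-256 and an absurdly easy target so it finishes instantly; real networks tune the target so the whole network needs about 10 minutes (Bitcoin) or about 15 seconds (proof-of-work Ethereum) per block.

```python
# A toy proof-of-work loop: try nonces until the block hash, read as a number,
# falls below the difficulty target.
import hashlib

def mine(block_data: bytes, target: int):
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

# target = 2**245 means roughly one success per 2048 attempts, so this is trivially easy.
nonce, block_hash = mine(b"block with transactions", target=2**245)
print(nonce, block_hash)
```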
In the Ethereum network a block is found in about 15 seconds and propagates through the network in about 12 seconds. As a result, the blockchain is in an uncertain state much more often than usual: no one can be sure which of the latest blocks is the correct one until the next one is found.
The longest-chain rule still applies: as soon as one of the competing chains becomes longer than the others, it is accepted as the only true one. However, with 15-second mining there can be so many competing chains that the blockchain can stay uncertain for hours, eventually rolling back and re-running most of the transactions.
It’s not just inconvenient when you try to pay with ETH and wait half an hour for confirmation – it also carries a serious danger. If miners spend most of their time mining blocks in “unnecessary” chains, they are motivated to join a pool where they can push one particular chain together, increasing their chances of success and reward.
That incentive to pool leads to the possibility of a “51% attack” – when more than 50% of the hashing power is concentrated in the hands of one pool operator. With that power he can rewrite the history of the blockchain and roll back transactions, which ordinary miners will only learn about from Reddit posts.
GHOST has a simple idea: give a small reward even to those miners who found an “uncle” – a perfectly valid block which simply had the bad luck to end up in a neighbouring chain. An uncle receives 12.5% of the reward of a full-fledged block. This motivates miners to keep mining independently, because you can also make decent money finding uncles.
In ancient times the grass was greener and bitcoins were mined on CPUs. Later Bitcoin started to grow and graphics cards were turned into mining tools: they are not as versatile as a CPU and implement a limited set of operations, but they run them much faster and in thousands of threads.
The arms race continued with the introduction of the ASIC – the Application-Specific Integrated Circuit: enthusiasts in basements built “processors” that could do nothing but churn through hashes for Bitcoin mining at great speed.
Many consider ASICs as cheating but this is the reality.
In Ethereum, mining involves not just searching for hashes but also executing smart contract code – universal, Turing-complete computation. If you try to build an ASIC for Ethereum, it ends up being a CPU, because you need a stack, memory and everything else.
And on the eve of the transition to Proof-of-Stake, building an ASIC for Ethereum makes no financial sense at all.
Last time I said that classic mining, with its search for the answer to a complex problem, is called Proof-of-Work. Its main disadvantage is that mankind spends huge resources on keeping it running. Strictly speaking it doesn’t have to, but everyone wants to snatch their reward, which leads to a permanent arms race and mining farms the size of half of India.
Under Proof-of-Stake, miners will no longer need to grind through random numbers at breakneck speed to find the desired hash; instead, a “lottery” runs among them: they lock up a certain amount on their accounts, hash only their transactions and check whether they have won or not.
Any transaction – whether it transfers funds, calls a contract or creates one – is signed and sent in the same way to a common pool of unconfirmed transactions, where it waits to be mined. There is no difference from Bitcoin here, even though in Ethereum it is customary to think in terms of “states”; as noted above, Bitcoin wallets internally turn history into states too.
If the transaction is a transfer between users, mining it is almost identical to Bitcoin, except that instead of checking inputs, the state of the users’ wallets is checked.
The differences begin when a transaction triggers a smart contract. The miner’s computer finds this smart contract in its downloaded copy of the storage and runs its code with the supplied parameters in its virtual machine (the EVM – Ethereum Virtual Machine). Every miner does the same, but the result will appear in the network only once. That is why all operations in a contract must be deterministic and easy to discard when needed.
Each contract call specifies a Gas Limit – the amount of computation needed to execute it. Where Bitcoin miners collect transactions until they reach the desired block size, here they decide how many to include in a block by their total complexity: you can quickly execute hundreds of basic smart contracts, or take one “fat” one instead.
As a result of these calculations, the miners not only build the transaction tree but also rebuild the tree of states, generate receipts for each completed transaction and include all of it in the new block. Only after that do they try to find a hash of the required difficulty to get their block into the main blockchain. By the way, instead of the standard SHA-256, Ethereum uses the KECCAK-256 hash.
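A small aside for the curious: Keccak-256 differs from the standardized SHA3-256 in Python’s hashlib by a padding detail, so it needs a separate library. Assuming the pycryptodome package is installed, you can reproduce the Keccak-256 hash of the empty string that shows up all over Ethereum:

```python
# Keccak-256 as used by Ethereum (not the same as hashlib.sha3_256).
from Crypto.Hash import keccak   # provided by the pycryptodome package

k = keccak.new(digest_bits=256)
k.update(b"")
print(k.hexdigest())
# c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470
```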
In the end, the Ethereum blockchain is the same classic blockchain with the addition of the storage and the EVM, which allows arbitrary smart contract code to be executed on each miner’s computer. It’s worth fixing that in your memory.
Yes, compared with the Bitcoin algorithm it looks at first like hell and rocket science, but pretty soon you begin to see that every detail here is genuinely necessary and solves a specific task.
The new technology in the hands of the programmer is like a new chainsaw for the Californian maniac. You just can’t wait to try it all at once.
Here I’ll tell you two stories that will help you better understand the practical possibilities of smart contracts and the troubles that mistakes can cause.
There are now dozens of ways to conduct an ICO; some are even held on self-made exchanges and have nothing to do with the blockchain. But we will look at the classic ICO as it was originally conceived technically – via smart contracts.
An ICO is a beautiful example of smart contracts in use, and by now you can most likely guess how it works.
Alex makes plush sharks at home and sells them on the Internet. The sharks became very popular and Alex decided to expand the business, but he cannot get himself born into a rich family or take a loan from a bank. So Alex decides to hold an ICO – to issue his own tokens, in other words a currency, the shark-coin, sell them and raise some money.
So-called “ICO professionals” are now outraged, asking where the Whitepaper is, where the fashionable landing page with the list of Advisors is, the contract with an Exchange, and the Bitcoin Talk thread posted from a stolen account. Yes, all of that is useful, but it has nothing to do with the technical side.
For a full-fledged ICO the logic of the sale itself is still missing: how much a token costs, what the sales cap is, whether early buyers get discounts, and so on. Alex could bake all this logic right into the shark-coin itself, but then it could never be changed – no new limits, no new sales rounds.
Here is a hint for Alex: this can be solved by creating a second smart contract – a sale contract. It holds the whole logic of the ICO: the start and end dates, the initial price, the cap on the number of tokens sold and so on. Investors send money to this contract, and it decides how many shark-coins to give each of them and at what price.
This contract cannot be changed after being uploaded to the network, which guarantees the honesty of the ICO: it works only on the terms established by its author, and for a new round or sale you simply create a new one. When conducting an ICO it is standard practice to upload the contract to the network in advance so that users can familiarize themselves with it. Another popular practice is to hold a pre-sale by creating a temporary discount contract that can only be called for a limited period before the main sale – it stirs up the participants’ activity.
After the ICO, the shark-coins exist essentially as records within the contract – technically, a simple dictionary like {John: 20 coins, Peter: 100 coins} permanently recorded in the global storage. The numbers in this dictionary have no market price of their own, because inside the contract they cannot be bought or sold – only transferred or given to someone.
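A toy Python version of that “token as a dictionary of balances” idea, to make it tangible. Real tokens implement the same thing on-chain (in the ERC-20 case, a Solidity mapping from address to amount plus a transfer function); the names here are just for the example.

```python
# Shark-coins as a plain dictionary of balances with a transfer rule.
balances = {"John": 20, "Peter": 100}       # what the contract storage would record

def transfer(sender, recipient, amount):
    assert balances.get(sender, 0) >= amount, "not enough shark-coins"
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount

transfer("Peter", "John", 30)
print(balances)                              # {'John': 50, 'Peter': 70}
```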
To give the tokens a market price, Alex has to list them on some exchange. In that case Alex somehow transfers the collected tokens to the exchange’s account (with all the obvious risks of the exchange shutting down), and the exchange starts quoting their price depending on supply and demand.
That, in a few words, is how an ICO works. Back in 2016 it really could be carried out this way, but very soon the Big Shots™ came to this market and the rules of the game became much more complicated. Still, as an example of using smart contracts, the ICO is a great one.
The success of contracts has a downside. The story of The DAO is a good example of how to do everything right and still get burned, split the blockchain and lose half of the community.
It was 2016; ICOs were not yet a mass phenomenon, but the Ethereum community was already inspired by the idea of total crowdfunding via smart contracts. That is when the Decentralized Autonomous Organization project was born. The DAO was, in effect, one big smart contract encoding the mechanisms of a classic investment fund: participants contribute money, receive their share, vote on which projects to invest the collected funds in, and split the profits at the end.
The smart contract guaranteed that no one could deceive anyone and that there would be absolute democracy in every respect. It even allowed for the possibility that some participants would want to leave The DAO and organize their own funds – for example, if they disagreed with the choice of projects or just wanted to play investor themselves. Decentralization within decentralization; crypto-anarchists were in ecstasy.
The launch of The DAO was anticipated by almost everyone, so immediately after launch about $165 million was sent to the fund (at the current ETH rate that is more than $4.3 billion). It was a major event in the community.
Then hackers siphoned more than $65 million to their own accounts. Panic set in. Even though the bug was not in Ethereum but in the code of the smart contract itself, the hysteria hit the creators of Ethereum. The crowd demanded that they “shut it down and roll everything back,” which is simply impossible in a blockchain. A desperate situation – you seem to have done everything right, yet you are still to blame.
This is a brief retelling of the story from The DAO.
There were two options: surrender and let the hackers keep the stolen money, or “stop” the blockchain by updating all the clients, roll back to earlier blocks and restart everything – in essence, split the chain and make a hard fork. The second option was chosen, and so two blockchains appeared – Ethereum and Ethereum Classic.
It was a heavy blow to the community. Many people still cannot accept this external interference in their independent, decentralized world. “If the creators can do this to our money at any time, how can we trust such a blockchain?” they shouted on the forums and Reddit, and they were largely right. Imagine what it was like for ordinary users who had just bought ETH at some currency exchange kiosk.
There is a popular analogy: “Bitcoin is gold, Ethereum is oil.”
Bitcoin is gold: a rare, soft metal, useless in everyday life. You cannot build a house or make a weapon out of it, but it becomes valuable when the whole world agrees to use it as a universal currency for settlements between people. That is why gold is so expensive.
Ethereum is oil (although I prefer the analogy with electricity). Oil is extracted not merely to be sold – you can always use it for heating, generating energy, or as fuel for machinery and plants. Even if nobody wants to buy your oil, it still solves real, practical problems – it helps people exist and survive. So does electricity.
I like this analogy. It shows that there is no competition between Bitcoin and Ether. Nobody would say that oil could replace gold tomorrow, or vice versa. These are two independent resources that can move us all forward. Or they might both collapse tomorrow, like the Roman Empire. Who knows.
https://medium.com/coinmonks/stablecoin-primer-section-3-stablecoin-types-c416ce5f455f
Osman Sarman, Mar 2022
Central bank on a blockchain is actually one of the most widely used analogies for algorithmic stablecoins, but more on that later.
Before we start talking about different types of stablecoins, I want to pause for a caveat in hopes of making this and the upcoming sections of the Primer more digestible.
To prepare you for what to expect in this section and in Section 4, I’ll propose two mental models:
Stablecoin protocols are like financial institutions that run on code: Like I said, stablecoin protocols are complex. I think a helpful and over-simplified way of thinking about them is the following: stablecoin protocols, especially the decentralized ones, are like banks that operate on code (i.e., blockchain) and tokens are critical to how these protocols operate. As simple as that may sound, if you don’t have a finance background or are not interested in the inner workings of financial institutions, it may not be all too intuitive to you how banks work. Add another layer of complexity on top of that thanks to blockchain and tokens. That’s why I think expecting some complexity to start with is helpful.
With these mental models in mind, let’s speak about the general principles to design a successful stablecoin.
Unfortunately, there isn’t a playbook out there (yet) that clearly lays out how to design a successful stablecoin. The stablecoin landscape is still a wild west where a variety of experiments choose unique mechanisms to achieve stability. Reviewing the overall landscape however, we can observe a pattern where each project optimizes for a common set of parameters and trades off for others, and these make up the design principles of stablecoins.
The first, and perhaps the most important, parameter is stability. This refers to the price of a stablecoin having minimal variance around its price target (e.g., $1). Stablecoins that optimize for stability usually choose to have a collateral or reserve mechanism backing the stablecoin. This means that the stablecoin simply represents an IOU for the collateral in the issuer’s reserves. With reserve mechanisms, the supply of stablecoins is dependent on the amount of collateral backing them, given that stablecoins can only be minted once the system receives a collateral deposit. This prevents a stablecoin issuer from increasing the supply of stablecoins at will and causing instability in the price of the stablecoin.
So why do we need these design principles to further complicate our lives? The goal of these design principles is not to benchmark different stablecoin projects against each other but to review the overall landscape and decipher how stablecoins and the mechanisms behind them evolved over time. Ultimately, new projects build on older ones by identifying the gaps in their predecessors’ mechanisms.
Through the lens of these design principles, we can see that three types of stablecoins have emerged so far: (1) fiat-backed stablecoins, (2) crypto-backed stablecoins, and (3) algorithmic stablecoins.
Let’s double click into each stablecoin type using our design principles.
Market Cap:
Stability: Fiat-backed stablecoins achieve stability using reserve mechanisms. A user can only mint (create) a fiat-backed stablecoin if they deposit some fiat currency into the issuer’s reserves. In other words, an issuer is not able to mint new stablecoins without receiving the fiat currency deposit in the first place. By keeping users’ deposits in reserve, the issuer warrants that every user will be able to redeem their original fiat deposit once they return the stablecoin. Since the price of the stablecoin tracks the price of the underlying fiat, stability is dependent on that of the underlying currency. If a relatively stable fiat is used as collateral (e.g., EUR), the stablecoin will remain more stable.
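As a rough sketch of that reserve mechanism, here is a toy issuer ledger in Python. The class and method names (FiatBackedIssuer, mint, redeem) are invented for illustration and do not correspond to any specific issuer’s system.

```python
# A toy reserve ledger: coins are minted only against fiat deposits and burned 1:1 on redemption.
class FiatBackedIssuer:
    def __init__(self):
        self.reserves_usd = 0      # dollars sitting in the issuer's bank account
        self.supply = 0            # stablecoins in circulation

    def mint(self, usd_deposited):
        self.reserves_usd += usd_deposited
        self.supply += usd_deposited          # one coin per dollar deposited
        return usd_deposited                  # coins handed to the depositor

    def redeem(self, coins_returned):
        assert coins_returned <= self.supply
        self.supply -= coins_returned
        self.reserves_usd -= coins_returned   # dollars wired back to the user
        return coins_returned

issuer = FiatBackedIssuer()
issuer.mint(1_000)
issuer.redeem(400)
print(issuer.reserves_usd, issuer.supply)     # 600 600, so the supply stays fully backed
```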
Definition: For decentralization seekers who want to avoid the systemic risk of fiat currencies (e.g., inflation, censorship), crypto-backed stablecoins achieve stability by overcollateralizing their reserves with other cryptocurrencies. “Over” is essential because the issuing protocols want to ensure the volatility of the backing cryptocurrency does not make the system undercollateralized, or not sufficiently backed. These stablecoins do not only function as permissionless, inflation-protection monies though. As crypto-native currencies created on the blockchain, these stablecoins also have a variety of uses in DeFi applications such as leverage trading, which we discuss in Section 4.
Market Cap:
Stability: A user can only mint a crypto-backed stablecoin if they deposit the required amount of collateral to the issuing protocol. This ensures that the protocol does not generate stablecoins out of thin air, causing inflation. The price of a crypto-backed stablecoin is usually pegged to the price of a fiat currency. To maintain a stable peg to their fiat counterparts, these stablecoin protocols deploy a variety of specialized mechanisms to influence users’ demand for, and the supply of, the stablecoins. For example, if a stablecoin’s price is above its target peg, the protocol will implement strategies to increase the supply of that stablecoin.
Capital Efficiency: These stablecoins are more capital intensive than fiat-backed stablecoins due to overcollateralization. To mint new stablecoins, the collateral a user deposits is usually higher in value than the stablecoins they receive in return, whereas with fiat-backed stablecoins the collateral a user deposits and the stablecoins they receive in return are equal in value.
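A quick back-of-the-envelope sketch of that capital-efficiency difference, assuming a 150% minimum collateral ratio (a common illustrative figure, not a parameter of any particular protocol):

```python
# How many stablecoins a deposit can mint under different collateral requirements.
def max_mintable(collateral_value_usd, min_collateral_ratio):
    return collateral_value_usd / min_collateral_ratio

print(max_mintable(150.0, 1.5))   # crypto-backed: $150 of ETH mints at most 100 coins
print(max_mintable(150.0, 1.0))   # fiat-backed:   $150 deposited mints 150 coins
```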
Market Cap:
Stability: Algorithmic stablecoin protocols rely on protocol participants to achieve stability. For example, when demand for an algorithmic stablecoin falls, the price of that stablecoin falls below its peg. This requires existing participants of the protocol to collectively absorb short-term volatility. They do this by holding onto their share tokens (rather than selling off like the rest of the market) and betting on future growth in demand for the stablecoin. This is difficult to achieve with so many trustless parties, and historically many algorithmic stablecoins have lost their pegs during market downturns. This will become clearer as we talk about specific projects in Section 4.
Decentralization: Similar to crypto-backed stablecoins, algorithmic stablecoins achieve decentralization via DAO governance. Monetary policies implemented by these protocols are voted on by share token holders and are transparent to anyone who uses their stablecoins. The lack of collateral backing these stablecoins makes them the most permissionless form of money.
Capital Efficiency: Algorithmic stablecoins do not require any collateral, and hence they are the most capital efficient.
As we can see, there isn’t a clear cut answer on which type of stablecoin is better. “It’s a spectrum, bro!” applies here too and tradeoffs are all over the place. One thing for sure based on data is that fiat-backed stablecoins are the most widely used, accounting for ~85% of the total circulating stablecoin supply. If you ask why that may be, I think it boils down to three simple facts:
Dollar access: People around the world want to transact and save in US dollars. Dollar-denominated fiat-backed stablecoins are simply a faster and easier way to get dollar exposure compared to traditional banking channels.
Familiar mechanism: At the end of the day, people will ask “so, how does this work?”. It’s quite easy to explain the reserve mechanism of fiat-backed stablecoins, and that simply builds trust.
Early entry: Tether has been operational since 2015, and has grown its popularity following the 2018 ICO boom. People are just familiar with USDT and hence other fiat-backed stablecoins.
So that you can answer that question for yourself, in Section 4 let’s take a closer look at specific stablecoin projects to understand how they achieve stability, decentralization, and capital efficiency.
https://multicoin.capital/2018/01/17/an-overview-of-stablecoins/
Kyle Samani, Jan 2018
To combat hyperinflation, Argentinian citizens developed a thriving black market for US dollars. Citizens purchased US dollars legally, but their purchases were capped. People regularly exchanged pesos for dollars from black market vendors, literally stashing their savings in mattresses. When the government currency stopped serving their needs, people widely turned to other currencies. Amidst this crisis, certain features of Bitcoin had serious appeal. It was a currency that wasn’t controlled by any government, could easily be sent or transported internationally without interference, was easier to safeguard than physical dollars, and couldn’t be seized. The only problem was that Bitcoin wasn’t a safe hedge against fiat inflation because Bitcoin itself was too volatile.
The solution is a stablecoin. Stablecoins, in their most ideal form, are simply cryptocurrencies with stable value. They share all the features listed above that make Bitcoin so appealing, but don’t suffer from the same volatility, making them much more usable as a store of value, medium of exchange, and unit of account.
You might wonder, how can one profit from a system whose final product is intrinsically a price-stable asset? All of the trustless stablecoins described below have some sort of associated equity-like token, which yields cash flows from the stable functioning of the system.
At a high level, a trustless, fiat-free stablecoin sounds impossible. How can a free-floating currency remain price stable given the natural ebbs and flows of supply and demand? The concept, on the surface, appears to violate basic economic principles. Despite the perceived challenges, many teams are attempting to create stablecoins.
I contend that there are four features that a cryptocurrency needs in order to become global, fiat-free, digital cash:
Price stability
Scalability
Privacy
Decentralization (i.e. collateral is not held by a single entity, like Tether)
None of the current stablecoin projects have all of these features, but some are aiming to offer all of these. Scalability and privacy are likely further out. But stable, decentralized cryptoassets are possible today.
There are three fundamental approaches to designing stablecoins: centralized IOU issuance, collateral backed, and seigniorage shares. I’ll examine each below.
Stablecoin Model #1: Centralized IOU Issuance
Stablecoin Model #2: Collateral Backed
In short, this approach allows users to create stablecoins by locking up collateral in excess of the amount of stablecoins created. For example, a Maker user could generate $100 worth of Dai stablecoins by locking up $150 worth of Ether. The collateral is held in a smart contract, where it can be accessed by paying back the stablecoin debt, or can be automatically sold by the contract software if the collateral falls below a certain threshold. This allows for collateral-backed stablecoins that don’t require trust in a central party.
The problem, of course, is that the collateral backing the stablecoin is often a volatile cryptoasset such as BTS or ETH. If the value of this asset drops too quickly, the stablecoins issued could become undercollateralized. For this reason, most of the projects using this model require that the stablecoins be overcollateralized enough to protect against sharp price movements. While this can provide some degree of certainty, there always exists the possibility of a black swan event that causes collateral prices to drop so quickly that the stablecoins are undercollateralized. Projects using on-chain collateral have different approaches for handling black swan events.
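To make the liquidation idea concrete, here is a toy check in Python using the $150-of-ETH-for-$100-of-Dai example above and a 150% threshold; real systems use per-asset parameters, price feeds, and auction mechanics that this sketch ignores.

```python
# A toy liquidation check for a collateral-backed stablecoin position.
LIQUIDATION_RATIO = 1.5   # collateral must stay at or above 150% of the debt

def is_liquidatable(eth_amount, eth_price_usd, stablecoin_debt):
    collateral_value = eth_amount * eth_price_usd
    return collateral_value < stablecoin_debt * LIQUIDATION_RATIO

print(is_liquidatable(1.0, 150.0, 100.0))  # False: exactly at 150%, still safe
print(is_liquidatable(1.0, 120.0, 100.0))  # True: ETH dropped, position can be sold off
```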
Stablecoin Model #3: Seigniorage Shares
As the network grows, so too does demand for the stablecoins. Given fixed supply, an increase in demand will cause the price to increase. In the seigniorage shares model, however, increased demand causes the system to issue new stablecoins, thus increasing supply, and ultimately lowering price to the target level. This works conversely, using “bonds” to remove coins from circulation (more details below).
The major challenge of seigniorage shares is figuring out how to increase and decrease the monetary supply in a way that is decentralized, resilient, and un-gameable. Expanding the money supply is easy: print money! Contracting the money supply, on the other hand, is not. Who loses money? Is it forced, or voluntary? If voluntary, what motivation does the person have to part ways with her stablecoin?
When the supply must contract, the system issues bonds with a par value of $1 that are sold at some discount to incentivize holders to remove stablecoins from circulation. Users purchase bonds (which may pay out at some future date) using stablecoins, thus removing some stablecoins from the supply. This creates a mechanism to decrease supply in the event that the price of the stablecoin falls below the target range. At some point in the future, if demand increases such that the system needs to increase the money supply, it first pays out bond holders (in the order that the bonds were purchased). If all of the bond holders have been paid out, then the software pays those who own shares (the equity token of the system). Shares represent a claim on future stablecoin distributions as demand increases. Shares can be thought of much like equity in that both shareholders and equity holders can value their asset as a function of expected dividends of holding the asset. Additionally, in most seigniorage shares implementations, shareholders are offered voting rights.
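Here is a heavily simplified Python sketch of that expand-and-contract loop. The rebalancing rule (mint or burn in proportion to the deviation from the peg) and all of the numbers are invented for illustration; real seigniorage designs differ in how they size and sequence these operations.

```python
# A toy seigniorage-shares rebalance: burn coins into bonds below the peg,
# mint coins above the peg, repaying bondholders before shareholders.
def rebalance(state, market_price, peg=1.0):
    if market_price > peg:
        new_coins = int(state["supply"] * (market_price - peg))   # expansion
        to_bonds = min(new_coins, state["bonds"])                 # bonds are repaid first
        state["bonds"] -= to_bonds
        state["to_shareholders"] += new_coins - to_bonds          # remainder goes to shares
        state["supply"] += new_coins
    elif market_price < peg:
        coins_removed = int(state["supply"] * (peg - market_price))
        state["supply"] -= coins_removed                          # coins traded in for bonds
        state["bonds"] += coins_removed                           # $1-par bonds, sold at a discount
    return state

state = {"supply": 1_000_000, "bonds": 0, "to_shareholders": 0}
state = rebalance(state, market_price=0.95)   # below peg: contract supply, issue bonds
state = rebalance(state, market_price=1.10)   # above peg: expand, repay bonds first
print(state)
```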
The seigniorage shares model is the most exciting, most experimental, and most “crypto-native” approach to creating a trustless decentralized stablecoin. There are many economists who believe it cannot work. Indeed, it’s fundamentally predicated on perpetual growth of the stablecoin system.
Oracles
All stablecoins must address the oracle problem. If stablecoins are pegged to the value of some external asset like the US dollar, the system needs some way to get data about the exchange rate between the stablecoin and the asset that it is pegged to. There are three fundamental approaches to this problem.
Use a trusted data source (aka a trusted oracle).
This re-centralizes trust in the system on the oracle.
Data sources can be manipulated.
Use a set of delegated data feeds and take the median.
This is the approach used by BitShares. Users use stake-weighted voting to elect delegates to provide price feeds.
The median of the price feed is used, meaning a majority of the delegates would have to collude to manipulate the price feed.
The software can set limits on how much the price feed can move in certain time frames.
Delegates can be voted out for providing faulty data.
Users who stake tokens are able to provide price inputs. Votes are weighted by the amount of tokens staked.
The software sorts the values input by users. Users who provided an answer between the 25th and 75th percentile are rewarded, while users who submitted answers below the 25th percentile and above the 75th percentile are slashed (and their tokens redistributed to those who answered correctly).
This approach uses game theory to make the optimal input the one that most accurately reflects reality.
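A toy Python version of that percentile scheme, ignoring stake weighting and using crude quartile cut-offs, just to show the incentive shape; the thresholds and settlement logic of real oracle systems are more involved.

```python
# Reward reporters inside the middle band, slash the outliers, use the median as the price.
def settle(reports):
    """reports: {user: reported_price} -> (accepted_price, rewarded, slashed)"""
    ordered = sorted(reports.items(), key=lambda kv: kv[1])
    n = len(ordered)
    lo, hi = n // 4, (3 * n) // 4            # crude 25th / 75th percentile indices
    inliers = ordered[lo:hi + 1] if n > 1 else ordered
    rewarded = [user for user, _ in inliers]
    slashed = [user for user, _ in ordered if user not in rewarded]
    accepted = ordered[n // 2][1]            # median price the system actually uses
    return accepted, rewarded, slashed

price, rewarded, slashed = settle({"a": 0.99, "b": 1.00, "c": 1.01, "d": 1.02, "e": 5.00})
print(price, rewarded, slashed)              # 1.01 ['b', 'c', 'd'] ['a', 'e']
```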
Challenges
One of the other challenges facing stablecoins is that they all are designed to be “pegged” to some underlying asset, usually USD. The problem is that people generally assume this to mean that the stablecoin is perfectly fungible for USD, when in fact it really means that the stablecoins are designed such that their value generally converges around the price of USD. Even stablecoins that are fully backed by and redeemable for collateral may not always trade at the peg, depending on market dynamics (accounting for counterparty risk). In order for stablecoins to succeed, users must view stablecoins not as fungible to the pegged asset, but as their own free-floating assets that very closely track the value of USD through a combination of redeemable collateral, market incentives, and future expectations. It is entirely possible that stablecoins could provide the desired stability without maintaining a perfect peg. In fact, once economies develop around the stablecoin itself, the peg will begin to matter less and less. If merchants are willing to hold and accept USD-pegged stablecoins, and they in turn pay their suppliers in the same stablecoin, and that stablecoin is widely used as a medium of exchange, then maintaining a perfect peg becomes increasingly less important.
Getting to that future state, however, requires a long process of bootstrapping such a network into existence and getting people to collectively believe that such a stablecoin is sound money. This process will be arduous, and will likely be even more difficult for seigniorage shares-based stablecoins that are not actually “backed” by anything.
Conclusion
While decentralized stablecoins are highly experimental, a successful implementation could be a major catalyst for fundamental long-term changes in the global economy. Lack of price stability prevents cryptocurrencies from displacing most forms of fiat money, and stablecoins can provide the solution. The decoupling of governments and money could provide an end to hyperinflationary policies, economic controls, and other damaging policies that result from government mismanagement of national economies.
Furthermore, stablecoins open up all sorts of possibilities for decentralized applications, especially those that require long-term lockups or escrow mechanisms. Decentralized insurance, prediction markets, savings accounts, decentralized exchange trading pairs, credit and debt markets, remittances, and more are all much more viable with the inclusion of a stablecoin.
While there are different approaches to creating a decentralized stablecoin, ultimately the market will decide which one will emerge the winner.
https://medium.com/coinmonks/stablecoin-primer-intro-54689d6fcdba
Osman Sarman, Mar 2022
This article is part of the Stablecoin Primer series.
This year, when I went back home to Istanbul for the winter break, I was greeted by boxes of new stuff at my family house. I thought to myself, my years of begging for a New Year present is finally over. (Christmas is not a thing in Istanbul). A few minutes later, I found my dad hastily unpacking the same boxes, which contained a bunch of fitness and cooking equipment — and definitely not the headphones I was wishing for. So I immediately proceeded to ask him what he was up to. He looked at me with a half-smiling face and said: “You must have missed that Lira is 16, it’s not worth keeping my money in cash.”
Having freshly arrived from the US, what he said didn’t immediately make sense. After all, I was expecting him to make a comment about working out or cooking. It took me a full day of acclimatization and catching up with other family members and friends, and local business owners to grasp what he really meant. So let me try to clarify what my dad intended to say:
“Over the past month, Turkish Lira lost close to 42% of its value against the US Dollar, making the USDTRY exchange rate ~16. If I were to keep my hard earned money in my bank account in Liras, it would be like setting it on fire, throwing it into the ocean, or giving it out for free on the street. Instead, I have decided to buy a rowing machine and a slow cooker because I believe that these will have more use to me in the short term and more value to me in the future when I resell them, compared to the Lira.”
My dad is a reasonable person. He is not a finance guru or an econ wizard per se, but as a Board Certified clinical pathologist operating his own practice, he makes decisions that extend lives on a daily basis and puts his skin in the game (economy) as a business owner. So his reaction to my question made me think, a lot.
What does a slow cooker have to do with stablecoins? And pathology? Why is all of this relevant? Because this anecdote made me think about how abstract, macro-level monetary policies impact our lives so tangibly at the micro level. More specifically, while giving birth to this Primer, this anecdote made me question a few things:
What do we instinctively use money for?
Why haven’t we already replaced fiat money with bitcoin, the promised peer-to-peer electronic cash?
If fiat and bitcoin do not meet all our money needs, could stablecoins be the last piece in the money puzzle?
We then talk about the overall stablecoin landscape in Section 2. Here, we provide real life examples from around the world that highlight stablecoins’ strong product-market fit. Exploring further, we discuss the various ways people are using stablecoins. Building on the stablecoin landscape discussion, in Section 3 we highlight the three broad types of stablecoins that exist and the different parameters each stablecoin type is optimized for.
In Section 4, we deep dive into a select set of stablecoin projects to shed light on how different stablecoins achieve stability and the trade-offs they make. Finally, we end the Primer in Section 5 by discussing the circumstances that need to be true for stablecoins to reach mass adoption. Per formality, here we also discuss risks associated with stablecoins.
https://medium.com/neworderdao/an-overview-on-zk-rollups-and-zkevm-33ee5ffb2f6a
Zero knowledge (ZK) tech has positioned itself as the frontrunner for scaling blockchains over the past couple of years. As scaling solutions mature, we’ve seen many ZK adaptations reveal themselves, but how these solutions work can be confusing to many. In this post we hope to demystify ZK tech so that readers can better understand the future of our industry.
A rollup is an L2 scaling solution that operates off-chain from the main L1 (in most cases Ethereum, but soon also on Celestia). This solution performs transactions off-chain, which means it doesn’t have to compete for precious blockspace on the L1. After executing the transactions, it will send a batch of transaction data (Optimistic) or proof of execution (ZK) to the L1 where it will be settled. Because of this, the layer 2 scaling solutions are secured by the same layer 1 security measures, since the data availability (DA) layer or settlement layer acts as the source of truth for the rollup.
ZK Rollups in particular are a solution that uses validity proofs to scale computation: each batch of transactions comes with a cryptographic proof (SNARKs, Kimchi, STARKs, etc) that is verified by an Ethereum smart contract — often referred to as an L2 bridge contract. This way, every single transaction is fully verified by all Ethereum full nodes before a block is finalized.
zkEVM is a virtual machine that executes EVM smart contracts in a way that is compatible with ZK-proof computation. There are various types of zk(E)VMs, so we must distinguish between these types of solutions to a “zkEVM”:
EVM-compatibility = Solidity/Vyper-level compatibility
EVM-equivalence = EVM bytecode-level compatibility + Ethereum execution client (Geth etc)
Full-scale zkEVM = EVM specification-level compatibility
What they all have in common is that they add in-circuit support of EVM in their own way with various tradeoffs.
The way these solutions work is relatively simple:
Note: Smart contract code written in high-level languages (such as Solidity/Vyper) needs to be compiled into EVM bytecode to get deployed to the Ethereum blockchain.
For EVM-compatibility, you transpile Solidity/Vyper code into the VM’s bytecode and then prove the validity of the execution trace in-circuit.
For EVM-equivalence, you transpile or interpret the EVM bytecode into your VM’s bytecode and then prove the validity of the execution trace in-circuit.
For a full-scale zkEVM, you prove the validity of the EVM execution trace in-circuit.
With most existing zkEVMs, such as Hermez and ZKSync, they run different bytecode through an interpreter/compiler that mirrors all the functions of EVM bytecode but is not complete EVM equivalent bytecode. Thus, EVM-compatible ZK rollups compile Solidity/Vyper into a bytecode targeting a custom VM rather than EVM.
So why does this matter? It matters for EVM tooling compatibility and as a security trade-off, since adding extra layers of complexity introduces new potential vulnerabilities.
It has two main focuses:
Design a circuit to link the bytecode with the real execution trace.
Design circuits for each opcode (prove and write that computations in each opcode are correct).
For Scroll this is the most important part — to prove that each opcode in the execution trace is correct and consistent. This means that Scroll’s focus is on building an opcode-for-opcode zkEVM. This route has the benefit of making it extremely compatible with Ethereum and thus extremely secure, but it also puts a damper on performance — especially with regard to the proving time of their ZK proofs.
Scroll’s zkEVM will be able to prove the correctness of an execution trace extracted from Geth (Ethereum execution client in Go — widely used). Their approach is thus closely aligned with Ethereum. Because of this Ethereum alignment, it enables them to minimize additional attack surfaces (by reusing the code and sticking as closely as possible to the specification of Ethereum) while maximizing compatibility with existing tooling.
Polygon’s design is unique since they use an interpreter that translates EVM opcodes into their own assembly language called zkASM, which is then ZK verified. This means that it is EVM-compatible, however it uses internal logic to verify EVM code.
Hermez’s approach introduces a few extra steps when executing EVM bytecode, which does add quite a surface where there could be vulnerabilities and results in it being less compatible with existing tooling than other solutions. Though, this approach is a lot more prover friendly and speeds up the proving of ZK proofs significantly.
This means that they minimize prover overhead while remaining somewhat compatible with Ethereum by making a few sacrifices. Thus, Hermez will be compatible with most EVM applications and only require some rewriting for a few outliers that use features such as precompiles, which Hermez doesn’t support.
ZKSync’s and Polygon Hermez’s approach to building their zkVM is quite similar. They both aim for EVM-Compatibility, which is Solidity/Vyper compatibility.
Instead of Scroll’s approach where you take your Solidity smart contracts and compile them into EVM Bytecode, you instead take your Solidity code and compile it into zkSync Bytecode. The bytecode of this virtual machine is a different bytecode. For a developer this doesn’t make much of a difference, however, it does add some hurdles and possible vulnerabilities in the system, same as with Hermez. Regardless, you still just take your code, compile it, and get some bytecode which is then deployable. This bytecode is optimized to be run in ZK systems. This means that ZKSync supports solidity at the source code level.
This has the advantage of causing very fast ZKP proving times as the specific language/VM in question is very ZK friendly and therefore you can minimize overhead and increase performance.
ZKSync is also working on a validium solution for its data availability, called ZKPorter. It holds calldata — the essential transaction data needed to reconstruct state — off-chain rather than on Ethereum. Instead, the data is kept available by a proof-of-stake mechanism in which stakers, who are zkSync token holders, sign off on its availability.
They are the closest to mainnet launch, which will happen in the next few months.
It’s clear that regardless of which zkEVM you find most interesting, they all make various trade-offs for gains whether that’s in regards to compatibility or performance. Some of them value having more EVM compatibility than performance, while others prefer performance over compatibility.
At the end of the day, it’s up to the developers of applications to decide on what is right for their specific application. As such, there’s a place for several different types of solutions to the same problem, which is making zkEVMs that are fit for a specific product type. This means that in all likelihood you’ll see several ZK Rollups succeed in their specific market fit, just like how we’ve seen specific L1s find their own market fit.
https://medium.com/coinmonks/stablecoin-primer-section-2-stablecoin-landscape-132b27f7f2d3
Osman Sarman, Mar 2022
Indeed they are! And there are actually many signs of mainstream adoption. We are 7 years into the stablecoin journey and adoption has never been stronger. In reality, the lifespan of the stablecoin subcategory is in and of itself proof of stablecoins’ anti-fragility. However, in hopes of strengthening your conviction in the product-market fit of stablecoins, I want to rely on real data and market updates about where we stand at the moment — both globally and locally. Reviewing these should help us understand why people increasingly prefer stablecoins for their daily needs. And this, in turn, should highlight the potential ahead of stablecoins.
So now that we established that the stablecoin market has grown significantly over the last two years and developed an understanding of where the market is, let’s look at where the market could be.
In this section, I aimed to select a set of countries where I have seen the strongest product-market fit indicators for stablecoins. Product-market fit usually tells whether a new product (physical — e.g., Away luggage; virtual — e.g., Telegram app) has successfully found a large user base, and is measured quantitatively using popular metrics. For stablecoins, we already established that the potential market size is huge, so the spotlight will be on stablecoins, which are the products. Instead of quantitative metrics, however, the focus will be on qualitative data that indicate stablecoins’ success in meeting their markets’ needs, and thus reaching product-market fit.
United States
In the US, the strongest product-market fit indicator for stablecoins is the level of regulatory activity targeting stablecoins since the beginning of 2021. As I have mentioned above, the strong demand for the US dollar is a great thing — the more demand for it, the more valuable it is. So why regulate stablecoins although they are US dollar denominated, and thus increase the demand for the US dollar globally?
Because when this demand is in the form of stablecoins, the US government becomes alarmed. A flourishing stablecoin market is alarming in two ways. First, it means that private stablecoin issuers, especially the US dollar-backed ones like Circle and Tether, are benefiting from the strength of the US dollar while not paying their dues like banks do. Second, increased adoption of US dollar-backed stablecoins results in US dollars exiting the real economy for the crypto economy.
The subtle message here is that, while still early to tell, stablecoins do have the potential to pose a real threat to the US dollar given how widely they are already used. And this substantiates the amount of regulatory activity targeting stablecoins in the US. With that said, regulation is an extensive subject which I won’t be discussing in detail in this Primer — my goal was to simply use regulation as a product-market fit indicator for stablecoins in the US.
Let’s now shift our focus to indicators from the emerging market, where the value proposition of stablecoins as an inflation-hedge tool is even more apparent.
Turkey
What this means is that, when Turkish people wanted to protect their hard earned savings from Lira’s inflation, stablecoins were on their path of least resistance. From the local hairdresser to the most seasoned business owner, everybody in Turkey is interested in cryptocurrencies and especially in stablecoins. People’s trust in Turkish Lira is simply damaged. The question now is, will Turkish people ever forget the ease of accessing US dollars via stablecoins or is this going to be a trend going forward?
Argentina
Ukraine and Russia
Asia
PayPal
If it seems like we’ve focused on abstract country-level examples a little too much, let’s take a look at a company-level example that may indicate stablecoins’ product-market fit.
Currency to move value between exchanges — Not every crypto exchange offers all tokens. Users can convert their fiat money to stablecoins via fiat-to-crypto exchanges (e.g., Coinbase) and send these stablecoins to their wallet at a crypto-to-crypto exchange (e.g., Binance), which may offer their desired token.
Lend and borrow in DeFi dApps — DeFi protocols such as Maker Protocol allow participants to take loans by depositing their stablecoins. Before stablecoins existed, such protocols relied on overcollateralizing loans using volatile tokens like ether, resulting in capital-inefficient and risky systems (a rough numeric sketch of this follows the list below). Thanks to stablecoin collateral, DeFi protocols can now offer fixed returns, which may attract a completely new user base to the ecosystem (more on this in Section 4). Similarly, stablecoins can be lent in DeFi protocols such as Compound in exchange for stable returns (i.e., APYs) often beating rates provided by traditional savings accounts.
Salary payments in Web3 companies — Some DAOs may pay their full time contributors’ salaries in stablecoins, especially in the early days when their own DAO token does not yet have value.
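To make the collateralization point concrete, here is a small numeric sketch. The ratios are illustrative placeholders, not any specific protocol’s actual parameters:

```python
# Illustrative only: why volatile collateral forces overcollateralization.
# The loan must stay covered even if the collateral's price drops sharply.

def max_borrow(collateral_value_usd: float, collateral_ratio: float) -> float:
    """Maximum stable value borrowable against collateral at a given ratio."""
    return collateral_value_usd / collateral_ratio

# Volatile collateral (e.g. ETH) typically requires 150%+ collateralization.
print(max_borrow(1_500, 1.50))   # deposit $1,500 of ETH -> borrow up to $1,000

# Stable collateral can be used far more capital-efficiently.
print(max_borrow(1_050, 1.05))   # deposit $1,050 of stablecoins -> borrow up to $1,000
```

The same $1,000 loan ties up far less capital when the collateral itself is stable, which is the capital-efficiency gain referred to above.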
Clearly, not all the use cases are equally common, and this isn’t an exhaustive list. If I were to guess, I would say safe haven from crypto’s volatility, lending and borrowing in DeFi protocols, and safe haven from fiat money’s inflation are probably the top three use cases in terms of unique individuals conducting them (I will look into a way to visualize use case popularity better). The overarching point of this section is that more stablecoin adoption leads to a wider variety of use cases, and this makes stablecoins an even stronger money contender.
With these various use cases and product-market fit indicators in mind, in Section 3 let’s continue our exploration by taking a deeper look at the different types of stablecoins and discuss design principles behind each type. This way we can find out how different stablecoin types can meet people’s money needs in different ways.
https://medium.com/coinmonks/stablecoin-primer-section-1-path-to-stablecoins-8bcdb39c73e1
Osman Sarman, Mar 2022
Let’s start by looking at why we need money and how fiat money (e.g., US Dollar or Turkish Lira) lacks some of the features we desire from money. When I ask myself what I would like to accomplish with money, say $1k, my initial answer is: “To buy the things I want.” As decisive as that may sound, it doesn’t give the full picture. For example, why do I sometimes keep some cash in the drawer of my bedside table? So to really understand why I need money, I find myself going one level deeper and observing what I really do with money. Restating the question accordingly:
What do I use money for?
To transact: Consider again the scenario where I’m an apple producer looking for pears, but this time I already know that the pear producer does not want my produce. So I go to a tomato producer who wants my apples and exchange my apples for tomatoes, which I intend to exchange for pears. Unfortunately, once again the pear producer rejects my offer because she doesn’t want my tomatoes. With money however, I could go to the tomato producer, exchange my apples for money, and buy pears with that money. A more efficient process thanks to money acting as a medium of exchange for apples, pears, tomatoes, and many other goods and services.
To save: Sometimes our productivity exceeds our current needs and we want to save the “fruits” of our work for later enjoyment. For example, as an apple producer, I may have produced more apples than my needs. So instead of wasting my excess apples by having them rot, I need to exchange them for something that will preserve its value the next week and/or month. Without money, I end up having to go through the burdensome process of knocking on the door of many producers, hoping that one with durable goods would buy my excess apples. With money however, I wouldn’t have to go through the tedious task of finding producers of durable goods specifically looking for apples. I could simply sell my apples to anyone that wants apples and keep the money from this sale in my safe as a store of value.
So elementary, right!? I think that because money is such an ingrained part of our lives, we don’t often explicitly think about the various ways we use it. These three functions of money make up a good framework that allows us to elaborate on what we really need and use money for.
So why is it that we can use pieces of paper as money but not apples or pebbles? As simple as the above three functions may sound, for a material to be used as money, it has to have many characteristics at once. There are many pieces written on the topic of the characteristics of hard money, but for the purpose of this Primer I will try to summarize them in one sentence. Essentially, for a material to function as money, it needs to be widely accepted and portable across the globe, durable and scarce enough to preserve its value over time, and uniform and granular so that it can be used as a base unit. Luckily, fiat money has all these characteristics. But does it really?
During its early years, fiat money was scarce because it was tied to the Gold Standard, which required money to be fully backed by gold. In simple terms, this meant that there was only as much money as gold available. Given gold is a naturally scarce commodity that is technologically impossible to replicate and a lot of it already exists, fiat money was bound by gold’s scarcity or high stock-to-flow ratio.
However, with the detachment from the Gold Standard, fiat money’s scarcity was no longer bound by gold’s availability but was controlled by governments’ (central government and central bank combined) monetary policies. The issue here is that long-term thinking is rare, and each political administration wants to be remembered well for how much money it put in its citizens’ pockets. This brings us to inflationism.
So for the next century, the world’s major powers departed from the solidity of the Gold Standard to use inflationism as a solution to economic problems. And when the reserve currencies detached from the Gold Standard, non-reserve currencies did nothing but follow suit. What this meant was that, administration after administration, fiat money’s scarcity, and thus its stability, was put in peril.
CPI inflation refers to the prices of the majority of goods and services in an economy going up. The key here is that it is the purchasing power of money that decreases, not the goods themselves that become more valuable. Think about it: from one year to the next, the same number of apples grow from the same number of trees. If there is more money in the economy, money’s value relative to apples falls, because more money is chasing the same number of apples.
This begs the question: can fiat money really function as a store of value?
One of the most important features of bitcoin is its fixed supply schedule. This feature arguably drove its adoption to expand from the cypherpunk community to millions as a peer-to-peer electronic cash. Let’s quickly review bitcoin’s “deflationary” supply schedule:
Bitcoin has a finite supply of 21 million units (vs. fiat money’s infinite supply)
Bitcoin supply grows each time a miner (think a gold miner) discovers a new block on the blockchain
Bitcoin’s issuance schedule is fixed and its issuance rate decreases over time. Mining difficulty is adjusted roughly every two weeks to keep the block discovery rate steady, and the number of bitcoins generated per block decreases geometrically: every 210,000 blocks (~4 years), the number of bitcoins created per block decreases by 50%
Bitcoin’s supply algorithm is cemented in the protocol and agreed upon by all node operators. Changing bitcoin’s supply schedule would require the decentralized set of node operators to adopt new rules — which is strongly disincentivized, since it would be like miners shooting themselves in the foot
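A minimal sketch of this geometric issuance schedule (not consensus code — real issuance is counted in integer satoshis and rounds down each era):

```python
# Toy model of bitcoin's halving schedule converging on ~21 million coins.
initial_reward = 50.0           # BTC per block in the first era
blocks_per_halving = 210_000    # ~4 years of blocks per era

total_supply = 0.0
reward = initial_reward
for era in range(33):           # after ~33 halvings the reward rounds to zero
    total_supply += reward * blocks_per_halving
    reward /= 2

print(f"Approximate terminal supply: {total_supply:,.0f} BTC")  # ~21,000,000
```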
What this translates to in plain language is that bitcoin’s scarcity is established, agreed upon, and cannot be changed by any person (e.g., president of a country) or persons (e.g., set of central banks). So in the long-run, as bitcoin’s adoption continues to grow from millions of people to billions of people, the 0.00052 BTC ($20 worth as of March 22) you forget in your crypto wallet now will most definitely buy you a few more apples in five years.
“In its (BTC) present state, it may not be convenient for transactions, not good enough to buy your decaffeinated espresso macchiato at your local virtue-signaling coffee chain. It may be too volatile to be a currency for now. But it is the first organic currency.”
Above I mentioned “Like the whole crypto market” when I was ranting about bitcoin’s volatility. Volatility, however, does not apply equally to all types of cryptocurrencies — the exception is stablecoins.
Before defining what stablecoins are, let’s consider this. If we want to use any cryptocurrency as a transactional base money, we want it to preserve its value over a long enough time horizon and be stable in the short to medium term. For example, if I receive my salary in bitcoin, one day my salary may buy me 20 apples and the next day only 14. Since this would be awfully confusing, I would naturally seek an alternative currency that’s more stable and thus less confusing.
This is the logical path that leads us to stablecoins, which are crypto native non-volatile assets that enable global transfer of value.
You may ask, “As an end-user, why do I have to be concerned about the pegging mechanism of a stablecoin as long as it guarantees me stability and allows me to transact conveniently?” The answer is, we are still in the early days of these crypto experiments, and given these experiments tend to require participation and commitment from users for bootstrapping purposes, we need to have informed theses around what needs to be true for these experiments to succeed.
Ultimately, the pegging mechanism of choice is what leads to the existence of different types of stablecoins. And with no pegging mechanism proven to have a clear product-market fit, different types of stablecoins optimize for different parameters including stability, decentralization, and capital efficiency. As a result of fine-tuning for these parameters, three broad types of stablecoins emerge: fiat-backed, crypto-backed, and algorithmic.
Before we dive further into the design principles and types of stablecoins in the upcoming sections of the Primer, let’s pause and quickly summarize our progress up to this point. So far, we have established that fiat money’s inflation and bitcoin’s volatility are leading the way for stablecoin innovation. For any stablecoin to successfully reach mainstream adoption (i.e., be used as a replacement for fiat money), regardless of its mechanism, it needs to achieve stability by developing trust and existing for a long enough time. As we make the case for stablecoins’ mass adoption, in Section 2 let’s look at where the market is and whether there are any early signs of mass adoption.
https://members.delphidigital.io/reports/the-complete-guide-to-rollups
Jon Charbonneau, Aug 2022
Introduction
First, what are “modular” blockchains? It’s mostly a meme at this point with plenty of disagreement, but I’ll define how I use the term for simplicity. Then you can fight over why I’m wrong in my Twitter comments.
Modular stacks strip apart the following tasks into separate technical components:
Data Availability (DA) – Ensuring the transaction data behind rollup block headers has been published and made available so that anyone can recreate the state.
Consensus – At minimum agreement over the transactions and their ordering.
Settlement – This varies based on the implementation, but tasks can include verifying/arbitrating proofs and coordinating cross-chain asset transfers/arbitrary messaging.
Execution – Computation: take the pre-state, run the transactions, and transition to the post-state.
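Of these, execution is the easiest to picture in code. A minimal sketch of a deterministic state transition function over a toy account model (names and rules are illustrative only):

```python
# Toy state transition: pre-state + ordered transactions -> post-state.
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    recipient: str
    amount: int

def state_transition(pre_state: dict[str, int], txs: list[Tx]) -> dict[str, int]:
    state = dict(pre_state)
    for tx in txs:
        if state.get(tx.sender, 0) >= tx.amount:   # simple validity check
            state[tx.sender] -= tx.amount
            state[tx.recipient] = state.get(tx.recipient, 0) + tx.amount
        # invalid transactions are simply skipped in this toy model
    return state

print(state_transition({"alice": 10}, [Tx("alice", "bob", 4)]))  # {'alice': 6, 'bob': 4}
```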
Ethereum can handle each of these. It offers a unified DA, consensus, and settlement layer with general execution. When you transact on L1, Ethereum acts as any monolithic chain does. Alternatively, rollups can handle execution with Ethereum providing DA, consensus, and settlement.
Celestia only provides DA and consensus. No Uniswap living on L1 Celestia, and no verification/arbitration of proofs by the L1. Celestia has no enshrined settlement layer or smart contract execution. The L1’s functionality is limited to Celestia token transfers and validator set management.
However, these are not rollups, and they bring meaningful additional security assumptions. This report will focus on actual “rollup” stacks. I’ll break down the economics first, then dive into each stack in depth.
I use a few abbreviations throughout to save your eyes:
SCR – Smart contract rollup
ER – Enshrined rollup
SR – Sovereign rollup
Part I – Modular Economics
Rollup Fees
Let’s analyze a rollup transaction from first principles. This simplified graphic depicts the parts which apply to both Ethereum optimistic rollups (ORUs) and zk-rollups (ZKRs):
The sequencer receives and orders transactions. Users quickly get a soft confirmation that their transaction will eventually be recorded on L1 (if they trust the sequencer feed). The sequencer is only relied upon for ordering and timely inclusion — it’s unable to submit invalid transactions.
Deterministic state transition function takes each transaction and updates the L2 state, creating an L2 block. These blocks can be produced more quickly than L1 blocks.
Every so often a batch of transactions is compressed and sent to the L1. Currently stored as calldata, but eventually rollups will use data blobs.
Users are charged L2 gas when the state transition is applied, executing their transactions. L1 gas is paid later when the batch is posted. L1 and L2 gas prices vary based on their respective congestion. So the user is incurring costs in two types of gas, and there’s a timing mismatch. Sequencers commit to a transaction and collect L2 fees before they know the full contents of the batch, how well it’ll compress, or what the L1 base fee will be when posted.
L2s do their best to guess what their L1 cost will be and charge users accordingly. When things are quiet, only a small margin is charged over the L1 cost on average. When rollups become constrained by their own execution environment (and not L1 costs), that margin goes up. Fee market surge pricing kicks in to meter demand. Higher margins correspond to bursts of high local demand.
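To make the two gas components and the margin concrete, here’s a rough sketch. The numbers and function are illustrative, not any rollup’s actual pricing logic:

```python
# Rough model of an L2 fee quote: L2 execution cost plus an estimate of the
# user's share of the L1 data cost, padded by a margin for uncertainty
# (unknown future L1 base fee, unknown batch compression) and local demand.

def estimate_l2_fee(l2_gas_used: int,
                    l2_gas_price_gwei: float,
                    compressed_tx_bytes: int,
                    l1_gas_per_byte: int,
                    l1_base_fee_gwei: float,
                    margin: float = 1.1) -> float:
    """Estimated fee for one rollup transaction, in gwei."""
    execution_cost = l2_gas_used * l2_gas_price_gwei
    da_cost = compressed_tx_bytes * l1_gas_per_byte * l1_base_fee_gwei
    return (execution_cost + da_cost) * margin

# Quiet times: small margin over cost. Congested L2: margin (and L2 gas price) rise.
print(estimate_l2_fee(100_000, 0.001, 150, 16, 20))
```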
We clearly see this in Arbitrum’s latest spike, though their margin has been quite stable otherwise. The recent bottleneck was L2 execution (not L1 costs), and so their native fee market kicked in.
We see similar trends with Optimism, though with higher variability.
Lastly – the figures above do not include any MEV (outside of regular transaction fees). In reality, rollup tokens are able to accrue significant value from MEV (more on this shortly).
Fixed vs. Variable Costs
ORUs post compressed full L2 transaction data (with signatures), L2 state roots, and fraud proofs (only in the event of dispute) back to L1. The full transaction data is stored in Ethereum’s history, while the hash of the transaction data is added to its state. If a fraud proof is later submitted, the contract can check an inclusion proof for the disputed data against that previously stored hash.
ZKRs don’t need to post the full transaction data to L1. It suffices to post the state differences (they choose this because it’s cheaper). Imagine Alice and Bob trade 1 ETH back and forth within the batch – ZKRs only need to post the state change at the end (who has that 1 ETH, and who doesn’t) whereas ORUs would need to post each trade. State diffs are enough to reconstruct the state. ORUs must include all transactions in the event they’re needed for fraud proofs. ZKR provers must also include a validity proof with every batch, proving that the associated state root is valid.
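A small sketch of why state diffs compress better than full transaction data — many offsetting transfers within a batch collapse to a single net balance change (toy account model, illustrative names):

```python
from collections import defaultdict

def state_diff(transfers: list[tuple[str, str, int]]) -> dict[str, int]:
    """Collapse a batch of transfers into net balance changes per account."""
    delta: dict[str, int] = defaultdict(int)
    for sender, recipient, amount in transfers:
        delta[sender] -= amount
        delta[recipient] += amount
    return {acct: d for acct, d in delta.items() if d != 0}  # only net changes

txs = [("alice", "bob", 1), ("bob", "alice", 1), ("alice", "bob", 1)]
print(len(txs), "transfers ->", state_diff(txs))  # 3 transfers -> {'alice': -1, 'bob': 1}
```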
SCRs have fixed costs they must pay to Ethereum regardless of transaction activity:
State commitments
Validity proofs (only for ZKRs)
And they also have variable costs which scale with the transaction activity:
Transaction data (plus signatures for ORUs)
Note the timing/cost tradeoff here. Settling frequently means the L1 gives true finality sooner. However, waiting longer to settle amortizes those fixed costs over more transactions = cheaper transactions for rollup users. Rollups balance settling often enough for safety vs. giving rollup users cheaper fees with lower assurances (pre-confirmations).
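A toy amortization calculation (made-up gas numbers) shows why larger, less frequent batches mean cheaper transactions for rollup users:

```python
FIXED_GAS = 500_000   # e.g. state commitment + validity proof verification per batch
GAS_PER_TX = 300      # compressed calldata gas per transaction (illustrative)

for batch_size in (10, 100, 1_000, 10_000):
    per_tx = FIXED_GAS / batch_size + GAS_PER_TX
    print(f"batch of {batch_size:>6}: ~{per_tx:,.0f} gas per tx")
```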
Let’s look at Optimism as an example. For background, Optimism has two smart contracts that sequencers and proposers post to: the Canonical Transaction Chain (CTC), which receives transaction batches, and the State Commitment Chain (SCC), which receives state roots.
Optimism’s cost breakdown looks like this:
Posting transaction batches to the Canonical Transaction Chain (CTC) incurs two costs:
Variable cost – Calldata gas used by the L1 CTC (transaction batch) submitter which scales roughly linearly with the size of the transaction batch
Overhead cost – Posting to the CTC also incurs small non-calldata costs
The gas used by posting state roots to the State Commitment Chain (SCC) is pure overhead cost (though note that part of this cost uses L1 calldata as well). Splitting the costs up into variable vs. overhead shows how the total breaks down.
Base Layer Fees – Ethereum
Ethereum fees are derived from:
L1 Execution & Settlement – Go to L1 Uniswap, and swap some USDC for ETH.
Settlement from Rollups – Rollups post proofs to the L1. Ethereum also handles trust-minimized bridging between rollups and the L1.
DA from Rollups – Rollups post data to the L1 using calldata.
Fees for isolated monolithic chains are capped at:
Fees = Throughput x $ users will pay for individual transactions
Modular DA and settlement layers (such as Ethereum) have a higher ceiling as they’re capped at:
Fees = Throughput x $ users will pay for aggregated transactions
A rollup can pay Ethereum a larger single fee to settle many transactions (e.g., a proof securing many blocks, settling many aggregated DeFi pooling transactions, etc.) compared to what a single user can pay for an L1 transaction taking up the same block space. Introducing new fee payers to Ethereum (rollups) with a higher marginal utility per transaction adds value in the long-run assuming sufficient demand.
However, Ethereum is a long way from being primarily a settlement or DA layer for rollups. The vast majority of fees paid to the L1 today come from native L1 execution.
So while modular base layers are capable of extracting meaningful revenue should the demand arise from rollups, we need vastly more demand before they do so. Importantly, Ethereum’s rollup fee capture comes in addition to its native L1 execution revenue; it is not relying on rollup fees alone.
“Asset… needs to derive its security from some… means of generating yield. And so I see for now only two of these, which are the settlement layer and the data availability layer. I think short-term, we will probably see that the settlement layer still generates much more value.
We’ve seen it already that we have had significant fees on Ethereum in the past, and also some on other settlement layers. But long-term, I believe that the most valuable asset will just be in the data availability, that the block space will become the most valuable asset in the decentralized economy.”
What rollups should be willing to pay for premium DA is an interesting argument. However, the reality is that DA will soon be massively oversupplied between Ethereum scaling and alternative DA solutions. For some context on scale:
Ethereum blocks currently average ~90 KB, of which calldata is only ~10 KB. The DA supply shock is looming, and rollups will continue to significantly improve data compression. When data blobs get their own fee market and supply exceeds demand, DA fees hit the floor.
Rollup users will pay higher fees, but the primary bottleneck will likely be the rollups’ own native execution environments (based on current order of magnitude activity and DA bandwidth in EIP-4844). DA will no longer be your primary cost when you go to swap on your favorite rollup. Super cheap fees will drive incremental activity, more rollups pop up, etc. Only when DA is saturated up to the target will the EIP-1559 mechanism kick in, pulling fees off the lower bound. However, rollups have many upcoming optimizations which will increase their currently constrained native execution. If this is used up, that could eventually shift the goalposts closer toward DA becoming a larger cost again.
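For intuition, here’s a minimal sketch of the EIP-1559-style base-fee update such a fee market follows (simplified — the blob fee market uses an exponential variant — and the gas target below is illustrative):

```python
MAX_CHANGE = 1 / 8   # at most a 12.5% adjustment per block

def next_base_fee(base_fee: float, gas_used: int, gas_target: int) -> float:
    """Base fee rises only when usage is above target; below target it decays."""
    delta = (gas_used - gas_target) / gas_target
    return base_fee * (1 + MAX_CHANGE * delta)

fee, target = 10.0, 15_000_000   # gwei, gas
for used in (0, 5_000_000, 15_000_000, 30_000_000):
    print(used, round(next_base_fee(fee, used, target), 2))
# Under-target blocks push the fee toward the floor; only sustained demand above
# the target pulls it back up — which is the point made in the paragraph above.
```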
For Ethereum to start charging anything meaningful for DA, you need more than 1.3 MB/s (assuming the current danksharding spec) of actual valuable data that demands the highest security. Even if you hit 1.3 MB/s, the overflow can just go elsewhere. Alternatives such as Celestia, DataLayr, DACs, Adamantiums, Polygon Avail and others will offer massive amounts of cheap DA. Only the most secure transactions require full Ethereum security. Also, DA throughput can be safely increased (with more validators), so 1.3 MB/s is not a fundamental limit.
I’m confident that for at least several years, DA fee revenue will be negligible compared to robust settlement layers such as Ethereum’s.
Base Layer Fees – Celestia
This is where Celestia comes in – the only cost is DA. Rollups handle settlement. On a recent podcast, John made an interesting point regarding dYdX’s decision to move from StarkEx to Cosmos. Base layers should accrue as much value as possible for economic security, but economically rational apps want to accrue as much value as possible for themselves. All else equal, they prefer not to pay rent to a settlement layer. DA is the minimum cost of shared security. This accrues less value to Celestia, but it could incentivize more rollups to stay because it’s cheaper.
I tend to disagree with this argument for the reasons I highlighted earlier. The costs paid to the settlement layer are actually relatively low. For ZKRs, settlement costs approach zero as the fixed cost is amortized over many transactions, and the exceptions — transactions that actually settle out to L1 — can be paid for by the users making them, so the rollup isn’t footing that bill. For ORUs, there are no real settlement costs. I therefore view this portion of the equation as negligible – teams will choose between SRs or using a settlement layer based on their particular needs. Both have interesting technical and social arguments which I’ll dig into later.
It remains to be seen if relying on DA alone provides sufficient value capture to build economic security. DA is valuable, but it’s a resource we know how to scale incredibly well. A paradigm shift would be required for this to change. Looking at something like 5+ years out though, I don’t think anyone can reasonably predict.
So that’s exactly what I’ll do now anyway. Back of the envelope DA giga-bull case math could look like this years down the line:
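As a purely illustrative stand-in for that back-of-the-envelope math, the sketch below multiplies a hypothetical long-run DA throughput by a hypothetical price per megabyte. The inputs are placeholders chosen to land near the figure quoted next, not the report’s actual assumptions:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

da_throughput_mb_per_s = 1_000   # hypothetical: ~1 GB/s of paid-for blob data
fee_per_mb_usd = 1.00            # hypothetical: sustained average $1 per MB of DA

annual_da_revenue = da_throughput_mb_per_s * SECONDS_PER_YEAR * fee_per_mb_usd
print(f"~${annual_da_revenue / 1e9:.0f} bn / year")   # same order of magnitude as below
```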
Not bad, $30 bn!
It’ll be important to consider how those margins stack up for L2 fees in the long run (i.e., what % is paying for DA). If it becomes easy to spin up many chains, and bridging gets so good that app-specific rollups finally make sense, execution could become relatively inexpensive. This leaves some room for DA to get its penny. Significant bridging development will also matter a lot for whether SRs ever make sense over sharing a settlement layer, and the general stickiness of DA.
For reference, Ethereum captured ~$10bn in fees in 2021, and its peak months had an annualized run rate >$20bn (note this does not include other forms of MEV). Something like $30bn/year could imply a market cap of a few hundred billion, maybe? Is that enough? I don’t know. Personally, I’d want far higher for nation-state level security, so this gets into a bit of a philosophical debate. At this scale, you’d be securing many trillions of dollars of economic activity, so I’d want more.
Note the difference between Ethereum and Celestia here as well. Being “meaningful” for each one means different things. Maybe 10% of total Ethereum revenue is meaningful, maybe it’s 20%, pick a number. It’s got a cash cow settlement layer paying the bills anyway. For Celestia it’s simple – it has to be enough to fund its entire security budget to protect everything built on top of it.
The reality is those numbers shouldn’t be taken with a grain of salt, they should be taken with the whole shaker. They’re completely made up in a hyper-optimized rollup world to give you a sense of orders of magnitude over a very long time horizon.
Celestia faces a hurdle in bootstrapping economic security with little value capture. We’re used to startup valuations being a bet on future growth, but it’s an admittedly risky proposition when the entire business model (DA) hinges on a paradigm shift to accrue enough value for economic security.
MEV in the Modular Stack
MEV is a deeeeep rabbit hole, so I’ll keep it short here.
Let’s start with the basics. MEV still isn’t quite formally defined, but I’ll oversimplify it here to all potential value that block producers can extract (including regular transaction fees, arbitrage, liquidations etc.).
Searchers bundle transactions and bid for their inclusion to builders. Builders aggregate searcher bids and bid for the full block to be included by validators. In an efficient market, searchers bid most of their revenue to builders who then bid most of their revenue to validators. As a result, ETH captures the majority of MEV.
Rollups look a bit different, but similar concepts apply. This simplified example shows Optimism running an MEV auction (MEVA) for sequencing rights:
Again, searchers bundle transactions and bid for their inclusion to sequencers. Sequencers aggregate searcher bids and bid for the full block to be included in the MEVA. In an efficient market, searchers bid most of their revenue to sequencers who then bid most of their revenue to the MEVA. You could also insert validators in place of the MEVA if the rollup uses staking for leader-selection. In either case, the rollup captures the majority of MEV. This can accrue to the token, or it can fund something like public goods (as Optimism does).
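A toy model of this value flow in an efficient market — searchers bid away most of what they extract to builders/sequencers, who bid most of that on to the validator or MEVA. The percentages are illustrative only:

```python
mev_extracted = 100.0                  # value a searcher finds in a bundle
searcher_bid = 0.90 * mev_extracted    # bid to the builder / sequencer
builder_bid = 0.95 * searcher_bid      # bid to the validator / MEVA

print(f"searcher keeps        {mev_extracted - searcher_bid:.1f}")
print(f"builder keeps         {searcher_bid - builder_bid:.1f}")
print(f"validator/MEVA earns  {builder_bid:.1f}")   # the majority of the MEV
```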
All else equal, shifting a transaction from L1 to a rollup doesn’t change the amount of MEV. It just shifts how it’s captured and who gets it. In this simplified model, the MEV capture has now been pushed from the L1 asset to the L2 asset.
But is it that simple? Honestly, I don’t know.
And neither does anyone else yet. So if you’re a curious fellow this is a great area of research. How much MEV will leak down to shared DA and/or settlement layers? Will rollup operators start paying the L1 for their MEV executed on L2? If something like a cross-domain Flashbots arises we could gather more concrete data. Let’s look at some hypotheticals for now.
The simplest idea is that it will always accrue to the bottom of the stack. For example, look at a stack with Celestia DA + settlement rollup + recursive rollups. Recursive rollups may be dependent on the settlement rollup for cross-chain MEV, but the DA layer may be able to censor and delay the settlement rollup’s blocks, demanding a piece of that MEV being captured. This kind of negotiation is contingent on social norms and could be a tricky line to cross (similar to time bandit attacks in PoW). The power is there, but will that negotiation happen? Maybe the L1 wants to eat the rollup’s lunch, maybe not.
Then there’s the fancier cross-chain MEV, where a single opportunity spans swaps (S) in blocks (B) across multiple rollups (R).
Importantly, this is a wild card for value accrual. If more MEV leaks down than we’re currently aware of, this could accrue significant value to Celestia. This would help subsidize the security budget.
DA Layer Economic Security
Let’s take a look now at why we even care about DA layer economic security.
33% Attack
This one is pretty simple – halt the chain. Tendermint cannot progress (liveness failure) if it does not have the requisite votes to finalize (⅔). Rollups dependent on Celestia wouldn’t love this.
67% Attack
First off – validity conditions are a part of consensus. So even if you control 100% of the stake, you can’t start printing a bunch of Celestia tokens/Ether out of thin air, steal funds, etc. Honest nodes will reject this as invalid. You could try these though:
Double signing – Re-orgs are possible in Gasper, but they’re not in Tendermint. You could still double sign though, causing confusion. The chain would halt and revert to social consensus. It’s provable that at least ⅓ of stake double signed at the same block height, and will be slashed in-protocol. So you can do this, but it will be expensive (if staked value is high).
Fraud proof censorship – A censoring DA layer can enable funds to be maliciously bridged across rollups (even for rollups within the same DA zone of security). This is not protected against by running a DAS node. This one’s a bit nuanced so I’ll give an example.
SR1 (sovereign rollup) is an ORU on Celestia that wants to bridge funds to SR2. One option is to directly embed light clients of each other into each rollup, and distribute proofs P2P. No censorship attack vector here via a malicious consensus (other attack vectors exist, such as an eclipse attack). Alternatively, SR1 could post fraud proofs directly to SR2’s Namespace Merkle Tree when it wants to bridge. This effectively swaps out the synchrony assumption (in the case of P2P bridging) for an honest majority assumption of Celestia (in the case of posting it to Celestia). SR2 just looks out for a proof on its own NMT that it’s already tracking, and executes when it sees it. However, this means funds could now be maliciously bridged from SR1 to SR2 if a dishonest Celestia majority censors the relevant proofs at the DA layer. Ethereum can also censor fraud proofs sent to smart contracts, which is why its ORUs require such a long timeout period.
The data withholding and censorship attacks do not directly affect the DA layer itself, but they are attack vectors for the rollups reliant on the DA layer.
TLDR here – DA layers need economic security. With varying levels of stake taken over, you can do bad stuff. A key point here that I have seen many confuse – just because something is relied upon for security does not make it valuable. You must first build a valuable asset, and only then can it be relied upon for meaningful economic security. The asset securing the system needs some way to accrue value and/or a monetary premium.
Summary
Revenue generation is a key component of building economic security, so understanding value flows in the modular stack is critical to its design. Base layer native assets need to be designed for value capture (fees and other MEV) and/or to be good money.
As it stands today, DA captures negligible value. This will remain the case for the foreseeable future. Rollup execution layers capture a relatively small amount of value currently. As certain ones become incredibly popular and potentially contain high value financial transactions, they could see meaningful revenue. Premium general-purpose settlement layers with high value transactions dominate (potentially even L2 settlement layers as StarkNet plans to be with many L3s on top), and likely will for the foreseeable future.
Part II – Ethereum Rollup Stacks
Rollups have various actors such as sequencers, proposers, provers, and challengers depending on the implementation. I collectively refer to them as “operators.” Operators combine user transactions and periodically post data to the base layer. They also submit commitments to the updated rollup state (e.g., Merkle roots).
Anyone can use the compressed transaction data (as long as it was made available) to recreate the state and check if a state transition is valid. Rollup light clients do not download and execute the full transaction data though, so they rely on fraud/validity proofs for assurances that the state transition is valid.
Ethereum – Smart Contract Rollups
We see these on Ethereum today (e.g., Arbitrum, Optimism, StarkNet, zkSync, etc.). They look like this:
Smart contract rollups (SCRs) effectively live in a series of L1 smart contracts. Rollup execution is handled off-chain, and operators periodically post back to the L1. The smart contracts verify proofs/arbitrate disputes as needed. They also track all rollup state roots and transaction data so anyone can recompute the state.
Submitting and arbitrating a fraud proof is actually quite fast. ORUs need such a long timeout period primarily because the L1 smart contract needs to receive the proof, and this can be censored. In the event that the L1 censors the proof, we need time to coordinate via social consensus.
These smart contracts also serve as trust-minimized two-way bridges between the rollups and L1. This is one of their key advantages. It’s trust-minimized because rollups implicitly fully validate Ethereum, and the smart contract on the L1 acts as a light client (receiving block headers protected by fraud/validity proofs) of the rollup.
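A highly simplified sketch of that L1-side light-client role for the ZKR case: the contract stores rollup state roots and only accepts a new root if it arrives with a valid proof. The class and method names are illustrative, and the proof check is a stand-in for real on-chain SNARK verification, not any rollup’s actual contract interface:

```python
class RollupBridgeContract:
    """Toy model of an L1 rollup contract acting as a light client of the rollup."""

    def __init__(self, genesis_root: str):
        self.state_roots = [genesis_root]          # the L1's view of the rollup

    def verify_proof(self, proof: str, old_root: str, new_root: str) -> bool:
        # Placeholder: a real contract runs a succinct (SNARK/STARK) verifier here.
        return proof == f"proof({old_root}->{new_root})"

    def submit_batch(self, new_root: str, proof: str) -> None:
        old_root = self.state_roots[-1]
        if not self.verify_proof(proof, old_root, new_root):
            raise ValueError("invalid state transition proof")
        self.state_roots.append(new_root)          # withdrawals can now settle against this root

bridge = RollupBridgeContract("root0")
bridge.submit_batch("root1", "proof(root0->root1)")
print(bridge.state_roots)   # ['root0', 'root1']
```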
Now for the FUD.
There’s a balancing act on upgrade timing between safety and trustlessness. If you discover a bug, you probably don’t want to blast it out on your governance forum and wait a month. You want to fix it.
One idea is to have delayed upgradeability, but add a halt button which could stop the rollup immediately. It’s not pretty if you need to halt, but it reduces the trust placed in whoever holds the upgrade keys.
Here’s a related fun take to get everyone mad at me:
I understand it takes time for the training wheels to come off here. My point is just to be realistic about what stage much of this is still at. As with the centralized sequencers and fraud proofs (if any), the teams here intend to progressively decentralize.
There are two options for handing over upgrade keys:
Immutable rollup – Nobody can upgrade the smart contract (unless the L1 forks to do so). This is a massive tradeoff. You basically accept that this rollup will eventually be deprecated, and you’ll just deploy another instance from scratch. Rollups that aim to be EVM-equivalent definitely can’t do this: when the L1 changes, they must upgrade their own VM in lockstep. Fuel v1 is currently the only immutable rollup live on Ethereum today.
Decentralized governance – This is what I actually expect. It adds trust assumptions beyond transacting on L1 though – rollup governance now controls the upgrade keys. SCRs can build in delays (set longer than the rollup’s withdrawal period) before upgrades can take effect. Don’t like the upgrade? Then you can safely exit. This is trust-minimized as you’re now just acting on a synchrony assumption for rollup governance (not an honest majority assumption). You just assume that you see the planned upgrade and exit before then.
Back to that last point – rollup exits. An L1 execution layer to safely exit to is a nice fallback. Ethereum has this option, but Celestia will not. They’re practical in isolated cases, but quickly forcing thousands of users back onto L1 with small amounts of money in a future where L1 gas fees are $1000/tx doesn’t seem great. One area that should be explored more is designing the ability to exit to another rollup.
Key Takeaways
Trust-minimized two-way bridging is incredibly valuable – you have a giant ecosystem of rollups all tapping into shared liquidity and interoperability. Condensing security assumptions mostly into one ultra-secure base layer is great.
But smart contract bugs and upgradeability are kinda scary for now. Immutable rollups alleviate many concerns, but for an unworthy tradeoff.
Eventually, battle hardened contracts with well crafted governance (with upgrade delays) will be a great solution approaching the security of the L1 itself.
Ethereum – Smart Contract Settlement Rollup & Smart Contract Recursive Rollups
Today – StarkEx. dYdX, Sorare, and Immutable X are already running at massive scale. But they aren’t general-purpose or composable. They’re islands built via contractual agreements with StarkWare.
Soon – StarkNet. A general-purpose permissionless rollup like the SCRs we’re used to.
Later – Fractal scaling. StarkEx instances (and other L3s) can deploy on top of StarkNet:
Today the relationship runs L2 → L1: the L2 posts its proofs and data to Ethereum for settlement and DA. With an L3 on top, the L3 settles to the L2, which in turn settles to Ethereum.
Recursive proofs are a pretty mind blowing path to scaling. A verifier contract on L2 can accept many L3 validity proofs, create one validity proof that it verified those proofs, and post that one proof to Ethereum. This efficiency matters in a gas-constrained environment like Ethereum. L3s will likely also experiment with off-chain DA solutions for even further cost reduction (though with stronger security assumptions).
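A conceptual sketch of that recursion: an L2 verifier consumes many L3 validity proofs and emits a single proof that it verified them, so only one proof (and one verification) lands on Ethereum. The `Proof` type and the verify/aggregate functions below are placeholders, not a real proving-system API:

```python
from dataclasses import dataclass

@dataclass
class Proof:
    claim: str

def verify(proof: Proof) -> bool:
    return True   # stand-in for real SNARK/STARK verification

def recursively_aggregate(l3_proofs: list[Proof]) -> Proof:
    """Verify many inner proofs and emit one succinct proof of having done so."""
    assert all(verify(p) for p in l3_proofs)
    # In a real system this verification runs inside a circuit, so the output
    # proof is itself succinct and cheap to verify on L1.
    return Proof(claim=f"verified {len(l3_proofs)} L3 proofs")

l1_proof = recursively_aggregate([Proof("L3-a block 7"), Proof("L3-b block 12")])
print(l1_proof.claim)   # one proof posted to Ethereum covers both L3 blocks
```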
App-specific Rollups
These L3s are a natural fit for app-specific ZKRs. Much of the L3 appeal is controlling your own environment away from “noisy neighbors” while retaining network effects via the L2 settlement layer. As bridging and interoperability continue to improve, I expect to see more app-specific ZKRs deploy. They’ll retain most of the advantages of shared execution environments but shed the downsides.
Additionally – show me the incentives, and I’ll show you the outcome. App-specific rollups capture more value than apps deployed as smart contracts on a shared rollup, as your token now captures execution fees and other MEV. This isn’t worth the tradeoff, though, if leaving a shared execution environment isolates you. Which is why, as I mentioned, better bridging, liquidity sharing, etc. will be critical for this trend to unfold.
Key Takeaways
Recursive rollups present an attractive value proposition, particularly for app-specific use cases. You get increased scale in a flexible environment while retaining network effects. The L2 can act as a more scalable settlement layer than Ethereum is currently capable of. As long as their data gets pushed back to L1, they’ll have similar security assurances as L2s – they’ll be able to recreate state and exit.
Ethereum – Enshrined Rollups?
This is the crazy stuff – enshrined rollups (ERs) are a bit futuristic. Some of it’s also controversial, which makes them even more fun.
ERs are built directly into the L1 spec rather than deployed as smart contracts. This removes the risk of malicious governance/multisig upgrades and of contract bugs (external to Ethereum itself). They’re aligned with the social consensus of the base layer within the scope of hard forks.
At the merge, the execution layer (f.k.a. Eth1) will be merged with the consensus layer (f.k.a. Eth2). The execution layer (which settles to the consensus layer) is what could upgrade to an enshrined rollup.
Enshrined Rollups – Optimistic vs. zkEVM
Transitioning to an ER is likely quickest as an ORU. With weak statelessness already in Ethereum’s roadmap, you can easily add fraud proofs. The simplest fraud proof is distributing the block itself in isolation to re-execute (which will now include witnesses along with the pre/post-state roots, transactions, etc.). However, fraud proofs and finality aren’t exactly friends, so this isn’t ideal.
Whether ORU or ZKR, the point is that full nodes no longer need to execute transactions (unless there’s a fraud proof). This applies to consensus nodes as well – neither the proposer nor anyone in the committee executes.
Step 1 – Upgrade the Settlement Layer to an Enshrined zkEVM Rollup
This is the pretty non-controversial step (once the tech is safe, well understood, etc.). The single instance EVM we know today gets upgraded to a state root equivalent zkEVM. This means each L1 block comes with a SNARK proving the state root is valid.
There are some open implementation details, as this is still years away. Regarding proof distribution, you now have two options:
Sidecar – The SNARK (or SNARKs, implementation detail) is just sent around as a sidecar, and people can verify it implicitly.
On-chain – The SNARK does actually go on-chain, and smart contracts can readily have access to the proof itself to verify it.
Ultimately, there are two layers to consensus – social and code. The sidecar approach would lean toward social reliance, and the on-chain approach would lean toward code.
Performance Bottlenecks
As mentioned, ERs remove the need for full nodes to re-execute transactions. This makes syncing to the chain easier, and it also removes compute as a bottleneck for validators. Several consensus bottlenecks exist for validators today: state storage, disk I/O, compute, and bandwidth.
Weak statelessness already knocks off storage and disk I/O – clearing the primary bottleneck we have today. But full nodes would still need to execute every transaction at this stage. Getting a SNARK for every block’s validity would knock compute off as well. Statelessness and zkEVM each represent opportunities to raise gas limits as you remove more bottlenecks. Bandwidth is then the only major resource requirement left (which also scales with Nielsen’s law).
zkEVM Enshrined Rollup Benefits
Simpler consensus – Statelessness (without fraud proofs) still requires full node execution, but zkEVM removes this bottleneck. Execution clients can swap out thousands of lines of consensus-critical EVM execution code for much simpler SNARK verification code.
No state witnesses – Statelessness allows consensus nodes to stop keeping state on hand. They now receive witnesses (Verkle proofs) proving correct state access in every block along with the transactions. zkEVM removes this need as well – state diffs suffice. This increases validators’ consensus bandwidth efficiency allowing for higher gas limits.
Safer light clients – Light clients filter invalid state roots with SNARKs much more quickly than with fraud proofs. This allows for safer and/or faster Ethereum to alt-L1 bridges.
Step 2 – Deploy Many Parallel zkEVM Enshrined Rollups
Oh, you thought execution shards were gone, did you? Well get ready for the sequel – ERs. (Despite the meme, these are gonna be like Part II not Part III, so even better than the first one).
Once the single-instance EVM has been upgraded, it’s not too hard to deploy many (e.g., 64) parallel zkEVM ERs. This is a form of homogenous L1 execution sharding. It’s also a question whether this sharding will even be needed any time soon. As a zkEVM ER, the single instance throughput could be increased by orders of magnitude over current capacity.
The distinction between typical execution shards and ERs can seem blurry, and there have been plenty of messy Twitter debates over it. The key point again is that full nodes do not need to execute ER blocks. So execution shards with zk-proven state roots can be ERs. You get the same scalability and functionality as a regular SCR (proposers just need to accept a proof and check DA), but you get the advantages of the rollup being enshrined in consensus (will elaborate on these shortly).
The old execution sharding assigned subsets of the Ethereum validator set to each shard. Each monolithic shard would handle its own execution, consensus, and DA then checkpoint back to the Beacon Chain.
zkEVM ERs only handle execution (making them far more efficient), similar to the rollups we see today. They rely on the Beacon Chain for consensus, settlement (posting proofs), and DA (they would consume data blobs).
Another implementation detail regards committees. It’s possible to have a committee of proposers at each rollup verifying a SNARK and then sending it along to the main settlement rollup. Alternatively, you could just skip the committees and have rollup block producers send the SNARKs directly to the main chain where all proposers operate.
Either way, proposers are just accepting the highest bid, checking DA, and verifying a SNARK which comes with each rollup block as an extra field. You can think of it all as one big block.
Advantages of zkEVM Enshrined Rollups
Upgrading the current settlement layer seemed obvious, but what’s so great about enshrining all these other zkEVMs vs. just being zkEVM SCRs?
Social alignment – ERs inherit L1 social consensus. SCRs currently have centralized operators holding the upgrade keys, and eventually this risk will lie with rollup governance.
Economic alignment – All ER fees and MEV accrue to ETH. Nuclear zombie apocalypse World War III economic security.
Subsidized proof verification – ERs can subsidize the cost of proof verification vs. SCRs which have to pay EVM gas for settlement. Verifying SNARKs will be very cheap at this time anyway. This mitigates issues that rollups work around today. For example, rollups like Arbitrum will delay settlement to avoid L1 gas spikes at times.
Optimal settlement latency – Consensus can enforce SNARK proofs for every ER block giving per-block settlement latency.
Optimal liveness – Centralized rollup sequencers can go down (we’ve seen this happen). SCRs can also elect to have their own consensus for sequencers to give pre-confirmations (StarkNet will do this). This suffers from suboptimal liveness because the external consensus may fail and L1 escape hatches only activate after a timeout.
State root EVM equivalence – Current tooling and light clients will work out of the box. SCRs today are mostly targeting a Solidity-compatible VM (e.g., zkSync) or a bytecode equivalent EVM (e.g., Scroll), not full EVM state root equivalence.
Network effects – Grows the EVM’s network effects even further from its already leading position. Allowing developers to coalesce around a standard is valuable. (Though conversely, experimenting with new VMs could also yield exciting possibilities).
Maximum gas efficiency – ERs use native opcodes because they’re built into the protocol. SCRs instead may have to work around VM inefficiencies.
Disadvantages of zkEVM Enshrined Rollups
They’re not perfect though:
No public goods funding – Non-enshrined rollups like Optimism have discretion to redirect their revenue to fund public goods. ERs would be limited to funding L1 security and burning ETH.
Suboptimal compression – SCRs currently settle on-chain far less frequently than every block, allowing for better data compression. They may also have custom or frequently-updated dictionaries for improved data compression.
VM inflexibility – An Ethereum ER would likely be an EVM. SCRs are able to adopt other popular VMs (e.g., WASM, RISC-V, MIPS) or create a new one (e.g., Cairo, FuelVM). A custom zkVM may be able to achieve better data compression than a zkEVM.
Harder pre-confirmations – SCRs may choose to have a centralized sequencer that provides instant (~100ms) pre-confirmations for better UX. This is harder to achieve with decentralized sequencing for ERs.
Last mover – ERs will be very slow to make any changes due to the conservatism of the L1. Upgrades would go through the governance process (EIP and forks). To hedge against circuit bugs, a redundant multi-circuit setup (e.g., 2-of-3) or heavy formal verification may be required.
Potentially increased builder cost – The builder now must create a proof, unless the sequencer and prover are split into separate roles (which is possible, implementation detail). Ethereum could also decide to issue ETH to offset ER proving costs. This gets to the question of “exactly how specialized will block building end up?” I currently expect meaningful centralization regardless (due to the economic incentives, as I’ve written about previously), but we don’t know for sure. Proving will also be significantly cheaper and faster when this is implemented.
Speculative Advantages of zkEVM Enshrined Rollups
Builders operating across multiple ERs (or potentially building all rollup blocks, settlement, and data) could offer synchronous interoperability between ZKRs.
For example, builders could put down collateral and promise to include your synchronous transactions. If they accept your bid but fail to execute the transactions, they would stand to lose their bond. crLists could also assist in providing pre-confirmations, as proposers (who can also optionally be bonded with collateral) would be able to force inclusion of transactions.
The economic incentives lead to builders operating across rollups – we already see this centralizing force with cross-chain MEV today. These soft confirmations for synchronous interoperability are something that users will want, and they’ll pay for it. If you’re able to offer this service, you can make more money. If you can make more money than everyone else, you become the best builder.
Note that similar dynamics are at play with ZKRs making synchronous calls to the L1 even prior to any of this. This possibility is retained by the new danksharding design as everything is confirmed in one block. For example, you could bridge down funds to L1 and then execute a swap with those funds all in the same block. To offer this service though, you would need to control L1 and L2 block production to ensure both legs will execute (i.e., only bridge these funds if this swap gets executed right behind it). This is a centralizing force for block builders across rollups and the L1.
Arguments Against Enshrined Rollups – Credible Neutrality
Upgrading the single instance EVM to a zkEVM seems like a no-brainer (if and when we get comfortable with the technology). That next step of 64 enshrined rollups is the more controversial part.
Arguments for Enshrined Rollups – Credible Neutrality
Going a step further, a successful rollup ecosystem could even break away from their Ethereum alliance.
The current Ethereum scaling plan is primarily to be a DA layer for SCRs with a constrained settlement layer. Ethereum relies on rollups for scalable computation. However, SCRs could cut off the “aging middleman” and bring in their own rollup-native assets.
The hypothetical example cited was that Optimism, Circle (USDC), MakerDAO (DAI) all share the same investors. Investors promote cross-pollination of portcos, and natively deployed assets are cheaper with better UX. It thus makes economic sense for Circle and MakerDAO to deploy USDC and DAI respectively on Optimism natively. Once a rollup has built up sufficient native assets, DeFi, and security they may decide Ethereum is no longer providing them sufficient value.
To build stickiness, Ethereum needs a better settlement layer. DA alone may not be sticky enough. To build the best settlement layer, you need scalability and shared liquidity. ERs make communication between SCRs vastly more efficient. The “hyper-scale” L3s we hear about today could become more feasible just one layer above Ethereum’s ERs. ERs become the go-to place for sovereign and institutional settlement using the highest value dApps = massive liquidity.
Personal Views on Enshrined Rollups
ERs do not threaten Ethereum’s credible neutrality. Yes, activity will occur on ERs that may have in theory gone to SCRs. But this is exactly the same argument that was made against Ethereum SCRs, and continues to be spread by many monolithic alt-L1 proponents. SCRs are not parasitic to Ethereum – they are synergistic. ERs are not parasitic to SCRs – they are synergistic. Ethereum is better off with SCRs, and SCRs are better off with ERs.
ERs grow the pie by far more than they capture for themselves. 50% of a watermelon is better than 100% of a grape.
SCRs will always be the home of innovation and higher scale, and they’ll continue to host the vast majority of rollup activity as a result. ERs are about making sure that Ethereum lives up to its end of the bargain of remaining the best settlement layer around:
Trust-minimized bridging and access to the largest pool of shared liquidity
Higher scale makes bridging massively cheaper for SCRs living on top, opening up incredibly exciting new possibilities
ERs tip this scale even more favorably in both directions – they make Ethereum a better settlement layer and they provide lower cost to SCRs on top.
Spinning off a rollup to become an isolated L1 (with an insecure pairwise bridge) shrinks your pie massively, and all you get in return are ever so slightly improved profit margins (saving on cheap settlement fees). Additionally, you’d need to subsidize your own security now, likely by issuing inflationary block rewards. This makes no economic sense.
The one potentially economically rational argument for why a rollup would leave Ethereum is if they can make a better settlement layer. Then maybe you could capture a bigger pie in theory. The only way I think anyone has any shot is with the following:
However, you sacrifice massively on that latter half of network effects, decentralization, and credible neutrality. It makes far more sense to build a settlement layer *on top of* Ethereum, exactly as StarkNet is doing with their L2. You get the best of both worlds – a high value financial settlement layer that retains the massive upsides of Ethereum, all for a very small price.
Protocol Complexity
It’s fair to note that adding zkEVM ERs isn’t exactly a walk in the park. However, they appear to me as the final major step in Ethereum’s ambitious roadmap, somewhere around the top of that graph as the curve is plateauing. We’ll slow down one day, but not quite yet.
Key Takeaways
Ethereum already has a convincing plan for scaling DA. But that doesn’t mean it should be content with its settlement layer. Ethereum can and should compete here as well. zkEVM ERs offer the best route to scaling L1 Ethereum, positioning it as the dominant settlement layer.
ERs’ economic and social alignment with Ethereum retains their credible neutrality, creating an incredibly cohesive system. Ultrasound money yields next-level economic security, subsidizing security for the entire ecosystem.
The only argument against ERs is a “social” one around neutrality and competition. While I understand these concerns, I believe they are generally overstated and are vastly outweighed by the upside. Ethereum should certainly pursue one zkEVM ER when the tech is ready, then evaluate parallel instances later on.
You didn’t think that Ethereum’s base layer would be weak and low throughput forever, did you, anon?
Ethereum – Settlement & DA Layers
Now that we have a better understanding of how these rollups work, let’s turn back to what they actually need out of their base layer.
DA is obvious – make it super cheap, offer a lot of it, make it easily verifiable with DAS. Ethereum and Celestia both aim to do this in one way or another.
My general thesis regarding settlement – I do not believe the best settlement layers will be restricted to the core functions of just verifying/arbitrating proofs and bridging tokens. Ethereum should retain a rich general-purpose native execution environment. This will be home to the likes of large scale DeFi which is not price-sensitive on a per-transaction basis. This provides massive network effects, liquidity, and built up security for SCRs to tap into.
Ethereum should have a long-term focus on scaling the settlement layer (statelessness, zkEVM).
DeFi Pooling
Fragmented liquidity is bad – so it’s good to pool liquidity in one place (like Ethereum).
But high fees are also bad – so do we need to break up the liquidity to avoid congestion?
No – we just fly commercial on L2.
Here’s an oversimplified version of what it looks like:
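The original diagram isn’t reproduced here, so as a stand-in, here’s a hypothetical Python sketch of the pooling idea (all names and numbers are illustrative, not any specific protocol’s interface): many small L2 deposits get batched into a single L1 vault interaction, so one L1 gas bill is split across all participants.

```python
# Hypothetical sketch of DeFi pooling: many L2 users share one L1 position.
# Names and numbers are illustrative, not any specific protocol's API.

L1_DEPOSIT_GAS = 200_000   # rough cost of one L1 vault deposit
GAS_PRICE_GWEI = 30

def pooled_cost_per_user(user_deposits_on_l2):
    """One batched L1 deposit is amortized across every L2 participant."""
    total_l1_gas_cost = L1_DEPOSIT_GAS * GAS_PRICE_GWEI * 1e-9  # in ETH
    return total_l1_gas_cost / len(user_deposits_on_l2)

deposits = {f"user_{i}": 100 for i in range(500)}      # 500 users pooling funds
operator_batch = sum(deposits.values())                # single L1 transaction moves the pool
print(operator_batch, pooled_cost_per_user(deposits))  # 50000, ~0.000012 ETH each
```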
Decentralized AMM (dAMM)
dAMM is an L2 AMM that aggregates liquidity in a single L1 pool enforced by an L1 contract. Execution happens across multiple ZKRs asynchronously sharing liquidity. LPs serve the L1 AMM while partaking in L2 trading, exposing them to more trades. (Higher trading volume) / (same liquidity) = higher fee % → better capital efficiency.
Asynchronicity = L2s can process transactions without mandatory communication with other L2s using the same dAMM L1 liquidity pool. This is key to making shared AMMs possible.
This is achieved by separating the liquidity pool from the pricing state. Because the funds and state are decoupled, you can have multiple states (for multiple ZKRs) utilizing the same liquidity pool. Each ZKR gets its own dAMM state.
The contract agrees to provide whatever price is offered by the L2 state as long as the L1 contract has enough liquidity to fulfill the quote. All states should return to an equilibrium ratio due to arbitrage.
This solves the main hurdle in sharing an AMM between L2s – concurrency risk if users can trade at the same time. This could otherwise make it impossible for each L2 to know if it can settle safely.
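To make the decoupling concrete, here’s a toy Python model of the mechanism described above (not StarkWare’s actual contract): one shared L1 liquidity pool, a separate virtual pricing state per rollup, and settlement honored only if the shared pool can cover the quote.

```python
# Toy model of the dAMM idea: shared L1 funds, per-rollup pricing states.

class DAMM:
    def __init__(self, reserve_a, reserve_b, rollups):
        self.liquidity = {"A": reserve_a, "B": reserve_b}        # shared L1 funds
        # each rollup keeps its own virtual pricing state, decoupled from funds
        self.states = {r: {"A": reserve_a, "B": reserve_b} for r in rollups}

    def settle(self, rollup, amount_in, amount_out):
        """L1 contract honors the L2-quoted trade if shared liquidity suffices."""
        if amount_out > self.liquidity["B"]:
            raise ValueError("quote exceeds shared liquidity, cannot settle")
        state = self.states[rollup]
        state["A"] += amount_in           # rollup-local price moves
        state["B"] -= amount_out
        self.liquidity["A"] += amount_in  # shared funds move too
        self.liquidity["B"] -= amount_out

damm = DAMM(1_000, 1_000, rollups=["zkr_1", "zkr_2"])
damm.settle("zkr_1", amount_in=10, amount_out=9)   # zkr_2's state is untouched;
damm.settle("zkr_2", amount_in=10, amount_out=9)   # arbitrage later realigns prices
```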
Impermanent Loss (IL)
LPs are at a greater risk of IL in dAMMs – the risk grows linearly with the number of markets the LPs are exposed to. So the worst case subjects LPs to n times the IL they would have suffered under a regular UniswapV2 AMM. This isn’t concerning for stable pairs (e.g., in Curve), but it could be high for other pools. This can be mitigated using a parameter in the smart contract known as the dAMM “Health Factor” which limits the max IL LPs can suffer. This Health Factor can guarantee that the dAMM liquidity ratio does not fall too low.
L1 <> L2 Simultaneous Trading
This design can support multiple independent markets on L1 and L2 all sharing the same liquidity. L1 liquidity (e.g., trading on Uniswap) can be used as dAMM liquidity by allowing L1 LP tokens (e.g., from UniSwap ETH/DAI) to be deposited directly to the corresponding pool on the dAMM (e.g., ETH/DAI).
dAMM can also support L1 trading, but then the IL can no longer be easily capped via the Health Factor. LPs can still choose to take on this additional risk and expose themselves to L1 trading.
Stickiness
Vitalik, on the other side of this debate from Ali, has made his thoughts clear that sharing a DA layer along with settlement is indeed quite sticky:
Rollups sharing a DA and settlement layer avoid these issues because they get trust-minimized bridges with each other (it’s also possible to get trust-minimized bridging with only DA and no shared settlement layer, will touch on this later). If Ethereum doesn’t get 51% attacked, you can’t 51% attack the rollups separately. Even if Ethereum were to get 51% attacked and reverted, then its rollups would revert as well, remaining consistent. As a result, you can safely hold assets that were issued on one rollup and wrapped on another.
So sharing a DA and settlement layer gives the best security. But what’s the benefit/tradeoff if you decide to use Ethereum as a settlement layer, but then offer another DA source (e.g., volition like zkPorter, validium, celestium, etc.)? You still can use the same bridge and connection to the L1 as any rollup – you share liquidity, DeFi pooling, etc. For volitions, both off-chain (e.g., zkPorter) and on-chain (e.g., zkSync) DA modes share the same state and are fully composable. You just choose at a user level when you want your data stored on or off-chain.
Settlement alone then largely provides network effects, while DA adds on full security. These network effects are safely optimized by sharing DA. For example, you can more comfortably implement a burn and mint mechanism across rollups, allowing you to natively deploy a token across chains. However, you wouldn’t be quite as comfortable doing this with very different trust zones.
Ultimately, we can see there’s a security spectrum here. Many Ethereum rollups, validiums, etc. will use Ethereum for settlement – but not all of them will use its DA. DA is at least somewhat more commoditizable, and it will be an abundant resource.
Bridging
Part III – Celestia Rollup Stacks
Celestia – Sovereign Rollups
SRs still post transaction data to a base layer for DA and consensus (just agreement over transaction ordering, not validity), but they handle settlement client side within the rollup. Full and light nodes download the blocks from their P2P network, and they also check for DA on Celestia. Fraud proofs are sent around the SR’s P2P layer for light nodes to see. Validity proofs can also be sent around the P2P layer.
Alternatively, ZKRs can post their validity proofs along with the rollup blocks directly to Celestia. Celestia will not be able to verify the proof, but rollup light clients tracking Celestia will be able to do so easily.
Note that this gives you only a trust-minimized one-way bridge. On Ethereum, it’s the smart contract verifying the rollup’s validity that gives you a trust-minimized bridge back; Celestia has no such contract. You could in theory send the Celestia token up to a rollup, but it could never come back (not a great idea). Celestia’s token will be confined. More on this later.
Sovereignty
“Sovereignty” refers to the right to fork permissionlessly, whenever you want, without losing security. Your DEX got hacked? Some whale gamed your airdrop? Fork it away.
SCRs are still able to fork, but the decision over what is the canonical chain is delegated to an L1 smart contract. This relies on multisigs/centralized teams (today), majority governance of the rollup (as they decentralize), or they become immutable and forfeit this right (or the L1 could fork, unlikely). Note this upgrade process is subject to majority rule via on-chain governance. Off-chain coordination could deploy a new instance of a SCR, but then you’re starting from scratch drawing users with no history to the chain. SRs are able to fork permissionlessly, even as a minority, via off-chain governance. However, this fork may not be very useful depending on your bridging situation (more on this shortly).
Considerations Around Sovereignty & Forking
You can always fork a SR in theory, but in practice there are plenty of scenarios where this wouldn’t work so well. So let’s understand where this sovereignty is particularly useful, and where it isn’t.
One practical limit on sovereignty is a SR’s (or any chain’s) reliance on meaningful centralized assets. A perps exchange heavily reliant on USDC would have a useless fork if USDC doesn’t support it.
Smart contracts deployed on general-purpose SRs have a similar situation. Large communities have murkier socioeconomic alignment. Let’s say your DEX on Ethereum gets hacked. You’re free to fork the chain and revert it, but if every other user and dApp stays on the original chain it’s quite pointless. The SR as a whole certainly has the sovereignty to fork and make changes, but you won’t have much power as an individual contract on a shared chain. App-specific SRs thus most likely derive the greatest benefits of sovereignty.
Sovereignty could be valuable to a project with a strong community, but I argue that most projects lack this today. It’s hard to predict what the future of crypto looks like, but most of it is currently DeFi. Applications driven by purely financial and transactional relationships are likely to value other technical benefits of a given rollup stack over sovereignty, so SRs will need to compete on features vs. SCRs. This is highly dependent on how SR bridging plays out, which is still unclear at this stage (more on this shortly).
Forking rollups with trust-minimized bridges would get very tricky if they were enshrined (e.g., if the bridge logic of SRA is part of SRB’s consensus rules). This would mean that all linked rollups need to hard fork in lockstep. This would make rollup uses such as DeFi (which would connect to many other chains) very difficult.
But if the bridges are not enshrined, SRB does not have to fork when SRA forks, even if they have a trust-minimized bridge. The rollups can instead simply run a smart contract which interprets the state of each bridged rollup and verifies a validity proof for them. You bake everything (including the fork choice rule) into the proof, so that there can only be one canonical version of the chain. This is analogous to SCRs on Ethereum today. If they upgrade via governance, smart contracts interacting with them may need to upgrade as well.
Flexibility
Another benefit of SRs – they grant developers more flexibility over their execution environments. Ethereum SCRs are bound by the settlement layer’s ability to process their fraud or validity proofs, which can present challenges in some instances.
SRs aren’t bound by having fraud or validity proofs interpretable by any particular VM (such as the EVM, as Ethereum SCRs are). This execution agnosticism could foster higher innovation in designing VMs, as we see with new L1s.
This isn’t without some difficulties though. Trust-minimized bridging to other SRs requires them to interpret your execution environment and verify proofs, so you may create interoperability challenges by getting too creative. Uniform standards avoid this.
Fraud Proofs in Sovereign Rollups
First, fraud proofs come in two flavors:
Non-interactive (a.k.a. single-round) – Challenger submits a fraud claim which is checked by fully executing all state transitions between the two asserted states. This constrains rollup execution to what can be re-executed by light clients (such as a smart contract). In SRs, a light client receives the proof via the P2P layer to execute it.
Interactive – Challenger submits a fraud claim, and the responder defends themself. They play an interactive verification game (IVG) – the challenger requests the responder to split their claim into smaller claims. They iteratively narrow down to a disagreement over a single instruction. Someone “referees” this dispute – an Ethereum smart contract (for SCRs) or a P2P light client (for SRs). Finally, light clients run that single instruction to check fraud. If either party stops responding, the other side wins. This is more complex than a single-round fraud proof, but checking fraud is more efficient.
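For intuition, here’s a minimal Python sketch of the bisection at the heart of an IVG (a toy model; real protocols like Arbitrum’s handle many more cases, timeouts, and bonding):

```python
# Minimal sketch of an interactive verification game (bisection).
# The challenger and responder disagree over a trace of state transitions;
# they repeatedly halve the disputed range until one instruction remains,
# which the referee (a contract or a light client) re-executes.

def bisect_dispute(trace_claimed, trace_actual):
    lo, hi = 0, len(trace_claimed) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # responder reveals their claimed intermediate state at `mid`
        if trace_claimed[mid] == trace_actual[mid]:
            lo = mid          # agreement up to mid, dispute must be later
        else:
            hi = mid          # disagreement already by mid, dispute is earlier
    return lo                 # index of the single disputed step to re-execute

# Toy example: responder lied from step 6 onward.
actual  = [0, 1, 2, 3, 4, 5, 6, 7, 8]
claimed = [0, 1, 2, 3, 4, 5, 9, 9, 9]
step = bisect_dispute(claimed, actual)
print(f"referee re-executes step {step} -> {step + 1}")   # step 5 -> 6
```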
Now we’ll focus on how these relate to SRs. One benefit of distributing fraud proofs via P2P is faster light client finality. The synchrony delay here in normal cases would likely be lower than getting it included on-chain, and you no longer have to worry about L1 miners/validators censoring your fraud proof. You avoid SCR ORUs’ long timeout period.
Let’s say a challenger wants to prove that transaction TC is a double spend. They submit as evidence that the money was already spent in TB. That seems reasonable to prove fraud, but what if there exists some TA that proves TB was actually invalid? If TB was invalid, then maybe TC is perfectly valid. With a dirty ledger you’ll never know the true state unless you go back to the chain’s genesis and replay every transaction. This means challengers and responders would both have to be archival nodes – this is incredibly burdensome.
This differs from a “clean” ledger like Ethereum, where the smart contract will reject any invalid transactions. You know the latest confirmed tip of the chain was valid before the new proposed state.
To reduce archival nodes’ requirements, you could introduce a new weak subjectivity assumption. Say you only require challengers and responders to hold onto 3 weeks of data. But even this would drastically increase node requirements compared to challengers and responders for SCR ORUs:
SCR challengers/responders are required to store the state for the actual dispute period
SR challengers/responders are required to store all of the different historical states over the entire weak subjectivity period
This increases their requirements significantly which makes the SR less scalable. Block space would need to be limited in order to bound challenger/responder requirements.
There’s also a stronger synchrony assumption in SRs for the IVG:
SCRs assume that one honest challenger exists who will submit a fraud proof to the smart contract, which will then be arbitrated for everyone to see. You assume there is no eclipse attack that completely cuts off the network from submitting a fraud proof, and you see that fraud proof.
SRs assume that each light node is connected to honest challengers and responders who can prove fraud. These challengers and responders are constrained in how many light clients they can connect to, so you’re now assuming many more synchronous connections. IP is also trivially Sybil attackable.
Committee-based Bridges vs. Proof-based Bridges
Committee-based bridges – A committee attests to the validity of blocks. Not trust-minimized – the committee can steal funds. IBC is an example of a committee-based bridge (committee is the validator set of the source chain). You could also have a committee operated by bridge providers attesting to multiple chains.
Proof-based bridges – For the bridge to be trust-minimized, SRA and SRB must be able to verify fraud/validity proofs for each other. Therefore, they must be able to interpret each other’s state machine. This has higher complexity than a committee-based bridge.
Peer-to-peer Settlement vs. On-chain Settlement
There are two options here for proof-based bridges:
P2P settlement – SRA and SRB both run light clients of each other which are embedded in their chains. These receive block headers and associated fraud/validity proofs via the P2P network. Both SRs have a bridging contract allowing for asset transfers via a lock-and-mint mechanism. The bridging contract is monitored (either directly or indirectly via a relayer) by each chain’s sequencers or validators to execute the transfers.
On-chain settlement – SRA and SRB both run light clients of each other which are implemented as on-chain smart contracts. These smart contracts receive the block headers and fraud/validity proofs. This is how Ethereum SCRs work.
Static Bridging vs. Dynamic Bridging
Static bridging – Bridges must be explicitly added by chain upgrades or hard forks. SRA and SRB must support each other’s execution environments to interpret their fraud/validity proofs.
Example – SRA is an ORU that wants to bridge to SRB. SRA’s state machine is written directly in Golang (e.g., using the Cosmos SDK). SRB is an EVM chain that doesn’t understand this. SRB must upgrade their node software to include SRA’s state machine as a library in order to validate SRA’s fraud proofs. SRB can’t just automatically add SRA’s state machine code because it could pose a security risk. This similarly applies to validity proofs which otherwise would not be understood by the connected chain. In either case, social consensus or governance is needed to add this bridge by a chain upgrade.
Dynamic bridging – If the ORU SRA was instead written in a sandboxed smart contract environment (e.g., EVM or CosmWasm), then SRB could allow SRA’s state machine code to be added directly into its own. This could be done without any social consensus or governance, for example using a smart contract. This similarly applies to ZKRs who are able to understand each other’s validity proofs.
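A hypothetical sketch of what dynamic bridging amounts to in practice (names invented for illustration): because the counterparty’s state machine runs inside a sandbox the host chain already understands, its verifier can be registered permissionlessly, with no governance vote or hard fork.

```python
# Hedged sketch of dynamic bridging: a verifier for another chain's state
# machine is registered permissionlessly, since it runs in the host's sandbox.

class BridgeRegistry:
    def __init__(self):
        self.verifiers = {}                     # chain_id -> verifier code

    def register(self, chain_id, sandboxed_verifier):
        """Anyone may add a verifier; no governance vote or hard fork needed,
        because the code runs inside the host chain's existing sandbox."""
        self.verifiers[chain_id] = sandboxed_verifier

    def verify_block(self, chain_id, block_header, proof):
        return self.verifiers[chain_id](block_header, proof)

registry = BridgeRegistry()
registry.register("sra", lambda header, proof: proof == f"valid:{header}")
print(registry.verify_block("sra", "0xabc", "valid:0xabc"))   # True
```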
Upgradeable Bridges vs. Non-upgradeable Bridges
SCRs enshrine Ethereum as the settlement layer that determines their validity. SR bridges are not enshrined, and as such they must be mutable with an upgrade path. SRB must acknowledge that the validity of SRA is determined by its community. SRA could fork via social consensus at any time without requiring on-chain governance or a hard fork of a settlement layer.
Let’s play this example out where SRA is hard forking, and SRB needs to upgrade its light client for SRA. There are several approaches, and they impact whether or not the bridge is trust-minimized:
Pairwise Bridging vs. Hub-and-spoke Bridging
Rollups on a shared settlement layer only require N bridges to the settlement layer. However, pairwise SR bridging would result in on the order of N² bridges for N rollups to connect to each other. They also lack the shared liquidity and DeFi of a settlement layer.
There’s another issue with the path dependency of assets. If I transfer an asset from chain A → B → C, that asset is not fungible with one transferred from chain A → D → C (unless it’s natively deployed on all chains, and you’re using a burn-and-mint mechanism). This again fragments liquidity, in a manner that is solved by bridging through a shared settlement layer.
A hub-and-spoke model mitigates many of these complexities – many SRs connect to a central hub SR. This reduces bridging overhead back to N. The central hub can be the focal point of shared liquidity and cross-chain communication. It serves a very similar functional role as a shared settlement layer would in this regard.
This is similar to the debate over how the Cosmos ecosystem should unfold. The initial “plan” was for the Cosmos Hub to serve as the focal point with all zones communicating through it via IBC. In reality though, entirely pairwise bridging has come about. Every zone just creates an IBC connection with whatever chains they want to interact with.
A hub would serve many of the valuable roles that Ethereum’s settlement layer does today. It would host valuable DeFi and liquidity while coordinating cross-chain messaging. This likely captures very significant value. However, unlike Ethereum’s settlement layer, it provides no economic security to the Celestia ecosystem. It is not enshrined in the Celestia protocol.
Aggregated ZK Bridging
Let’s run through an example – SR1 wants to bridge with SR2 through SRN.
Naive solution – As described earlier, the obvious option is for SR1 to run N-1 light clients, one for each chain, and verify a proof for each. N² bridging complexity – very tricky, hence the hub-and-spoke model described above. Can we do better? Yes (I think).
Better solution:
The aggregator for SR1 receives the proofs for SR2 – SRN. The aggregator runs a light client of each chain, but this is now done off-chain making it far cheaper.
The (untrusted) aggregator verifies and then aggregates the proofs of all N chains into a single proof. This aggregation is basically free, as verifying proofs off-chain is incredibly cheap.
The aggregator pushes only that single proof to a smart contract on-chain.
SR1 verifies this single proof in the same amount of time it would take to verify any individual SR’s proof.
The on-chain verifier contract would contain a mapping from chainId to state root for SR2 – SRN. Whenever it verifies a new aggregated proof, it would update the state roots for SR2 – SRN. The only remaining N² complexity is now in the state roots – each chain tracks every other chain’s state roots (but the constant here is tiny). SR1 has removed the N² complexity of running N-1 light clients and verifying N-1 proofs. Each rollup would likely run its own prover, though in theory you could also have one prover aggregate and send the proof to all N SRs.
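Here’s a rough Python sketch of that flow (interfaces are hypothetical stand-ins for a real proof system): the off-chain aggregator checks each peer’s proof, folds them into one aggregate proof, and the on-chain verifier updates its chainId → state root mapping with a single verification.

```python
# Hypothetical sketch of aggregated ZK bridging; verify/aggregate stand in
# for a real proof system's APIs.

def aggregate_offchain(peer_proofs, verify, aggregate):
    """Off-chain: check every peer rollup's proof, then fold into one proof."""
    for chain_id, (state_root, proof) in peer_proofs.items():
        assert verify(chain_id, state_root, proof)     # cheap off-chain checks
    return aggregate(peer_proofs)                      # single succinct proof

class OnchainVerifier:
    def __init__(self):
        self.state_roots = {}                          # chain_id -> latest root

    def update(self, aggregated_proof, claimed_roots, verify_aggregate):
        # one on-chain verification covers all N-1 peer chains
        assert verify_aggregate(aggregated_proof, claimed_roots)
        self.state_roots.update(claimed_roots)

# Toy stand-ins to show the flow end to end.
verify = lambda cid, root, proof: proof == f"ok:{cid}:{root}"
aggregate = lambda proofs: "agg:" + ",".join(sorted(proofs))
verify_aggregate = lambda agg, roots: agg.startswith("agg:")

peer_proofs = {f"sr{i}": (f"root{i}", f"ok:sr{i}:root{i}") for i in range(2, 5)}
agg_proof = aggregate_offchain(peer_proofs, verify, aggregate)
v = OnchainVerifier()
v.update(agg_proof, {cid: root for cid, (root, _) in peer_proofs.items()}, verify_aggregate)
print(v.state_roots)   # {'sr2': 'root2', 'sr3': 'root3', 'sr4': 'root4'}
```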
There are some assumptions about the interpretability of the other SR’s state machines, but those problems are solvable. Each type of alternative VM that you’re bridged to would require its own interpreter smart contract. Say that SR1 is an EVM chain connected to 3 EVM, 3 Move, and 3 Sealevel SRs. It would need to run two interpreter contracts – one for Move and one for Sealevel.
It’s likely still beneficial to have a DeFi “hub” (e.g., as in the hub-and-spoke model or a shared settlement layer). There’s value in concentrated liquidity. But this hub is no longer relied upon for settlement and two-hop bridging. This eliminates the latency of L3s using something like StarkNet as a scalable settlement layer with high liquidity. You’ve flattened the stack and created shorter paths between all rollups.
Additionally, there’s no longer a locked-in settlement layer. This hub which is a DeFi focal point can evolve over time and progressively shift to be other chains as tech improves. You’re no longer locked into legacy tech of your settlement layer.
Final note – this is still an extremely new idea, and will need continued vetting. This will likely take years to build.
Ethereum Sovereign Rollups
SRs are certainly the brainchild of Celestia, but note you could just as easily use Ethereum’s DA layer and simply ignore the settlement layer. It’ll be efficient once data blobs get their own cheap fee market post EIP-4844. You get the same benefits and tradeoffs, with a few exceptions:
Higher economic security – ETH has far higher economic security than Celestia due to its settlement layer. The settlement layer subsidizes the security cost for DA which is provided altruistically.
Trust-minimized bridging to Ethereum rollups – Recall that SRs can construct pairwise trust-minimized bridges with rollups sharing the same DA layer (it’s just harder). You now get this option with Ethereum rollups rather than Celestia rollups.
Better liveness – Ethereum’s consensus mechanism Gasper can retain liveness even in the event of a 33% attack, though it will not finalize (as discussed earlier). This is valuable for a base layer which rollups are dependent upon.
Slower finality – While Gasper gives you better liveness, it trades off on finality. Ethereum is much slower than Tendermint to finalize (which does so instantly). Ethereum is actively researching single-slot finality, but it’ll be quite some time if and when it’s implemented.
Not overhead minimized – Celestia SRs only need to run a Celestia light client, but Ethereum SRs must run Ethereum full nodes for comparable assurances. They must check the validity of all L1 execution to know the canonical chain because validity rules are part of the consensus rules. However, this will no longer be the case once Ethereum implements light client DAS (could be several years away though). Then a light client would be sufficient – even in the event of an Ethereum honest majority failure, it wouldn’t cause a safety failure for the SRs as DA would still be guaranteed in a fork.
Key Takeaways
Sovereignty can be quite useful for communities with clear social alignment that don’t want to bootstrap their own L1 validator set. However, complexities may arise during forks if these SRs are bridging to many other chains in a trust-minimized way. Relatively isolated non-financial applications may be a more niche market – they at least don’t exist broadly at the moment.
Purely financial applications are likely to value other technical benefits of a rollup stack over sovereignty, so SRs need to compete on features vs. SCRs here. The hub-and-spoke model mitigates many of the downsides of not sharing a settlement layer, but introduces a number of complexities. By far the most compelling bridging vision is the aggregated ZK-bridging I described above. If it works out, this presents a very interesting route.
Celestia – Sovereign Settlement Rollup & Smart Contract Recursive Rollups
Many Celestia rollups won’t be willing to forfeit a shared settlement layer. You see just how powerful it is in the Ethereum rollup ecosystem, and you want that too.
Enter settlement rollups. These are specialized SRs designed for non-sovereign recursive rollups to live on top of. Recursive rollups place smart contracts on the settlement rollup, providing them with a trust-minimized two-way bridge. These recursive rollups post proofs, state updates, and transaction data to the settlement rollup. The settlement rollup batches the recursive rollups’ data and posts it to Celestia.
Now you avoid all those bridging tradeoffs we just discussed for SRs, and you avoid the complexities surrounding P2P proof settlement. N trust-minimized bridges to a shared settlement layer allowing for easy bridging of tokens.
Restricted vs. General-purpose Settlement Layers
Now, should your settlement rollup be a restricted environment optimized solely for posting proofs and token transfers? Recursive rollups would then get cheaper fees – they no longer compete in an expensive general execution environment against DeFi, NFTs, etc.
There are a couple of ways to do this, potentially:
Permissioned settlement rollups with contracts being whitelisted
Change gas costs of certain instructions to heavily discourage any activity other than proofs and transfers
If you go this route, you lose out on all the rich DeFi use cases (e.g., DeFi pooling, dAMMs, etc.) and make it more difficult to build up liquidity. StarkNet is of course going the opposite route of Cevmos here. StarkNet will be a general-purpose settlement rollup for recursive StarkEx rollups. The other difference is that StarkNet is a SCR, so it ultimately settles back to Ethereum and retains a trust-minimized two-way bridge (vs. Cevmos is a SR without a trust-minimized two-way bridge to the base layer).
The other argument is that the routine operating costs of a rollup at scale paid to a settlement layer are low anyway. Even SCRs today pay very little to Ethereum’s settlement layer, and this will reduce further with upcoming optimizations. Any other settlement layer would be far cheaper and less constrained (particularly a settlement rollup). These costs will drop further with more efficient proofs and amortization of them over more transactions. Ethereum will also scale its settlement layer (though years down the road) potentially with statelessness, state expiry, and zkEVM.
Still, nothing is stopping you from deploying a general-purpose settlement rollup (L3) on top of Cevmos (L2), with recursive rollups (L4) sitting yet another layer up. It’s possible, but now you’re stretching all the way to L3 to get any kind of rich trust-minimized shared liquidity and DeFi (which you would normally just get at L1 Ethereum). You’re potentially relying on additional layers for censorship resistance.
Positive of a four layered approach:
More efficient resource pricing – Price certain resources more optimally for some use cases. For example, if a Celestia version of ENS just wants a place to settle proofs and transfer tokens, it doesn’t need to overpay to post proofs to a general-purpose execution environment where it competes with DeFi and receives no value.
Negatives of a four layered approach:
Degradation in liveness and censorship resistance – If any part of this stack fails, your rollup on top could be out of luck. Unfortunately, this is exacerbated by the above point. More on this below as it’s nuanced.
SCR forced transaction inclusion is pretty well-understood. Generally you can allow any L2 user to submit a transaction directly to the L1 contract. If it’s not submitted by the sequencer after some predetermined window (say one day), then the sequencer will no longer be able to submit blocks without it. Submitting blocks will become permissionless, and you’ll be able to force withdrawal to the L1 individually. This is more expensive to the user than getting batched with other L2 withdrawals, but it works.
Now for recursive rollups on top of sovereign settlement rollups. If the settlement rollup is censoring you, there needs to be a way to force inclusion within your own rollup. You can still inherit censorship guarantees from the L1 DA layer though. You can enforce a rule within your recursive rollup such that transactions posted to your namespace on Celestia must be processed for the rollup block to be valid. Users can force transactions into the DA layer individually which would be ordered by execution layer fees. This places no reliance on the settlement rollup, but would need to be built into the rollup’s state transition rule. This becomes 1 of N honesty – if the sequencer doesn’t include your transaction, a fraud proof can be submitted showing they must submit this transaction.
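A minimal sketch of what such a forced-inclusion rule could look like inside the rollup’s state transition function (all names hypothetical):

```python
# Sketch of the forced-inclusion rule described above: a recursive rollup's
# block is only valid if every transaction users force-posted to its DA
# namespace (older than some window) is included in the block.

def block_is_valid(rollup_block, namespace_txs, max_age):
    """namespace_txs: txs users posted directly to the rollup's DA namespace."""
    included = set(rollup_block["txs"])
    for tx in namespace_txs:
        overdue = rollup_block["height"] - tx["posted_at"] > max_age
        if overdue and tx["hash"] not in included:
            return False        # a fraud proof can point at this omission
    return True

block = {"height": 120, "txs": {"0xaa", "0xbb"}}
forced = [{"hash": "0xcc", "posted_at": 100}]
print(block_is_valid(block, forced, max_age=10))   # False: sequencer censored 0xcc
```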
Regarding liveness – if the sequencers for the recursive rollup itself fail, then you could still exit to the settlement rollup (as you can on Ethereum today). In the event of a settlement rollup liveness failure though, your recursive rollup is out of luck. You won’t be able to progress.
Key Takeaways
You tradeoff the benefits of sovereignty to regain those practical benefits of a shared settlement layer. Adding more and more layers to the stack can start to feel uncomfortable though.
Celestia – Enshrined Rollups???
If the Ethereum ERs were speculative, this is a mile further. There are no current plans for this. This is just one left-side-of-the-bell-curve opinion. I think enshrining a general-purpose rollup into Celestia is a potentially attractive option. Let me explain.
We’ve discussed how valuable settlement layers are for shared liquidity, trust-minimized two-way bridging etc. But enshrining it in the base layer itself is also particularly important for two reasons in my view:
Value Capture – A well-designed general-purpose rollup is likely to capture more value than a DA layer. Because the base layer asset is relied upon for economic security by the entire stack above it, this is crucial.
Native Asset Utility – It makes the base asset useful. ETH can be used in DeFi etc. on L1, or it can be bridged to any enshrined or SCR via a trust-minimized two-way bridge, allowing it to flow through the entire ecosystem as money. Celestia lacks this trust-minimized two-way bridge to be sent elsewhere, and you won’t be able to do anything other than stake or transfer it on Celestia itself. This effectively gives up on Celestia ever being “money,” which arguably remains the largest TAM in all of crypto.
So what are Celestia’s options for bridging its token elsewhere?
Multisig Bridge – A small third-party external committee must approve all transactions. Not very exciting.
Custodial (e.g., WBTC) – Hopefully I don’t have to explain why this isn’t great. This is also basically just a bad 1/1 multisig with no crypto-economic penalties.
Light Client Bridge (e.g., IBC) – It’s not trust-minimized, as you rely on the honest majority of any chain you connect to. The number of IBC connections, if any, to Celestia must also be kept very minimal to minimize complexity and prevent a large burden on Celestia’s state machine.
Enshrined Settlement Rollup – This gives Celestia trust-minimized two-way bridging, enhanced asset utility, and higher economic security.
Getting a trust-minimized bridge requires verification of DA and fraud/validity proofs for the state transition of the connected chain. Ethereum SCRs get this because their smart contracts verify the rollup’s state transitions. Celestia is not capable of supporting smart contracts for this bridge, so it would have to enshrine the rollup into the core protocol to make this work.
Celestia rollups could then all have a trust-minimized two-way bridge with this settlement rollup. This provides easier communication between all of them and allows the Celestia token to flow throughout the ecosystem.
The downside is that this increases the complexity of Celestia. Validators must now ensure its validity, and non-consensus nodes would verify its state transitions. It would somewhat increase their bandwidth requirements to download the ER’s data. However, because it’s an enshrined rollup with fraud/validity proofs, this verification is overhead-minimized – no execution is required by any validators or non-consensus nodes.
Ultimately, I view this as a bit of a free option. I think the downside is incredibly limited with potentially very high upside:
Worst case – Settlement layers aren’t needed at all (so it won’t serve that role) because SRs win out, and this ER gets outcompeted by every other SR. Then this ER won’t be super active, and it won’t pay for your security budget. At minimum, you get a trust-minimized two-way bridge for your token which can now be used broadly throughout the ecosystem. Still a win.
Medium case – SRs win out, and a settlement layer is not needed. Well this can just be yet another rollup in that ecosystem, except it’s likely to have a strong Schelling point and garner meaningful activity. This helps fund the security budget, and gives Celestia a trust-minimized two-way bridge.
Bull case – SCRs win out, and they really want a settlement layer. One that is enshrined in consensus not pulling apart socioeconomic concerns is very appealing. This would likely be a very active focal point of the ecosystem with high liquidity and activity. This helps ensure a sustainable value capture mechanism subsidizing security for the broader Celestia rollup ecosystem. And of course, trust-minimized two-way bridge.
Maybe SRs eat the world, and DA accrues a ton of value securing everything. The DA value accrual bull case would be that even with massive amounts of DA supply, the network effects on one ecosystem are massive. As a result, SRs pay up for prime real estate to get trust-minimized bridging in that ecosystem.
Even still, we know how to scale DA to pretty extreme lengths, and artificial pricing may need to be imposed to generate revenue. Or maybe it happens naturally because there’s an obscene amount of demand. In either case, there will be many other DA layers offering far higher throughput at far lower cost, and things are likely to load balance. For example, Ethereum’s security budget is subsidized by its settlement layer (at least currently) – it wouldn’t need to artificially raise DA prices. Other, far cheaper alternatives will also exist.
The other argument against enshrining a rollup is for “credible neutrality.” I.e., other rollups will be scared to deploy on Celestia because the base layer is competing with them, so it can’t be neutral. I generally disagree with this argument. If we think the future of finance is a 5x on what we see around us today, we seriously need to think bigger. There’s room for a heck of a lot more than one rollup out there. Rollups would continue to happily deploy on Celestia.
Alternatively, this enshrined rollup could become the settlement layer that every Celestia rollup decides to use. This becomes quite sticky, and makes it difficult to replace. Even if a better settlement layer becomes available technologically, it may be too late to make the move.
In the event that DA does accrue a ton of value and SRs eat the world, great. But there is a very significant risk that this does not happen in my view. It’s incredibly hard to predict what most future applications will need exactly, and how technology such as ZK-aggregated bridging unfolds from here. ERs seem to present upside in any of these uncertain scenarios.
Key Takeaways
A lot of the decision-making here is understandably socially driven. In my opinion though, the practical benefits tip the scale towards enshrining a rollup in Celestia. And of course, SRs can still deploy on Celestia as they wish! In the same way SRs could use Ethereum and ignore its settlement layer, they could ignore Celestia’s enshrined rollup. Not enshrining a rollup appears to be a very expensive form of virtue signaling.
Concluding Thoughts
Sorry, just one more podcast quote I liked – back to John on the different rollup stacks:
“Whether or not I expect one to be more prevalent than the other, I wouldn’t say I expect anything, except expect the unexpected. The world is full of surprises and I can’t really predict which one will be used more.”
There’s truth here. I have opinions, and other people have other opinions. But that’s the point – they’re opinions. No stack I described here is strictly better than the others. There are tradeoffs. I expect really smart people to build on everything I described here, and only time will tell what they value most or what accrues value.
And if you still don’t think rollups are cool yet, well honestly it’s not my problem anymore.
Today, stakers cannot withdraw ETH from the Beacon Chain at all. After withdrawals have been enabled, the lockup period for unstaking will be .
Stakers on the Beacon Chain currently earn inflation rewards. After the merge, . The inflation rewards depend on how much ETH is staked in total: more staked ETH means lower inflation rewards per validator, and vice-versa. Today ~4m ETH is staked, leading to an annual return of 7.8%.
All rollups follow a similar basic architecture and internal logic. Nevertheless, as we saw earlier, the single distinction between Optimistic and Zero Knowledge Rollups—how the “review process” works on each—generates a host of downstream differences in security, usability, and EVM compatibility.
Some years ago at Princeton, a group of undergraduates working with Professor Ed Felten delivered a presentation on the project they had signed up to build: a blockchain-based arbitration system. The objective was to circumvent some of the anticipated scaling challenges of smart contract platforms, and the plan was to design a blockchain that relied on a system of challenges and dispute resolution to lighten the computational workload for traditional miners. “Arbitrum,” as the system was called, would have suffered the same fate as most other promising academic computer science projects, if not for two ambitious PhD students, Steven Goldfeder and Harry Kalodner, who approached Felten a few years later with the idea of building out a robust layer 2 solution based upon the initial concept. Soon thereafter, Felten, Goldfeder, and Kalodner co-founded Offchain Labs and have since shepherded Arbitrum from abstract idea to concrete reality.
Optimism, too, has a history that predates its current form. In mid 2017, Vitalik Buterin and Joseph Poon co-authored a paper proposing Plasma, an early scaling solution for Ethereum. A group of core Ethereum researchers took up the mantle and formed a non-profit research group to build out the vision. Development stalled in late 2019, as some of Plasma’s limitations became apparent. Undeterred, three of Plasma’s lead researchers—Karl Floersch, Jinglan Wang, and Ben Jones—decided to pivot toward what seemed to be Plasma’s natural successor, the Optimistic Rollup. They formed Optimism PBC in early 2020.
On the flip side, the advantage of Arbitrum’s dispute resolution is that it is cheaper in terms of onchain (i.e. on Ethereum) transaction costs. The bite-sized chunk of code that the EVM eventually processes after the completion of the back-and-forth dispute resolution process requires much less gas (in most cases) than it does to re-process the entire transaction onchain.
There is a potential vulnerability with a “multi-round” dispute resolution process such as the one Arbitrum uses. In theory, at least, a spammer could stall progress of the rollup by launching a salvo of consecutive challenges that each takes considerable time to resolve. Indeed, this is a problem that would have plagued a previous iteration of Arbitrum.
However, Arbitrum’s updated protocol applies an elegant solution to this problem called “pipelining.” Pipelining allows network validators to continue processing transactions for final approval even if a previously-processed transaction is under dispute. What this creates is a “pipeline” of recently-processed but yet-to-be-finalized transactions, instead of a bottleneck that prevents the sequencer from processing transactions and network parties from submitting challenges.
Pipelining is possible because anyone monitoring the network can know immediately whether a dispute is valid or invalid even before the dispute resolution process is finished. In essence, validators can operate as if the disputed transaction is already finalized and continue building the chain (i.e. processing transactions) on whichever outcome, or “branch,” is correct. This process, depicted by the diagram below, blunts the force of any would-be spamming attack.
To very briefly summarize these differences: Optimism’s codebase is comparatively minimalist, while Arbitrum’s is more complex and ambitious; Optimism has in the past indicated that it favors a MEV Auction approach, while Arbitrum plans to implement a fair sequencing service (FSS). Naturally, both of these points of comparison merit their own separate articles. MEV, in particular, is an issue of philosophical contention between the two projects—though at least in the early days after launch, both are expected to use a trusted sequencer model for simplicity.
[3] It’s also worth listening to Ed Felten’s explanation of the pipelining process at ETHDenver last year.
The Block Research was commissioned to create Layer-1 Platforms: A Framework for Comparison, which provides a “look under the hood” at seven platforms: Algorand, Avalanche, Binance Smart Chain, Cosmos, Ethereum/Ethereum 2.0, Polkadot, and Solana. We assess their technical design, related ecosystem data, and qualitative factors such as key ecosystem members to get an understanding of how they differ. Having done this analysis, we draw some insights for what the future of the broader smart contract landscape could look like for years to come.
First, we need to agree on how we measure performance. Since time immemorial, new blockchains have thrown around claims about how much more performant they are than Ethereum. It’s an old pastime. You’ll see lots of numbers bandied about comparing self-reported TPS (transactions per second). Unfortunately, these TPS numbers usually come from the chains’ own marketing materials, which are almost always BS.
In reality, there’s no single agreed-upon way to benchmark TPS. That’s often the case in benchmarking; it’s a messy and fraught field, full of cherry-picking, overfitting, and gamesmanship.
First, performance is always a tradeoff against decentralization. Testnets and devnets, which are highly centralized, can produce incredible numbers compared to what’s possible in mainnet environments. And many mainnets compromise on decentralization, which squeezes out additional performance.
Instead of token transfers, we looked at one of the most widely used DeFi workloads: Uniswap V2, and turned it into a very simple benchmark. If you filled an entire block with Uniswap V2-style trades, how many trades per second would clear?
Uniswap v2 trades per second: 9.19 average, 18.38 max (due to EIP-1559)
Block time average: 13.2 seconds (PoW, so blocks are mined randomly in a Poisson process)
Time to finality: (approximate, ETH blocks aren’t truly final)
Assumptions and Methodology: at the 15M gas target, which is what Ethereum achieves at equilibrium with EIP-1559, Ethereum can do 9.19 trades per second; at the 30M gas limit it can achieve 18.38 trades per second (but fees would increase exponentially if it stayed there). We used the swapExactETHForTokens transaction as a representative on-chain 1-hop trade. Assuming block producers can perfectly stuff a 15M gas block with Uniswap trades that cost 123,658 gas each, that means we can get 15M/123,658 = ~121.3 swaps into a single block. If we assume blocks arrive every 13.2 seconds, that means Ethereum processes 121.3/13.2s = ~9.19 Uniswap v2 swaps per second.
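For the arithmetic-inclined, a quick Python check of these numbers (the 30M figure is simply the standard 2x elasticity over the 15M gas target, an assumption on my part):

```python
# Reproducing the Ethereum Uniswap-v2 benchmark numbers above.
GAS_TARGET = 15_000_000        # EIP-1559 equilibrium target
GAS_LIMIT = 30_000_000         # hard cap (fees rise exponentially if sustained)
SWAP_GAS = 123_658             # swapExactETHForTokens, per the methodology above
BLOCK_TIME = 13.2              # average seconds between PoW blocks

swaps_per_block = GAS_TARGET / SWAP_GAS              # ~121.3
print(swaps_per_block / BLOCK_TIME)                  # ~9.19 swaps/sec at the target
print(GAS_LIMIT / SWAP_GAS / BLOCK_TIME)             # ~18.38 swaps/sec at the limit
```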
Celo trades per second: 24.93 average, 49.86 max (due to EIP-1559)
Block time average:
Time to finality: (Celo uses a PBFT-style protocol that immediately finalizes blocks)
Assumptions: is the representative trade, 10M , and .
Polygon trades per second: 47.67 average, 95.33 max (due to EIP-1559)
Block time average:
1. Probabilistic: This is similar to most Ethereum style blockchains where the canonical chain depends on the most work done (heaviest). In Polygon’s case, the finality of the Bor layer (which is the block producer layer) depends on the .
2. Provable: This is similar to Tendermint/IBFT, where the super-majority signs on the canonical chain. This happens on the Heimdall layer (which is Polygon’s validator management and state-sync layer), through checkpoints. These checkpoints are submitted to Ethereum.
Reorgs and forks can happen on the Bor layer but not on Heimdall. Checkpoints are snapshots of the Bor chain state. Once a block is included in a submitted checkpoint, it cannot be reorg’d (unless >=⅓ of the validator set is dishonest). Checkpoints are roughly every 25 minutes.
Assumptions: is the representative trade, , and .
Avalanche trades per second: 31.65 on average, but due to its elastic block time it can, at maximum throughput, hit 175.68 trades per second. However, sustaining throughput at that level would cause fees to rise exponentially.
Block time average: (Avalanche is a leaderless protocol with an elastic block time: blocks can be produced at any time, provided enough minimum fees are paid. The Avalanche C-Chain has had periods of much faster block production.)
Assumptions: is the representative trade, current .
Binance Smart Chain trades per second: 194.60 (BSC does not use EIP-1559, so this is a flat number)
Block time average:
Time to finality:
Assumptions: is the representative trade, .
Solana trades per second: 273.34
Block time:
Time to finality: (Solana also emits much faster “optimistic confirmations,” but these are only resistant to ~4.7% corruption. Most dApps accept this threshold instead.)
We first learned that Solana does have something like gas, called compute units (CU). From our conversations with validators, most seemed to think Solana validation was “racing against the clock to pack as many transactions as they can within the block time,” but the actual limitation is that each block can only contain a capped number of compute units.
The per-account limit is 12M CU. If you follow this 12M account CU limit, a 590ms block time on mainnet, and the CU cost of an Orca swap, we arrive at a theoretical limit of 273.34 swaps/sec.
To confirm that we were measuring its performance correctly, we decided to put Solana directly to the test with a spam attack. We don’t want to spam mainnet for obvious reasons, so we targeted the Solana devnet. Note that Solana’s devnet runs on a smaller cluster and thus has a faster block time than mainnet (380ms vs mainnet’s 590ms), which will increase its performance compared to mainnet. Given a 380ms block time, we should expect that the devnet should clear 424.40 swaps per second.
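A quick sanity check on that expectation – it’s just the mainnet figure rescaled by block time:

```python
# The devnet expectation above is the mainnet figure rescaled by block time.
MAINNET_SWAPS_PER_SEC = 273.34
MAINNET_BLOCK_TIME = 0.590     # seconds
DEVNET_BLOCK_TIME = 0.380      # seconds (smaller cluster, faster blocks)

swaps_per_block = MAINNET_SWAPS_PER_SEC * MAINNET_BLOCK_TIME   # bounded by the
print(swaps_per_block)                                         # per-account CU limit
print(swaps_per_block / DEVNET_BLOCK_TIME)                     # ~424.4 swaps/sec expected
```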
In our test run, we managed to land 184 Orca swaps.
Granted, this is only one test run, so take it with a grain of salt — we encourage you to play with it and share your results with us.
We’re glossing over a ton of details here, and none of this would’ve been possible without the help of our friends at Blockdaemon. If you want to know the juicy details of what it took to perform this (and a deeper dive into Solana internals with some MEV alpha leaks), check out the full write-up where we get into the gory technical details.
You might look at all this and wonder: but I thought Solana is routinely doing thousands of TPS?
The way block explorers measure Solana’s TPS can be misleading — it counts internal consensus messages as transactions, which no other blockchain does. Roughly speaking, the bulk of that figure is consensus messages. Subtracting those, you’re left with ~600 TPS, most of which are Serum trades, which are very cheap. So long as enough other contracts are being touched, Solana can also achieve higher performance in production.
Fourth, if you want really high performance now, you have to look outside the EVM space. We only benchmarked Solana here, but there are other non-EVM L1s like NEAR and Terra that also achieve higher performance. But like Solana, they don’t get to benefit from the tooling and ecosystem around the EVM. (Though NEAR has Aurora, which is EVM-compatible, and other L1s are trying to develop similar virtualized EVM instances.)
However, the success of the Ethereum blockchain has once again demonstrated the need for ways to improve blockchain scalability. This is especially true for a platform like Ethereum, whose utility comes from its ability to run decentralized applications powered by smart contracts.
We already know that one solution to the scalability trilemma can be to make a system more centralized. This is the approach taken by some competing chains, and it has certainly paid dividends in a massive increase in user accounts and activity. But increasing scalability while preserving one of blockchain’s most unique characteristics – its decentralized nature – is where the real challenge lies. This is what Layer 2 solutions are trying to accomplish.
In our previous article on Layer 2, we presented a general overview of the most prominent types of Layer 2 solutions, including state channels, sidechains, Plasma and, of course, rollups. In this piece, we’ll zero in on rollups and will examine some of the most promising projects in that Layer 2 solution category.
At the moment, the optimistic rollup space is shaping up to be a battleground for two main competitors – Optimism and Arbitrum. Competition between these two projects is already heating up, with both having already scored some early successes. The two solutions are very similar, with the main difference being the way they generate fraud proofs. There are also differences with regard to their compatibility with the Ethereum Virtual Machine (EVM) and Ethereum tooling.
The first optimistic rollup protocol to gain traction in the blockchain space, Optimism was also supposed to be the first to have a mainnet launch. However, a delay from its originally planned March launch allowed Arbitrum to beat it to market. Nevertheless, the project continues to attract strong interest, and even investment from Andreessen Horowitz.
We start with Hermez because of a major development that made the headlines earlier this month. According to the announcement, Hermez is merging with Polygon in a landmark deal worth $250 million. The merger means that Polygon, which is arguably the most popular Ethereum scaling project right now (though technically not Layer 2, as it’s a sidechain), is getting ZK rollup capabilities. We’ll be covering Polygon in detail in a future article.
Next, we have another SNARK-based rollup. The team behind ZKSync, Matter Labs, describes their project as a scaling and privacy engine enabling low-gas transfers of Ether and ERC-20 tokens on the Ethereum network. ZKSync’s motto – “rely on math, not on validators” – certainly seems aptly chosen, as currently there is only one validator processing batches and generating validity proofs.
Whereas Hermez is working towards supporting – and ZKSync already supports – smart contracts, the next ZK rollup solution, Loopring, focuses solely on decentralized exchanges (DEXs) and payment applications. With Loopring, anyone can become an operator of their own orderbook DEX or automated market maker (AMM) and take advantage of its ZK rollup technology to achieve high throughput at low gas cost – Loopring claims that its ZK rollup can reach up to 2,025 trades per second, while the cost per transaction is 100 times lower than the same metric on Ethereum. Exchange operators are required to post a large bond (stake) in the protocol’s native LRC token.
Arguably the most interesting aspect of StarkEx – a Layer 2 scaling and payment protocol developed by StarkWare – is that it uses STARKs (scalable transparent arguments of knowledge). Unlike SNARKs, which require a trusted setup (or a multiparty ceremony) to produce pre-generated keys that are then used to create and verify the proofs, STARKs utilize a method that removes the need for such a setup. That method was pioneered by StarkWare, which continues to be the driving force behind STARK-based technology. The key innovation enabling this is the Turing-complete programming language Cairo. Developed by the StarkWare team, Cairo enables generation of STARK proofs for general computation. The StarkEx protocol is written in Cairo.
At the end of our previous article on Layer 2, we concluded that Layer 2 will play an important part in making Ethereum more scalable and will complement nicely the larger effort to solve the scalability problem at the Layer 1 level. The same view is shared by many in the blockchain industry and even by some prominent outsiders.
If you want to learn more about this exciting technology, Vitalik’s “An Incomplete Guide to Rollups” is a great place to start. You can also check out what LimeChain co-founder and blockchain architect George Spasov has to say about some of the most prominent rollup projects and why he believes that the Polygon-Hermez deal is a gamechanger for the industry. Finally, if you want to pursue practical applications of the technology, our knowledgeable consultants and world-class dev team can help you determine how rollups can be utilized to best serve your specific needs. Don’t hesitate to get in touch with our experts.
Typically the layer-1 chain (L1) has higher security and liquidity, and the layer-2 (L2) is a new chain wanting to leech security and liquidity from L1. Let’s walk through a simple example of what that means, for absolute beginners, especially those who are just joining us here in crypto land (hi there 👋 welcome... WATCH OUT!!!! oh, uh, you just got rugged, buddy, oops. What? It means you kinda lost your money... whoa, calm down!... you gonna call who?? What the hell is “The BBB”??). So you have 100 Dai on the Ethereum blockchain; it says so in your Metamask. But how does Metamask know? It’s communicating with the Ethereum p2p network via an ethereum-node-as-a-service provider called Infura. But what does it really mean to have 100 Dai? It means that the Dai contract, which is a piece of software comprised of code and data that lives on the Ethereum blockchain, has your address that you see in your Metamask, and next to it the number 100.
A user must store the data necessary for raising disputes. In the case of channels this data is typically signatures from counter-parties attesting to state changes in the channel (e.g. “Alice: I certify paying 10 Dai to Bob”, or “Charlie: I certify moving rook to position H5 in this chessboard at configuration X”).
Specific to plasma: users are vulnerable to (a) massive increase in data that needs to be stored, because data of interest to a user exists as part of a global plasma chain state, not just a counter-party as in channels, and (b) data withholding attacks by the plasma operator (block producer) who may attempt a malicious withdrawal while at the same time withholding the data that users need to raise a challenge. This adds further complications to the withdrawal safety logic on L1.
It wasn’t until rollups emerged that these pesky problems were truly solved, by requiring that all the data a user needs to exit be available on L1. This data is updated by the rollup operator every time L2 advances its state. So, L2 execution and the L1 data update advance in lock step.
Note 3: Can Bitcoin have a layer-2? No. It lacks the expressive smart contracts and the state necessary to create sophisticated L1 contracts to manage disputes and/or verify validity proofs. Of course, you’ll hear claims (~02:35) that you can use so-and-so Bitcoin sidechain “without giving up control of your coins”, but that’s simply false advertising.
If rollups are so magical, why would anyone choose to build a sidechain, which (a) requires additional trust assumptions and (b) has been rejected by the market over the past 7 years anyway?
Sidechains continue to get built because they are easy to spin up. Usually people spin them up to create a pitch deck overnight, raise money from VCs, and dump a token on retail.
Some charlatans will try to blur this distinction; some may even be brazen enough to claim that their sidechain is a layer-2, sometimes using technical hand-waving to create a distraction.
Rollups are all the rage in the Ethereum community, and are poised to be the key scalability solution for Ethereum for the foreseeable future. But what exactly is this technology, what can you expect from it and how will you be able to use it? This post will attempt to answer some of those key questions.
There are two ways to scale a blockchain ecosystem. First, you can make the blockchain itself have a higher transaction capacity. The main challenge with this technique is that blockchains with "bigger blocks" are inherently more difficult to verify and likely to become more centralized. To avoid such risks, developers can either increase the efficiency of client software or, more sustainably, use sharding techniques to allow the work of building and verifying the chain to be split up across many nodes; the eth2 effort is currently building this upgrade to Ethereum.
The three major types of layer-2 scaling are state channels, Plasma and rollups. They are three different paradigms, with different strengths and weaknesses, and at this point we are fairly confident that all layer-2 scaling falls into roughly these three categories (though naming controversies exist at the edges, eg. see ).
Plasma and channels are "full" layer 2 schemes, in that they try to move both data and computation off-chain. However, fundamental data availability problems mean that it is impossible to safely do this for all applications. Plasma and channels get around this by relying on an explicit notion of owners, but this prevents them from being fully general. Rollups, on the other hand, are a "hybrid" layer 2 scheme. Rollups move computation (and state storage) off-chain, but keep some data per transaction on-chain. To improve efficiency, they use a whole host of fancy compression tricks to replace data with computation wherever possible. The result is a system where scalability is still limited by the data bandwidth of the underlying blockchain, but at a very favorable ratio: whereas an Ethereum base-layer ERC20 token transfer costs ~45,000 gas, an ERC20 token transfer in a rollup takes up 16 bytes of on-chain space and costs under 300 gas.
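To make that ratio concrete, here is a quick back-of-the-envelope check in Python, using only the figures quoted above (45,000 gas for a base-layer ERC20 transfer, ~16 bytes per rollup transfer, and 16 gas per byte of calldata); actual rollup costs also include a share of per-batch overhead, which is ignored here.

```python
# Rough cost comparison of an ERC20 transfer on L1 vs. inside a rollup batch,
# using the figures quoted in the paragraph above.
CALLDATA_GAS_PER_BYTE = 16        # gas charged per byte of transaction data on Ethereum
BASE_LAYER_TRANSFER_GAS = 45_000  # typical base-layer ERC20 transfer
ROLLUP_TRANSFER_BYTES = 16        # compressed ERC20 transfer inside a rollup batch

rollup_transfer_gas = ROLLUP_TRANSFER_BYTES * CALLDATA_GAS_PER_BYTE  # 256 gas ("under 300")
print(f"rollup transfer: ~{rollup_transfer_gas} gas")
print(f"improvement: ~{BASE_LAYER_TRANSFER_GAS / rollup_transfer_gas:.0f}x")  # ~176x
```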
ZK rollups, which use validity proofs: every batch includes a cryptographic proof called a ZK-SNARK (eg. using the protocol), which proves that the post-state root is the correct result of executing the batch. No matter how large the computation, the proof can be very quickly verified on-chain.
Harder (ZK-SNARK proving general-purpose EVM execution is much harder than proving simple computations, though there are efforts (eg. ) working to improve on this)
A fraud proof claiming that a batch was invalid would contain the data in green: the batch itself (which could be checked against a hash stored on chain) and the parts of the Merkle tree needed to prove just the specific accounts that were read and/or modified by the batch. The nodes in the tree in yellow can be reconstructed from the nodes in green and so do not need to be provided. This data is sufficient to execute the batch and compute the post-state root (note that this is exactly the same as how stateless clients verify individual blocks). If the computed post-state root and the provided post-state root in the batch are not the same, then the batch is fraudulent.
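As a rough illustration of the Merkle-branch part of that check, here is a minimal Python sketch. It assumes a simple binary Merkle tree and made-up helper names; a real fraud proof would also re-execute the batch against the proven accounts, which is omitted here.

```python
import hashlib

def node_hash(*parts: bytes) -> bytes:
    # Stand-in for whatever hash function the state tree actually uses.
    return hashlib.sha256(b"".join(parts)).digest()

def verify_branch(leaf: bytes, index: int, siblings: list, root: bytes) -> bool:
    """Check that `leaf` sits at position `index` under `root`, given the sibling
    hashes along its path (the 'green' nodes from the description above)."""
    current = node_hash(leaf)
    for sibling in siblings:
        if index % 2 == 0:
            current = node_hash(current, sibling)
        else:
            current = node_hash(sibling, current)
        index //= 2
    return current == root

# A fraud proof would (1) verify every touched account against the pre-state root
# with verify_branch, (2) re-execute the batch on those accounts, (3) recompute the
# post-state root, and (4) compare it to the post-state root the batch claimed.
```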
Signature: we can use , which allows many signatures to be aggregated into a single ~32-96 byte (depending on protocol) signature. This signature can then be checked against the entire set of messages and senders in a batch all at once. The ~0.5 in the table represents the fact that there is a limit on how many signatures can be combined in an aggregate that can be verified in a single block, and so large batches would need one signature per ~100 transactions.
Sequencer auction: an auction is held (eg. every day) to determine who has the right to be the sequencer for the next day. This technique has the advantage that it raises funds which could be distributed by eg. a DAO controlled by the rollup (see: )
On the existing Ethereum chain, the gas limit is 12.5 million, and each byte of data in a transaction costs 16 gas. This means that if a block contains nothing but a single batch (we'll say a ZK rollup is used, spending 500k gas on proof verification), that batch can have (12 million / 16) = 750,000 bytes of data. As shown above, a rollup for ETH transfers requires only 12 bytes per user operation, meaning that the batch can contain up to 62,500 transactions. At an average block time of ~13 seconds, this translates to ~4807 TPS (compared to 12.5 million / 21000 / 13 ~= 45 TPS for ETH transfers directly on Ethereum itself).
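The same arithmetic, written out in Python so the numbers are easy to check (the ~13-second block time is the assumption behind the ~4807 TPS figure):

```python
# Throughput of a ZK rollup on today's Ethereum, following the numbers above.
GAS_LIMIT = 12_500_000
PROOF_VERIFICATION_GAS = 500_000       # ZK-SNARK verification per batch
CALLDATA_GAS_PER_BYTE = 16
BYTES_PER_ETH_TRANSFER = 12
BLOCK_TIME_SECONDS = 13

data_gas = GAS_LIMIT - PROOF_VERIFICATION_GAS                # 12,000,000 gas left for data
batch_bytes = data_gas // CALLDATA_GAS_PER_BYTE              # 750,000 bytes per batch
transfers_per_block = batch_bytes // BYTES_PER_ETH_TRANSFER  # 62,500 transfers
rollup_tps = transfers_per_block // BLOCK_TIME_SECONDS       # ~4,807 TPS

l1_transfer_tps = GAS_LIMIT / 21_000 / BLOCK_TIME_SECONDS    # ~45 TPS for plain ETH transfers
print(f"rollup: ~{rollup_tps} TPS, plain L1 transfers: ~{l1_transfer_tps:.0f} TPS")
```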
Now what if we want to go above ~1000-4000 TPS (depending on the specific use case)? Here is where eth2 data sharding comes in. The sharding proposal opens up a space of 16 MB every 12 seconds that can be filled with any data, and the system guarantees consensus on the availability of that data. This data space can be used by rollups. This ~1398k bytes per sec is a 23x improvement on the ~60 kB/sec of the existing Ethereum chain, and in the longer term the data capacity is expected to grow even further. Hence, rollups that use eth2 sharded data can collectively process as much as ~100k TPS, and even more in the future.
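And the corresponding back-of-the-envelope numbers for the sharded data layer, again in Python (16 MB of data space every 12 seconds, divided by ~12 bytes per transfer):

```python
# Data-availability throughput under the sharding proposal described above.
SHARD_DATA_BYTES = 16 * 1024 * 1024    # 16 MB of data space per slot
SLOT_SECONDS = 12
BYTES_PER_ETH_TRANSFER = 12

bytes_per_second = SHARD_DATA_BYTES / SLOT_SECONDS       # ~1,398,101 bytes/sec
improvement = bytes_per_second / 60_000                  # ~23x over today's ~60 kB/sec
tps_ceiling = bytes_per_second / BYTES_PER_ETH_TRANSFER  # on the order of ~100k TPS
print(f"~{bytes_per_second/1000:.0f} kB/s, ~{improvement:.0f}x, ~{tps_ceiling:,.0f} TPS")
```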
There are many kinds of rollups, and many choices in the design space: one can have an optimistic rollup using fraud proofs, or a ZK rollup using validity proofs (aka. ZK-SNARKs). The sequencer (the user that can publish transaction batches to chain) can be either a centralized actor, or a free-for-all, or many other choices in between. Rollups are still an early-stage technology, and development is continuing rapidly, but they work and some (notably , and ) have already been running for months. Expect much more exciting work to come out of the rollup space in the years to come.
This is the continuation of my previous post, where the basic ideas of blockchain technology – the transaction pool, chains of blocks, and mining – were explained. I’d recommend everyone not familiar with these terms read that post first. The following article is more complicated, with programming terminology and links to the first article.
The original was written in my blog (in Russian). Many thanks to the team behind this translation.
All we need is to add the implementation of these conditions to the blockchain. That’s what Ethereum does – this smart contract can be described using a few lines of code written in Solidity – a special language very similar to Javascript.
So this calculation of hashes in bitcoin is performed by executing a set of instructions which returns 0 or 1 depending on the result. In theory, it is possible to add your own logic to this set – there are branching operators, variables and the like. No wonder it’s called Script, and it is similar to the ancient language Forth.
Remember I said that in bitcoin you need to solve the task of finding a hash that starts with N zeros, for example, ... There was even an interactive search for such hashes.
Therefore, in 2013, a new protocol was proposed. In addition to the concept of the previous (“parent”) block, it introduces the concept of the “uncle” (or ommer) block.
Now, Ethereum also uses Proof-of-Work mining. The algorithm is called Ethash, but it’s counting its last days. Ethereum has long planned to switch to a new consensus mechanism where nobody will need to buy up video cards and build farms – Proof-of-Stake.
In Ethereum they named their implementation Casper and wrote about it in detail in the . Here I do not give a description of Casper because I do not fully understand it myself. If some of the readers could explain it in a few words in the comments here, that would be very useful.
Let’s try to form a unified picture of the network. Users connect to the network as usual by downloading the application, for example, and can start making transactions.
Now, Alex is a master at making cute plush sharks like this one (original: ):
Alex creates a smart contract describing a shark-coin; there is even a ready standard for this case – ERC-20. The smart contract for the token is absolutely banal: it describes the functions “buy”, “sell”, “transfer” and “balance.” When someone sends a transaction with money (ETH) to the contract, inside the smart contract a simple dictionary is updated with “this wallet owns so many shark-coins”. This dictionary is stored directly on the blockchain, in the same Ethereum storage that is visible to all comers.
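To make the “simple dictionary” idea concrete, here is a toy Python model of that bookkeeping. It is not Solidity and not a real ERC-20 implementation; the class, price, and addresses are made up purely for illustration.

```python
class SharkCoin:
    """Toy model of the token bookkeeping described above: a dictionary mapping
    wallet addresses to balances, plus buy / transfer / balance operations."""

    PRICE_IN_ETH = 0.01  # made-up price per shark-coin

    def __init__(self):
        self.balances = {}  # address -> number of shark-coins

    def buy(self, wallet, eth_sent):
        tokens = int(eth_sent / self.PRICE_IN_ETH)
        self.balances[wallet] = self.balances.get(wallet, 0) + tokens

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def balance_of(self, wallet):
        return self.balances.get(wallet, 0)

coin = SharkCoin()
coin.buy("0xFan", eth_sent=1.0)        # a fan sends 1 ETH and gets 100 shark-coins
coin.transfer("0xFan", "0xFriend", 10)
print(coin.balance_of("0xFriend"))     # 10
```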
Alex loads it into the blockchain through Ethereum Wallet and can already go to the forums saying “guys, buy my tokens.” For the hype, someone will even buy a couple, as happened when one dude decided to raise money for a new computer but collected tens of thousands of USD. This is similar to an issue of shares but is not yet a full-fledged ICO, more like donations for a virtual reward.
A week after the launch, a bug was found in the very place where the logic to “get out and take your share out of the fund” was implemented. The essence of the bug was that instead of the address of the recipient of the share, it was possible to use the address of another smart contract, which within itself could try to request a refund again before the main contract recorded the first refund. And so, recursively, it was possible to drain everything.
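The bug pattern is easier to see in pseudocode. Below is a deliberately simplified Python model (not the actual DAO contract) of a withdraw routine that sends funds before zeroing the balance, which is exactly what lets a malicious recipient re-enter and get paid several times from the same share.

```python
class VulnerableFund:
    """Simplified model of the flawed logic: the external call to the recipient
    happens *before* the balance is zeroed, so a malicious recipient can re-enter
    withdraw() while its stale balance is still recorded."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, account):
        amount = self.balances.get(account.address, 0)
        if amount > 0:
            account.receive(self, amount)       # external call first (the bug)
            self.balances[account.address] = 0  # bookkeeping fixed only afterwards

class Attacker:
    address = "0xEvil"

    def __init__(self):
        self.stolen = 0
        self.reentries = 0

    def receive(self, fund, amount):
        self.stolen += amount
        if self.reentries < 2:       # re-enter before the fund zeroes our balance
            self.reentries += 1
            fund.withdraw(self)

attacker = Attacker()
fund = VulnerableFund({attacker.address: 100})
fund.withdraw(attacker)
print(attacker.stolen)  # 300 -- paid three times from a 100-unit share
```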
This article is part of the Stablecoin Primer series. If you are interested in reading the other articles, .
Types of stablecoins by
So far, in , , and , we discussed the consumer grade value propositions and uses of stablecoins, such as using stablecoins for cross-border remittances. These were quite straightforward. However, as simple as the concept of a stablecoin may sound, the protocols and mechanisms behind these stablecoins can be quite intricate, requiring a certain level of familiarity with economics and blockchains. This is still a Primer so my intention is to keep the content as introductory as possible, but we do need to gear up for some complexity.
Two different types of stablecoin users: Consumers and DeFi participants (DeFi = decentralized finance). A consumer is someone who simply wants to buy and hold stablecoins. For example, someone who only uses stablecoins to convert their salary into DAI or USDT to save in dollars could be considered a consumer. Consumers usually interact with stablecoins only via secondary markets, such as using a fiat-to-crypto exchange to buy / sell stablecoins. If you identify as a consumer and are not necessarily curious about the systems behind stablecoins, you do not have to worry about the intricacies of stablecoin protocols — feel free to just skim through the sections until Section 5. A DeFi participant, on the other hand, leverages stablecoin protocols to mint (create) stablecoins and participate in various DeFi activities such as . A range of parties could be counted as DeFi participants, including but not limited to traders, institutions, crypto exchanges, and DAOs. Understanding how a DeFi participant interacts with a stablecoin protocol is crucial because without them, stablecoins do not exist. If you identify as a DeFi participant or are interested in the nuts and bolts of stablecoins, this section is perhaps relevant and useful for you. Enjoy!
Two different stablecoin users and a stablecoin protocol by
The second design principle that stablecoins optimize for is decentralization, which refers to a stablecoin protocol being governed by many parties and not having a single point of failure. The more decentralized a stablecoin, the more censorship resistant it becomes because decentralization prevents any single party from controlling permissions to the stablecoin. This is especially important for citizens of countries with strong currency controls and censorship where obtaining foreign currencies is not as straightforward — remember the Argentina example in . Another benefit of decentralization is that it makes the monetary policy of a protocol more transparent to its participants. For example, with algorithmic stablecoins, the amount of profit made by the protocol is visible to its participants at all times. This plays a key role in building trust and growing the network of the stablecoin.
The last design parameter that emerges from different stablecoin experiments is capital efficiency, which refers to how effectively the protocol allows a user to utilize their capital. Stablecoins that rely on the backing of other cryptocurrencies are often overcollateralized. This means that, to create (mint) new stablecoins, a user must deposit an amount of collateral (e.g., ether) that is higher in value than the stablecoins they receive in return. For example, a user deposits $200 worth of ether to mint $100 worth of stablecoins. By requiring overcollateralization, these protocols protect their stablecoin from the downward volatility of the backing cryptocurrency. This, however, locks up capital that could otherwise be put to work in lending, borrowing, and other DeFi activities, so protocols need to strike a balance between their collateral requirements and capital efficiency.
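A small Python illustration of that trade-off, using the $200-of-ether example above; the 150% minimum collateral ratio is a hypothetical parameter, not any specific protocol's.

```python
# Overcollateralized minting, following the example in the text.
collateral_value = 200.0    # USD worth of ether deposited
stablecoins_minted = 100.0  # USD worth of stablecoins received

collateral_ratio = collateral_value / stablecoins_minted    # 2.0, i.e. 200%
capital_efficiency = stablecoins_minted / collateral_value  # 0.5 -- half the capital sits idle

MIN_RATIO = 1.5  # hypothetical liquidation threshold (150%)
# The position is safe until ether falls enough to push the ratio below MIN_RATIO:
max_tolerable_price_drop = 1 - (MIN_RATIO * stablecoins_minted) / collateral_value  # 25%

print(collateral_ratio, capital_efficiency, max_tolerable_price_drop)
```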
Overly-simplified stablecoin mission control deck by
Total market cap of stablecoins currently stands at +$186 billion. Source:
Fiat-backed stablecoin issuance by
Definition: For every unit of a fiat-backed stablecoin, there is a corresponding fiat unit in the reserves of the issuing company or protocol. In the example of Tether, for every USDT in circulation, there is $1 in Tether’s centralized reserves at . This is rather a traditional mechanism that leverages the underlying currency’s stability.
Total market cap of fiat-backed stablecoins currently stands at +$155 billion, accounting for ~85% of the total stablecoin market. Source:
Projects: USDT (), USDC ( and Coinbase), BUSD (), TUSD (), TRYB (), USDP (), to name a few…
Decentralization: This is a fully centralized structure, as the issuing company acts as the custodian of users’ collateral or fiat currency. Unless the issuing company is regularly audited, this may not be the most transparent structure for the user. After all, who knows whether an issuer actually holds all the collateral in its reserves? Critics of fiat-backed stablecoins say that the fact that most of the stablecoins in this category rely on US dollar reserves ends up . Ultimately, US dollar backing introduces systemic (e.g., inflation) and regulatory risk (e.g., KYC requirements, censorship, etc.) to the crypto-economy.
Capital Efficiency: These stablecoins can be expensive to create (mint) and slow to liquidate. Issuers usually charge a fee for fiat deposits, and redemptions may take more than a day since they go through traditional fiat settlement channels. Additionally, we may see a scenario where the crypto economy’s capital needs exceed what fiat money can provide, which would make fiat collateral less efficient. For example, at the height of the Ukraine-Russia war, one exchange was only able to offer USDT at a 4% premium due to USDT’s limited supply at the exchange.
Mission control deck for fiat-backed stablecoins by . High stability, low decentralization, medium capital efficiency.
Crypto-backed stablecoin issuance by
Total market cap of crypto-backed stablecoins currently stands at +$13 billion. Source:
Projects: DAI (), MIM (), LUSD (), OUSD (), to name a few…
Decentralization: Crypto-backed stablecoin protocols are governed by decentralized autonomous organizations (DAOs). This means that anyone with the required knowledge can participate in the governance of a crypto-backed stablecoin. For example, participants can vote on the interest rate offered by a protocol or contribute to the development of an algorithm that the protocol leverages (more in Section 4). Additionally, users of a stablecoin can usually inspect the collateralization ratio and the algorithms that define these ratios, making the protocols’ monetary policy as transparent as possible. Unlike fiat-backed stablecoin issuers, crypto-backed stablecoin protocols do not take custodianship of a user’s assets. When a user wants to mint stablecoins, they simply lock their crypto assets in a smart contract as collateral and maintain full control over it. One criticism regarding crypto-backed stablecoins is that, to achieve stability, they tend to rely heavily on , making them vulnerable to the same risks that apply to fiat-backed stablecoins. For example, currently a significant portion of DAI’s collateral is made up of centralized stablecoins.
Mission control deck for crypto-backed stablecoins by . High stability, high decentralization, low capital efficiency.
Algorithmic stablecoin issuance by
Definition: Growing in popularity, an algorithmic stablecoin is a cryptocurrency that adjusts its supply deterministically in order to move the price of the token in the direction of a target peg. In other words, these stablecoins achieve stability not via a collateral mechanism but via algorithms that mimic the monetary policy of a central bank (remember from the title?). Basic dynamics of supply and demand apply to algorithmic stablecoins, so at a high level, these stablecoin protocols rely on open-market arbitrage incentives to increase or decrease the supply of stablecoins to match the target peg. In addition to their stablecoin token, algorithmic stablecoin protocols usually have another native token: the share token (e.g., FXS for Frax Finance). Share tokens are used in stabilizing the price of their stablecoin counterparts as well as governing the protocols.
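As a highly simplified sketch of the direction of those supply adjustments (and nothing more), consider the following Python snippet; real protocols implement this through bonds, share tokens, and arbitrage incentives rather than a single function.

```python
def adjust_supply(current_supply, market_price, peg=1.0):
    """Move supply in the direction that should push the price back toward the peg:
    above peg -> expand supply, below peg -> contract supply."""
    deviation = (market_price - peg) / peg
    return current_supply * (1 + deviation)

supply = 1_000_000.0
for price in (1.05, 0.97, 1.00):
    supply = adjust_supply(supply, price)
    print(f"observed price {price:.2f} -> new target supply {supply:,.0f}")
```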
Total market cap of algorithmic stablecoins currently stands at +$19 billion. Source:
Projects: UST (), FRAX (), USDN (), FEI (), and many other exciting projects like and
Mission control deck for algorithmic stablecoins by . Low stability, high decentralization, high capital efficiency.
So do decentralized stablecoins (including crypto-backed and algorithmic) even matter? I am no judge here, but inspired by , I can see that people will transition to decentralized stablecoins only as inflation of the US Dollar increases and regulation of dollar-backed stablecoins tightens (as discussed in ). While UST’s and FRAX’s recent market cap growth are bright spots for decentralized stablecoins, for now, the numbers tell us that people seem to be just fine using permissioned crypto-dollars.
Stablecoins have been one of my major areas of interest since I got involved in crypto. I first learned of Bitcoin when I was studying abroad in Argentina in 2014. At the time, Argentina was in the midst of a currency crisis that had resulted in widespread inflation, and Argentinian citizens were still reeling from a 2001 market crash that ended with bank accounts being frozen for a year.
Stablecoins are one of the highest convexity opportunities in crypto. They aim to become global, fiat-free, digital cash, so the total addressable market (TAM) is simply that of all the money in the world: . The opportunity for stablecoins is, intrinsically, the largest possible TAM. This vision is larger than that of Bitcoin itself. A fiat-free currency that’s price stable will challenge the legitimacy of weak governments around the world.
The first is to issue IOUs. This is the model used by tokens like and . Here, a centralized company holds assets in a bank account or vault and issues tokens that represent a claim on the underlying assets. The digital token has value because it represents a claim on another asset with some defined value. The problem with this approach is that it is centralized. These tokens require trust in the issuing party– that they actually own the assets being represented and that they are willing to honor the IOUs. This model imposes serious counterparty risk on holders of the token. Tether is the canonical example given the serious concerns that the public has about their solvency and legitimacy.
The second approach is to create stablecoins that are backed by other trustless assets on-chain. This model was pioneered by . It’s also the model used by , , and others (see table below). In this model, the collateral backing the stablecoin is itself a decentralized cryptoasset. In the case of Maker, for example, Maker’s Dai stablecoin is backed by ETH held as collateral in an Ethereum smart contract. This approach has the benefit of being decentralized. The collateral is held trustlessly in a smart contract, so users aren’t relying on any third party to redeem it.
The final approach is the approach, which algorithmically expands and contracts the supply of the price-stable currency much like a central bank does with fiat currencies. These stablecoins are not actually “backed” by anything other than the expectation that they will retain a certain value.
In this model, some initial allocation of stablecoin tokens is created. They are pegged to some asset such as USD. As total demand for the stablecoin increases or decreases, the supply automatically changes in response. While different projects use different methods to expand and contract the stablecoin supply, the most commonly used is the “bonds and shares” method introduced by .
With the seigniorage shares model, supply never actually contracts with finality. Instead, each contraction involves the promise of a future increase in total supply. We’ve provided a basic overview of these mechanics, with some example estimates, . Basecoin attempts to solve the contraction problem by allowing bonds to expire after five years. These instruments are not actually bonds – they are binary options with an indefinite payout date. This means that buyers will likely demand higher interest rates to account for this risk. One issue this creates is that a rapid decrease in demand can lead to a death spiral in the price of bonds. As the system begins printing new bonds in order to take stablecoins out of the supply, the bond queue becomes increasingly large. This increases the time to payout and decreases the likelihood that each bond is paid. As such, the newly printed bonds must be sold for a cheaper price in order to account for the additional risk. As bond prices fall, the number of stablecoins taken out of circulation for each bond sold also falls. This causes the system to have to print more bonds in order to shrink the supply sufficiently. This creates a recursive feedback loop that could make large-scale supply contraction near impossible unless other measures are put in place to prevent it. The Basecoin whitepaper asserts that the system is immune to death spirals and explains their methods for preventing them, which include bond expiration and a bond price floor.
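The shape of that feedback loop can be shown with a toy simulation. The discount schedule below is entirely invented (it does not model Basecoin or any real system); it only demonstrates how a lengthening bond queue forces ever more bonds to be printed to remove the same number of stablecoins.

```python
# Toy model of the contraction feedback loop: as the bond queue grows, bonds sell
# at a deeper discount, so each bond removes fewer stablecoins from circulation.
coins_to_remove = 10_000.0
queue_length = 0
bonds_printed = 0

while coins_to_remove > 1:
    bond_price = max(0.05, 1.0 - 0.0001 * queue_length)  # invented discount schedule
    sold = 100                                           # bonds sold this round
    coins_to_remove -= sold * bond_price                 # each bond burns `bond_price` coins
    queue_length += sold
    bonds_printed += sold

print(f"printed {bonds_printed:,} bonds to remove 10,000 coins "
      f"(final bond price {bond_price:.2f})")
```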
Some projects like Carbon modify the seigniorage shares model. In , users can elect to freeze portions of their funds to manage contraction and growth cycles. Some projects issue bonds, but simply pay out new stablecoins to all users, pro rata, when all bonds have been paid and supply must increase still. Each approach to the seigniorage shares model has its own set of challenges.
This fluctuating supply concept, while foreign at first, is rooted in a well known theory of economics: the . It’s also the method used by the Federal Reserve to maintain the stability of the US dollar. The crypto projects adopting the seigniorage shares model are attempting to do what the Federal Reserve does in a decentralized, algorithmic way.
Use a .
The final two challenges facing all stablecoins (most of which haven’t launched yet) are scalability and privacy. Global digital cash must be fast, cheap, and private. That can only occur if the platform it is built upon can scale. It must also be private, for both philosophical and practical reasons. A decentralized stablecoin could never serve as global, digital cash without some guarantee of privacy. While many people don’t immediately think that they care about privacy, businesses, governments, and financial institutions that transact in the stablecoin would certainly need privacy guarantees to protect their business interests, relationships, and more. A completely transparent ledger like that of Bitcoin is not usable for these purposes. Even though Bitcoin addresses are pseudonymous, simple chain analysis can link addresses with known entities with a high degree of accuracy. This traceability also destroys fungibility, an essential feature of digital cash.
Thinking about money by
Why does money as we know it, especially in certain parts of the world like Turkey, not meet all these expected uses?
Last piece in the money puzzle by
This Primer is ultimately a deep dive into stablecoins, with #4 above being the broader focus. However, to have a foundational understanding of the value propositions of stablecoins (whose supply grew 5x year-over-year to reach $181 billion), we first need to look at the problems they address. At the end of the day, like all tools and solutions, stablecoins were invented to address a specific need or problem we have. And if we are viewing stablecoins as an upgrade to the current forms of money, we need to understand the issues with the latter. Along those lines, this part of the Primer focuses on fiat money’s inflation, bitcoin’s volatility, and an intro into stablecoins’ prowess.
With this in mind, let’s utilize Vitalik’s that plots in the various types of zkEVMs and plugs in the existing solutions:
The core part of Scroll is their , which is used to prove the correctness of EVM execution on their ZK rollup. They have been building it in tandem with the Privacy and Scaling Explorations (PSE) group at the Ethereum Foundation (who are also working on a ZK-EVM that strives to be fully Ethereum-equivalent).
Nethermind’s project is building a compiler from Solidity to Starkware’s Cairo, which will turn StarkNet into an “EVM-compatible” system through compiling. However, it’s worth mentioning that for now Starkware uses a specialized zkVM with its own domain-specific language (Cairo) that is friendly to building applications on zkVMs.
Stablecoins around the World by
Globally 2020 was the breakout year for stablecoins thanks to the . Stablecoins never stopped growing in popularity though. The total market cap (i.e., circulating supply) of stablecoins currently stands at a whopping . This equals a ~5x growth from the beginning of 2021, and a more than 20x growth from 2020. On top of this, as we started 2022 in a crypto winter fashion with the entire crypto market down from November 2021 all-time highs, in the same period, the stablecoin market cap experienced positive growth (which underlines stablecoins’ use as a hedge against crypto’s volatility — more on this later.)
Market cap growth of stablecoins.
Within the stablecoin market, there are more than 20 different stablecoins with more than $100 million market cap, and the top 5 coins account for +95% of the total market. With ’s USDT stablecoin as the consistent leader at $80 billion market cap, ’s USDC remains a strong runner-up at $53 billion market cap. A quick note — given the pace at which the stablecoin market is growing, the numbers presented above will likely be outdated soon. The following are great resources to monitor the stablecoin landscape: , , and .
Assessing the potential market size of stablecoins is a tricky subject given how little macro-level predictions tend to tell us. With that said, one interesting approach has to do with the demand for the US dollar globally. Thanks to the US dollar’s reserve currency status, it has a feature that no other currency has — it is more stable than other fiat monies. So across the globe, people and institutions want to save and transact with the US dollar. Yet, regulators and banks outside the US do not always allow easy access to US dollars, in an effort to protect their own local currencies. This pushes people to look for alternative methods of US dollar exposure, sometimes even through the black market. Stablecoins effectively solve this problem because they are very easy to access and mostly US dollar denominated. What this means is that everyone demanding US dollars outside the US can essentially rely on stablecoins to meet their demand. As Ryan Watkins elaborated, this makes the , and that’s quite scary. This brings us to:
So the goal of anti-stablecoin regulation in the US becomes two-fold. First, the government wants to take a piece of the pie from the success of private stablecoin issuers by claiming that these issuers are already functioning as banks. This can be seen in the report produced by the PWG from November 2021. In this report, stablecoin issuers are labeled as “systemically important”, the same designation used to describe financial institutions in the wake of the 2008 crisis. Second, by impeding the growth of stablecoins through regulation, the US government strategically aims to limit the most valuable liquidity source of the overall crypto economy. This effectively prevents any cryptocurrency from growing to a large enough market and user base that may threaten the reserve status and dominance of the US dollar.
To further dwell on the second point, you may ask “how can a new cryptocurrency pose a threat to the US dollar when there is already so much demand for the dollar?” This would clearly be an edge-case scenario, but the US dollar may not be as strong and stable as many imagine it to be. Remember the Consumer Price Inflation discussion from ? There we established that we may be reaching a tipping point where the US dollar’s supply growth outpaces tech advancements (and thus the availability of goods and services), which leads to the US dollar’s devaluation. So what happens when people who previously relied on US dollars to transact and save end up seeking an alternative currency that promises more stability?
You may remember from the Rower and Slow Cooker anecdote in the section that the Turkish Lira as a fiat money has not functioned well lately. In fact, annual inflation in Turkey soared to a in January 2022. Following suit, the Turkish Lira lost significant value against the US Dollar. This came as a result of the Turkish Central Bank’s years of unorthodox interest rate cutting policy, money printing to cover debts, as well as the exodus of foreign capital from the country. Seeing this, lots of Turks flocked to fiat-to-crypto exchanges to swap their Turkish Liras for stablecoins. Surprisingly, at one point, this made the Lira one of the most traded fiat currencies against Tether’s USDT, surpassing the US dollar and Euro.
“Cryptocurrency adoption is high but stablecoin adoption is really high too; lots of businesses operate in USDT ‘’” he tweeted as he reviewed his prediction from a decade ago on Argentines’ crypto adoption. Similar to the reasons in Turkey, Argentina has been an inflation-plagued country, with the latest inflation levels reaching 30%, nearing its . And just like Turkish people, Argentines have been quite fond of stablecoins. Since they cannot rely on traditional banking channels due to US dollar withdrawal limits and taxation, many Argentines rely on USDT to protect their precious savings. On top of this, thanks to their high level of crypto literacy, Argentines are also fond of Maker’s decentralized DAI stablecoin, which is pegged 1:1 to the US Dollar. According to this piece on DAI’s adoption in Argentina, while people used to immediately convert their DAI to US dollars, more and more started keeping their savings in DAI simply because they trust the crypto channels more than traditional banking channels. Again, a scenario where stablecoins function as a safe haven.
Most recently, we saw how stablecoins can be a safe haven during the hardest of times. With the disastrous impact of the war on the Ukrainian and Russian economies, innocent people caught up in the war immediately moved their wealth into stablecoins. With USDT providing easy exposure to the US dollar, neither Ukrainians nor Russians had the stomach to watch their savings in their local currencies melt down.
We see product-market fit patterns on the other side of the globe too. Let’s talk about Asia. According to the Chainalysis 2021 Cryptocurrency Adoption Index , which evaluates adoption based on individual transactions rather than country-level transaction volumes, Vietnam, India, and Pakistan are the countries with the highest per-capita cryptocurrency adoption in the world. In these countries, stablecoin activity makes up ~30% of total crypto activity. While the rationale may differ from one country to the other, one thing is for sure: people are increasingly relying on stablecoins for their financial needs. This shows that stability of money in the form of stablecoins speaks to all of us, regardless of where we live.
When it comes to payments, the first company that comes to mind is PayPal. At the beginning of this year, it was revealed that the company would be working with regulators to launch its own stablecoin, PayPal Coin. Why was this such a huge deal? Because PayPal has 350 million users. Considering how many of these users would be net-new entrants to the world of crypto thanks to PayPal’s stablecoin, this would be a big move towards not only stablecoins’ but also crypto’s mainstream adoption. Similarly, although we know that Meta’s Diem project is being wound down, just try to imagine what would happen if Facebook were to make its own stablecoin available to its 2 billion users worldwide — a new global currency?
With these and so many more product-market fit examples such as that I haven’t even touched on, it’s difficult not to see that stablecoins are here to stay. Stablecoins are meeting people’s money needs, and countries and organizations are taking note of this shift. Then the question becomes: which specific money needs do stablecoins best address? — which brings us to the use cases of stablecoins.
So far, we have made the case for stablecoins mostly as an inflation-hedge tool. In the end, inflation is a very urgent problem that affects millions of people’s day-to-day lives, and stablecoins are a strong solution to this problem. But as we saw in the , there are multiple functions of money and a variety of use cases associated with each function. A strong money contender should meet and/or exceed all of them. Taking a look at the emerging use cases of stablecoins reveals all the ways in which stablecoins are providing value to their users, both as an alternative to fiat currencies and as the lifeblood of the crypto economy.
Stablecoin use cases in fiat and crypto economies by
Safe haven from fiat money’s inflation — As we established in the examples of Turkey and Argentina, in inflation-plagued countries the local currency becomes unusable — kind of like . People in such countries can easily trade their savings in their local currency for stablecoins via fiat-to-crypto exchanges.
Peer-to-peer transactions — Stablecoins provide a fast and easy way for two parties to send and receive money from each other. Users can simply send stablecoins from their wallets (e.g., ) to the recipient’s public address.
Payments — Traditional payment channels involve various intermediaries, including commercial bank payment networks and card networks. For orchestrating all these parties, payment providers charge a flat fee plus an additional charge for each transaction. Stablecoin-based payments, however, completely streamline the traditional payments process. Relying on blockchain technology, stablecoin payments disintermediate the entire payments process, reduce counterparty risk, and cost much less than traditional channels. See exciting updates from and allowing merchants to take stablecoin payments at very low costs.
Cross-border payments — According to the , sending remittances across borders costs an average of 6.30% of the amount sent. So a user wanting to send $300 via traditional channels would be charged around $19; the same transaction with stablecoins costs less than $1.
Savings alternative to negative interest rates — Central banks may sometimes offer negative interest rates, such as . This means that when people deposit money into their savings account at their banks, they will be charged a fee instead of earning interest income. While this is a measure to disincentivize excessive saving, people who still want to save can rely on stablecoins to preserve their savings.
Money Laundering — Given the ability to conduct transactions of any size globally in a pseudo-anonymous way using stablecoins, it is not difficult to imagine that money launderers could be charmed by this. While examples of money laundering using stablecoins exist, increased mainstream adoption should hopefully dwarf this use case.
Safe haven from crypto’s volatility — Before stablecoins existed, crypto investors had to convert their holdings back into fiat currency so that they would avoid losing their wealth due to volatility (e.g., 20 Jan 2022 sell-off). Now, investors can rely on stablecoins to preserve their wealth on-chain and not have to deal with burdensome crypto-to-fiat conversions.
Bootstrap liquidity in DEXs — Decentralized exchanges (DEXs) like Uniswap rely on mechanisms called automated market makers () that algorithmically define clearing prices of assets in their . These exchanges need as much liquidity as possible to execute trades at the desired prices with little price slippage. Stablecoins like USDT and DAI are widely used by DEXs to bootstrap liquidity.
Path to stablecoins by
To calculate: In order to be a part of society and a complex economy, I need to know how to value my productive output and what I can get in return for it. For example, without money, say I am an apple producer, I would need to know the relative price of everything in terms of apples. But what if the thing I want to buy (e.g., pears) has no value in apples, simply because the pear producer does not want apples? Money is a common denominator that allows me to easily evaluate my apples’ relative worth against pears and many other goods and services. We can think of it as a unit of measurement, like the Meter, except that it measures the worth of things instead of length or weight. As such, this function of money is usually referred to as the unit of account.
Expected functions of fiat money by
Of the characteristics I have summarized above, all but one can be achieved by fiat money. The exception is scarcity. Scarcity is actually a pretty straightforward concept — anything that is in short supply in the world is scarce, harder to obtain, and thus usually valuable in our eyes. To measure money’s scarcity, a good metric is the stock-to-flow ratio. This ratio is calculated by dividing the amount of a resource (e.g., money) that already exists (e.g., in reserves) by the amount of that resource produced annually. Basically, Stock divided by Flow. If a material has a high stock-to-flow ratio, this means that a lot of that material already exists and relatively little new supply can be produced even if desired.
Gold has a high-stock-to-flow ratio because a lot of gold already exists and its annual production remains relatively stable. Graph source: , Chapter 3; Stock-to-flow by
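Stock-to-flow is simple enough to compute directly. The numbers below are rough, illustrative orders of magnitude only (not precise reserve or production data), just to show why gold scores high and a rapidly printed currency scores low.

```python
def stock_to_flow(existing_stock, annual_production):
    """Years of current production needed to recreate the existing stock."""
    return existing_stock / annual_production

# Illustrative, order-of-magnitude figures only.
gold_s2f = stock_to_flow(existing_stock=190_000, annual_production=3_000)  # tonnes -> ~63
fiat_s2f = stock_to_flow(existing_stock=100, annual_production=10)         # 10% yearly growth -> 10

print(f"gold stock-to-flow: ~{gold_s2f:.0f}")
print(f"fast-printing fiat stock-to-flow: ~{fiat_s2f:.0f}")
```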
The seeds of abandoning the Gold Standard were sown during World War I. Prior to this War, each country’s fighting power was limited by how much gold it had in its war chest. During the War, however, governments decided that outlasting the enemy beat all other survival options. So the ease with which governments could print more paper currency to fund their wartime needs and post-war deficits became too tempting, and far easier than demanding taxation from citizens.
As stated in the , “inflation is a monster that we don’t really totally understand”. He is right. There are different schools of thought in the economic community when it comes to explaining what causes inflation. I will go with the Consumer Price Inflation (CPI) perspective, simply because I think it is easiest for us to understand as consumers in the economy. (For a more comprehensive perspective, I highly suggest .)
CPI diagram by
A caveat here: measuring CPI is a tricky subject, leading to questions like: Which basket of goods and services should be tracked? Are there any inherent performance upgrades in these goods that justify price increases? How often should the basket of goods change? And more. Thankfully, there are great resources out there that aim to address these questions, such as — this Primer won’t be touching on these areas. For now we are concerned with what causes fiat money’s buying power to decrease.
If money supply grows (blue line), we would expect consumer prices to inflate (orange line). That’s why, throughout recorded history, the orange line and blue line have been closely linked — with a few exceptions. The period from 1995 until now is one of these exceptions. During this period, tech advancements in the US made goods and services more accessible and cheaper, meaning the growth in money supply was successfully absorbed by increased production. However, with recent reports suggesting that money supply growth is at a historically high level, we may finally be at a point where growth in money supply does not translate into growth in production but into consumer prices.
While there are many variables, such as political regime change or climate change, that may impact consumer prices, per capita money supply growth is the one most closely linked. Going back to the detachment from the Gold Standard, we have established that the major fiat currencies are essentially free floating. This means they are not backed by some material that has a high stock-to-flow ratio, but by policies and decisions that are malleable and prone to human error. That is the reason why, in instances like WWI, or more recently the Covid-19 pandemic, quick-result oriented governments had all the freedom to “save” their economies by printing more currency (e.g., ). This does save the day by allowing short-term debts to be paid. However, in the long term, money supply growth ends up eroding people’s wealth accumulated in fiat currencies. With that in mind, it is best to expect that the $20 bill that you put in your bedside table 5 years ago will afford you a few fewer apples now, because the government printed lots of money during that time.
Expected vs. Real functions of fiat money by
The history of moving value over the internet shows that the questions and issues regarding fiat money have been a riddle to be solved for a long time. Since the 80s, there have been many iterations on fiat money using different methods. For example, while Chaum’s prioritized anonymity using advanced cryptographic techniques, Jackson and Downey’s prioritized stability and portability by centrally storing and digitally fractionalizing gold. Similarly, bitcoin is one of these iterations, except that it became immensely successful thanks to the fine-tuned set of features it was designed around.
:
The more bitcoin is extracted, the harder it becomes to extract new bitcoins. That’s why the rate of issuance of new bitcoins (blue line), and thus bitcoin’s monetary inflation (orange line), decreases over time. This decreasing supply schedule is cool because it tries to mimic the rate at which new gold is mined and ensures a high stock-to-flow ratio for bitcoin.
Expected functions of bitcoin by
Nassim Taleb’s 2018 quote from the foreword of The Bitcoin Standard still holds true. While there are people testing out bitcoin in everyday transactions, I personally don’t use bitcoin when I am buying apples. And I am 99% sure that I am not an outlier. Like the whole crypto market, bitcoin remains a volatile coin; it even a few times every year. It’s difficult to predict where bitcoin will stand in a few days because it . On the other hand, we know one simple fact: to be able to use something as a medium of exchange, we want it to have a stable value in the short-to-medium term. In that sense, while major fiat currencies like the US dollar lose purchasing power over a longer timeline, they function well as mediums of exchange. I can be reassured that, with a $20 bill, I can buy the same amount of apples this week and the next (if apple availability remains unchanged). Additionally, the volatility in bitcoin’s price undermines its functionality as a unit of account — just imagine what it would be like if 1 Meter meant a different length tomorrow.
So after 10 years of operation and a , bitcoin has yet to achieve its full potential as the first organic peer-to-peer electronic cash. We just don’t use our valuable bitcoin investments as a medium of exchange or a unit of account, yet.
Expected vs. Real functions of bitcoin by
This doesn’t mean that bitcoin is not revolutionary though. In addition to its fixed supply, bitcoin’s anonymity, decentralized governance, and fast & cheap settlement capabilities all make it an advanced form of money. However, for now, the last step in bitcoin’s quest to pose a real threat to fiat money seems to be stability. While whether this will ever be achieved remains a trillion-dollar question, one thing we know is that bitcoin’s volatility has lowered over the years with . This perhaps makes trust and time the two variables that the stability equation needs to be solved for, for any cryptocurrency. Given the urgency of solving this stability riddle (and the astronomical returns that may come with it), the best minds in the crypto world are currently innovating on this front.
So how do stablecoins achieve stability? Peg — they peg their value to a currency (e.g., Turkish Lira in ) or a certain price (e.g., based on a basket of goods) by leveraging a variety of mechanisms that ensure minimum variance around this peg. To this day, there have been many stablecoin experiments testing the most optimal pegging mechanism. With each experiment acting as a feature update over the previous, these experiments optimize for a set of parameters while trading off for others, and thus experiments continue.
1) Fiat-backed stablecoins, 2) Crypto-backed stablecoins, and 3) Algorithmic stablecoins. While fiat-backed stablecoins rely on a widely used reserve mechanism, algorithmic stablecoins depend on game theoretic coordination of their backers. Types of stablecoins by
Vitalik gave us the amazing . I present to you The Complete Guide to Rollups.
Ok it’s not actually complete, but it’s a great meme so I’m stealing it. This report only analyzes the design space of rollups on Ethereum and Celestia. I strongly recommend my recent for background.
I cover the two I’m most familiar with, but there are actually many other teams building here. , , and among others are also in the rollup stack game. Tezos in fact will likely be the first to ship , and Polygon Avail is incredibly similar to Celestia architecturally.
Note possibilities such as Validiums and using separate layers for DA and settlement also exist. Any settlement layer could be used so long as it accepts an attestation from Celestia that the data has been made available.
I’ve written previously about the importance of (fees and other forms of MEV). Assets which are relied upon for economic security need a high value staked. Revenue provides the fuel to craft attractive monetary policy (real yield, low and predictable inflation, etc.). Value capture → good monetary policy → monetary premium → high economic security. Fees and other MEV kickstart step 1.
Rollups’ L1 costs today are primarily calldata. Even for ZKRs, Polygon estimates that posting transaction data to Ethereum will represent the vast majority of their costs (which is largely calldata), with ~10% for proofs. With EIP-4844 potentially coming as soon as the Shanghai hard fork next year, those costs plummet. It increases DA throughput and implements isolated EIP-1559 fee markets for Ethereum’s DA layer and settlement layer. An oversupply of DA means fees hit the floor. Later on, danksharding would make DA even more abundant. Rollups with any reasonable activity will significantly increase those profit margins.
On top of these fees paid to L1, rollups charge surge pricing for L2 gas fees as needed and costs to cover rollup operator expenses. A more detailed analysis can be found .
CTC (Canonical Transaction Chain) – an append-only log of transaction batches submitted by the sequencer
SCC (State Commitment Chain) – a log of proposed state roots which proposers assert to be the result of each transaction in the CTC
These overhead costs may be higher than you expected – there are still many inefficiencies to be ironed out in coming months. Optimizations such as will significantly improve data compression, and it will drop . Gas costs per batch could drop from to . . L2 blocks will instead be saved to Ethereum using a non-contract address, greatly reducing the on-chain footprint and minimizing gas costs.
Similarly, which will be significantly reduced by .
Tying everything together, the below illustration from a by depicts rollup value flows:
He also recently spoke on the topic .
Where that value capture goes from here is a point of debate. Many in the Ethereum and Celestia communities believe DA will eventually be incredibly valuable. Dankrad recently gave his views in a I hosted between him and John Adler:
Lastly, native smart contract execution can also provide ETH stakers with more value than Celestia – ETH can be used to earn additional yield as a productive asset. In particular, is an innovative solution which will allow for “re-staking” of ETH. ETH stakers will be able to subject their stake to additional slashing conditions. They would secure new applications looking to leverage ETH’s economic security, and their fee revenue would accrue to ETH (very similar to Cosmos’ concept of interchain security accruing value to ATOM).
If Celestia’s business model of taking a smaller cut on a bigger pie is successful, note the percentage value capture can also matter. If the base layer only captures a miniscule amount of value relative to the value it secures, security leverage could get uncomfortable. Maybe you’re comfortable with $1 billion staked securing $20 billion of value (), but are you comfortable with $100 billion? $200 billion? There’s no correct answer. A base layer which is economically self-sufficient even without rollup revenue does not have this concern.
13 bytes per transaction would be a meaningful optimization from where we sit today, but it’s possible. For example, Polygon Hermez expects that they’ll eventually need only 14 bytes per transaction. At the current danksharding spec of 1.3 MB/s, you get a nice round ~100,000 TPS. Note most rollups are currently nowhere near 13 bytes per transaction, and you can only get this low by posting state diffs as ZKRs are able to. If you post full transaction data as ORUs do, it’s much higher, and even some ZKRs will choose to post full transaction data. It’s quite optimistic to assume that every rollup will run near the lower bound of DA efficiency.
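The round number falls straight out of the division; a quick sanity check in Python using the two figures quoted above:

```python
# Upper-bound TPS if every rollup hit the 13-byte-per-transaction lower bound.
DA_BYTES_PER_SECOND = 1.3 * 1_000_000  # ~1.3 MB/s at the current danksharding spec
BYTES_PER_TRANSACTION = 13

tps_ceiling = DA_BYTES_PER_SECOND / BYTES_PER_TRANSACTION
print(f"~{tps_ceiling:,.0f} TPS")  # ~100,000 TPS
```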
Note the $0.01/tx I assume here is only for DA. This does not include other L2 transaction fee considerations. Valuable real estate (e.g., a DeFi hub rollup) could certainly charge a premium, but cheaper state charges lower execution fees. This could even be enough buffer to clear .
Simplified L1 Ethereum MEV will look like this with in-protocol PBS (or in the interim):
R1 and R2 have coordinated sequencers and they share the same DA layer. This allows for atomic cross-chain MEV. Each rollup could submit blocks whose acceptance is contingent upon the other. For example, only accept R1 B1 (which includes S1) if R2 B2 (which includes S2) is completed, and vice versa for R2’s conditions. , a rollup could submit two blocks to its DA layer, one which is the base case, and another which is only to be included if some condition on another rollup is met. The atomicity introduced here could be quite powerful, and it will be interesting to see how it impacts the dynamics of MEV value accrual.
Ethereum runs Gasper for consensus. So even if >⅓ of stake is faulty, Ethereum keeps chugging along, retaining liveness. However, the ledger cannot finalize without the requisite votes. If the faulty >⅓ simply fails to vote, their stake will gradually be leaked away until the protocol is eventually able to finalize again.
Data withholding attack – Finalize a block and never make the data behind it available. Data withholding is a , and as such cannot be slashed in-protocol. Social coordination is needed to slash the attacker out-of-protocol via hard fork. This is likely the worst attack for rollups. You wouldn’t be able to submit a fraud proof for an invalid ORU, and you wouldn’t be able to recreate state for a ZKR. Thankfully, we have data availability sampling (DAS) nodes. They simply check if the data was made available, so even if consensus signs off on the block they know to reject it.
If you want to witness the Great Modular War of our times, please read . Pop it open on , and spend a whole day sifting through it. The vast majority of this report (and honestly most of my crypto knowledge) is sourced from me taking notes watching Twitter mayhem.
This is . Russ put radio on the internet. Don’t be like Russ.
Here is the most concise definition of rollups, which I’ll use for this report: rollups are blockchains that post their blocks to another blockchain, and inherit the consensus and DA of that blockchain.
Fraud (ORUs) or validity (ZKRs) proofs then confirm the validity of the state transitions. Operators post a bond in the L1 smart contract to publish blocks. If the block or proof was invalid, their bond can be slashed and the transactions will not be finalized. This can take over a week for ORUs, but validity proofs finalize instantly.
These are still smart contracts. You don’t have to look far to see scary bridge hacks (which have largely been bugs, not compromised keys or economic attacks). Rollups aren’t immune. The contract is a very special trust-minimized two-way bridge, but it’s still a bridge. You slip a typo in your code and ya boy here rugs your bridge, .
That’s why rollup teams are holding tight to their upgrade keys for now. There’s often a multisig committee with an upgrade notice period, and in other cases the code can even be changed arbitrarily by the team without notice.
Bartek recently gave an awesome rollup FUD speech on this topic and L2Beat’s linked .
Great conversation from the major rollup teams on their thoughts around decentralizing.
StarkNet has some pretty cool plans as a tangible example here:
You’ve probably heard ERs tossed around on Twitter, but don’t actually know what they are because nobody’s ever written about them extensively (except for , who helped me a ton for this section). I got you.
The ambitious . There’s already a team at the Ethereum Foundation led by Barry Whitehat building a zkEVM. Note this is a massive undertaking that will take several years.
Justin will have to redo his because so much ETH is gonna get burned.
No exception here for the case against ERs. Here’s a of Ethereum devs, Celestia devs, and Ethereum shitposters making good cases on both sides. There’s plenty of disagreement here. Some worry that ERs do this to SCRs:
The argument posits that ERs would inherently compete with the SCRs that Ethereum is trying to foster, hurting its credible neutrality. SCRs would feel like “second-class citizens.” It could “” This is a similar line of thinking as the debate over Cosmos Hub minimalism. This interesting balance of base layer functionality/minimalism is something that as well.
Conversely, others argue that ERs could protect Ethereum’s credible neutrality by preventing any commercial rollup team from becoming too dominant. Say “Rollup Labs A” becomes too successful, and their rollups crowd out everyone else. Having such a reliance on one private team could be a serious concern for Ethereum’s credible neutrality and network effects. A neutral balance from ERs could alleviate this.
This is the in the past:
I don’t go quite so far as Ali’s concern that Ethereum rollups will break away, though :
If rollups are economically rational actors, they have a :
Vitalik hit on a very important topic in his . Ethereum is entering its most rapid period of innovation. Lots of goodies planned in the next few years. However, layering on complexity means more risk, and it’s getting tougher even for Ethereum devs to keep up with the rate of innovation (tryna do my part here!). While Ethereum doesn’t need to ossify completely, it will eventually pump the brakes a bit:
My favorite is that ERs are the public sector and SCRs are the private sector. SCRs remain the home of innovation and most rollup activity. ERs are the slow and steady predictable base layer for them to operate on top of, providing the utmost security. Both coexist harmoniously. ERs add far more value to SCRs than any marginal activity they cannibalize.
Maybe some of you have a private jet, but most of us fly commercial. I need to split the bill with everyone else on board. That’s kinda what an L2 is.
If I want to deposit $100,000 USDC into Yearn, paying $10 is fine. But paying 10% on a $100 deposit makes no sense. Instead, an L2 can take 1000 people who want to deposit $100 into Yearn, bundle them all together, and just make one L1 deposit! All without the user ever paying an L1 gas fee – you just pay your $0.01 share of the fee. In fact, this is exactly what from zkSync.
They would develop a smart contract to aggregate L2 user deposits, then deposit them on L1. The contract would distribute aTokens from Aave L1 to StarkNet users according to their proportion in the pool. The contract would aggregate users’ aTokens and deposit them to L1 Aave when redeeming their collateral.
dAMMs were first proposed recently; I’ll summarize their design here. However, note this is an area of active research with open design questions. There are no live implementations, and future designs could look quite different from this one.
Much of this comes back to Vitalik’s well-known argument about the limits of cross-chain bridges. TLDR: trusted pairwise bridges mean a 51% attack on the connected chain gets your bridge rugged (and many bridges are actually far worse). Similar principles apply if you settle to Ethereum but post data elsewhere (e.g., in a Celestium): if Celestia gets majority attacked there, you’re out of luck again.
This is the biggest wild card in how strong network effects will be for the different layers. The reality is we’re in incredibly early days of understanding how rollups will communicate with each other in the future, and this can make a meaningful difference in how these stacks play out. There’s a lot that’s theoretically possible, a lot of unanswered questions, and a lot of tough engineering problems. For example, some teams are working on some of the most exciting bridging possibilities for ZKRs, and we may see atomic composability between ZKRs in the future.
If ERs are one end of the rollup spectrum, sovereign rollups (SRs) are all the way at the other end. Opinions on them vary, but I think they’re cool.
For SCRs, the correct rollup chain is decided by an L1 smart contract. SR nodes instead decide the correct chain for themselves: they check for DA on Celestia, then they locally verify the fork-choice rule.
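A toy sketch of what “deciding the chain for yourself” means in practice (Python, with stand-in data structures and a deliberately simplistic fork-choice rule; no real Celestia APIs involved):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Block:
    height: int
    parent: int
    txs: List[str] = field(default_factory=list)

def is_valid(block: Block) -> bool:
    # Stand-in for the rollup's own state-transition check. Celestia never runs
    # this -- it only stores and orders the raw data.
    return "invalid" not in block.txs

def sovereign_fork_choice(da_blocks: List[Block]) -> List[Block]:
    """A sovereign rollup node's local view: take everything posted to the DA layer,
    drop blocks that fail the rollup's own rules, and apply a (deliberately
    simplistic) fork-choice rule that only extends the chain with valid children."""
    chain: List[Block] = []
    for b in sorted((x for x in da_blocks if is_valid(x)), key=lambda x: x.height):
        if not chain or b.parent == chain[-1].height:
            chain.append(b)
    return chain

# Data pulled from the DA layer, including one block the rollup considers invalid.
da_blocks = [Block(1, 0), Block(2, 1, ["invalid"]), Block(3, 2)]
print([b.height for b in sovereign_fork_choice(da_blocks)])  # [1]: block 3 built on the invalid block
```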
For reference, Arbitrum uses an IVG. Optimism’s initial plan was to run single-round fraud proofs, but they’ve since decided to pursue an IVG as well; a recent overview of their new design is available.
The Stanford research team recently presented work on lazy ledgers like Celestia (a recorded overview is available). Remember, Celestia accepts any raw data; it doesn’t check for “invalid” transactions. It’s a “dirty” ledger, and rollups themselves determine the validity of their chain. But some issues with this approach have since been discovered.
SRs are likely to use single-round fraud proofs (this is what Cevmos plans to use). These operate on a much weaker assumption because light clients can receive the fraud proof from an honest challenger and keep passing it along. Light clients no longer need to sit between the challenger and responder while maintaining a connection; you just need to receive the fraud proof from anyone, and then you can check it yourself. Note that single-round fraud proofs can introduce higher latency though, as they’re slower to execute, and they can also come with larger proofs than IVGs.
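Here’s a minimal illustration of the “receive it from anyone, verify it yourself” property of single-round fraud proofs (Python; toy state roots via hashing rather than real Merkle proofs):

```python
import hashlib

def state_root(state: dict) -> str:
    # Toy "state root": a hash of the sorted state. Real systems use Merkle roots.
    return hashlib.sha256(str(sorted(state.items())).encode()).hexdigest()

def apply_tx(state: dict, tx: tuple) -> dict:
    sender, receiver, amount = tx
    new = dict(state)
    new[sender] -= amount
    new[receiver] = new.get(receiver, 0) + amount
    return new

def fraud_detected(pre_state: dict, tx: tuple, claimed_root: str) -> bool:
    """A single-round fraud proof carries everything needed to re-execute one
    transition locally. Any light client that receives it -- from anyone, via any
    route -- can check it without interacting with the challenger or responder."""
    return state_root(apply_tx(pre_state, tx)) != claimed_root

pre = {"alice": 100, "bob": 0}
tx = ("alice", "bob", 40)
honest_root = state_root(apply_tx(pre, tx))
bogus_root = state_root({"alice": 100, "bob": 1_000_000})

print(fraud_detected(pre, tx, honest_root))  # False: claimed root matches re-execution
print(fraud_detected(pre, tx, bogus_root))   # True: the claimed block is fraudulent
```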
SRs add additional bridge design considerations, but trust-minimized or trusted bridges are still possible with other Celestia rollups. More detail on the specific design possibilities is available elsewhere; I’ll summarize much of it here.
I saved the best for last: all-to-all bridging with linear complexity using validity proof aggregation. If this works out, the following idea may be a game-changer for rollup topologies, one that completely flips the appeal of SRs. Credit to the people who only very recently pulled this idea together.
Verifying the validity of the settlement rollup is overhead-minimized for the recursive rollups on top if the settlement rollup is fraud/validity provable. For example, one planned settlement rollup intends to be a simplified, single-round fraud-provable EVM.
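A back-of-the-envelope illustration of why the aggregation idea above scales linearly: counting proof verifications for n rollups bridging pairwise versus through one recursively aggregated proof (a simplification; real costs depend on the proof system and on-chain verification gas):

```python
def pairwise_verifications(n_rollups: int) -> int:
    # Trust-minimized pairwise bridges: every rollup verifies a proof from every
    # other rollup, so total verification work grows quadratically.
    return n_rollups * (n_rollups - 1)

def aggregated_verifications(n_rollups: int) -> int:
    # Proof aggregation: each rollup submits one validity proof, the proofs are
    # recursively aggregated into one, and each rollup checks only that single
    # aggregated proof. Total work grows linearly.
    submissions = n_rollups
    checks = n_rollups
    return submissions + checks

for n in (10, 100, 1000):
    print(n, pairwise_verifications(n), aggregated_verifications(n))
# 10 -> 90 vs 20; 100 -> 9,900 vs 200; 1,000 -> 999,000 vs 2,000
```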
For example, the base layer and both settlement rollups now each build their own individual economic security rather than pooling it together in one place. This makes it far more difficult to build up security in any part of the stack, creating weak links. Additionally, restricted settlement rollups are likely to accrue little value (they’re more like a bridge than a typical general-purpose rollup).
You have here to represent Celestia’s token elsewhere.
TLDR – I believe enshrining a general-purpose ER is far better from a tech PoV, and arguably unfavorable from a social PoV. I favor the tech in this scenario, but remember:
Thanks to all of the reviewers for their review and insights.
https://insights.deribit.com/market-research/stability-elasticity-and-reflexivity-a-deep-dive-into-algorithmic-stablecoins/
Benjamin Simon, Dec 2020
In “Seigniorage Shares,” Sams puts forth a similar model with a similar justification, but with an important twist. Instead of a “rebasing” currency, in which changes to the money supply are distributed pro rata across all wallets, Sams’s system consists of two tokens: the supply-elastic currency itself and investment “shares” of the network. Owners of the latter asset, which Sams calls “seigniorage shares,” are the sole recipients of inflationary rewards from positive supply increases and the sole bearers of the debt burden when demand for the currency falls and the network contracts.
Astute crypto observers will recognize that Ametrano’s “Hayek Money” and Sams’s “Seigniorage Shares” are no longer academic abstractions. “Hayek Money” is nearly identical to Ampleforth, a protocol that launched in 2019 and rocketed in July 2020 to a fully-diluted market capitalization of over $1 billion. More recently, Sams’s “Seigniorage Shares” model has, to varying degrees, served as a foundation for Basis, Empty Set Dollar, Basis Cash, and Frax.
The questions before us now are no different from those confronting readers of Ametrano’s and Sams’s papers six years ago: Can an algorithmic stablecoin truly achieve long-term viability? Will algorithmic stablecoins always be subject to extreme expansionary and contractionary cycles? Which vision of an algorithmic stablecoin is more compelling: a simple rebasing model or a multi-token “seigniorage” system (or something else entirely)?
On all of these questions, the jury is still out, and it will likely be some time before a broad consensus emerges one way or another. Nevertheless, this article seeks to explore some of these fundamental issues, both from first principles reasoning and by drawing on some empirical data from recent months.
Algorithmic stablecoins are a world unto themselves, but before diving in, it’s worth taking a step back and surveying the broader stablecoin landscape. (Readers who are already well-acquainted with stablecoins might skim or skip this section.)
USDT remains the dominant stablecoin, but it is far from the only game in town. Broadly speaking, we can divide stablecoins into three categories: collateralized by U.S. Dollars, over-collateralized by multi-asset pools, and algorithmic(1). Our focus in this article is on the last category. However, it is important to note the benefits and drawbacks of stablecoins in the other categories, since understanding these tradeoffs will enable us to sharpen the value proposition of algorithmic stablecoins.
All of which brings us to algorithmic stablecoins. An algorithmic stablecoin is a token that adjusts its supply deterministically (i.e. using an algorithm) in order to move the price of the token in the direction of a price target(2). At the most basic level, an algorithmic stablecoin expands its supply when it is above the price target and contracts when it is below.
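As a minimal illustration, here is a toy supply rule of that kind (Python; the damping factor is an arbitrary illustrative parameter, loosely inspired by Ampleforth’s gradual rebases):

```python
def rebase_supply(current_supply: float, market_price: float,
                  target_price: float = 1.0, damping: float = 10.0) -> float:
    """Minimal algorithmic supply rule: expand when the price is above target,
    contract when below. `damping` spreads the adjustment over several periods
    rather than applying it all at once (an illustrative parameter)."""
    deviation = (market_price - target_price) / target_price
    return current_supply * (1 + deviation / damping)

supply = 1_000_000.0
print(rebase_supply(supply, market_price=1.10))  # above peg -> supply grows (~1,010,000)
print(rebase_supply(supply, market_price=0.95))  # below peg -> supply shrinks (~995,000)
```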
Each stablecoin model has its tradeoffs. Investors that care little about centralization will see no problem with USDT and USDC. Others will find that capital inefficient over-collateralization is a worthwhile price to pay for a permissionless, decentralized currency with a hard peg. However, for those who are not satisfied with either of these options, algorithmic stablecoins represent an enticing alternative.
For algorithmic stablecoins to be viable in the long term, they must achieve stability. This mandate is particularly difficult for many algorithmic stablecoins to fulfill because of their inherent reflexivity. Algorithmic supply changes are intended to be counter-cyclical; expanding the supply ought to reduce price, and vice versa. In practice, however, supply changes often reflexively amplify directional momentum(4), especially for algorithmic models that do not follow the “seigniorage shares” model by separating the stablecoin token from the value-accruing and debt-financing token(s).
Indeed, if we think even more carefully about what it would take for an algorithmic stablecoin to achieve long-term stability, we stumble upon an apparent paradox. In order to achieve price stability, an algorithmic stablecoin must expand to a market cap large enough that buy and sell orders do not cause price fluctuations. However, the only way for a purely algorithmic stablecoin to grow to a large enough network size is through speculation and reflexivity, and the problem with highly-reflexive growth is that it is unsustainable, and contraction is often equally reflexive. Hence the paradox: the larger the network value of the stablecoin, the more resilient it will be to large price shocks. Yet only highly-reflexive algorithmic stablecoins—those that are prone to extreme expansionary/contractionary cycles—have the potential to reach large network valuations in the first place.
A similar paradox of reflexivity holds for Bitcoin. In order to be viable for more and more people and organizations, it must grow more liquid, stable, and accepted. Bitcoin’s growth along these dimensions over the years has allowed it to be adopted first by dark web participants, then by wealthy technologists, and more recently by traditional financial institutions. At this point, Bitcoin has gained a hardiness from being deep into its reflexive cycle, a path that algorithmic stablecoins will also need to follow.
But would it? Imagine for a moment if Ampleforth were to shed its as-yet “sticky” nature and fully transfer price volatility into supply volatility such that the price per AMPL would be mostly stable. Would this “mature” Ampleforth actually be an ideal candidate for a transactional base money?
Here we come upon the crux of the issue—and the central flaw with Ampleforth’s design. Even if the price of AMPLs were to reach $1, the purchasing power of an individual’s AMPL holdings would change on the path to reaching $1. Back in 2014, Robert Sams articulated this exact problem with respect to Ametrano’s Hayek Money:
Price stability is not only about stabilising the unit-of-account, but also stabilising money’s store-of-value. Hayek money is designed to address the former, not the latter. It merely trades a fixed wallet balance with fluctuating coin price for a fixed coin price with fluctuating wallet balance. The net effect is that the purchasing power of a Hayek Money wallet is just as volatile as a Bitcoin wallet balance.
Ultimately, Ampleforth’s simplicity—its straightforward single-token rebasing model—is a bug, not a feature. The AMPL token is a speculative vehicle, one that rewards holders with inflation when demand is high and forces holders to be debt financiers when demand is low. As such, it is difficult to see how AMPL can both serve this speculative purpose and achieve the stability that is the sine qua non of stablecoins.
Robert Sams’s “Seigniorage Shares” vision never became reality, but a new class of algorithmic stablecoin projects has recently emerged that shares many of its core ingredients.
At first glance, ESD’s mechanism design appears to be a hybrid between Basis and Ampleforth. Like Basis (and Basis Cash), ESD utilizes bonds (“coupons”) in order to finance protocol debt, which must be purchased by burning ESD (thus contracting supply) and can be redeemed for ESD once the protocol goes into expansion. Unlike Basis, however, ESD does not have a third token that claims inflationary rewards when the network expands after it has paid off its debt (i.e. after coupons have been redeemed). In place of this third token, ESD holders can “bond” (i.e. stake) their ESD in the ESD Decentralized Autonomous Organization (DAO) to receive a pro rata share of each expansion, similar to an Ampleforth rebase(7).
Crucially, unbonding ESD from the DAO requires a “staging” period, in which ESD tokens are temporarily “staged” for 15 epochs (5 days), neither tradable by their owner nor accruing inflationary rewards. ESD’s staging model thus functions similarly to Basis Cash Shares, as both bonding ESD to the DAO and purchasing Basis Cash Shares entail risk (liquidity risk for ESD; price risk for BAS) in exchange for potential future inflationary rewards. Indeed, although ESD uses a two-token model (ESD and coupons) instead of Basis Cash’s three-token model, the net effect of ESD’s staging period is that ESD becomes a de facto three-token system, with bonded ESD as an analogue to Basis Cash Shares(9).
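To see how the pieces fit together, here is a deliberately simplified sketch of a coupon-plus-bonding system of this kind (Python; the premium, cap, and ordering of payouts are illustrative assumptions, not ESD’s exact parameters):

```python
def buy_coupons(twap: float, esd_burned: float, premium: float = 0.10) -> float:
    """Below the peg: a speculator burns ESD (shrinking supply) in exchange for
    coupons redeemable -- at a premium -- once the protocol expands again.
    The fixed premium is an illustrative stand-in for the real discount curve."""
    assert twap < 1.0
    return esd_burned * (1 + premium)  # face value of coupons received

def expand(twap: float, supply: float, bonded: dict, outstanding_coupons: float,
           cap: float = 0.03):
    """Above the peg: mint new ESD (capped here at 3% of supply), redeem outstanding
    coupons first, then split the remainder pro rata among ESD bonded in the DAO."""
    assert twap > 1.0
    minted = min(supply * cap, supply * (twap - 1.0))
    to_coupons = min(minted, outstanding_coupons)
    to_bonders = minted - to_coupons
    total_bonded = sum(bonded.values())
    payouts = {user: to_bonders * stake / total_bonded for user, stake in bonded.items()}
    return to_coupons, payouts

coupons = buy_coupons(twap=0.95, esd_burned=1_000)  # 1,100 coupons
redeemed, rewards = expand(twap=1.02, supply=1_000_000,
                           bonded={"alice": 60_000, "bob": 40_000},
                           outstanding_coupons=coupons)
print(redeemed, rewards)  # 1100.0 {'alice': ~11340, 'bob': ~7560}
```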
Clearly, the multi-token design contains many more moving parts than Ampleforth’s single-token rebasing model. Nevertheless, this added complexity is a small price to pay for the potential stability it provides.
Put simply, the upshot of the design that ESD and Basis Cash employ is that the reflexivity inherent in the system is contained, while the “stablecoin” part of the system is (somewhat) insulated from market dynamics(10). Speculators with risk appetite can bootstrap the protocol during contraction in exchange for future benefit from expansion. But users who simply want to own a stablecoin with steady purchasing power can, at least in theory, hold BAC or ESD without buying bonds, coupons, shares, or bonding their tokens to a DAO. This non-rebasing quality has the added benefit of composability with other DeFi primitives. Unlike AMPL, BAC and (non-bonded) ESD can be used as collateral or lent out without having to take into account the complex dynamics of constant, network-wide supply changes(11).
Nevertheless, even if multi-token algorithmic stablecoins are superior to their single-token peers, there is no guarantee that any of these algorithmic stablecoins will be sustainable in the long run. Indeed, the underlying mechanism design of algorithmic stablecoins precludes any such guarantee, since, as mentioned above, the stability of algorithmic stablecoins is ultimately a reflexive phenomenon grounded in game theoretic coordination. Even for protocols like ESD and Basis Cash that separate out the transactional, stable-purchasing-power token from the value-accrual and debt-financing token(s), the stablecoin token will only remain stable so long as there are investors who are willing to bootstrap the network when demand falls. The moment that there are no longer enough speculators who believe that the network is resilient, the network will no longer be resilient.
The speculative nature of purely algorithmic stablecoins is inescapable. Recently, however, a couple of fledgling protocols have emerged that attempt to rein in the reflexivity of algorithmic stablecoins by utilizing partial asset collateralization (“fractional reserves”).
The insight here is simple. Haseeb Qureshi is correct in his observation that, “fundamentally, you could say that the ‘collateral’ backing Seignorage Shares is the shares in the future growth of the system.” Why not then supplement this speculative “collateral” with actual collateral to make the system more robust?
Under the new system, the ESD protocol would target a 20-30% reserve ratio(13), denominated initially in USDC. These reserves are funded in part by the protocol itself, which sells ESD on the open market when ESD is above a certain price target, and also by ESD holders who wish to unbond from the DAO (they must make a deposit to reserves). These USDC reserves are then used to stabilize the protocol during contraction by automatically purchasing ESD until the minimum reserve requirement is reached.
Frax, which is yet to launch, is an even more elegant attempt to create a partially-collateralized algorithmic stablecoin. Like Basis Cash, Frax consists of three tokens: FRAX (stablecoin), Frax Shares (governance and value-accrual token), and Frax Bonds (debt financing token). However, unlike all other algorithmic stablecoins discussed thus far, FRAX can always be minted and redeemed for $1, meaning that arbitragers will play an active role in stabilizing the price of the token.
Effectively, dynamic collateralization acts as a stabilizing counter-cyclical mechanism, enabling the Frax protocol to blunt the deleterious effects of extreme reflexivity if needed. But it also allows the protocol to remain open to becoming fully un-collateralized in the future, if the market so chooses. In this sense, Frax’s dynamic collateralization mechanism is “agnostic.”
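A stylized sketch of the fractional mint/redeem mechanics (Python; the collateral ratio and token flows are simplified, and the function names are illustrative, not Frax’s actual interface):

```python
def mint_frax(usd_value: float, collateral_ratio: float):
    """Mint $1 of FRAX per $1 of value supplied: `collateral_ratio` dollars of USDC
    plus (1 - collateral_ratio) dollars of FXS, which is burned. Simplified sketch."""
    return usd_value, usd_value * collateral_ratio, usd_value * (1 - collateral_ratio)

def redeem_frax(frax_in: float, collateral_ratio: float):
    """Redeem each FRAX for $1 of value: part USDC from reserves, part newly minted FXS."""
    return frax_in * collateral_ratio, frax_in * (1 - collateral_ratio)

# If FRAX trades at $1.02, arbitrageurs mint at $1 and sell; at $0.98 they buy and
# redeem for $1. Either way, the trade pushes the market price back toward the peg.
print(mint_frax(100, collateral_ratio=0.85))    # ~(100 FRAX out, 85 USDC in, 15 of FXS burned)
print(redeem_frax(100, collateral_ratio=0.85))  # ~(85 USDC out, 15 of FXS minted)
```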
Neither Frax nor ESD v2 is live, so it remains to be seen whether either will succeed in practice. But in theory at least, these hybrid, fractional reserve protocols are promising attempts to marry reflexivity with stability, while still remaining more capital efficient than over-collateralized alternatives like DAI and sUSD.
Algorithmic stablecoins are fascinating monetary experiments, and their success is anything but assured. Although Charlie Munger’s maxim always rings true—“show me the incentive and I’ll show you the outcome”—these protocols have a game-theoretic complexity that is difficult to fully capture from a priori reasoning alone. Moreover, if past crypto market cycles are any indication, we should be prepared for these dynamics to play themselves out in ways that belie rational expectations.
Disclosure: the author may hold positions in the tokens mentioned in this article.
(1) As we shall see later on, some new stablecoin protocols are breaking down these categories.
(2) Usually, this price target is one dollar. However, algorithmic stablecoins (and multi-asset collateralized stablecoins, in fact) have the flexibility of being able to target not just the value of a certain fiat currency, but also baskets of currencies, baskets of consumer goods (e.g. the Consumer Price Index), and even other cryptocurrencies.
(3) As we shall see later on, however, a new generation of algorithmic stablecoins is experimenting with partial asset collateralization.
(4) In its most simplistic form, the reflexivity manifests like this: during expansionary phases, non-holders see that large supply expansions have made token-holders wealthy, so they enter the game and buy, pushing up the price and creating further supply expansion, until the spring uncoils and the same reflexivity leads to a sharp unraveling with the same sort of vicious feedback loop (but in reverse).
(5) Of course, one could argue that either the U.S. Dollars themselves, or the alternative forms of crypto collateral, also require their own kind of “collective belief.”
(6) If one were to object that this is not a fair comparison, we could instead compare Ampleforth’s first 67 days to Empty Set Dollar’s first 67 days. During those first 67 days for Ampleforth, there were only four days without a rebase. The other 63 days saw positive or negative supply changes, meaning that the TWAP was outside the price target range more than 94% of the time during that early period. If one is still unsatisfied, we can turn to Ampleforth’s most recent 67 days, which had 59 positive or negative rebases (88%). Any way you slice it, ESD has performed better, at least in terms of price stability.
(7) Two additional important details about ESD’s design: the first is that ESD’s supply expansions are capped at 3%, which helps to blunt extreme reflexivity; Ampleforth, by contrast, had a string of ~20% rebases in July. The second is that ESD also incentivizes Liquidity Providers to provide liquidity to the USDC/ESD pool on Uniswap, and a portion of each supply expansion is disbursed to LPs. LPs have similar “un-bonding” requirements, but their “staging” period is significantly shorter.
(8) The staging period for unbonding ESD from the DAO is currently set at five days and will be considerably longer in ESD v2.
(9) It should be noted, however, that the staging requirements for bonding and unbonding ESD from the DAO create added “time risk” and illiquidity for this de facto third token, neither of which exist for Basis Shares.
(10) The way rebases work in the single-token design makes the network highly reflexive in practice. If we take Basis Cash as a comparison, the reflexivity is significantly blunted: when demand for BAC goes up, that demand is reflected in the size of the supply increase during a protocol expansion, much as it is with Ampleforth. However, Basis Shares holders have the right to claim the newly created BAC supply, which means that Basis Shares become more valuable as demand for BAC rises, and an increase in the Basis Shares price does not, on its own, stimulate more demand for BAC (unlike with Ampleforth, where the supply increase reflexively encourages other people to buy in so they can earn “rebase rewards”—on this, see footnote 4). Nonetheless, as Andrew Kang has insightfully pointed out to me, Basis Cash’s use of a “pool 2,” where BAC is used to “farm” Basis Shares, ultimately re-creates the same reflexive effects while the Basis Shares token is still being distributed, since an increase in the price of Basis Shares does make BAC more valuable as an asset that is used to farm Basis Shares.
(11) Thanks to Hasu for reminding me of this important point.
(12) Credit to Angelo Min for helping me to conceptualize and articulate this argument.
(13) Here, the denominator is circulating ESD.
https://medium.com/coinmonks/stablecoin-primer-section-4-stablecoin-projects-28b509624165
Osman Sarman, Apr 2022
“As simple as the concept of a stablecoin may sound, the protocols and mechanisms behind these stablecoins can be quite intricate.”
Hopefully, the two mental models from Section 3 reduce some complexity by aligning what to expect from the meat of this section where we analyze stablecoin projects one by one.
The format for each stablecoin project is as follows. First, I provide a brief intro of the protocol, followed by the tokens that are critical to it. Then, I analyze the details of the protocol using the design principles from Section 3, and end the analysis by presenting some key metrics as well as some specific use cases for each stablecoin. I call this format “Shallow-dive” — the information you will find is more detailed than what you would find on the surface (e.g., via Bloomberg). But since this is a Primer, we are only diving to the shallow depths of each stablecoin, so you will not need too much diving experience (also, it’s not like there aren’t enough deep-dives out there already). Let’s go!
Fiat-backed stablecoins:
USDC by Circle and Coinbase (coming soon)
Crypto-backed stablecoins:
OUSD by Origin Dollar (coming soon)
Algorithmic stablecoins:
FRAX by Frax Finance (coming soon)
FEI by Fei Protocol (coming soon)
UXD by UXD Protocol (coming soon)
So now the question is — what needs to be true for stablecoins to reach their full potential of being adopted by billions of people? Since I am not in the crystal gazing business, I do not have an easy answer to this. I do, however, have a simplifying approach that I call: the chain of adoption.
The chain of adoption
Think of this as a flow chart, where each layer represents a set of requirements; only if all requirements in a layer are met can the adoption cycle continue to the next layer. So that’s the high level - let’s talk specifics.
Layer 3 — Extrinsic factors: This is the layer we are now approaching in the chain of adoption. Yes, the ultimate goal of any stablecoin is to be stable, and yes, the majority of stablecoins have intricate mechanisms behind them to achieve this complicated task. But mechanism design is only part of the story; stablecoins also depend on certain extrinsic factors to reach mainstream adoption. While not an exhaustive list, these extrinsic factors boil down to the areas below:
Infrastructure: While blockchains like Avalanche and Terra boast about their TPS limits (50,000 and 10,000 respectively), these limits have not been battle-tested at scale. If stablecoins are to be used by billions of people, we need robust infrastructure in place.
As you can tell, I have not explicitly touched upon the specific risks pertaining to stablecoins. Each type has its own risks and a good way to assess the antifragility and long term stability of a stablecoin is via how well that stablecoin is progressing on the chain of adoption.
This is an exciting journey, and I hope this Primer sparked at least a little interest in you about stablecoins. We are living at a time when money as we know it is changing. Each stablecoin experiment, whether it fails or persists, can be seen as a step forward in the search for better money. Stablecoin innovation is foundational, making it one of the few technological advancements that may touch every single person’s life in the world.
Osman
https://medium.com/hackernoon/stablecoins-designing-a-price-stable-cryptocurrency-6bf24e2689e5
Haseeb Qureshi, Feb 2018
A useful currency should be a medium of exchange, a unit of account, and a store of value. Cryptocurrencies excel at the first, but as a store of value or unit of account, they’re pretty bad. You cannot be an effective store of value if your price fluctuates by 20% on a normal day.
This is where stablecoins come in. Stablecoins are price-stable cryptocurrencies, meaning the market price of a stablecoin is pegged to another stable asset, like the US dollar.
It might not be obvious why we’d want this.
Bitcoin and Ether are the two dominant cryptocurrencies, but their prices are volatile. A cryptocurrency’s volatility may fuel speculation, but in the long run, it hinders real-world adoption.
Businesses and consumers don’t want to be exposed to unnecessary currency risk when transacting in cryptocurrencies. You can’t pay someone a salary in Bitcoin if the purchasing power of their wages keeps fluctuating. Cryptocurrency volatility also precludes blockchain-based loans, derivatives, prediction markets, and other longer-term smart contracts that require price stability.
And of course, there’s the long tail of users who don’t want to speculate. They just want a store of value on a censorship-resistant ledger, escaping the local banking system, currency controls, or a collapsing economy. Right now, Bitcoin and Ethereum can’t offer them that.
The idea of a price-stable cryptocurrency has been in the air for a long time. Much cryptocurrency innovation and adoption has been bottlenecked around price-stability. For this reason, building a “stablecoin” has long been considered the Holy Grail of the cryptocurrency ecosystem.
But how does one design a stablecoin? To answer that question, we first have to deeply understand what it means for an asset to be price-stable.
Of course, you can’t just decide an asset should be valued at a certain price. To paraphrase Preston Byrne: a stablecoin claims to be an asset that prices itself, rather than an asset that is priced by supply and demand.
This goes against everything we know about how markets work.
But this is an incomplete analysis.
The reality is that any peg can be maintained, but only within a certain band of market behavior. For some pegs that band is wider than for others. But it’s straightforwardly true that within at least some market conditions, it’s possible to maintain a peg. The question for each pegging mechanism is: how wide is the band of behavior it can support?
The question for any peg then is four-fold:
How much volatility can this peg withstand? (Namely, downward selling pressure)
How expensive is it to maintain the peg?
How easy is it to analyze the band of behavior from which it can recover?
How transparently can traders observe the true market conditions?
To summarize, an ideal stablecoin should be able to withstand a great deal of market volatility, should not be extremely costly to maintain, should have easy to analyze stability parameters, and should be transparent to traders and arbitrageurs. These features maximize its real-world stability.
These are the dimensions along which I’ll analyze different stablecoin schemes.
So how can you design a stablecoin?
The more stablecoin schemes I’ve examined, the more I’ve realized how small the space of possible designs actually is. Most schemes are slight variations of one another, and there are only a few fundamental models that actually work.
At a high level, the taxonomy of stablecoins includes three families: fiat-collateralized coins, crypto-collateralized coins, and non-collateralized coins. We’ll analyze each in turn.
If you want to build a stablecoin, it’s best to start with the obvious. Just create a cryptocurrency that’s literally an IOU, redeemable for $1.
You deposit dollars into a bank account and issue stablecoins 1:1 against those dollars. When a user wants to liquidate their stablecoins back into USD, you destroy their stablecoins and wire them the USD. This asset should definitely trade at $1 — it is less a peg than just a digital representation of a dollar.
This is the simplest scheme for a stablecoin, but it requires centralization: you have to trust the custodian to actually hold the fiat reserves. You’ll also want auditors to periodically audit the custodian, which can be expensive.
But with that centralization comes the greatest price-robustness. This scheme can withstand any cryptocurrency volatility, because all of the collateral is held in fiat reserves and will remain intact in the event of a crypto collapse. This cannot be said for any other type of stablecoin.
A fiat-backed scheme is also highly regulated and constrained by legacy payment rails. If you want to exit the stablecoin and get your fiat back out, you’ll need to wire money or mail checks — a slow and expensive process.
Pros:
100% price-stable
Simplest (a big virtue!)
Less vulnerable to hacks, since no collateral is held on the blockchain
Cons:
Centralized — need a trusted custodian to store the fiat (otherwise vulnerable to brick and mortar theft)
Expensive and slow liquidation into fiat
Highly regulated
Need regular audits to ensure transparency
Say we don’t want to integrate with the traditional payment rails. After all, this is crypto-land! We just reinvented money, why go back to centralized banks and state-backed currencies?
If we move away from fiat, we can also remove the centralization from the stablecoin. The idea falls out naturally: let’s do the same thing, but instead of USD, let’s back the coin with reserves of another cryptocurrency. That way everything can be on the blockchain. No fiat required.
But wait. Cryptocurrencies are unstable, which means your collateral will fluctuate. But a stablecoin obviously shouldn’t fluctuate in value. There’s only one way to resolve this catch-22: over-collateralize the stablecoin so it can absorb price fluctuations in the collateral.
Say we deposit $200 worth of Ether and then issue 100 $1 stablecoins against it. The stablecoins are now 200% collateralized. This means the price of Ether can drop by 25%, and our stablecoins will still be safely collateralized by $150 of Ether, and can still be valued at $1 each. We can liquidate them now if we choose, giving $100 in Ether to the owner of the stablecoins, and the remaining $50 in Ether back to the original depositor.
But why would anyone want to lock up $200 of Ether to create some stablecoins? There are two incentives you can use here: first, you could pay the issuer interest, which some schemes do. Alternatively, the issuer could choose to create the extra stablecoins as a form of leverage. This is a little subtle, but here’s how it works: if a depositor locks up $200 of Ether, they can create $100 of stablecoins. If they use the 100 stablecoins to buy another $100 of Ether, they now have a leveraged position of $300 Ether, backed by $200 in collateral. If Ether goes up 2x, they now have $600, instead of the $400 they’d otherwise make.
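The arithmetic of the example can be written down directly (a quick Python sketch using the article’s own numbers):

```python
def max_stablecoins(collateral_usd: float, collateral_ratio: float = 2.0) -> float:
    """Stablecoins that can be issued against crypto collateral at a given
    over-collateralization ratio (200% in the example above)."""
    return collateral_usd / collateral_ratio

def leveraged_exposure(eth_deposit_usd: float, collateral_ratio: float = 2.0) -> float:
    """Depositor mints stablecoins against Ether and uses them to buy more Ether:
    total exposure = original deposit + minted stablecoins."""
    return eth_deposit_usd + max_stablecoins(eth_deposit_usd, collateral_ratio)

print(max_stablecoins(200.0))     # 100.0 stablecoins against $200 of Ether
print(leveraged_exposure(200.0))  # 300.0 dollars of Ether exposure, as in the example
# If Ether doubles, that $300 position becomes $600, versus $400 for simply holding.
```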
Fundamentally, all crypto-collateralized stablecoins use some variant of this scheme. You over-collateralize the coin using another cryptocurrency, and if the price drops enough, the stablecoins get liquidated. All of this can be managed by the blockchain in a decentralized way.
We neglected one critical detail though: the stablecoin has to know the current USD/ETH price. But blockchains cannot access any data from the external world. So how can you know the current price?
Crypto-collateralized coins are a cool idea, but they have several major disadvantages. They are more vulnerable to price instability than fiat-collateralized coins, and they have the very unintuitive property that they can be spontaneously destroyed.
If you collateralize your coin with Ether and Ether crashes hard enough, then your stablecoin will automatically get liquidated into Ether. At that point you’ll be exposed to normal currency risk, and Ether may continue to fall. This could be a dealbreaker for exchanges — in the case of a market crash, they would have to deal with stablecoin balances and trading pairs suddenly mutating into the underlying crypto assets.
The only way to prevent this is to over-collateralize to the hilt, which makes crypto-collateralized coins much more capital-intensive than their fiat counterparts. A fiat-backed stablecoin requires only $100K of collateral to issue 100K stablecoins, whereas a crypto-collateralized coin might require $200K of collateral or more to issue the same number of coins.
Pros:
More decentralized
Can liquidate quickly and cheaply into underlying crypto collateral (just a blockchain transaction)
Very transparent — easy for everyone to inspect the collateralization ratio of the stablecoin
Can be used to create leverage
Cons:
Can be auto-liquidated during a price crash into underlying collateral
Less price stable than fiat
Tied to the health of a particular cryptocurrency (or basket of cryptocurrencies)
Inefficient use of capital
Most complexity
As you get deeper into crypto-land, eventually you have to ask the question: how sure are we that we actually need collateral to begin with? After all, isn’t a stablecoin just a coordination game? Arbitrageurs just have to believe that our coin will eventually trade at $1. The United States was able to move off the gold standard and is no longer backed by any underlying asset. Perhaps this means collateral is unnecessary, and a stablecoin could adopt the same model.
Okay, but how could you ensure the currency’s trading price? Simple — you’re issuing the currency, so you get to control the monetary supply.
But what if the coin is trading too low? Let’s say it’s trading at $0.50. You can’t un-issue circulating money, so how can you decrease the supply? There’s only one way to do it: buy up coins on the market to reduce the circulating supply. But what if the seignorage you’ve saved up is insufficient to buy up enough coins?
Seignorage Shares says: okay, instead of giving out my seignorage, I’m going to issue shares that entitle you to future seignorage. The next time I issue new coins and earn seignorage, shareholders will be entitled to a share of those future profits!
In other words, even if the smart contract doesn’t have the cash to pay me now, because I expect the demand for the stablecoin to grow over time, eventually it will earn more seignorage and be able to pay out all of its shareholders. This allows the supply to decrease, and the coin to re-stabilize to $1.
This is the core idea behind Seignorage Shares, and some version of this undergirds most non-collateralized stablecoins.
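A toy model of that expansion/contraction loop might look like the following (Python; the parameters and buy-back logic are illustrative assumptions, not any specific protocol’s rules):

```python
def stabilize(price: float, coin_supply: float, share_supply: float, treasury: float):
    """Toy Seignorage Shares step. Above $1: mint and sell coins, banking the proceeds
    as seignorage. Below $1: spend the treasury buying coins back; if it runs dry,
    issue new shares (claims on future seignorage) to retire the rest."""
    if price > 1.0:
        new_coins = coin_supply * (price - 1.0)   # expand supply toward the peg
        treasury += new_coins                      # proceeds from selling at ~$1
        coin_supply += new_coins
    elif price < 1.0:
        to_retire = coin_supply * (1.0 - price)    # contraction needed
        spend = min(treasury, to_retire * price)   # buy back with cash on hand
        treasury -= spend
        shortfall = to_retire - spend / price      # coins the treasury couldn't buy
        share_supply += shortfall                  # swapped for newly issued shares
        coin_supply -= to_retire
    return coin_supply, share_supply, treasury

print(stabilize(1.10, 1_000_000, 10_000, 0))       # expansion: supply and treasury grow
print(stabilize(0.80, 1_000_000, 10_000, 50_000))  # contraction: treasury spent, shares issued
```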
If you think Seignorage Shares sounds too crazy to work, you’re not alone. Many have criticized this system for an obvious reason: it resembles a pyramid scheme. Low coin prices are buttressed by issuing promises of future growth. That growth must be subsidized by new entrants buying into the scheme. Fundamentally, you could say that the “collateral” backing Seignorage Shares is shares in the future growth of the system.
Clearly this means that in the limit, if the system doesn’t eventually continue growing, it will not be able to maintain its peg.
Perhaps that’s not an unreasonable assumption though. After all, the monetary base of most world currencies has experienced nearly monotonic growth for the last several decades. It’s possible that a stable cryptocurrency might experience similar growth.
But there’s no free lunch in economics. Seignorage Shares can absorb some amount of downward pressure for a time, but if the selling pressure is sustained for long enough, traders will lose confidence that shares will eventually pay out. This will further push down the price and trigger a death spiral.
The most dangerous part of this system is that it’s difficult to analyze. How much downward pressure can the system take? How long can it withstand that pressure? Will whales or insiders prop up the system if it starts slipping? At what point should we expect them to intervene? When is the point of no return at which the system breaks? It’s hard to know, and market participants are unlikely to converge on an answer. This makes the system susceptible to panics and sentiment-based swings.
Non-collateralized stablecoins are also vulnerable to a secular decline in demand for crypto, since such a decline would inevitably inhibit growth. And in the event of a crypto crash, traders tend to exit to fiat currencies, not stablecoins.
These systems also need significant bootstrapping of liquidity early on until they can achieve healthy equilibrium. But ultimately, these schemes capitalize on a key insight: a stablecoin is, in the end, a Schelling point. If enough people believe that the system will survive, that belief can lead to a virtuous cycle that ensures its survival.
With all that said, non-collateralized stablecoins are the most ambitious design. A non-collateralized coin is independent from all other currencies. Even if the US Dollar and Ether collapse, a non-collateralized coin could survive them as a stable store of value. Unlike the central banks of nation states, a non-collateralized stablecoin would not have perverse incentives to inflate or deflate the currency. Its algorithm would only have one global mandate: stability.
This is an exciting possibility, and if it succeeds, a non-collateralized stablecoin could radically change the world. But if it fails, that failure could be even more catastrophic, as there would be no collateral to liquidate the coin back into and the coin would almost certainly crash to zero.
Pros:
No collateral required
Most decentralized and independent (not tied to any other cryptocurrency or to fiat)
Cons:
Requires continual growth
Most vulnerable to crypto decline or crash, and cannot be liquidated in a crash
Difficult to analyze safety bounds or health
Some complexity
Stablecoins are critical to the future of crypto. The differences between these designs are subtle, yet matter immensely.
But after having looked at many of these, my primary conclusion is that there is no ideal stablecoin. Like with most technologies, the best we can do is choose the set of tradeoffs that we’re willing to accept for a given application and risk profile.
The best outcome then, is not to try to pick winners early, but rather to encourage the many stablecoin experiments to bear their fruit in the marketplace.
If crypto has taught us anything, it’s that it’s very hard to predict the future. I suspect there are many more variations on these schemes waiting to be unearthed. But whichever stablecoins win in the long run, they’ll almost certainly build on one of these fundamental designs.
Thanks to Rachel Diane Basse, Ivan Bogatyy, and #BlackLivesMatter
https://www.bis.org/publ/arpdf/ar2021e3.htm
BIS, Jun 2021
Key takeaways
Central bank digital currencies (CBDCs) offer in digital form the unique advantages of central bank money: settlement finality, liquidity and integrity. They are an advanced representation of money for the digital economy.
Digital money should be designed with the public interest in mind. Like the latest generation of instant retail payment systems, retail CBDCs could ensure open payment platforms and a competitive level playing field that is conducive to innovation.
The ultimate benefits of adopting a new payment technology will depend on the competitive structure of the underlying payment system and data governance arrangements. The same technology that can encourage a virtuous circle of greater access, lower costs and better services might equally induce a vicious circle of data silos, market power and anti-competitive practices. CBDCs and open platforms are the most conducive to a virtuous circle.
CBDCs built on digital identification could improve cross-border payments, and limit the risks of currency substitution. Multi-CBDC arrangements could surmount the hurdles of sharing digital IDs across borders, but will require international cooperation.
Digital innovation has wrought far-reaching changes in all sectors of the economy. Alongside a broader trend towards greater digitalisation, a wave of innovation in consumer payments has placed money and payment services at the vanguard of this development. An essential by-product of the digital economy is the huge volume of personal data that are collected and processed as an input into business activity. This raises issues of data governance, consumer protection and anti-competitive practices arising from data silos.
This chapter examines how central bank digital currencies (CBDCs) can contribute to an open, safe and competitive monetary system that supports innovation and serves the public interest. CBDCs are a form of digital money, denominated in the national unit of account, which is a direct liability of the central bank.1 CBDCs can be designed for use either among financial intermediaries only (ie wholesale CBDCs), or by the wider economy (ie retail CBDCs).
To set the stage, the first section discusses the public interest case for digital money. The second section lays out the unique properties of CBDCs as an advanced representation of central bank money, focusing on their role as a means of payment and comparing them with cash and the latest generation of retail FPS. The third section discusses the appropriate division of labour between the central bank and the private sector in payments and financial intermediation, and the associated CBDC design considerations. The fourth section explores the principles behind design choices on digital identification and user privacy. The fifth section discusses the international dimension of CBDCs, including the opportunities for improving cross-border payments and the role of international cooperation.
The overriding criterion when evaluating a change to something as central as the monetary system should be whether it serves the public interest. Here, the public interest should be taken broadly to encompass not only the economic benefits flowing from a competitive market structure, but also the quality of governance arrangements and basic rights, such as the right to data privacy.
It is in this context that the exploration of CBDCs provides an opportunity to review and reaffirm the public interest case for digital money. The monetary system is a public good that permeates people's everyday lives and underpins the economy. Technological development in money and payments could bring wide benefits, but the ultimate consequences for the well-being of individuals in society depend on the market structure and governance arrangements that underpin it. The same technology could encourage either a virtuous circle of equal access, greater competition and innovation, or it could foment a vicious circle of entrenched market power and data concentration. The outcome will depend on the rules governing the payment system and whether these will result in open payment platforms and a competitive level playing field.
Central bank interest in CBDCs comes at a critical time. Several recent developments have placed a number of potential innovations involving digital currencies high on the agenda. The first of these is the growing attention received by Bitcoin and other cryptocurrencies; the second is the debate on stablecoins; and the third is the entry of large technology firms (big techs) into payment services and financial services more generally.
By now, it is clear that cryptocurrencies are speculative assets rather than money, and in many cases are used to facilitate money laundering, ransomware attacks and other financial crimes.4 Bitcoin in particular has few redeeming public interest attributes when also considering its wasteful energy footprint.5
Stablecoins attempt to import credibility by being backed by real currencies. As such, these are only as good as the governance behind the promise of the backing.6 They also have the potential to fragment the liquidity of the monetary system and detract from the role of money as a coordination device. In any case, to the extent that the purported backing involves conventional money, stablecoins are ultimately only an appendage to the conventional monetary system and not a game changer.
However, the network effects that underpin big techs can be a mixed blessing for users. On the one hand, the DNA loop can create a virtuous circle, driving greater financial inclusion, better services and lower costs. On the other, it impels the market for payments towards further concentration. For example in China, just two big techs jointly account for 94% of the mobile payments market.7 Authorities have recently addressed concerns about anti-competitive practices that exclude competitors in associated digital services such as e-commerce and social media.8 This concentration of market power is a reason why authorities in some economies are increasingly turning to an entity-based approach to regulating big techs, as a complement to the existing activities-based approach.9
These costs are not immediately visible to consumers. Charges are usually levied on the merchants, who are often not allowed to pass these fees directly on to the consumer. However, the ultimate incidence of these costs depends on what share of the merchant fees are passed on to the consumer indirectly through higher prices. As is well known in the economics of indirect taxation, the individuals who ultimately bear the incidence of a tax may not be those who are formally required to pay that tax.10 The concern is that when big tech firms enter the payments market, their access to user data from associated digital business lines may allow them to achieve a dominant position, leading to fees that are even higher than those charged by credit and debit card companies currently. Merchant fees as high as 4% have been reported in some cases.11
Related to the persistently high cost of some digital payment options is the lack of universal access to digital payment services. Access to bank and non-bank transaction accounts has improved dramatically over the past several decades, in particular in emerging market and developing economies (EMDEs).12 Yet in many countries, a large share of adults still have no access to digital payment options. Even in advanced economies, some users lack payment cards and smartphones to make digital payments, participate in e-commerce and receive transfers (such as government-to-person payments). For instance, in the United States, over 5% of households were unbanked in 2019, and 14% of adults did not use a payment card in 2017. In France, in 2017, 13% of adults did not own a mobile phone.13 Lower-income individuals, the homeless, migrants and other vulnerable groups are most likely to rely on cash. Due in part to market power and low expected margins, private PSPs often do not cater sufficiently to these groups. Remedies may necessitate public policy support as digital payments become more dominant.
The foundation of the monetary system is trust in the currency. As the central bank provides the ultimate unit of account, that trust is grounded on confidence in the central bank itself. Like the legal system and other foundational state functions, the trust engendered by the central bank has the attributes of a public good. Such "central bank public goods" underpin the monetary system.14
The central bank plays four key roles in pursuit of these objectives. The first is to provide the unit of account in the monetary system. From that basic promise, all other promises in the economy follow.
The third function is to ensure that the payment system works smoothly. To this end, the central bank provides sufficient settlement liquidity so that no logjams impede the workings of the payment system, as would occur when a payment is delayed because the sender is waiting for incoming funds. At times of stress, the central bank's role in liquidity provision takes on a more urgent form as the lender of last resort.
The central bank's fourth role is to oversee the payment system's integrity, while upholding a competitive level playing field. As overseer, the central bank imposes requirements on the participants so that they support the functioning of the payment system as a whole. Many central banks also have a role in the supervision and regulation of commercial banks, which are the core participants of the payment system. Prudential regulation and supervision reinforce the system. Further, in performing this role, central bank money is "neutral", ie provided on an equal basis to all commercial parties with a commitment to competitive fairness.
Compared with wholesale CBDCs, a more far-reaching innovation is the introduction of retail CBDCs. Retail CBDCs modify the conventional two-tier monetary system in that they make central bank digital money available to the general public, just as cash is available to the general public as a direct claim on the central bank.
From the public interest perspective, the crucial issue for the payment system is how the introduction of retail CBDCs will affect data governance, the competitive landscape of the PSPs and the industrial organisation of the broader payments industry. In this connection, the experience of jurisdictions with a long history of operating retail FPS provides some useful lessons. Central banks can enhance the functioning of the monetary system by facilitating the entry of new players to foster private sector innovation in payment services. These goals could be achieved by creating open payment platforms that promote competition and innovation, ensuring that the network effects are channelled towards a virtuous circle of greater competition and better services.16
Box III.A
Project Helvetia – exploring the use of wholesale CBDCs
Rules and standards that promote good data governance are among the key elements in establishing and maintaining open markets and a competitive level playing field. These can yield concrete economic benefits. The 2020 BIS Annual Economic Report drew a contrast between "walled gardens", where users are served in a closed proprietary network, and a public town square in which buyers and sellers can meet without artificial barriers. In return for access to all buyers, the sellers must stick to the standards set by the public authorities with a view to promoting the virtuous circle of greater participation and better services.
Box III.B
APIs and the industrial organisation of payments
APIs ensure the secure exchange of data and instructions between parties in digital interactions. Through encryption, they allow only the parties directly involved in a transaction to access the information transmitted. They accomplish this by ensuring proper authentication (verifying the credentials of the parties involved, eg from a digital ID, as discussed further in a later section) and authorisation (which specifies the resources a user is authorised to access or modify). Crucially, APIs can be set up to transmit only data relevant to a specific transaction. For example, a bank may provide an API that allows other banks to request the full name of the holder of a specific account, based on the account number provided. But this API will not allow the querying bank to retrieve the account holder's home address or transaction history. Insofar as APIs provide strong security features, they can add an additional layer of security to interactions.
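A stylised example of such a narrowly scoped interface (Python, purely illustrative; the data and credentials are invented, and this does not represent any real open banking API):

```python
# Stylised sketch of a narrowly scoped account-lookup API: the caller is
# authenticated, and only the field this endpoint is scoped to is ever returned.
ACCOUNTS = {
    "GB29NWBK601613": {"holder_name": "A. Example", "home_address": "...", "history": ["..."]},
}
AUTHORISED_KEYS = {"bank-b-api-key"}  # hypothetical credential for the querying bank

def lookup_holder_name(api_key: str, account_number: str) -> str:
    if api_key not in AUTHORISED_KEYS:      # authentication / authorisation check
        raise PermissionError("caller not authorised")
    record = ACCOUNTS[account_number]
    return record["holder_name"]            # address and transaction history are never exposed

print(lookup_holder_name("bank-b-api-key", "GB29NWBK601613"))  # "A. Example"
```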
A key benefit of APIs is that they enable interoperability between different providers and simplify transactions. For example, many large financial institutions or big techs possess valuable consumer data, eg on payment transactions. By allowing other market participants to access and analyse data in order to develop and improve their products, APIs ensure a level playing field. This promotes competition and delivers benefits to consumers. An example is "open banking", which allows third-party financial service providers to access transaction and other financial data from traditional financial institutions through APIs. For example, a fintech could use banks' transaction data to assess credit risk and offer a loan at lower, more transparent rates than those offered by traditional financial institutions.
APIs thus securely connect otherwise separate bank and non-bank payment service providers, benefiting consumers through cheaper services. Such APIs are a key enabler of interoperability between payment systems – relevant for both FPS and CBDC-based systems.
Well designed CBDCs and FPS have a number of features in common. They both enable competing providers to offer new services through a range of interfaces – including in principle via prepaid cards and other dedicated access devices, as well as services that run on feature phones. Such arrangements not only allow for lower costs to users, but also afford universal access, and could thus promote financial inclusion.
Moreover, as the issuers of CBDCs and operators or overseers of FPS, central banks can lay the groundwork for assuring privacy and the responsible use of data in payments. The key is to ensure that governance for digital identity is appropriately designed. For both CBDC and FPS, such designs can incorporate features that support the smooth functioning of payment services without yielding control over data to private PSPs, as discussed above in the context of APIs. An open system that gives users control over their data can harness the DNA loop, breaking down the silos and associated market power of incumbent private firms with exclusive control over user data.
Nevertheless, a CBDC allows for a more direct form of settlement, eliminating the need for intermediary credit and hence simplifying the architecture of the monetary system. An example of the potential benefits, to be discussed in a later section, is the potential to address the high costs and inefficiencies of international payments by extending these virtues of greater simplicity to the cross-border case.
At a more basic level, CBDCs could provide a tangible link between the general public and the central bank in the same way that cash does, as a salient marker of the trust in sound money itself. This might be seen as part of the social contract between the central bank and the public. CBDCs would continue to provide such a tangible connection even if cash use were to dwindle.
Ultimately, whether a jurisdiction chooses to introduce CBDCs, FPS or other systems will depend on the efficiency of their legacy payment systems, economic development, legal frameworks and user preferences, as well as their aims. Based on the results of a recent survey, payments safety and financial stability considerations (also in the light of cryptocurrencies and stablecoins) tend to weigh more heavily in advanced economies. In EMDEs, financial inclusion is a more important consideration.19 Irrespective of the aims, an important point is that the underlying economics concerning the competitive landscape and data governance turn out to be the pivotal factors. These are shaped by the central bank itself.
Vital to the success of a retail CBDC is an appropriate division of labour between the central bank and the private sector. CBDCs potentially strike a new balance between central bank and private money.20 They will be part of an ecosystem with a range of private PSPs that enhances efficiency without impairing central banks' monetary policy and financial stability missions. Central banks and PSPs could continue to work together in a complementary way, with each doing what they do best: the central bank providing the foundational infrastructure of the monetary system and the private PSPs using their creativity, infrastructure and ingenuity to serve customers.
Equally important is the long-term impact on innovation. Banks, fintechs and big techs are best placed to use their expertise and creativity to lead innovative initiatives, and integrate payment services with consumer platforms and other financial products. Central banks should actively promote such innovations, not hinder them.
Most fundamentally, a payment system in which the central bank has a large footprint would imply that it could quickly find itself assuming a financial intermediation function that private sector intermediaries are better suited to perform. If central banks were to take on too great a share of bank liabilities, they might find themselves taking over bank assets too.22
For these reasons, CBDCs are best designed as part of a two-tier system, where the central bank and the private sector each play their respective role. A logical step in their design is to delegate the majority of operational tasks and consumer-facing activities to commercial banks and non-bank PSPs that provide retail services on a competitive level playing field. Meanwhile, the central bank can focus on operating the core of the system. It guarantees the stability of value, ensures the elasticity of the aggregate supply of money and oversees the system's overall security.
However, as households and firms hold direct claims on the central bank in a retail CBDC, some operational involvement of the central bank is inevitable. Exactly where the line is drawn between the respective roles of the central bank and private PSPs depends on data governance and the capacity for regulation of PSPs.
An important aspect of any technical system for a CBDC is that it embodies a digital ledger recording who has paid what to whom and when. The ledger effectively serves as the memory of all transactions in the economy.25 The idea that money embodies the economy's memory means that a key design choice is whether a CBDC should rely on a trusted central authority to maintain the transactions ledger, or whether it is based on a decentralised governance system. In both a hybrid and an intermediated architecture, the central bank can choose to run the infrastructure to support record-keeping, messaging and related tasks, or delegate these tasks to a private sector provider.
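To make the record-keeping choice concrete, the sketch below shows the kind of entry such a ledger would hold, together with a centralised variant of the operator that maintains it. The field names and interface are purely illustrative assumptions, not drawn from any actual CBDC design.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class LedgerEntry:
    """One entry in the payment ledger: who paid what to whom, and when."""
    payer_id: str      # digital ID (or pseudonym) of the payer
    payee_id: str      # digital ID (or pseudonym) of the payee
    amount: int        # value transferred, in the smallest currency unit
    timestamp: datetime

class CentralLedger:
    """Centralised variant: a single trusted operator appends and queries entries."""
    def __init__(self) -> None:
        self._entries = []

    def record_payment(self, payer_id: str, payee_id: str, amount: int) -> LedgerEntry:
        entry = LedgerEntry(payer_id, payee_id, amount, datetime.now(timezone.utc))
        self._entries.append(entry)
        return entry

    def history(self, account_id: str):
        """All payments in which an account appears, i.e. the 'memory' of its transactions."""
        return [e for e in self._entries if account_id in (e.payer_id, e.payee_id)]
```

A decentralised design would replace the single operator with a network of validators agreeing on the same entries; the record itself would look much the same.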
Assessing the merits of each approach is an area of ongoing research. These studies also cover novel forms of decentralisation enabled via distributed ledger technology (DLT, see glossary). So-called permissioned DLT is envisioned in many current CBDC prototypes. In the process of updating the ledger of payment records, such permissioned DLT systems borrow concepts from decentralised cryptocurrencies, but remedy the problems due to illicit activity by allowing validation only by a network of vetted or permissioned validators.
Permissioned DLT designs may have economic potential in financial markets and payments due to enhanced robustness and the potentially lower cost of achieving good governance, as compared with systems with a central intermediary. However, such resilience does not come for free, as an effective decentralised design that ensures the right incentives of the different validators is costly to maintain. On balance, a trusted centralised design may often be superior, as it depends less on aligning the incentives of multiple private parties.26
These design choices will also have a bearing on the industrial organisation of the market for payments. They will determine the requirements for data governance and privacy, as well as the resultant DNA loop and market structure.
In the hybrid CBDC model, the central bank would have access to the full record of CBDC transactions. This would lead to a competitive level playing field among private PSPs, but comes at the expense of a greater concentration of data in the hands of the central bank itself. Additional data governance requirements may be needed in such cases, as we discuss below.
An intermediated CBDC model would have economic consequences that are similar to those of today's retail FPS. These are based on an open architecture in which PSPs retain an important role in protecting customer data. In such systems, APIs ensure interoperability and data access between PSPs (see Box III.B above), thereby avoiding closed networks and walled gardens. PSPs would instead operate customer wallets as custodians, rather than holding deposit liabilities vis-à-vis the users of the payment system. This would simplify the settlement process. Further, a level playing field ensures that network effects would facilitate a virtuous cycle of greater user participation and lower costs through competition and private sector innovation.
However, any CBDC architecture faces issues of data governance. The risks of data breaches would put an additional onus on the institutional and legal safeguards for data protection. This consideration also applies to today's conventional payment system, in which PSPs store customer data. Yet data privacy and cyber resilience take on added importance in a system with a CBDC, especially on the part of the issuing central bank. To address these concerns, CBDC designs can incorporate varying degrees of anonymity, as discussed in the next section.
On top of these considerations, an economic design which limits a CBDC's footprint would also ensure that its issuance does not impair the monetary policy transmission process. Instead, interest-bearing CBDCs would give central banks an additional instrument for steering real activity and inflation.30 If changes to the policy rate were directly passed through to CBDC remuneration, monetary transmission could be strengthened. There has also been discussion about the use of CBDCs to stimulate aggregate demand through direct transfers to the public. Rather than the use of the CBDC per se, the key challenge for such transfers is to identify recipients and their accounts.31 In any case, as CBDCs would coexist with cash, users would have access to either instrument, and it is unlikely that deeply negative interest rates would prevail, or that CBDC would materially change the effective lower bound on monetary policy rates.
Overall, a two-tiered architecture emerges as the most promising direction for the design of the overall payment system, in which central banks provide the foundations while leaving consumer-facing tasks to the private sector. In such a system, PSPs can continue to generate revenue from fees as well as benefiting from an expanded customer base through the provision of CBDC wallets and additional embedded digital services. A CBDC grounded in such a two-tiered system also ensures that commercial banks can maintain their vital function of intermediating funds in the economy. Both hybrid and intermediated models give central banks design options for sound data governance and high privacy standards. In either system, CBDCs could be supported by policy tools so that any unintended ramifications for the financial system and monetary policy could be mitigated.
Effective identification is crucial to every payment system. It guarantees the system's safety and integrity, by preventing fraud and bolstering efforts to counter money laundering and other illicit activities. Sound identification is further required to ensure equal access for all users.
To ensure access and integrity in today's financial system, bank and non-bank PSPs verify identity. When customers open an account, PSPs often demand physical documents, eg passports or driving licenses. For cash, small transactions are anonymous and largely unregulated for practical reasons, but identity checks apply to high-value payments. Despite these measures, identity fraud is a key concern in the digital economy.32 These considerations suggest that a token-based CBDC which comes with full anonymity could facilitate illegal activity, and is therefore unlikely to serve the public interest.33
Identification at some level is hence central in the design of CBDCs. This calls for a CBDC that is account-based and ultimately tied to a digital identity, but with safeguards on data privacy as additional features. A digital identity scheme, which could combine information from a variety of sources to circumvent the need for paper-based documentation, will thus play an important role in such an account-based design. By drawing on information from national registries and from other public and private sources, such as education certificates, tax and benefits records, property registries etc, a digital ID serves to establish individual identities online.34 It opens up access to a range of digital services, for example when opening a transaction account or online shopping, and protects against fraud and identity theft.
At one end of the spectrum are systems that rely exclusively on private parties to verify identity. Big techs such as Google or Facebook, and Alibaba or Tencent in China have developed their own digital IDs that are required for many of their services, including payment apps (panel 1). In some cases, consortiums of private firms provide a harmonised ID that works across multiple providers (panel 2). For example, yes® will allow customers of Germany's savings and cooperative banks to use their online banking details as a digital ID. The main drawback of purely private IDs is that they are limited to the specific network for which they are designed, and hence may lead to silos and limited interoperability with other services.
Some countries follow models based on public-private partnership. In one variant, market-driven collaboration is guided by principles set out by the authorities (panel 3). For instance, a consortium of banks in Sweden developed the BankID solution, which allows users to authenticate themselves for payments and government services. Similar solutions are offered in Denmark, Finland and Norway.
Proceeding one step further are systems in which the private and official sector develop a common governance framework and strive for interoperability between their services, as seen in France or the Netherlands (panel 4). Government-led solutions represent the furthest-reaching model (panel 5). These allow administrative databases to be linked up, further enhancing the functionality and usefulness of digital ID. For example, Estonia provides every citizen with a digital identity that allows access to all of the country's e-services. In Singapore, the SingPass platform provides a digital identity linked to individuals' biometrics (facial recognition and fingerprints). The Kenyan Huduma Namba system brings together information from various sources and allows access to a range of public services.
These risks underline that, while identification (based on a unique digital ID) is crucial for the safety of the payment system and transactions in a CBDC, there is a countervailing imperative to protect the privacy and safety of users. Beyond theft, the combination of transaction, geolocation, social media and search data raises concerns about data abuse and even personal safety. As such, protecting an individual's privacy from both commercial providers and governments has the attributes of a basic right. In this light, preventing the erosion of privacy warrants a cautious approach to digital identity.
Consequently, it is most useful to implement anonymity with respect to specific parties, such as PSPs, businesses or public agencies. CBDC designs can allow for privacy by separating payment services from control over the resulting data. Like some FPS, CBDCs could give users control over their payments data, which they need only share with PSPs or third parties as they decide (eg to support a credit application or other services). This can protect against data hoarding and abuse of personal data by commercial parties. Such designs can also prevent access by the central bank and other public authorities, while still allowing access by law enforcement authorities in exceptional cases – similar to today's bank secrecy laws. In addition to the issue of who can access data, governance issues need to be addressed with respect to who holds the data. Concentration of data in the hands of a single entity puts an additional premium on the institutional and legal safeguards for data protection.
In recognition of these data governance issues, some CBDC designs aim to safeguard anonymity through additional overlays, even for account-based CBDCs. One proposal is to ensure the anonymity of small-value transactions by issuing vouchers which are maintained by a separate data registrar that issues them up to some limit in the user's name. Another approach, considered in the case of China's e-CNY, is to shield the identity of the user by designating the user's public key, which is issued by the mobile phone operator, as the digital ID. The central bank would not have access to the underlying personal details.35
Overall, these developments suggest that the most promising way of providing central bank money in the digital age is an account-based CBDC built on digital ID with official sector involvement. Digital ID could prove more efficient than physical documents, opening up many ways of supporting digital services in general. One size would not fit all in the choice of digital identification systems, as different societies will have different needs and preferences. A recent referendum in Switzerland illustrates this. While voters did not object to a digital ID in general, they rejected the proposal for one provided by the private sector.36 The foundational, public good nature of digital ID suggests that the public sector has an important role to play in providing or regulating such systems.
Such concerns around potential harmful spillovers associated with currency substitution are not new. So-called dollarisation refers to the domestic use of a foreign currency in daily transactions and financial contracts, as well as the associated macroeconomic implications. Dollarisation, a long-running theme in international finance, is widespread in some economies.
However, the effective design of CBDCs based on digital ID and implemented as an account-based system can be expected to largely eliminate such risks. The potential for a foreign CBDC to make deep inroads into the domestic market, or to take off as a "dominant" global currency, is likely to be limited. For example, for China's account-based e-CNY to circulate widely in another jurisdiction, both the issuing central bank (the People's Bank of China), and to a large extent also the central bank of the receiving jurisdiction would need to accept this situation. The issuing central bank would need to recognise a foreign user's digital ID as that of a bona fide member of the CBDC network. The idea of paper currency circulating in the black market is thus an inaccurate analogy to how a CBDC would operate. In this sense, CBDCs have attributes that are very different to those of cash, even though both are direct claims on the central bank.
More broadly, it is important to bear in mind the dictum that the payment system does not exist in a vacuum. Payments mirror underlying economic transactions. The existence of a payment need reflects the economic transaction between the payer and the payee, for instance, a tourist from China who is shopping at a department store in a foreign holiday destination. Since issuing central banks would retain control over cross-border usage, they could restrict non-residents' access to their currency to certain permitted transactions only. This might reduce the risk of volatile flows and currency substitution in recipient economies. Such restrictions would resemble existing rules governing how non-residents can open a bank account outside their home country.
Not only issuing, but also recipient economies have policy tools to address the concerns of digital currency substitution. In particular, robust legal tender provisions can ensure that the use of the national currency is favoured in domestic payments.
The cross-border use of account-based CBDCs will require international cooperation. One challenge relates to the use of digital ID information outside the originating country. The issuing authority or user may not be willing to provide this information to countries that may have different data protection regulations. ID systems may not be fully interoperable. Indeed, even within a jurisdiction, ID documents may be issued by several different public authorities, sometimes with limited coordination between them. As a supranational digital ID would require an unprecedented concentration of an individual's information, it would be politically fraught. However, a supranational digital ID scheme would not be necessary for cross-border cooperation on CBDCs.
Instead, international efforts towards mutually recognising national ID credentials are a more promising approach. A G20 roadmap for cross-border payments has given impetus to cooperative efforts in several directions, complementing the standard-setting efforts among central banks in the BIS Committee for Payments and Market Infrastructures.40 One building block involves fostering KYC and sharing information on identity across borders. Another involves reviewing the interaction between data frameworks and cross-border payments, and yet another involves factoring an international dimension into CBDC design.41
Such cooperation could form the basis for robust payment arrangements that tackle today's challenges head-on. Of particular promise are multi-CBDC (mCBDC) arrangements that join up CBDCs to interoperate across borders. These arrangements focus on coordinating national CBDC designs with consistent access frameworks and interlinkages to make cross-currency and cross-border payments more efficient. In this way, they represent an alternative to private sector global stablecoin projects.42
The greatest potential for improvement is offered by the third model, a single mCBDC system that features a jointly operated payment system hosting multiple CBDCs (bottom panel). FX settlements would be payment-versus-payment (PvP) by default, rather than requiring routeing or settlement instructions through a specific entity acting as an interface. Facilitating access and compatibility through such a system could benefit users through improved efficiency, lower costs and wider use of cross-border payments.
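As a rough illustration of the PvP principle, the sketch below settles the two currency legs of an FX trade atomically, so that either both transfers occur or neither does. The class, currencies and data layout are hypothetical, not taken from any mCBDC prototype.

```python
class MultiCBDCSystem:
    """Stylised jointly operated system holding balances in several CBDCs."""

    def __init__(self) -> None:
        # balances[currency][participant] -> amount in smallest units
        self.balances = {}

    def credit(self, currency: str, participant: str, amount: int) -> None:
        self.balances.setdefault(currency, {}).setdefault(participant, 0)
        self.balances[currency][participant] += amount

    def settle_pvp(self, leg_a, leg_b) -> None:
        """Settle two FX legs atomically: either both transfers happen or neither does.

        Each leg is a (currency, payer, payee, amount) tuple. Both legs are checked
        before any money moves, so one side can never be left unpaid."""
        for currency, payer, _payee, amount in (leg_a, leg_b):
            if self.balances.get(currency, {}).get(payer, 0) < amount:
                raise ValueError(f"{payer} lacks {amount} in {currency}; no leg settles")
        for currency, payer, payee, amount in (leg_a, leg_b):
            self.balances[currency][payer] -= amount
            self.balances[currency].setdefault(payee, 0)
            self.balances[currency][payee] += amount

# Example: bank A pays CNY to bank B while bank B pays AED to bank A, settled together.
system = MultiCBDCSystem()
system.credit("CNY", "bank_a", 700)
system.credit("AED", "bank_b", 400)
system.settle_pvp(("CNY", "bank_a", "bank_b", 700), ("AED", "bank_b", "bank_a", 400))
```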
The potential benefits of these arrangements increase with the degree of harmonisation and technical alignment. Each would require increasingly intertwined identification schemes, but in all cases, ID would remain at a national level. Enhanced compatibility (model 1) might require some coordination of digital ID schemes across payment areas, such that the same necessary information could be used in each case to comply with AML/CFT requirements. Interlinked CBDCs (model 2) would have to rely on some common cross-border standard for identity schemes. An example is an approach that maps heterogeneous schemes to a shared template. Single mCBDC systems (model 3) could be built on similar standards. Yet even in this model, with a single, jointly operated mCBDC system, a single ID system would not be needed; it would be sufficient for participating jurisdictions to recognise one another's IDs. Making the most out of CBDCs in cross-currency transactions thus requires international cooperation.
Central banks around the world have embarked on developing mCBDC arrangements in close collaboration to foster more efficient cross-border payments. A prime example is the "mCBDC Bridge" project of the BIS Innovation Hub and its partner central banks in China, Hong Kong SAR, Thailand and the United Arab Emirates (model 3). This project explores how CBDCs could help to reduce costs, increase transparency and tackle regulatory complexities in payments.
Current and planned cross-border CBDC projects show that the future of the international financial system rests on upgrading it for the digital age. Different mCBDC arrangements might contribute towards this goal, but their detailed architecture will depend on the specific features of domestic CBDC systems. Even though payment system design is primarily a domestic choice, new technologies and models of cooperation will make it feasible to overcome the challenges faced by previous projects to interlink payment systems across borders.
Central banks stand at the centre of a rapid transformation of the financial sector and the payment system. Innovations such as cryptocurrencies, stablecoins and the walled garden ecosystems of big techs all tend to work against the public good element that underpins the payment system. The DNA loop, which should encourage a virtuous circle of greater access, lower costs and better services, is also capable of fomenting a vicious circle of entrenched market power and data concentration. The eventual outcome will depend not only on technology but on the underlying market structure and data governance framework.
Central banks around the world are working to safeguard public trust in money and payments during this period of upheaval. To shape the payment system of the future, they are fully engaged in the development of retail and wholesale CBDCs, alongside other innovations to enhance conventional payment systems. The aim of all these efforts is to foster innovation that serves the public interest.
CBDCs represent a unique opportunity to design a technologically advanced representation of central bank money, one that offers the unique features of finality, liquidity and integrity. Such currencies could form the backbone of a highly efficient new digital payment system by enabling broad access and providing strong data governance and privacy standards based on digital ID. To realise the full potential of CBDCs for more efficient cross-border payments, international collaboration will be paramount. Cooperation on CBDC designs will also open up new ways for central banks to counter foreign currency substitution and strengthen monetary sovereignty.
Access: as used in this chapter, this means the access of households and businesses to payment services (see "financial inclusion").
Account-based CBDC: a type of CBDC tied to an identification scheme, such that all users need to identify themselves to access it.
Efficiency: the efficiency of payments refers to low costs, and in some cases also to the speed, quality and transparency of payments.
Federated digital ID: a digital identity system in which an individual's personal identity is stored in several distinct identity systems, while allowing for interoperability and authentication across systems and external applications.
Payment service provider (PSP): an entity that may issue payment instruments or provide retail payment services. This can include commercial banks and non-bank financial institutions.
Retail (or general-purpose) CBDC: a CBDC for use by the general public.
Token-based CBDC: a type of CBDC secured via passwords such as digital signatures that can be accessed anonymously.
https://deathereum.substack.com/p/chasing-stability-a-stablecoin-deepdive
Deathereum, May 2021
This essay is broken down into three parts.
Part I - Intro to Stablecoins + Origins and History of Money
Part II - Desired Traits in a Stablecoin + Deep Dive on Various Designs
Part III - Conclusion: The Best Decentralized Stablecoin Design
Thirteen years ago, Satoshi Nakamoto kickstarted what is arguably the most notable attempt at creating a censorship resistant, decentralized monetary network (preceded by DigiCash, BitGold, B-money). While Bitcoin, the poster boy of cryptocurrencies, has attracted trillions of dollars of inflow into the system and remains the ideological benchmark for an independent monetary system, it continues to experience significant bouts of price volatility, rendering it unlikely to ever offer that which is fundamental to any currency or money - stability. Its deterministic and capped supply causes it to resemble gold more closely than it does money.
Beyond merely providing an entry point, stablecoins’ use cases are several: (1) dollarization of currencies in severely inflationary countries (2) savings / yield generation (3) frictionless, borderless payments (4) usage in credit markets and leverage trading (5) safe haven assets for traders and investors and (6) fee tokens for on-chain economies.
In summary, the stability of stablecoins refers to the preservation of purchasing power within a native economy, no different from stability of fiat money. The simplest and most common way for stablecoins to do so is by piggy-backing on a fiat currency, but as we’ll see later this isn’t the only way.
Before I jump into that, it would be useful to briefly look at the origins and evolution of money.
Any item or object that facilitates transactions between two or more parties can be considered money. Long before ‘modern’ money was conceived, humans relied on the barter system - a direct exchange of goods between two consenting parties. This works well when each party has something to offer that the other wants. But what if one doesn’t? The transaction would fail. Now imagine some commodity (silver, gold etc.) that everyone considers valuable and knows can be used to trade away in the future. Any seller would be willing to accept such a commodity as payment even if he doesn’t intend to use it himself. This indirect exchange mechanism removes the need for a coincidence of wants, giving money its primary function - a medium of exchange.
The second drawback of barter is that it becomes difficult for any party to value their goods in terms of another good because direct exchange rates may not exist for every single potential trading pair. One might know how many sacks of rice he can fetch by trading away his cow, but not how many chickens he could get unless he knows the exchange rate between chickens and rice. If instead a single common denominator was available to measure the value of all goods in the market, trade would become far more efficient. Thus, money serves as a unit of account.
Now imagine you own a cow that produces 8 gallons of milk a day, more than you can consume yourself. Unless you sold the excess milk today, you would lose some value due to spoilage. You would be better off trading your milk for a more durable good that can be used later. Thus, money as a store of value allows one to preserve surplus productivity for future use or monetization.
All money must fulfill the 3 functions above but merely doing so doesn’t constitute a useful money. Some goods are better at being money than others. To illustrate, crypto traders and investors in the pre-stablecoin days would hold and denominate their portfolios in BTC as it was considered a better store of value than the US Dollar. All other cryptocurrencies would be quoted in terms of BTC (unit of account). Though Bitcoin was designed to serve as a digital, peer-to-peer form of cash (medium of exchange), it is not good money as its value tends to appreciate fast. If an asset is expected to increase in value over time, holders are less likely to use it for transacting. Good money, on the other hand, is widely acceptable, durable, transportable, fungible, divisible and stable. On the surface, fiat money checks all boxes.
Note: That money evolved from the failures of the barter system is a tale as old as time, popularized and brought into economic literature by Adam Smith.
Anthropologists have since contested these claims stating that no evidence exists to suggest that a formal barter economy ever existed. While they’re perhaps right, the Smithian version does help illustrate the utility of money, even if it doesn’t necessarily tell us the true origin of it.
In the days of barter, any commodity could be used as money, but societies naturally gravitated towards a smaller subset of commodities that had the most marketability, such as gold and silver, which were deemed to have intrinsic value. This form of ‘hard money’ allowed parties to transact efficiently and trustlessly. Economists also refer to this form as ‘outside money’ - one person’s asset that doesn’t represent another’s liability, i.e., if represented on an aggregated balance sheet of the world, its value would be net positive. Such money requires no enforcement of value and hence no trust.
Precious metals were often deposited with trusted parties like goldsmiths, who would then issue a paper receipt acknowledging a depositor’s claim to the assets.
Eventually, people realized it was more convenient to trade these receipts, rather than carry metals around. Because these receipts were redeemable for real assets, they were implicitly accepted as money. The practice of representing claims on hard money as ‘paper money’ / currency notes led to the monetary system known as the ‘Gold Standard’.
The Gold Standard prevented a country’s Central Bank from issuing currency in excess of the value of its gold reserves. A commitment to the standard was a means to prevent inflation. The Bretton Woods Conference of 1944 made the US Dollar the international reserve currency: the Dollar remained convertible into gold at a fixed rate, and the other adopting nations pegged their currencies to the Dollar. In 1971, the USA defaulted on its promise of redeeming US Dollars for gold and abandoned the Gold Standard altogether. This is how the present ‘pure fiat’ system was born.
“Legitimacy by brute force - someone (e.g. Central Government) convinces everyone that they are powerful enough to impose their will and resisting them will be very hard. This drives most people to submit because each person expects that everyone else will be too scared to resist as well.”
“Legitimacy by continuity - if something was legitimate at time T, it is by default legitimate at time T+1.”
The main drawback of fiat money isn’t that it lacks intrinsic value. It’s that there aren’t any restrictions on a Central Bank’s ability to print money. The implication of switching to a pure fiat regime is that Central Banks can print money as they please and Commercial Banks can further multiply the money in circulation through fractional reserve banking. What happens when the quantity of money increases without an increase in actual output of goods and services? Every unit of money is less valuable. In other words, purchasing power of money reduces. Doesn’t seem very stable, does it?
Stablecoins hold the promise of a more transparent alternative to fiat money. One type in particular - algorithmic stablecoins not backed by assets - can theoretically combine the essence and scalability of fiat money while removing the risks of arbitrary money printing. Let’s take a look.
1) Stability
2) Trustlessness
3) Scalability
4) Simplicity
The scope of this evaluation, however, is restricted to stability, trustlessness, scalability and simplicity, as my primary objective is to conclude on what I believe is the most decentralized and scalable stablecoin design.
[1] The BitShares whitepaper refers to decentralization as all parties in the system having equal status with no special privileges and democratic governance.
Trustlessness refers to removing a party’s ability to default/withhold assets and not needing contractual obligations to transact - this directly aligns with ‘Decentralization’ embedded in the Stablecoin Trilemma.
[2] I will provide brief explanations of each protocol’s information mechanism but refrain from making judgements.
[3] While desirable, privacy-enabling mechanisms at present are limited to proposed integrations of existing stablecoins with Tornado Cash to allow anonymous minting (E.g. FRAX) and stablecoins built on privacy-enabled networks (E.g. SILK on Secret Network).
At a high level, stablecoins can be classified on the basis of peg, collateral and the core mechanism employed. Below is a taxonomy along with examples.
PEG - Fiat, Commodity, Index, Floating
Fiat - Price is pegged to a real currency (E.g. USDC, agEUR, TerraKRW)
Commodity - Price is pegged to the value of a unit of commodity (E.g. PAXG linked to price of 1 troy ounce of gold)
Index - Price tracks some index of assets/commodities/currencies (E.g. Volt pegged to CPI, Saga pegged to IMF’s SDR)
Floating - Price freely fluctuates within a range of values as a function of supply, demand and incentives, similar to how exchange rates between currency pairs fluctuate (E.g. RAI, FLOAT)
COLLATERALIZATION
Type of collateral/reserves
Fiat - Stablecoins are 1:1 tokenized versions of US Dollars, Euros etc. deposited with intermediaries
Crypto - Stablecoins backed by on-chain crypto assets like ETH and BTC.
Commodity - Stablecoins backed by off-chain physical assets like Gold, Oil, Real Estate.
None - Stablecoins backed by a protocol’s native token (UST<>Luna, Silk<>Shade) or nothing at all (ESD, BAC)
Quantum of reserves
Excess - Reserves/Collateral greater than 100% of a stablecoin’s market cap. (Maker’s DAI, Liquity’s LUSD, Abracadabra’s MIM)
Full - Market cap of stablecoin is 100% backed by assets (USDC, UXD, Lemma’s USDL)
Partial - Backing is less than 100%. (FRAX, Sperax’s USDs).
None - Backed either by nothing (NuBits, AMPL, ESD) or endogenous capital (UST)
MECHANISM
Redemption of reserves - Stablecoins can be surrendered/burned in exchange for reserve assets (USDC, Fei, Float)
Seigniorage shares (2 token and 3 token models) - Stablecoins which have an associated equity and/or debt token that absorbs price volatility arising from excess/shortfall of stablecoin demand (BAC, ESD, NuBits, UST)
Collateralized debt positions (CDP) - Issuance of stablecoin loans against excess collateral (BitUSD, Dai, MIM)
Rebase - Coin supply changes algorithmically to achieve price control (AMPL)
In this section we will look at the various methods/tools available to protocols to influence price and maintain a stablecoin’s peg. Although they may appear different from one another, the underlying mechanics are the same.
The market price of an asset moves up when the intensity of demand exceeds intensity of supply at a given price point. The inverse is true for price to move down. Intensity here means that buyers are willing to pay more than the current price (when demand > supply at current price) or sellers are willing to receive less than the current market price (when supply > demand at current price) to get their respective orders filled. Phrased differently, price moves up when market buy orders eat up sell limit orders and price moves down when market sell orders eat up buy limit orders.
So how do we ensure price doesn’t get pushed above or below the peg? By maintaining buy walls and sell walls.
Buy walls are large quantities of buy orders at $1 (or whatever the desired peg is) which absorb sell pressure. Sell walls are large quantities of sell orders at $1 which absorb buy pressure. By ensuring adequate buy pressure at or slightly below the peg, and sell pressure at or above the peg, price does not change. With this in mind, let’s look at some of the most common ways to achieve liquidity around the peg.
SM 1 - Reserve Redemptions and Arbitrage
a) Some or all of the protocol reserves are directly redeemable by surrendering stablecoins. E.g. If 1 unit of LUSD is trading at <$1, a stablecoin holder/arbitrageur can exchange this for $1 worth of collateral, thus providing a floor price of $1 for 1 LUSD.
In general, the more reserves are available for redemption, the more reliable the peg.
b) Any user can mint 1 unit of a stablecoin by placing at least $1 of collateral in the protocol. When the stablecoin trades at >$1, users can mint new coins for $1 each and sell them in the market for >$1, pocketing the difference, until price falls to $1.
The above supply adjustments can be initiated either by users or designated actors.
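A stylised sketch of the two arbitrage directions described above; the function and prices are purely illustrative, since real protocols route these trades through mint/redeem contracts and market orders.

```python
def arbitrage_direction(market_price: float, peg: float = 1.00) -> str:
    """One step of the mint/redeem arbitrage loop (stylised).

    Below the peg: buy the stablecoin cheaply and redeem it for $1 of reserves.
    Above the peg: lock $1 of collateral, mint a new coin and sell it at the premium.
    Both trades earn the gap to the peg and push the market price back toward it."""
    if market_price < peg:
        return f"buy at {market_price:.2f}, redeem for {peg:.2f} of reserves (profit {peg - market_price:.2f})"
    if market_price > peg:
        return f"mint for {peg:.2f} of collateral, sell at {market_price:.2f} (profit {market_price - peg:.2f})"
    return "at peg: no arbitrage"
```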
SM 2 - Interest rates & Fees
a) SM 2.1 Deposit rates - A protocol can offer holders a savings rate to incentivize locking up or staking the stablecoin, reducing the circulating supply. Increasing rates is expected to make more users buy and lock up stablecoins, pushing the price up; decreasing rates or removing interest altogether is expected to release sell pressure and push the price down. (E.g. Dai’s DSR - Dai Savings Rate)
b) SM 2.2 Borrowing rates - In case of CDP stablecoins (explained in the Mechanism section), increasing or decreasing borrowing rates incentivizes borrowers to either close out existing loans (thereby reducing stablecoin supply) or create new ones (increasing stablecoin supply). (E.g. Dai’s Stability Fee)
c) SM 2.3 Redemption Fee - When stablecoins are redeemed for collateral, a variable fee can be charged to penalize any redemption that could harm the peg.
SM 3 - Open market operations
Much like Central Banks print money and acquire assets from the open market, the system expands by minting new stablecoins to buy other crypto assets. This increase in supply drives the price of stablecoins down. During a contraction phase where price is below the peg, these assets are sold to buyback and burn stablecoins, thus reducing the supply and pushing the price up. (E.g. Celo USD)
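A minimal sketch of this expand/contract logic, assuming for simplicity a one-to-one exchange between stablecoins and reserve assets; the class and numbers are illustrative, not any protocol’s implementation.

```python
class OpenMarketOperator:
    """Stylised open-market operations: mint-and-buy when above peg, sell-and-burn when below."""

    def __init__(self) -> None:
        self.stablecoin_supply = 1_000_000   # circulating stablecoins
        self.reserve_assets = 0              # assets bought during past expansions

    def rebalance(self, market_price: float, peg: float = 1.00, step: int = 10_000) -> None:
        if market_price > peg:
            # Expansion: mint new stablecoins and spend them on reserve assets,
            # increasing supply and pushing the price down toward the peg.
            self.stablecoin_supply += step
            self.reserve_assets += step
        elif market_price < peg and self.reserve_assets >= step:
            # Contraction: sell reserve assets, buy back stablecoins and burn them,
            # reducing supply and pushing the price up toward the peg.
            self.reserve_assets -= step
            self.stablecoin_supply -= step
```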
SM 4 - Shares, Bonds and Binary Options
a) SM 4.1 - Shares - Share tokens are volatility absorbing tokens in the seigniorage shares model. When stablecoin price is above the peg, new coins are minted and given to holders who burn a portion of their share tokens. When supply needs to decrease to push the price back up to peg, stablecoin holders who surrender/burn their stablecoins are compensated by freshly minted share tokens. (E.g. Terra’s $Luna)
b) SM 4.2 - Bonds & Binary Options - A variation of the original seigniorage shares model includes a bond token. When stablecoin supply needs to decrease, bond tokens with a face value of $1 are sold at a discount in exchange for stablecoins, thereby reducing supply and pushing the stablecoin’s price up towards the peg. Bond tokens represent a promise to compensate bond holders in the future with more stablecoins during an expansion phase. When price of the stablecoin exceeds $1, new units are issued to bond holders. (E.g. Basis Cash’s $BAB)
Some ‘bond’ based models impose an expiry date on bond tokens, implying that bonds expire valueless if an expansion phase fails to arise prior to expiry. These are essentially binary options. (E.g. ESD’s $ESDS)
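A stylised sketch of the two-token seigniorage mechanic described above; the interface is hypothetical, and the bond/option variant is only noted in a comment.

```python
class SeigniorageSystem:
    """Stylised two-token model: the share token absorbs the stablecoin's volatility."""

    def __init__(self, stable_supply: int, share_supply: int) -> None:
        self.stable_supply = stable_supply
        self.share_supply = share_supply

    def expand(self, new_coins: int, shares_burned: int) -> None:
        # Price above peg: mint new stablecoins for holders who burn share tokens.
        self.stable_supply += new_coins
        self.share_supply -= shares_burned

    def contract(self, coins_burned: int, shares_minted: int) -> None:
        # Price below peg: burn stablecoins and compensate with freshly minted shares.
        # Bond variants instead sell discounted IOUs redeemable in a future expansion.
        self.stable_supply -= coins_burned
        self.share_supply += shares_minted
```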
SM 5 - Target Prices (in Floating Peg models)
The protocol derives a periodically changing Target Price for the stablecoin through an algorithm taking into account various factors. Target prices are used for redemption of reserves. Differences between the Market Price and Target Price are arbitraged, causing a price convergence.
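The concrete algorithms differ by protocol and are generally more sophisticated than this, but as a hedged illustration, a simple proportional rule like the one below captures the idea of a periodically updated target that the market is arbitraged toward. The gain k and the rule itself are hypothetical, not any live protocol’s controller.

```python
def update_target_price(target: float, market: float, k: float = 0.1) -> float:
    """One periodic update of a floating target (redemption) price.

    Hypothetical proportional rule: when the market trades above the target, the
    target is revalued downward (and vice versa), so holding the coin at a premium
    becomes less attractive and redemption arbitrage pulls the market price back
    toward the target. Real protocols use more elaborate controllers."""
    deviation = market - target
    return target - k * deviation
```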
The previous section covered the types of stablecoin designs and the peg stability mechanisms available. We now apply this knowledge to evaluate some of the noteworthy attempts at building stablecoins.
Classification of Stablecoins Covered
h/t: Amani Moin, Emin Gün Sirer, and Kevin Sekniqi of Ava Labs for the framework/ nomenclature
USD Pegged / Crypto Collateral / Over Collateralized / Reserve Redemption
As one of the longest surviving stablecoins with a market cap in excess of $8B, DAI is widely considered DeFi’s native stablecoin. It is primarily a borrowing protocol that allows users to a) get leveraged exposure to crypto assets and b) get liquidity for their crypto assets while maintaining exposure and without triggering a taxable event.
DAI is a CDP-based stablecoin. A user deposits any pre-approved collateral into the protocol and can borrow newly minted DAI at a Collateralization Ratio of 101%-175% (higher CR for higher-volatility collateral and lower for USD stablecoins). DAI can then be used to buy more crypto or sold on the open market. The borrower may close their loan and retrieve their collateral by repaying the borrowed DAI along with interest (the stability fee).
If the collateral drops in price resulting in the borrowed amount breaching a specified CR % (Liquidation Ratio), the loans are liquidated and the borrower loses a part of his or her collateral to liquidators, plus a liquidation penalty.
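The collateralization arithmetic is simple enough to sketch directly; the 150% liquidation ratio below is illustrative, since actual ratios vary by collateral type.

```python
def collateralization_ratio(collateral_value_usd: float, debt_dai: float) -> float:
    """CR = value of locked collateral divided by outstanding DAI debt."""
    return collateral_value_usd / debt_dai

def is_liquidatable(collateral_value_usd: float, debt_dai: float,
                    liquidation_ratio: float = 1.50) -> bool:
    """A vault can be liquidated once its CR falls below the Liquidation Ratio."""
    return collateralization_ratio(collateral_value_usd, debt_dai) < liquidation_ratio

# Example: 1,000 DAI borrowed against $1,750 of ETH is safe (CR = 1.75);
# if the ETH falls to $1,400 the CR drops to 1.40 and the vault can be liquidated.
assert not is_liquidatable(1_750, 1_000)
assert is_liquidatable(1_400, 1_000)
```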
The liquidation system relies on the following to work
Keepers - Actors who facilitate liquidations by taking over the DAI-denominated debt and some collateral from defaulting borrowers. Additionally, they arbitrage DAI when it trades away from the peg.
Oracles - The system receives collateral prices from Oracle Feeds (whitelisted actors), which are then medianized to weed out extreme values. Medianized values pass through the Oracle Security Module which effects a 1-hour delay before prices are published. This allows sufficient time to introduce Emergency Oracles to prevent malicious attacks on main Oracles.
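A stylised sketch of that median-then-delay pattern; this is not Maker’s actual module interface, just the general shape of the mechanism.

```python
import statistics
import time
from typing import List, Optional, Tuple

class DelayedMedianOracle:
    """Whitelisted feeds submit prices; the median weeds out extreme values, and the
    medianized price only goes live after a fixed delay, leaving a window in which
    a manipulated value can be caught before it affects the system."""

    def __init__(self, delay_seconds: int = 3600) -> None:
        self.delay = delay_seconds
        self.queued: Optional[Tuple[float, float]] = None  # (price, time it may go live)
        self.current: Optional[float] = None

    def submit(self, feed_prices: List[float]) -> None:
        self.queued = (statistics.median(feed_prices), time.time() + self.delay)

    def poke(self) -> Optional[float]:
        """Promote the queued price once its delay has elapsed, then return the live price."""
        if self.queued and time.time() >= self.queued[1]:
            self.current, self.queued = self.queued[0], None
        return self.current
```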
Dai Savings Rate (covered in SM 2.1) - This reduces the supply of DAI only in the short term, so it works better in conjunction with other mechanisms.
Historically, demand for DAI has outweighed supply, causing DAI to go above peg. In such scenarios, reducing DSR (to increase supply) would only be partially useful since it cannot go below 0.
Stability Fees (SM 2.2) - Like the DSR, Stability Fees have short term/limited impact on DAI’s price. Additionally, the effectiveness also depends on other factors such as overall market sentiment and other stablecoin yield opportunities. For example, if higher yields are available elsewhere, users may still borrow at higher stability fees, negating the expected decrease in supply.
Borrower Actions - When DAI is below its peg, users who borrowed DAI (vault creators) at the peg may be incentivized to buy cheaper DAI to close the loan. However, this is unlikely to be a strong enough incentive if leverage is the primary objective of borrowing, since the upside potential from leverage is uncapped while gains from repaying DAI are capped (DAI trading below peg is likely when market sentiment is positive and speculators wish to move from stables to crypto). On the flipside, when DAI is above the peg, it may seem that users can borrow DAI to sell it above peg, expecting the price to revert to $1 and locking in a profit. However, there are two issues here: (a) this is not an instantaneous arbitrage loop and requires arbitrageurs to expose themselves to the collateral’s directional risk while they wait for DAI to return to peg, and (b) the arbitrageurs will need to rebuy the same number of DAI from the market to close their CDP, likely causing an increase in DAI’s price. Thus, reliance on vault creators provides temporary stabilization at best.
Emergency Shutdown - This is Maker’s last-resort mechanism which suspends core borrowing functions and gives both vault creators and DAI holders an exit to underlying collateral. The logic behind this mechanism is that expectation of an impending shutdown prevents DAI from trading too far away from the peg.
However, the effectiveness of this depends on how much confidence arbitragers have that a shutdown will be triggered. In the absence of clear, objective triggers for initiating a shutdown, it becomes difficult for arbitragers to assess the probability and consequently, the risk-reward potential and time value of money.
DAI Peg Stability Module (SM 1) - Until the Peg Stability Module (PSM) was introduced in December 2020, DAI was reliant on the mechanisms explained above to maintain stability. DAI’s stability suffered from 4 major issues:
No redemption of reserves - When DAI was below peg, unless an emergency shutdown was initiated, it was impossible for DAI holders to redeem 1 DAI for $1 of collateral from the system, even though DAI was fully backed, preventing arbitrageurs from pushing the price back up.
Liquidations broke DAI’s peg to the upside - Since keepers bidding in collateral liquidations require DAI, liquidations add upward pressure on DAI’s price.
Reduced supply from vault closures - During a market crash, collateral values collapse, prompting vault creators to close vaults to prevent liquidation. Thus, DAI’s supply is reduced when it is most needed for stability.
Arbitrage loop kicks in only when DAI >$1.5 - Without strong stability mechanisms, DAI could theoretically go up to $1.5. When DAI >$1.5, an arbitrager would be willing to place $1.5 of collateral in the vault (at 150% CR), mint and sell DAI for >$1.5 and pocket the difference, repeating the process till DAI is back to $1.5.
Dai’s PSM solved all the problems above. By allowing any user to swap USDC<>freshly minted DAI on a 1:1 basis without opening a vault, it enabled an instantaneous closed arbitrage loop. Since the launch, DAI has maintained its peg well (refer chart below). However, this has come at the cost of centralization risks. By allowing USDC, a stablecoin issued by a centralized financial institution, into the system Maker has sacrificed decentralization.
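A minimal sketch of the 1:1 swap logic a PSM provides, ignoring fees and using a hypothetical interface rather than Maker’s actual contract.

```python
class PegStabilityModule:
    """Stylised PSM: 1:1 swaps between an external stablecoin and freshly minted DAI."""

    def __init__(self, usdc_reserve: int = 0) -> None:
        self.usdc_reserve = usdc_reserve  # USDC held by the module
        self.dai_issued = 0               # DAI minted against that USDC

    def swap_usdc_for_dai(self, usdc_in: int) -> int:
        # DAI > $1: arbitrageurs deposit USDC, mint DAI 1:1 and sell it above peg.
        self.usdc_reserve += usdc_in
        self.dai_issued += usdc_in
        return usdc_in

    def swap_dai_for_usdc(self, dai_in: int) -> int:
        # DAI < $1: arbitrageurs buy cheap DAI and redeem it 1:1 for USDC.
        if dai_in > self.usdc_reserve:
            raise ValueError("not enough USDC in the module")
        self.usdc_reserve -= dai_in
        self.dai_issued -= dai_in
        return dai_in
```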
Oracle - Covered under ‘Liquidation’
Maker has differentiated itself from other stablecoin protocols by building their oracles in-house.
A sharp drop in prices of collateral assets and inability to conduct auctions.
Coordinated oracle attacks - i) Whitelisting of malicious feed providers by colluding MKR holders ii) Manipulated price feeds by colluding feed providers
Malicious governance proposals
Scalability - Theoretical limit of DAI supply is ~67% of aggregate market cap of all collateral assets due to Collateralization Ratio of ~150%.
Primary demand for Dai comes from leverage - Not competitive compared to DEXs which offer 2x-125x leverage, making it less likely that crypto lending demand will drive significant supply expansion
Stability - 3 - Thanks to DAI’s PSM, its ability to maintain peg has vastly improved. Making full-redeemability available (like Liquity) at all times would further improve resilience.
Trustlessness - 2 - DAI has significant (>50%) exposure to USDC, a centralized stablecoin. This adds systemic risk to Maker.
Scalability - 2 - While minimum CR is ~150%, actual CR may be 200-250% to ensure all loans are safe from market crashes. This is inefficient use of capital.
Simplicity - 3 - Dai is simple to use for the average DeFi user but management of vaults requires constant monitoring.
Disclaimer: Ratings are an expression of the potential of each design choice, rather than a comment on the quality of execution of those design choices or the current viability. The guiding principle here is ‘Cannot fail > Should not fail’.
To illustrate, UXD’s/Lemma’s models require deep futures markets on DEXs which may not exist today to support a widely adopted stablecoin. A Scalability rating of 3 (on 4) implies that high scalability can be achieved in the future without compromising on the design choice. In contrast, DAI is comparatively more scalable today, however, in the long run it cannot scale as efficiently as 100% collateralized stablecoins without changing its design.
USD Pegged / Crypto Collateral / Over Collateralized / Reserve Redemption
Liquity is a CDP-based borrowing protocol that shares many similarities with Maker (DAI). Liquity differentiates itself from Maker through the following:
one-time borrowing fee of 0.5-5% and 0% interest rate
lower collateralization ratio (CR) requirement of 110% on individual vaults (troves) and protocol level CR of 150%
direct redeemability of 1 LUSD for $1 of collateral at any time
a more efficient liquidation mechanism
immutability of contracts and governance-free operation
The liquidation system relies on the following:
Stability Pool (SP) - In the Maker system, liquidators bid for a defaulting borrower’s collateral in exchange for DAI. Liquity achieves the same thing through an SP, a readily available pool of LUSD funded by Stability Providers (LUSD holders) who stand to gain a share of the liquidated collateral. When the CR of a trove falls below 110% (the liquidation ratio), the outstanding debt of the trove is repaid using LUSD from the SP. Stability Providers make a profit equal to the difference between the loan amount and the value of collateral received (0-9.9%); see the sketch after this list.
Oracles - The system uses Chainlink’s ETH-USD oracle to trigger liquidations. Under specific circumstances, it uses Tellor’s ETH-USD oracle instead.
Redistribution - When the SP is empty, the outstanding debt and collateral of the liquidated trove are transferred and allocated to other trove owners in proportion to their respective collateral balances. Although this results in the CR of the recipient troves decreasing, the collateral gained is in most cases higher than the debt inherited, making it profitable.
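As promised above, here is a stylised sketch of the Stability Pool mechanic. The interface is hypothetical and simplified; the real protocol handles further details (partially depleted pools, liquidator compensation, etc.) that are omitted here.

```python
class StabilityPool:
    """Pooled LUSD repays a liquidated trove's debt; the trove's collateral is
    shared among depositors pro rata."""

    def __init__(self) -> None:
        self.deposits = {}  # provider -> LUSD deposited

    def total(self) -> float:
        return sum(self.deposits.values())

    def liquidate(self, trove_debt_lusd: float, trove_collateral_usd: float) -> dict:
        """Burn pooled LUSD equal to the trove's debt and distribute its collateral.
        Providers profit when the collateral is worth more than the debt repaid."""
        pool = self.total()
        if pool < trove_debt_lusd:
            raise ValueError("pool too small; redistribution to other troves would apply")
        gains = {}
        for provider, deposit in self.deposits.items():
            share = deposit / pool
            self.deposits[provider] = deposit - share * trove_debt_lusd
            gains[provider] = share * trove_collateral_usd
        return gains
```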
Reserve Redemptions and Arbitrage (SM1) - When LUSD trades at <$1, any arbitrager can buy 1 LUSD from the market and redeem it for $1 of ETH. This loop repeats until the peg is regained. This mechanism provides a hard floor of $1 to LUSD.
When LUSD trades between $1 and $1.1, arbitrageurs must deposit $1.1 of ETH to mint and sell LUSD, exposing themselves to the collateral’s directional risk while they wait for LUSD to return to the peg.
However, when LUSD is >$1.1, the arbitrageur can deposit $1.1 of ETH and instantly mint and sell LUSD for >$1.1, repeating the process till LUSD drops to $1.1. Thus, LUSD has a hard ceiling price of $1.1 and not $1.
Borrower Actions - The same logic explained in Dai’s third stability mechanism (Borrower Actions) applies to LUSD too.
Issuance Fees* (SM2.2) - A one-time borrowing fee that is similar to and replaces Dai’s Stability Fee. Like Stability Fees, Issuance Fees have short term/limited impact on LUSD’s price. Additionally, the effectiveness also depends on other factors such as overall market sentiment and other stablecoin yield opportunities. For example, if higher yields are available elsewhere, users may still borrow despite higher Issuance fees, negating the expected decrease in supply.
Redemption Fees* (SM 2.3) - Liquity has a variable fee that increases when the rate of redemption increases. Fees may prove useful when LUSD trades above $1 as they disincentivize unnecessary redemption. However, when LUSD is below $1, redemption fees need to be carefully managed to avoid a situation where high redemption fees make arbitrage unprofitable.
*Issuance Fee % and Redemption Fee % are based on the same function (base rate + 0.5%). The base rate is dynamic and based on the rate at which minting/redemption occurs.
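As a small sketch of that footnote, assuming the 5% cap quoted earlier for borrowing fees; the dynamics of the base rate itself are omitted.

```python
def issuance_or_redemption_fee(base_rate: float, floor: float = 0.005, cap: float = 0.05) -> float:
    """Fee rate as described above: base rate + 0.5%, capped at 5%.
    How the base rate rises with minting/redemption volume and decays over time is omitted."""
    return min(base_rate + floor, cap)
```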
Oracles - Covered under ‘Liquidation’
In Maker, the liquidation engine acts as a single barrier when the price of collateral starts to crash, taking on the obligation of ensuring healthy liquidations without expected losses. Liquity uses game theory to shift partial responsibility to trove owners.
LUSD redemptions are settled against troves with the lowest CR. This poses a risk of liquidation to any trove that is close to 110% CR.
These mechanics force trove owners to ensure that they are not the least collateralized participants and that the protocol CR stays at or above 150%.
During a rapid crash in collateral value, trove owners will look to buy LUSD to repay their loans and avoid liquidation. This can drive the price of LUSD above $1 (as with Maker’s Black Thursday crisis in 2020). Since LUSD is endogenously generated within the system, additional stablecoins cannot be procured easily to repay loans.
If the CR drops below 100% in a flash crash and liquidation is facilitated through the Stability Pool, collateral gained by Stability Providers is less than LUSD spent, resulting in a net loss.
In a flash crash, demand for LUSD could push the price up beyond $1. As the price of LUSD approaches $1.1, it becomes less profitable for Stability Providers since the maximum potential gain is 10%, causing LUSD to be withdrawn.
Although the protocol uses 2 of the most trustworthy oracle providers - Chainlink/Tellor - they have been known to fail.
Liquity’s governance-free design makes it impossible to rectify contract errors. This means thinking about edge case fragility scenarios is critical since there is no ‘failsafe’.
Due to the complexity of trove management, growth of LUSD supply will come from leverage demand rather than stablecoin demand.
Stability - 3 - Unlike Maker, Liquity doesn’t have a PSM to ensure 2 way 1:1 swaps. Instead it has a hard peg on the downside (through arbitrage incentives) but a soft peg on the upside - a range of $1 to $1.1 (explained above)
Trustlessness - 4 - ETH is the only accepted collateral and there are no custodians involved.
Scalability - 2 - Due to a better liquidation system, it is more capital efficient than Maker but less than fully-collateralized stablecoins.
Simplicity - 3 - Simple to use for the average user but management of vaults requires constant monitoring.
USD Pegged / Fiat Collateral / Fully Collateralized / Reserve Redemption
USDT and USDC are the 2 largest stablecoins, with market caps of $83B+ and $48B+ respectively. They are substantially similar, with a few noteworthy differences.
While USDT is promoted by Tether, an unregulated entity with opaque shareholding and operating structures, USDC is backed by Coinbase and Circle, two regulated US financial institutions.
Redemptions and Arbitrage (SM1) - Both USDT and USDC rely on institutions to arbitrage peg breaks.
Not censorship-resistant (risk of regulatory clampdowns) - Since fiat collateral is stored in commercial bank accounts, risk of attachment of assets is non-trivial
Redemption of stablecoins for underlying assets is available only to institutions (exchanges, hedge funds etc.). When the peg is broken to the downside, the only mode of exit for retail users is selling to other retail users or market makers at a loss.
Not permissionless redemption - Institutions rely on contractual agreements for enforcing redeemability of stablecoins. Even so, Tether and Circle have discretionary power to delay redemptions.
Tether holds only 30-40% of its reserves in cash and cash equivalent assets. The rest are held in the form of debt assets (loans, commercial papers, money market funds, bonds etc.). This presents liquidity and credit risks. In comparison, USDC is fully backed by cash and short-term US Treasuries.
Tether’s reserves are held in undisclosed banks, possibly exposing it to concentration risks. Circle on the other hand maintains its reserves across multiple custodians ensuring there is no single point of failure that can affect the entire system.
Trades - Observable from DEXs and CEXs.
Stability - 4 - Both have managed to maintain their pegs over time better than other crypto collateralized/ non-collateralized stablecoins.
Trustlessness - USDT 1, USDC 2 - While both require trust assumptions, USDC fares marginally better due to limitations on single-party exposure.
Scalability - 3 - Only $1 of collateral required per stablecoin.
Simplicity - 4 - Straightforward design, easy to acquire due to level of retail and institutional adoption.
USD Pegged / Crypto Collateral / Fully Collateralized (subject to volatility) / Reserve Redemption
Fei Protocol is the pioneer of Protocol Controlled Value (PCV). Users sell crypto assets to the protocol in exchange for Fei. The protocol’s treasury consists of decentralized crypto assets such as ETH and LUSD. While currently over-collateralized, Fei can become under-collateralized due to market volatility. In such cases, any PCV shortfall in fulfilling redemptions is compensated with newly minted Tribe tokens (similar to how FRAX redemptions are partly paid in USDC and FXS).
Peg Stability Module (SM1) - Similar to DAI, FEI’s PSM allows any user to swap DAI<>FEI around a tight price band. Minting Fei with DAI is incentivized through zero fees, while redemptions attract a 10bps fee. The PSM, combined with other Fei liquidity pools, enables the classic arbitrage loop.
In v1 of the protocol, Fei relied on 2 mechanisms for peg stability:
Direct incentives - FEI swaps/redemptions took place in a Uniswap LP that penalized/rewarded users for selling/buying Fei below the peg. As the price drifted further away from the peg, the penalty increased. While sound in theory, these incentives failed - panic-stricken users sought to salvage what value they could rather than wait for the peg to be regained. This caused the peg to drop as low as $0.5.
PCV Reweights - When the FEI/ETH liquidity pool is unbalanced, the protocol withdraws all its liquidity from the pool, uses some of the ETH to buy FEI to rebalance the pool, then resupplies PCV to the pool in the balanced ratio and burns the excess FEI that is left.
Both the mechanisms above were phased out and replaced by the PSM.
The protocol uses the following oracles
Chainlink DAI/FEI - This is used in the PSM. The PSM only relies on this oracle to ensure that DAI-FEI is trading within an acceptable range (0.975-1.025). If it is, the mint/swap occurs on a 1:1 basis. This way, Fei piggybacks on DAI’s peg.
Chainlink PCV asset/USD - This is only used to calculate the USD values of PCV assets to determine the system’s collateralization ratio.
Although the protocol has a collateralization ratio of 230%+, ~75% of PCV consists of volatile assets (mostly ETH). Sharp market downturns can lead to redemption of FEI for ETH and DAI. In the last week of Jan ’22, ETH’s price dropped by ~28%; this led to a FEI supply reduction of ~$293M (~40%).
Fei cannot achieve both reserve stability and scalability without the FEI/DAI PSM. Without the PSM, Fei would need to be over-collateralized at all times.
Stability - 4 - Since the launch of its PSM, Fei has maintained its peg with a variance of 30-60 bps. It may not have the tightest peg, but it is stable enough for general use.
Trustlessness - 2 - Even though the reserves are fully comprised of crypto assets, the governance mechanism concentrates power in the hands of TribeDAO and the Fei team. (Explained under Risks and Limitations)
Scalability - 3 - Fei is scalable because of the Fei/Dai PSM. In the absence of a PSM, Fei supply cannot be scaled without compromising reserve stability. Increasing supply during a market uptrend puts Fei’s reserves at risk in subsequent downtrends.
Simplicity - 4 - Minting and redeeming Fei is easy.
USD Pegged / Stablecoin Collateral / Fully Collateralized / Meta Stablecoin (Reserve Redemption)
GYD (yet to be launched) is a meta stablecoin i.e, a stablecoin that’s 100% backed by other stablecoins. GYD can be minted by depositing any of the accepted stablecoins into the protocol. The idea is to build a stablecoin reserve that shields itself from all DeFi risks - price, censorship, regulatory, counterparty, oracle and governance. This is achieved through segregation of reserve assets into isolated vaults such that failure of one doesn’t impact the others.
Reserve Redemptions and Arbitrage (SM1) - It relies on closed loop arbitrage when GYD >$1 or when GYD <$1 and reserve ratio ≥100%.
However, when reserves are <100%, the system uses a bonding curve for redemptions that provides decreasing redemption quotes for GYD. The redemption quotes ensure that all circulating GYD can be redeemed sustainably. This serves to disincentivize selling under the peg. While the exact mechanics are unknown at this point, they seem similar to the Direct Incentives employed by Fei initially.
Integration of CDPs - As a secondary mechanism, the protocol will give users the option to borrow against collateral, like Maker. It hopes that during a depegging event (<$1), borrowers will help defend the peg by repaying their debts cheaply.
The protocol intends to use Chainlink oracles but adds its own consistency checks
Triangulated Price Feeds - Unlike most protocols that directly use Chainlink/Tellor oracles, Gyro aims to construct the reference price of reserve assets by collecting on-chain information from highly liquid AMMs and DEXs, applying a set of consistency checks and cross-referencing them with information obtained through Chainlink oracles. This proposed design requires fewer trust assumptions compared to other protocols.
Balancer LP Tokens - The protocol uses its own method to calculate the individual values of assets in the LPs and aggregates them, instead of summing the units of each asset and multiplying by the target price ($1).
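A minimal sketch of the difference between the two valuation approaches, assuming a hypothetical two-asset LP and made-up reference prices:

```python
# Hypothetical two-asset stablecoin LP, with DAI drifting off its peg.
balances = {"USDC": 1_000_000, "DAI": 1_000_000}
reference_prices = {"USDC": 1.00, "DAI": 0.97}

naive_value = sum(balances.values()) * 1.0  # units x $1 target price
per_asset_value = sum(balances[t] * reference_prices[t] for t in balances)

print(naive_value, per_asset_value)  # 2,000,000 vs 1,970,000
```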
Governance
Although the protocol follows a PCV model, similar to FEI, it does not suffer from the same governance risks as Fei (covered under Risks of Fei). The protocol also pioneered the concept of Optimistic Approval, which Fei forked. Here, Gyro Dollar holders have the right to veto any decisions taken under the Optimistic Approval framework - thus giving them protection from malicious actions. In the Fei protocol, Fei holders have no such right; instead, they are at the mercy of TribeDAO members.
Conditional Cashflows - Gyro seeks to align the interests of governors and the protocol. Cashflows generated from operations flow as rewards to governance token holders at a future date, provided the protocol remains healthy. This is similar to underperformance/misconduct clawbacks.
The protocol uses its reserves to provide liquidity between GYD and other stablecoins. Liquidity pools are created in a way that assets with the same underlying risk factors are grouped together, so that contagion is contained and uncorrelated assets are not drained from reserves if one of them fails. For example, a liquidity pool may contain USDC and USDT along with GYD. If USDT fails for some reason, users may deposit USDT and drain USDC and GYD from the pool. In such an event, the protocol’s loss is restricted by insulating the other reserve assets (like Dai, Liquity, etc.).
Algorithmic Pricing - The protocol has 2 AMM modules
Primary AMM - This AMM uses a bonding curve to set the mint/redemption rates taking into account the velocity of supply changes.
Secondary AMM - These are similar to regular AMMs but allow configurability of customized trading curves (i.e., the curve via which price impact manifests). The shape of the curve is determined by configuring the amount of liquidity available at each price point. At launch, the parameters of the AMMs (price range and liquidity distribution) will be chosen upon creation of the pool and will remain static. For a future version, the team is considering a dynamic version where the curve shape adjusts to market conditions (mainly the PAMM’s), but it requires significant work to do this safely.
Gyro’s oracle design is theoretically more robust than any other protocol’s. This significantly reduces trust assumptions and failures due to errors.
Unlike other stablecoins which stand clearly on either side of the decentralization spectrum, Gyro doesn’t discriminate between centralized and decentralized assets. Rather, its model is based on the belief that no risk can be eliminated, only minimized through diversification.
Due to its composite design, GYD exposes itself to risks with relatively higher probability but lower magnitude of impact compared to other fully-backed stablecoins, which might protect themselves from specific risks but remain exposed to others. Gyro’s design choice follows the portfolio theory model by exposing itself to the risks of each constituent stablecoin, yet minimizing the potential impact of individual failures.
Stability - 4 - The protocol is designed to be 100% collateralized by other stablecoins and uses arbitrage incentives.
Trustlessness - 3 - The protocol is intended to operate trustlessly with innovative governance mechanisms, however it exposes itself to some centralized stablecoins.
Scalability - 3 - $1 of collateral per stablecoin.
Simplicity - 4 - Gyro allows users to mint GYD by depositing any of the accepted stablecoins.
*Based on information publicly available.
USD Pegged / Delta-neutral Crypto Collateral / Fully Collateralized / Reserve Redemption
UXD is a yield-generating stablecoin on Solana testnet that claims to solve the Stablecoin Trilemma by being stable, decentralized and capital efficient. The mechanism is simple and elegant. The protocol issues 1 UXD for every $1 of crypto collateral deposited into the reserves. Unlike other crypto collateralized stablecoins like Dai and Fei, which require over-collateralization to shield themselves from volatility in reserves, UXD instantly hedges every dollar’s worth of crypto deposited with a delta-neutral position (e.g., 1 SOL (long) deposited by a user is paired with a 1 SOL perpetual short on a decentralized exchange). Any adverse price movement in the collateral is offset by an equivalent gain in the hedged position, and vice versa.
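To see why the hedge keeps the dollar value of the backing constant, here is a minimal sketch assuming a linear perp payoff and hypothetical prices; the function name and numbers are illustrative only.

```python
def reserve_value(sol_deposited: float, entry_price: float, current_price: float) -> float:
    """USD value of a reserve pairing spot SOL with an equal-size perp short."""
    spot_value = sol_deposited * current_price
    short_pnl = sol_deposited * (entry_price - current_price)  # short gains as price falls
    return spot_value + short_pnl

# 1 SOL deposited at $100: the backing stays worth $100 wherever the price goes.
for price in (50.0, 100.0, 200.0):
    print(price, reserve_value(1.0, 100.0, price))  # always 100.0
```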
Lemma Finance employs a similar design, but on Arbitrum mainnet.
Reserve Redemptions and Arbitrage (SM1) - Both rely on closed loop arbitrage.
Yield Generation - Since both protocols hold short positions on their reserve assets, they earn or pay funding fees. As funding fees in crypto markets have been predominantly positive (on a long-term basis), the protocols generate a ‘yield’ on their reserves which is passed on to users. However, their yield distribution mechanisms differ:
In UXD, all UXD holders’ balances are increased automatically, giving everyone an equal share of yield. In Lemma, however, only users who stake USDL benefit from the yield. This gives USDL stakers access to leveraged yield: if half the users stake and the others don’t, stakers are entitled to the yield generated on the total reserves (attributable to both stakers and non-stakers). Losses, if any, are borne entirely by the USDL stakers. But if there are no USDL stakers, the funding payments uniformly eat into all users’ balances.
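A rough sketch of that distribution difference, with all figures hypothetical and the simplifying assumption that the stablecoin supply equals the hedged reserves:

```python
total_reserves = 10_000_000    # USD of hedged reserves, assumed equal to stablecoin supply
funding_rate = 0.0001          # funding earned on the reserves this period (hypothetical)
yield_earned = total_reserves * funding_rate  # $1,000

# UXD: every holder's balance grows pro rata.
uxd_yield_per_dollar = yield_earned / total_reserves  # 0.01% to everyone

# Lemma: only stakers share the yield earned on the whole reserve.
staked_fraction = 0.5
usdl_yield_per_staked_dollar = yield_earned / (total_reserves * staked_fraction)  # 0.02%

print(uxd_yield_per_dollar, usdl_yield_per_staked_dollar)
```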
Insurance Fund - UXD has an insurance fund (~$50M) that was initially seeded by capital raised by the team. This fund is used to smoothen out variations in funding yield over time and backstop any losses occurring due to exploits/under-collateralization. If the insurance fund is depleted, the protocol auctions off UXP tokens to refill the fund.
In contrast, Lemma has an insurance fund which takes a 30% share of the yields and is backstopped by the LEMMA token. The fund is not used to absorb negative funding rates. Instead, it is used to cover losses from black swan events such as collateral losses from forced settlement of positions etc.
UXD sources price information from Mango DEX, which in turn uses Pyth and Switchboard oracles.
Lemma uses a Chainlink oracle.
Exposure to negative funding for a prolonged period - Although historical data may show otherwise, it is possible that a prolonged period of bearish market sentiment causes funding rates to turn and stay negative. In these times, the protocols would lose money. To avoid this situation, UXD protocol diverts a portion of the yield generated during periods of positive funding to the insurance fund so as to provide a sustainable yield to UXD holders when funding rates are negative.
Erosion of insurance fund - If the insurance fund is drained out and negative funding eats into the reserves, the protocol will become under-collateralized.
In contrast, USDL stakers receive the full yield when funding is positive and bear the losses when funding is negative. Further, if some portion of the stakers unstake their USDL, the leveraged exposure of the remaining stakers increases.
UXD’s automatic yield distribution mechanism can affect composability with other DeFi apps and liquidity pools.
The biggest risk to UXD’s/USDL’s ability to scale is shallow derivative DEX markets. Illiquidity may affect the protocols’ ability to enter/exit positions or may result in forced settlement leaving collateral unhedged. Further, if demand for UXD/USDL grows, the protocols may become one of the largest holders of ‘short positions’ on DEXs, causing funding fees to turn negative. A bet on UXD or USDL is thus a bet on Solana/Arbitrum’s DeFi ecosystem expanding multi-fold.
UXD’s reserve fund is at times deployed on centralized exchanges by transferring funds to the founder’s personal CEX account. This is done when funding rates are negative on DEXs while positive on CEXs to arbitrage the spread.
Apart from the protocols’ own smart contract risk, reliance on integrations with external DEXs exposes them to potential bugs in the DEXs’ code. Any exploits/code failures can lead to loss of protocol assets. (Such losses are borne by their insurance funds.)
Although both reserves comprise uncensorable assets, exposure to any DEX that settles all P&L positions with USDC presents a short-term, transitory risk. However, there are rebalancing mechanisms to minimize exposure.
Stability - 4 - The protocols can not only achieve price stability, but offer inflation-beating yields to preserve purchasing power.
Trustlessness - 4 - The reserves offer the trustlessness of crypto assets without the volatility.
Scalability - 3 - They can theoretically scale better than other crypto collateralized stables due to a wider choice of collateral which would ordinarily be too risky as reserve assets.
Simplicity - 4 - Straightforward minting and guaranteed redemption.
CPI Pegged / Delta-neutral Crypto Collateral / Fully Collateralized / Reserve Redemption
Volt is designed to be an inflation-resistant stablecoin whose price starts at $1 and tracks the CPI-U index movement every month. The CPI adjusted value of VOLT represents the target rate, or the rate at which VOLT can be minted or redeemed by the user.
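A minimal sketch of how a CPI-tracking target rate could evolve, using made-up month-over-month CPI changes; the actual oracle cadence and interpolation are the protocol’s own:

```python
# Start at $1 and compound made-up month-over-month CPI-U changes.
monthly_cpi_changes = [0.005, 0.006, 0.004]  # 0.5%, 0.6%, 0.4% (hypothetical)

target_price = 1.0
for change in monthly_cpi_changes:
    target_price *= 1 + change
    print(round(target_price, 4))  # the rate at which VOLT mints/redeems that month
```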
VOLT has two issuance mechanisms - (1) by deposit of FEI in the VOLT/FEI PSM and (2) through opening a CDP on Fuse Pool 8. VOLT is issued at the price displayed by the VOLT Price Oracle, rather than at VOLT’s market price, to prevent the risk of CDP liquidation.
Volt’s reserves are always over-collateralized and are expected to increase over time due to accrual of fees and yields in excess of inflation. Reserves can be segregated into core assets (stablecoins), used to yield farm and fully back VOLT, and non-core assets comprising stablecoins, ETH, and other high-risk tokens which have the potential to increase the protocol’s buffer/equity.
Peg Stability Module (SM1) - Similar to FEI and DAI PSMs.
Liquid Governance - VOLT introduces the concept of Liquid Governance, a mechanism through which Protocol Controlled Value (PCV) is redirected/allocated to yield generation avenues in a market-driven manner. Holders of the VCON (governance) token can pledge their tokens in a Fuse Pool and borrow the PCV assets (in proportion to their VCON holding) for deployment in whitelisted avenues. These token holders borrow at the rate of inflation + a 2% reserve fee. Any excess return generated flows to the borrower. Any loss of capital is reimbursed to the PCV.
Oracle - The VOLT Price Oracle is fed monthly CPI data by the Chainlink CPI oracle. This price is used to facilitate minting and redemption.
Since the price of VOLT automatically moves in line with CPI, it creates pressure on the system to achieve sustainable yield even when market conditions don’t support it, thus increasing the risk at which yield is generated.
Supply of VOLT is constrained by the availability of good yield farming opportunities, so it is less responsive to VOLT demand.
VOLT has partnered with Maple Finance to lend its assets to whitelisted borrowers on their platform. These are USDC loans secured through contractual agreements with the borrowers. Relying on off-chain contractual protections introduces trust assumptions in a ‘decentralized stablecoin’ ecosystem.
Due to its ever-increasing price, VOLT is not a borrowable asset. It serves as a store of value, but it cannot function as a standard of deferred payment, thus limiting its demand potential and scalability.
Volt’s PCV model puts Volt holders at the mercy of its governance token holders (similar to Fei).
Stability - 3 - The peg’s resilience will significantly depend on the protocol’s ability to generate yield and track the CPI movements. In the absence of protocol equity to cover the shortfall, VOLT will trade at a discount to the oracle price since reserves will be inadequate to fully back each VOLT.
Trustlessness - 2 - The only available yield generating avenue is Maple Finance. Assets are lent as under-collateralized loans, on the basis of contractual rights. Further, Volt holders do not have guaranteed access to the underlying assets of the protocol.
Scalability - 3 - Lack of composability + Limited use case as a stablecoin ‘wrapper’
Simplicity - 4 - VOLT is easy to understand with a clear value proposition.
Pegged to IMF’s SDR initially, then floating / Basket of Fiat Collateral / Full to Partial Collateralization / Fractional Reserve
“Reduction of the reserve ratio reflects increased confidence in the SGR token. Indeed, our model only decreases the reserve ratio when more SGR tokens have been bought, indicating increased demand and trust. When the reserve ratio is reduced, SGR value no longer stems entirely from the backing reserve. SGR gains its own value as an independent means of exchange and store of value, and therefore our model allows SGR market cap to exceed the amount of money in Sögur’s reserves.”
For comparison, look at this -
“Although there's no predetermined timeframes for how quickly the amount of collateralization changes, we believe that as FRAX adoption increases, users will be more comfortable with a higher percentage of FRAX supply being stabilized algorithmically rather than with collateral.
The protocol only adjusts the collateral ratio as a result of demand for more FRAX and changes in FRAX price”
Sogur started off as a stablecoin pegged to and backed by a basket of assets, mirroring the IMF’s SDR (a basket of 5 currencies). The plan was to start with a peg to instill trust in the system and reduce the collateralization over time as a function of increasing demand, indicating trust and growing intrinsic value independent of reserves. When demand grew, the pricing algorithm would reduce the reserve ratio (by increasing the SGR price for the same amount of reserves) and vice versa. The system generates bid and ask prices at which it facilitates minting and redemption of SGR for reserves.
To clarify, under-collateralization did not mean that a portion of the reserves were siphoned off by the team or other parties. Rather, since the pricing module consistently increases the minting price of SGR when demand is high, previously minted SGR is valued at the new price (or marked to market), thus resulting in under-collateralization. A more relatable example is Ohm, whose bonding price increases with demand.
(Side note: Ohm = Sogur - floor price redeemability + rebasing rewards (funded by the difference between the bonding price and the 1 DAI floor).)
After launch, SGR had no pegs but traded on the secondary markets in a range determined by the bid and ask prices from the pricing module.
Stability - 2 - As collateralization reduces when demand increases, the protocol is more prone to bank runs before it can achieve widespread adoption.
Trustlessness - 1 - Treasury assets are real world currencies held in bank accounts.
Scalability - 4 - Supply is not constrained by collateral
Simplicity - 2 - Floating pegs are more nuanced and less intuitive compared to fiat-pegged stablecoins.
Floating Peg / Crypto Collateral / Over Collateralized / Reserve Redemption
RAI is a stablecoin backed by ETH. As in Maker, users deposit ETH (in SAFEs, like Maker vaults) and borrow RAI. Its primary use case is to act as a volatility-dampened version of ETH to be used as collateral in lending protocols or as a reserve asset in DAO treasuries. RAI revalues/devalues itself based on its market price movements, caused by interactions between SAFE owners and RAI holders. It has a dynamic target price known as the Redemption Price.
Redemption Price & Redemption Rates (SM 5)
At inception, the protocol starts with an arbitrary redemption price. Once the Market Price deviates, the protocol sets a per second Redemption Rate (or the rate of change of redemption price) that increases/reduces the Redemption Price. The price movements are expected to occur as follows:
If MP is more than RP >> Redemption Rate turns negative to reduce RP >> RAI supply goes up because (a) borrowing power increases (debt is taken in RAI but valued in USD terms) and (b) RAI holders sell in anticipation of MP going down >> MP goes down.
If MP is less than RP >> Redemption Rate turns positive >> RAI demand goes up because (a) debt becomes more expensive as MP increases so SAFE owners repay loans and (b) speculators expect RAI’s MP to increase >> MP goes up.
The redemption rate works similarly to DAI’s stability fee. But this design is more effective because, when the price needs to go down, stability fees cannot become negative, whereas redemption rates can keep decreasing till the market price reacts.
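The text describes this feedback only qualitatively; a heavily simplified proportional controller conveys the direction of the adjustment. RAI’s production rate setter is more sophisticated, and the gain used here is an arbitrary assumption.

```python
def redemption_rate(market_price: float, redemption_price: float, kp: float = 0.1) -> float:
    """Per-period rate of change applied to the redemption price (simplified P-controller)."""
    error = redemption_price - market_price
    return kp * error  # MP > RP -> negative rate (RP drifts down); MP < RP -> positive

rp = 3.00
for mp in (3.10, 3.10, 3.05, 3.00):
    rp *= 1 + redemption_rate(mp, rp)
    print(round(rp, 4))  # RP walks down while the market trades above it
```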
The protocol uses a liquidation mechanism that is practically the same as Maker’s but with a few additions
Liquidation Protection - SAFE users have the option of creating a safety pool (or ‘Saviour’) containing Uniswap LP tokens or ETH. When the SAFE falls below the liquidation ratio, the liquidator is forced to first add collateral from the user’s Saviour to the SAFE. This is an innovative feature, unique to RAI, that allows SAFE users to avoid liquidation without sacrificing capital efficiency.
Dampened Oracle Prices - Explained below
The system receives collateral prices from Oracle Feeds (Governance whitelisted actors), which are then medianized to weed out extreme values.
Medianized values pass through the Oracle Security Module (which effects a 1-hour delay before prices are published) and into a Dampened Security Module that limits the value change between two consecutive price feed updates.
The system also has alternatives to the Governance whitelisted Oracle Feeds - Chainlink and Uniswap TWAP.
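A minimal sketch of the two defensive steps, median filtering and limiting the move between consecutive updates. The 10% cap and the feed values are assumptions for illustration; the real bound and delay are protocol parameters.

```python
from statistics import median

MAX_STEP = 0.10  # illustrative cap on the change between two consecutive updates

def next_oracle_price(feed_reports: list[float], previous_price: float) -> float:
    """Medianize whitelisted feeds, then dampen the move against the last published price."""
    candidate = median(feed_reports)
    upper = previous_price * (1 + MAX_STEP)
    lower = previous_price * (1 - MAX_STEP)
    return min(max(candidate, lower), upper)

# One reporter goes rogue and two others jump: the median ignores the outlier
# and the dampener caps the move at +10% of the last published price.
print(round(next_oracle_price([2_900, 2_950, 9_999], previous_price=2_500), 2))  # 2750.0
```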
Governance minimized - The team intends to hand over control of all key components in a phased manner. The end goal is to automate most system parameters such that changes can’t be made without deploying new contracts.
Liquidation Protection - Covered under Liquidation.
The protocol is complex for the average Joe. A stablecoin needs to be intuitive and not over-engineered to have mass appeal and adoption. Complexity limits RAI’s use to traders/speculators and as a reserve asset for DAOs (ETH exposure without excess volatility).
Borrowing RAI is like borrowing in a foreign currency - principal is exposed to exchange rate volatility
Treasury is susceptible to crashes in ETH’s price.
Fundamentally, RAI is intended to be DeFi’s reserve asset/stablecoin. For this to hold true, everyday users must not evaluate RAI in USD terms. However, this is in contrast to the core stabilization mechanism, which involves comparison with USD. RAI (or any other unpegged stablecoin) is likely to achieve mass adoption only when the comparison with USD ends (1 RAI = 1 RAI). That RAI is soft-pegged to around $3 begs the question: what does it really achieve by pegging itself to $3 instead of $1?
Unpredictability of rate setter feedback mechanism - only inputs are known, output is unpredictable. Even careful parameterization with extensive simulation cannot reliably predict results. The success of the protocol is contingent on the mechanism being able to influence market prices in a timely manner.
Stability - 3 - RAI has maintained a stable value around $3 since inception. However, considering its low daily trading volumes ($1M-$5M) and the experimental nature of the target rate feedback mechanism, it is difficult to assess whether the mechanism breaks under heavy stress.
Trustlessness - 4 - ETH is the only accepted collateral
Scalability - 2 - (a) The protocol only accepts ETH as collateral (b) Collateralization requirement is considerably more onerous than in Maker since SAFE owners need to manage situations where the price of ETH goes up while RAI’s goes down (this will have a dual impact on the LTV ratio)
Simplicity - 2 - RAI is a more complex version of DAI and is likely to restrict usability to speculators and DAOs.
Floating Peg / Non-Collateralized / Rebase
AMPL is a rebasing stablecoin backed by nothing. It adjusts its supply in response to changes in demand. What this means is that if the price of AMPL goes up by 20%, the system algorithmically increases the number of coins held by all holders by 20%. As long as one doesn’t sell any of their AMPL, their ‘shareholding’ (in % terms) does not get diluted.
Let’s say you and I start with 1 AMPL each worth $1 each. Total combined value of AMPL = $2
Next, I manage to sell 1 AMPL for $2 to Elon. You now have 1 AMPL worth $2 and Elon has 1 AMPL worth $2. Total combined value of AMPL = $4
Seeing that the price per AMPL has increased to $2, the system targets a price of $1 per AMPL by increasing its supply. In this case, your 1 AMPL previously worth $2, is replaced by 2 AMPLs worth $1 each, while the value of holdings remains the same.
Basically, if the price of AMPL goes up by X%, the supply of AMPL increases by X% (and vice versa when the price falls). Notice that the rebase mechanism has no impact on the value of your holdings. It simply swaps a static coin balance for a static coin price: the volatility moves from the price to your balance.
But if you started with $1 and now have AMPL worth $2, where did the increase come from? Because I sold 1 unit of AMPL for $2, all other units are valued at $2, increasing the market capitalization of AMPL. The increase is unrealized profit that exists only on paper, unless you sell it in the market for $2. What if I’d sold the AMPL for $0.5? You would now have $0.5 (0.5 AMPL x $1 after rebase). Volatile much?
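A minimal sketch of a rebase, assuming the market price is taken as given (the real protocol uses an oracle and a smoothing window) and hypothetical holders:

```python
def rebase(balances: dict[str, float], market_price: float, target_price: float = 1.0) -> dict[str, float]:
    """Scale every balance by price/target so the post-rebase price returns toward $1."""
    factor = market_price / target_price
    return {holder: amount * factor for holder, amount in balances.items()}

holders = {"you": 1.0, "me": 1.0}
print(rebase(holders, market_price=2.0))  # {'you': 2.0, 'me': 2.0} - shares unchanged, price back near $1
print(rebase(holders, market_price=0.5))  # {'you': 0.5, 'me': 0.5}
```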
Since the price of AMPL is always rebased to around $1, we measure stability based on the total market cap.
As one can see from above, AMPL’s market cap has been anything but stable. Funnily, AMPL maxis claim that it fulfils all the functions of a stablecoin but that it isn’t one.
Of all the attempts at creating a crypto-native form of money, Ampleforth is one of the most obfuscated designs, coming second only to Ohm.
Stability - 1 - AMPL is nowhere close to being considered stable. It’s a volatile asset masquerading as a stablecoin.
Trustlessness - 3 - In its initial version, Ampleforth was a centralized network, with the team having the ability to freeze tokens, blacklist addresses and upgrade contracts unilaterally. These rights were subsequently relinquished to the community.
Scalability - NA - In theory, AMPL is infinitely scalable. But that’s meaningless as supply is irrelevant in a rebasing token.
Simplicity - 1 - AMPL serves no purpose other than being a Bitcoin-wannabe / speculator’s hobby.
USD-pegged / Non-Collateralized / Seigniorage Shares
Basis Cash is an unbacked algo stablecoin that relies on a 3-token seigniorage model to maintain its peg. BAC is pegged to 1 USD, BAS is the dividend-yielding share token of the system used to support expansion, and BAB is the debt or bond token used to support contraction.
ESD is an unbacked algo stablecoin which uses the 2-token seigniorage model. The key difference from Basis Cash is that ESD has a bond token (ESDS) but not a share token. Instead, by staking ESD (stablecoin), a holder of ESD is entitled to a share for future expansions in supply.
Basis Cash
Contraction
When BAC is <$1, the protocol issues bonds (BAB) at a discount. Bonds entitle buyers to future increases in supply when BAC trades above $1 in exchange for surrendering BAC held by them today. BAB issuance leads to a decrease in BAC supply and increase in its price. BAB buyers bid on how much BAC they are willing to give up per BAB.
Expansion
When BAC trades at >$1, the protocol mints new BAC. The increase in supply causes downward pressure on price until it reaches equilibrium at $1. New issuances of BAC are first distributed to BAB holders on a FIFO basis. Once all BAB holders have been paid, excess BAC is distributed to BAS holders proportionally.
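A rough sketch of the contraction/expansion accounting described above. In practice the bond discount is set by buyers’ bids in an auction; the fixed 20% discount here is a made-up stand-in.

```python
from collections import deque

bond_queue = deque()  # FIFO queue of outstanding BAB claims, denominated in BAC

def contract(bac_surrendered: float, discount: float = 0.20) -> None:
    """Burn BAC below peg and enqueue a larger future claim (discount is made up)."""
    bond_queue.append(bac_surrendered / (1 - discount))

def expand(new_bac: float) -> float:
    """Pay bond holders first, FIFO; whatever remains flows to BAS holders."""
    while bond_queue and new_bac >= bond_queue[0]:
        new_bac -= bond_queue.popleft()
    return new_bac  # remainder distributed to BAS holders

contract(100)       # 100 BAC burned, a 125 BAC claim is queued
print(expand(200))  # 75.0 BAC left over for BAS holders
```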
ESD
ESD uses a similar expansion and contraction model as Basis Cash with the following differences:
ESDS bonds expire 30 days from purchase, while BABs don’t. This ensures that the bond queue doesn’t become too long. On the flipside, this will warrant a higher discount for buyers due to the risk of losing the entire purchase amount.
Supply increases are first paid out to ESDS holders and thereafter split between ESD stakers and ESD LP stakers.
The hardest part about maintaining a peg is stimulating demand when the peg breaks below $1. It requires users to believe that the peg will be regained and take action to reinforce this belief in others. But as is the rule with financial markets, incentives create action. The stronger the incentive, the more predictable the outcome. The biggest drawback of bond-based stablecoins is that hope is a prerequisite for bond buyers - they must believe in the peg - and that is never a good strategy. If some users buy bonds and realize that the peg isn’t being regained, it deters other potential buyers. In order to create a stronger incentive, more bonds need to be issued per stablecoin. Further, as more bonds are issued, the probability of late bond buyers being able to benefit from a future expansion reduces, requiring even greater incentives in the form of lower bond prices. This cycle perpetuates till bond prices crash to zero. When this happens, it is mathematically impossible to further reduce the supply of stablecoins. On the other hand, stablecoins with an inbuilt redemption arbitrage loop have a significantly easier time maintaining the peg, since the existence of an exploitable profit opportunity is incentive enough to garner the required demand.
The bond system is susceptible to manipulation by whales in the following ways:
(i) suppressing the stablecoin price to cause expiration of bonds
(ii) driving the stablecoin price above $1 to cash out on bond yields, then driving it back under $1 to repeat
(iii) not selling new BAC supply during expansion phases to keep the price of BAC >$1 in order to collect further yields.
Whale moves are a deterrent to participation of smaller users in stability management. In contrast, in reserve redemption models, any user can exploit arbitrage opportunities without taking on speculative risks.
Stability - 1 - Both protocols rely on profit-driven actors to maintain stability in the absence of reserves. However, these incentives are weak and require some amount of speculation. Further, ESD has a weaker design than BAC for 2 reasons
(a) the stablecoin is merged with the share token - as a result, ESD’s fair value will always be >$1 since holders expect to benefit from future supply increases.
(b) In Basis, BAC/DAI LPs continue to earn LP incentives even during contraction phases. In contrast, ESD/USDC LPs only earn LP rewards when ESD > $1. As a result, ESD is prone to liquidity withdrawals during contraction, further weakening the peg.
Trustlessness - 4 - The tokens themselves are trustless, even if they are unbacked by reserves.
Scalability - 4 - As long as there’s enough demand for the stablecoin to push the price above $1, new supply can be created.
Simplicity - 2 - A stablecoin protocol should be easy enough for the average user to understand, including how stability is maintained.
USD-pegged / Non-Collateralized / Seigniorage Shares
UST is the native stablecoin of the Terra ecosystem. It is collateralized by LUNA, a volatile asset endogenous to the Terra ecosystem. It uses arbitrage incentives to enable a 1:1 mint/redeem function between UST and Luna.
Users can mint UST by burning $1 of LUNA and redeem by burning it for $1 of LUNA. As LUNA gets burnt in the process of minting, two things happen:
The number of LUNA in circulation reduces, so every other LUNA increases in value (call this a base increase, since ‘Market Cap / Circulating Supply’ changes).
Demand for UST is perceived as growth of the Terra ecosystem, and hence estimates of future cash flows increase. This increases the market cap of LUNA which is the ‘equity share’ of Terra (this is a speculative increase)
Arbitrage Incentives [SM1] - Enabled by minting/redemption of $1 of LUNA for every 1 UST, instead of using exogenous collateral as reserves. This mechanism works only as long as the market perceives LUNA to have value. When there is a loss of confidence, UST is subject to the same downward spiral risk that stablecoins with bond tokens are subject to (i.e., the amount of LUNA minted to redeem 1 UST increases with every redemption as LUNA’s price falls; anticipating this, concerned LUNA holders will front-run the dilution and dump their holdings, causing additional sell pressure). A numeric sketch of this reflexivity follows this list.
External Backstops - In a recent move, Terraform Labs introduced an emergency backstop mechanism in the form of BTC reserves to absorb UST redemption pressure. Instead of redeeming 1 UST for $1 of LUNA, users would redeem it for $0.98 of BTC. As we’ve seen over the past week, reserves can help support the peg, but only for as long as they last.
On-chain Redemption Spreads - The protocol’s market swap function limits the amount of UST that can be redeemed for LUNA on 1:1 basis. Redemptions in excess of this limit will increase the slippage exponentially. This mechanism can only slow down the rate of redemption temporarily. When the shutters are lifted, users flee once more. Additionally, the mechanism requires frequent parameter changes to ensure inadequate on-chain liquidity for market swaps doesn’t lead to larger spreads and, as a consequence, depegging.
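As noted above, a minimal numeric sketch of the reflexivity in the arbitrage mechanism: each UST redeemed mints $1 / (LUNA price) of new LUNA, so the mint rate explodes as the price falls (prices hypothetical).

```python
def luna_minted_per_ust(luna_price: float) -> float:
    """Each UST redeemed mints $1 worth of new LUNA at the current price."""
    return 1.0 / luna_price

for price in (80.0, 10.0, 1.0, 0.01):
    print(f"LUNA at ${price}: {luna_minted_per_ust(price):,.2f} LUNA minted per UST redeemed")
```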
Terra relies on its validators to periodically vote on the exchange rates of LUNA and other Terra assets. Votes are tallied and the final exchange rates are determined by votes clustered around the weighted median value. All validators who have voted within a narrow band from the weighted median are rewarded and those who vote away from the median incur a penalty.
Like every uncollateralized algo stablecoin, UST requires demand for it (and consequently LUNA) to be ever-expanding. The system fails when demand falls and UST redemptions lead to sell pressure on LUNA.
UST is not as trustless as one would think. Under the guise of preventing bank runs, the protocol chokes the redemption function by introducing swap spreads (as redemptions exceed a specified threshold, users get less than $1 of LUNA for every 1 UST). However, this mechanism only slows down the pace of bank runs, it doesn’t prevent them.
Seigniorage models backed by endogenous collateral, like Terra, by design benefit early investors of the governance token. Late participants are left holding the bag.
Stability - 1 - Death spiral risks eventually materialize. History has taught us that on several occasions. The next uncollateralized algo stablecoin is no different.
Trustlessness - 2 - The protocol makes it extremely simple for a user to enter the system but hard to exit for anyone other than the earliest users. Further, Terra’s reliance on investors/market makers to support the peg made it impossible for anyone to evaluate when the music would stop.
Scalability - 4 - Supply is not constrained by collateral
Simplicity - 4 - Terra was able to achieve widespread retail and merchant adoption due to its advertised 20% yield. But the apparent simplicity combined with shilling by investors and influencers without highlighting hidden complexities and risks of the protocol has left the average user worse off.
USD-pegged / Collateralized (Centralized Stablecoins) / Full to Partial Collateralization / Fractional Reserve + Seigniorage
FRAX is a stablecoin that fuses the fractional reserve concept pioneered by Sogur (Saga) with the classic seigniorage shares model.
Unlike Sogur, which reduces the collateralization ratio (CR) by increasing the mint/redemption price of SGR, Frax reduces it by gradually taking away a larger share of the collateral deposited by FRAX holders and transferring that value to holders of its governance token, FXS.
Frax’s goal is to provide a highly scalable, decentralized, algorithmic money, but the team has chosen to prioritize adoption growth over decentralization. This is understandable, since decentralization without adoption is pointless. However, in its present form it is a wrapped version of USDC/USDT, since >70% of its reserves are in centralized stablecoins (USDC, USDT) and Dai (which itself is heavily reliant on USDC).
To mint, a user gives the smart contract $1 worth of USDC and FXS (if Collateralization Ratio is 70%, $0.7 USDC and $0.3 FXS). FXS is then burned to accrue value to FXS holders.
To redeem, the user deposits 1 FRAX and gets back $1 worth of USDC and FXS (in the same proportions as minting). The FXS is minted by the protocol, and the redeemer can sell it instantly for dollars.
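A minimal sketch of minting and redeeming at a given collateralization ratio; the function names are illustrative and the 70% CR mirrors the example in the text.

```python
def mint_frax(usd_value: float, cr: float) -> tuple[float, float]:
    """Return (USDC required, USD value of FXS burned) to mint `usd_value` of FRAX."""
    return usd_value * cr, usd_value * (1 - cr)

def redeem_frax(frax_amount: float, cr: float) -> tuple[float, float]:
    """Return (USDC paid out, USD value of FXS minted to the redeemer)."""
    return frax_amount * cr, frax_amount * (1 - cr)

print(mint_frax(1.0, cr=0.70))    # roughly (0.7, 0.3): $0.70 USDC + $0.30 of FXS per FRAX
print(redeem_frax(1.0, cr=0.70))  # the redeemer must sell the FXS leg to realize the full $1
```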
Reserve Redemption & Arbitrage [SM1] - When FRAX > $1, arbitrageurs mint new FRAX for $1 of USDC and FXS and sell them. When FRAX < $1, arbitrageurs can buy cheap FRAX and redeem it for $1 of value (collateral + FXS).
Dynamic Collateralization - Frax implements a dynamic system which increases the Collateralization Ratio when FRAX is trading below $1. However, this is a weak solution that cannot respond fast enough to handle a panic-driven peg break and works when only a fraction of FRAX holders want to exit to the underlying collateral. It is impossible for FRAX to maintain its peg in a bank run scenario where all FRAX holders look to exit. In any case, FRAX has a floor value that is determined by the CR. If CR is 80%, arbitrage incentives ensure that the peg doesn’t deviate below $0.8 for an extended period of time.
Oracles - Frax combines Chainlink’s ETH-USD TWAP with Uniswap’s FRAX-WETH pool balance data to derive the FRAX-USD price.
Calling FRAX a fractional stablecoin is in my view a misnomer. What FRAX does to stablecoin holders is partially take away hard value (USDC) and give speculative value in return (FXS). The hard value is then transferred to FXS holders and adds zero value to the protocol. The ‘fractional’ portion of backing could instead be deployed for Market Operations and generate yields. To draw a parallel to tradfi banking, merely increasing deposits of a bank doesn’t increase its valuation, it must happen through value creation. Similarly, increase in FRAX supply shouldn’t increase FXS valuation through value extraction from FRAX holders.
Another erroneous claim by FRAX is that it is capital efficient. Something can be considered capital efficient only when it requires less capital inflow to provide the same level of security/guarantee, which isn’t true since the protocol collects $1 of value from FRAX minters. Replacing a stable asset with a volatile and questionable IOU (FXS) not only fails to improve capital efficiency but degrades the quality of the stablecoin itself.
Fractional reserve models similar to FRAX (unlike Sogur) are directly in conflict with the interests of stablecoin holders. The approach is in stark contrast to Fei which seeks to incentivize its governance token holders only for being over-collateralized, promoting risk management. Instead, FRAX takes the opposite route and rewards FXS holders through under-collateralization.
Stability - 3 - While it is prone to bank runs, FRAX will have a floor value that prevents it from going to zero. The CR is currently ~85%, which gives FRAX a floor price of $0.85.
Trustlessness - 2 - While the protocol itself is permissionless and gives FRAX holders unrestricted access to underlying reserves, it is predominantly backed by centralized stablecoins and prone to counterparty/censorship risks.
Scalability - 3 - Contrary to claims, FRAX is no more scalable than a fully-collateralized stablecoin. As long as minting a FRAX requires $1 of value, it cannot be considered fractional.
Simplicity - 3 - Frax can be easily minted with single assets. During redemptions, users need to liquidate the FXS received to obtain the full value of 1 FRAX.
USD-pegged / Crypto collateralized (BTC, ETH, Celo) / Seigniorage Shares
cUSD is the native stablecoin of the Celo Blockchain. It is based on the 2 token seigniorage model, substantially similar to Terra-Luna.
UST is backed entirely by Luna, whereas cUSD is backed by a combination of Celo (54%), BTC (24%), ETH (17%) and Dai (6%).
UST relies on arbitrageurs to defend the peg through 1:1 swaps between LUNA and UST. In contrast, Celo uses a Constant Product function to enable swaps. It has one wallet each for Celo and cUSD and the balances of each wallet are initialized every time a new Celo-cUSD oracle price is received.
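A rough sketch of a constant-product swap between the two buckets. The bucket sizes and the $2 oracle price are made up; in Mento the buckets are re-initialized from each new oracle price.

```python
def swap_celo_for_cusd(celo_in: float, celo_bucket: float, cusd_bucket: float) -> float:
    """Constant-product quote: k = celo_bucket * cusd_bucket is held fixed within an update."""
    k = celo_bucket * cusd_bucket
    return cusd_bucket - k / (celo_bucket + celo_in)

# Buckets seeded at a hypothetical oracle price of $2 per CELO.
print(round(swap_celo_for_cusd(1_000, celo_bucket=500_000, cusd_bucket=1_000_000), 2))  # ~1996 cUSD
```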
The protocol’s initial design as per the whitepaper sought to maintain peg stability by executing Open Market Operations like the US Fed. When cUSD > $1, the protocol mints new cUSD, and uses them to buy other assets such as BTC/ETH which go into Celo’s reserves. When cUSD is <$1, the protocol uses its reserve assets (BTC/ETH) to buyback cUSD from the market. This gives Celo the ability to defend its peg independently and retain the profits of arbitrage to build the reserves.
Another positive side effect of the model is that excess demand for cUSD is most likely to occur when overall market sentiment is bearish. In such a scenario, cUSD is exchanged by the protocol for other crypto assets when downside risk of acquiring volatile assets is significantly reduced. These surplus reserves can be realized when market sentiment turns bullish.
However, this mechanism was abandoned. My guess is that it is difficult for the protocol to determine exactly how much supply needs to be expanded/contracted to regain the peg. This leads to overshooting or undershooting the peg. In contrast, letting arbitrageurs profit from price deviations ensures that expansion/contraction stops when there is no profit to be made.
Oracles - Celo’s oracle contract receives data from governance-whitelisted data reporters. To prevent price manipulation by malicious parties, the oracle contract uses a medianizer.
To the extent it is not collateralized by exogenous assets like BTC, ETH and DAI, the protocol is susceptible to a bank run risk, although with a lower negative impact.
Nearly 50% of the reserves are managed off-chain and manually.
Stability - 2 - Bank run risk exists but cUSD will have a floor price determined by the level of exogenous collateralization.
Trustlessness - 2 - Reserves are partially managed off-chain
Scalability - 4 - Supply is not constrained by collateral
Simplicity - 4 - Users can swap in and out of cUSD through the Mento dApp.
So what is the stablecoin that best captures the ethos of Bitcoin while balancing the realities of the present and what should this look like?
A significant majority of stablecoins today are fiat-pegged, specifically to the US Dollar. Despite fiat currencies being slaves to political ambitions and prone to high inflation, the fact is that we do not yet have an alternative form of money that can work at a comparable scale, and that limits its usefulness. While the idea of an independent monetary system with its own base money, free of central banking, is attractive, the path to that end state is not easy. As I see it, this can happen in one of two ways.
One is a revolution - a swift and massive shock to or collapse of the global economic networks arising from complete loss of trust in and between centralized power structures such as governments and banks, that gives us a blank slate to start with. The Nixon shock of 1971 is a good example. The US abandoning the Gold Standard gave birth to the fiat money system. If not for this event, the idea of money without assets to back it would have sounded ludicrous to most, yet today we accept it without question.
The second is an evolution - a slower, multi-decade journey (even longer, possibly) of consensus building that involves a gradual loss of trust in the current system and a growing movement in pursuit of an ideology. Bitcoin and DeFi are examples of this.
If the former were to happen, an alternative will emerge organically and adoption will be quick, for there is little to resist. Personally, I believe the latter is the more likely path and alternative models will evolve by retaining elements of the old system and replacing them one at a time, over time. In the context of pegs, the deeply ingrained frame of reference $1=$1 (or any fiat currency for that matter) is a means to gather adoption. Floating peg stablecoins like Rai are novel experiments but they will find it hard to gather meaningful traction as long as users continue to think and denominate their portfolios in fiat terms.
By pegging to a fiat currency, we significantly lower the barriers to crypto and stablecoin adoption.
The benefits of collateralization are obvious. A stablecoin holder can sleep peacefully knowing that it will always retain its intrinsic value even if the market perceives it to be less valuable. But collateralized stablecoins have a supply cap constrained by the value of collateral available. Like with the Gold Standard, money supply cannot scale with demand. This ‘capital inefficiency’ led to the creation of unbacked algorithmic stablecoins, which increase or decrease supply in a rule-based manner in response to changes in demand. All such experiments till date, without exception, have been victims of the death spiral.
These stablecoins are extremely reflexive and fragile. They work well when demand is growing, but crash and burn when growth shrinks. To make matters worse, it is impossible to measure the system’s resilience or derive confidence from it; when the peg breaks even slightly, it induces market-wide panic and sell-offs.
One may be tempted to draw parallels between algorithmic stablecoins and fiat currencies and ask why the former cannot work.
Monopolistic regulatory powers combined with the fact that a nation’s citizens have no other alternatives (other than foreign currencies which suffer from the same fundamental issues) is why the current system mostly works. The only other scenario where this form of ‘money by consensus’ can work in an unregulated manner is in a tight knit community bound by social structures where bad actors are ostracized for violations.
For algorithmic stablecoins to work, accountability is necessary. Blockchains inherently make accountability challenging since identities are pseudonymous and changeable. Until we figure out a blockchain-native solution along the likes of credit scoring and social scoring, collateralization is the way to go.
This is a no-brainer. A decentralized stablecoin needs to be backed by censorship resistant assets that guarantee permissionless access to end users. By using crypto as collateral, protocols and users are resistant to censorship or counterparty risks.
So, we’re looking for a fiat-pegged, crypto collateralized, decentralized stablecoin. The most popular ones today are Dai, LUSD and Fei.
Maker’s Dai is over-collateralized by ETH. Although it has achieved significant scale and resilience, this has come at a cost. Its core peg stability mechanism uses USDC as collateral - a big detour from its original vision. Dai falls short because it is subject to censorship risks. Additionally, the need to maintain a collateral buffer of ~200% (the minimum is 150%) renders it capital inefficient.
Liquity (LUSD), like DAI, is over-collateralized by ETH. However, it requires collateralization of around ~150% (the minimum is 110%). It is undeniably the most decentralized stablecoin today because (a) it requires zero governance, (b) ETH is the only accepted collateral, (c) contracts cannot be upgraded and (d) access to the protocol is managed by third-party front-end providers. Stability-wise, it’s resilient to peg breaks below $1, but can theoretically trade up to $1.1.
Fei is over-collateralized by crypto assets which are owned by the protocol, primarily ETH. It has a strong peg design, aided by its Peg Stability Module, which allows 1:1 minting and redemption between Fei and Dai. However, the PCV model in its current form puts stablecoin holders at the mercy of TribeDAO, since the latter controls the assets. Moreover, since Fei’s reserves are subject to high volatility, over-collateralization is required for the protocol to stay resilient.
None of these simultaneously excel at stability, trustlessness and scalability. That said, I do believe each of them has its place in DeFi and will continue to grow.
In my quest to find the most decentralized stablecoin design, I stumbled upon two extremely interesting projects - Lemma Finance and UXD Protocol - which share the same core design choice, a stablecoin backed by delta-neutral crypto reserves.
The ingenuity of this design lies in its ability to combine
(a) the capital efficiency of fully-collateralized stablecoins
(b) censorship-resistance of crypto assets without the volatility
(c) a USD-peg without exposure to the US Dollar and
(d) stability through simple arbitrage incentives.
Of course, this comes with its own set of risks (covered in this essay). Fortunately, these are not fundamental design risks of a permanent nature. Rather, they are implementation and execution risks that can be mitigated as the protocols evolve and DeFi grows.
The UXD/Lemma model is, in my view, the most elegant innovation in stablecoin design we’ve seen over the past few years, that has the potential to achieve what Bitcoin was supposed to.
Having looked at most stablecoin designs, there are some design choices and principles that appeal to me. As DeFi protocols become more composable and scalable, and oracle infrastructure improves, a stablecoin that combines the best features from a variety of others will be achievable. Here’s what I would want to see in it:
No seigniorage model - Governance tokens must accrue value only through transaction fees and yield on reserves, not through speculation. This ensures a clear alignment of interests between governance token holders and stablecoin holders. Volatility in price of governance token cannot impact the value of stablecoins.
Reserve redemption mechanism - Guaranteed exit to holders at $1 of real value, without the downward spiral risks associated with full/partial seigniorage models. For a protocol to build users’ trust, it must make entry easy and exits easier.
Supply expansion - Open Market Operations by the protocol to print new stablecoins and acquire assets from the market to build reserves. As an alternative, the protocol can have a dynamic mint fee that results in sharing of arbitrage profit between arbitrageurs and the protocol. While neither is perfect, the latter is more effective in achieving price equilibrium swiftly. These require well-tuned oracles to prevent overshooting/undershooting.
Supply contraction - Arbitrage incentives to push price up to $1
Delta-neutral liquid-staked reserves - (i) Collateral will consist of liquid staked crypto (stETH, stSOL etc.) which generate risk-free native yields (ii) Collateral representing 100% of the stablecoin’s market cap is perfectly hedged. No risk of under-collateralization.
Surplus reserve and rebalancing - Surplus reserves are left unhedged, since these assets are likely to be acquired by the protocol when crypto prices are depressed (increased stablecoin demand), allowing for upside potential when markets recover. When these unhedged assets gain value, either (a) rebalance into delta-neutral positions to lock in gains on surplus reserves, or (b) rebalance into other decentralized, over-collateralized stablecoins like LUSD and RAI.
https://medium.com/derivadex/what-are-perpetual-swaps-130236587df2
Aditya Palepu, May 2020
Perpetual swaps have quickly become the most popular way to trade cryptocurrencies. Given how popular they are, it might be a little embarrassing to ask about the basics. What exactly are perpetual swaps? Why are they so popular? And how can they help your trading strategy?
Perpetual swaps are derivatives that let you buy or sell the value of something (that something is usually called an “underlying asset”) with several advantages: 1) there is no expiry date to your position (i.e., you can hold it as long as you want), 2) the underlying asset itself is never traded (meaning no custody issues), 3) the swap price closely tracks the price of the underlying asset, and 4) it’s easy to short.
If you want to bet on the price of Bitcoin going up, you have several options. You could go to a spot exchange and purchase actual Bitcoin with your funds at the specified exchange rate. This immediately transfers Bitcoin into your possession for however long you’d like. Or you could go to the derivatives markets and buy an options or futures contract. These contracts do not immediately settle or transfer any Bitcoin to you, but they do allow you to increase your “buying power” (i.e., using derivatives, you can buy more Bitcoin and profit more from its price movements than you would otherwise be able to on the spot markets). However, options and futures contracts expire at a certain date, meaning you have to constantly manage and reestablish your positions. For traders who want the benefits of derivatives, but don’t want to deal with the complexity of expiration dates, there’s another option, one that dominates the crypto markets: the perpetual swap.
The perpetual swap is a fairly new type of financial derivative. Unlike the futures and options markets, perpetual swaps do not expire and do not have a settlement date, meaning you can hold your position forever. Unlike the spot markets, the underlying asset is never involved directly, so you can gain exposure to an asset’s price movements without having to actually hold or borrow the asset itself. And lastly, unlike futures contracts where the price can deviate from the spot underlying price (commonly referred to as basis), perpetual swaps should always be closely pegged to the underlying they track. This is accomplished with something called a “funding rate mechanism”, which you can think of as either a fee or a rebate for traders to hold positions. This mechanism balances the buyer and seller demand for the perpetual swap so that its price falls in line with the underlying asset.
It can sound intimidating without the context provided above, but given all of these advantages, you can see how powerful perpetual swaps can be for traders. Now, add in the fact that you can take on a position with varying degrees of leverage (did someone say 100x?), and you can see why all of these properties have driven perpetual swaps into the spotlight as the darling of the crypto trading ecosystem.
Fair enough! To give you a concrete example of how perpetual swaps work, we’ll take a look at the 2019 activities of hypothetical trader Alice, who believed the price of Bitcoin would go up and wanted to find a smart way to execute her trade.
At the start of April 2019, BTC/USD was trading at roughly $4,000. Alice wanted to bet on a rise in Bitcoin price relative to USD, so she bought 2 BTC/USD perpetual swap contracts using $8,000 of collateral.
$8000 / $4000 = 2 BTC/USD perpetual swap contracts
Until the end of June 2019, the Bitcoin spot price steadily rose in dollar value to about $14,000. As part of the funding rate tethering mechanism (this is how a perpetual swap stays close in price to the underlying asset), Alice periodically paid a fee or received a rebate to her account. We describe this further below, and will in much greater detail in a separate post, so don’t worry if this doesn’t make perfect sense yet!
Alice, being a hypothetical trader, timed the market very well. She eventually closed her position after almost 3 months at the local maximum of $14,000. In this scenario, her profits, not considering the periodic funding rate fees or rebates, were $20,000.
Nicely done, Alice! She did this all without ever holding any Bitcoin at any point.
Perpetual swaps are a cleverly-designed derivative type that have taken the market by storm, dominating volumes on most of the leading exchanges. They combine desirable qualities of the spot and futures markets, and enable traders to use high leverage without the headache of rolling over expiring futures contracts. As with any leveraged trade, especially with volatile assets like Bitcoin, these trades can be risky! Make sure you understand them in full before participating.
Funding rates are the magic behind how perpetual swaps track the underlying asset’s spot price. The following better illustrates the relationship between perpetual swap price, underlying price, and funding rate:
Let’s consider two scenarios:
Perpetual swap trades above the price of the underlying (green area): when a perpetual swap has been trading above the price of the underlying, the funding rate will be positive. Long traders will pay short traders, thus disincentivizing buying and incentivizing selling, lowering the perpetual swap price to fall in line with the underlying asset.
Perpetual swap trades below the price of the underlying (red area): when a perpetual swap has been trading below the price of the underlying, the funding rate will be negative. Short traders will pay long traders, thus disincentivizing selling and incentivizing buying, raising the perpetual swap price to fall in line with the underlying asset.
The exact methodology for computing funding rates and how the payments are carried out vary from exchange to exchange and will be studied in another post. To give you a sense of the magnitude, funding rates generally oscillate somewhere between -0.025% and 0.025%. This range implies that every funding period (for example, every 8 hours), a trader with a Bitcoin perpetual swap position worth $100,000 would either pay or receive a funding fee of 0.025% * 100,000 = $25. The important takeaway for now is that a perpetual swap fundamentally stays in line by balancing supply and demand through an interest rate payment. If the price of the perp is close to spot, the interest rate is small to bring it back into line. The farther off it gets, the higher the interest rate.
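A minimal sketch of a single funding payment, using the sign convention from the two scenarios above (positive funding: longs pay shorts) and the $100,000 / 0.025% figures from the example:

```python
def funding_payment(position_notional: float, funding_rate: float, is_long: bool) -> float:
    """Positive result = the trader receives; negative = the trader pays (per funding period)."""
    payment = position_notional * funding_rate
    return -payment if is_long else payment

print(funding_payment(100_000, 0.00025, is_long=True))   # -25.0: the long pays $25
print(funding_payment(100_000, 0.00025, is_long=False))  # +25.0: the short receives $25
```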
Since the initial unveiling of the perpetual swap, BitMEX’s XBTUSD, there have been many new offerings. Several of the prominent exchanges (and inception date) driving volumes are:
BitMEX (May 2016)
ByBit (December 2018)
OKEx (December 2018)
Binance (September 2019)
FTX (October 2019)
Huobi (April 2020)
dYdX (May 2020)
As far as underlying assets go, some specialize in a limited offering (such as BitMEX, with only Bitcoin, Ether, and Ripple) while others pride themselves on a broad array of offerings (looking at you, FTX). To get a sense of how the centralized exchanges in the list above stack up with respect to daily volumes, refer to the following illustration:
Opening a perpetual position can be done using any of the exchanges above, among others. The mechanics vary from exchange to exchange, but for simplicity’s sake, we will describe one of the more intuitive patterns:
You can first deposit USD to the exchange, serving as collateral to back your position.
Upon doing so, you can then submit an order signifying how many contracts you would like to purchase and at what price.
When your order is matched and settled, you will have a position with notional value equal to amount * price. The position will carry certain attributes along with it, most notably a leverage and margin fraction, which you need in order to understand your risk-to-reward profile and assess liquidation and auto-deleveraging risk. We will address liquidation and auto-deleveraging in much greater detail in another post. Leverage and margin fraction are computed as follows: leverage = position_notional / account_value, and margin_fraction = account_value / position_notional (i.e., the inverse of leverage).
It’s common practice in the crypto derivatives world to allow for up to 100x leverage, meaning you can take on a position worth 100 times more than your collateral.
Let’s take a concrete example to better understand:
Alice deposits $100,000 of collateral to the exchange
Alice decides to purchase 20 BTC/USD perpetual swap contracts at a price of $10,000 each
The exchange successfully matches her full order at the price she intended, and she now has a position worth 20 * 10,000 = $200,000. Since she only has $100,000 of collateral deposited (to simplify things, we assume the current price of BTC/USD is still at $10,000, meaning her position has no unrealized profit and loss associated with it, thus account_value = collateral), her leverage would be 200,000 / 100,000 = 2x. Her margin fraction would be 1/2 = 50%. This has implications for her liquidation risk, which we will cover in another post.
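A tiny sketch of that arithmetic, using Alice's hypothetical numbers (no unrealized PnL, so account value equals collateral):

```python
# Minimal sketch of the leverage / margin-fraction arithmetic from Alice's example.
# Assumes no unrealized PnL, so account_value == collateral.

def leverage(position_notional: float, account_value: float) -> float:
    return position_notional / account_value

def margin_fraction(position_notional: float, account_value: float) -> float:
    return account_value / position_notional  # the inverse of leverage

collateral = 100_000
contracts, price = 20, 10_000
notional = contracts * price                    # 200,000

print(leverage(notional, collateral))           # 2.0 -> 2x
print(margin_fraction(notional, collateral))    # 0.5 -> 50%
```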
Let’s assume position liquidation does not occur — we will cover this in depth in a subsequent post. Perpetual swaps can either adhere to inverse nonlinear or linear settlement procedures, with payout profiles that look like this:
Inverse Nonlinear settlement: Perpetual swaps began as inversely settled futures, meaning that they were settled in crypto instead of USD. This allowed exchanges to avoid exposure to the traditional banking system and live entirely in the crypto sphere. Traders deposit BTC to begin with, and these contracts settle in the base currency as opposed to the quote currency, meaning if you were to trade BTC/USD, you would actually receive your payout in BTC itself. The nonlinearity arises from the fact that as you profit from a long position in BTC, you will receive a BTC payout, but in a smaller amount, since BTC itself is more expensive relative to USD. On the flip side, if BTC/USD drops in value, you will be losing BTC at a greater rate, since BTC itself is cheaper relative to USD. BitMEX is an example of an exchange that offers this style of perpetual swap.
Linear settlement: With the rise of stablecoins, exchanges now offer liquid linearly-settled contracts that avoid touching fiat, but pay out with more intuitive USD-like assets, such as USDT. FTX is an example of an exchange that offers this type of perp swap. Such contracts demonstrate a more typical linear PNL profile, as shown above.
For simplicity, let's work through an example with these linearly-settled contracts, as they are more intuitive and growing in demand. Alice decides to purchase 20 BTC/USD perpetual swap contracts at a price of $10,000 each. She checks back 6 months later to find that Bitcoin is trading at $20,000. Excitedly, she decides to sell 20 BTC/USD perp swaps, thereby closing her position. Her profits would roughly be position_size * (close_price - average_entry_px) = 20 * (20,000 - 10,000) = $200,000. I say roughly because, keep in mind, during the 6 months she's held the position, she will also gain/lose any funding rate related payments along the way.
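To make the two settlement styles concrete, here is a rough sketch comparing the payout formulas. The inverse formula follows the standard convention for USD-notional inverse contracts; contract sizes and prices are hypothetical, and fees and funding are ignored:

```python
# Rough sketch: linear vs. inverse settlement PnL, ignoring fees and funding.
# Assumptions: a linear contract pays out in a USD-like stablecoin per 1 BTC of size;
# an inverse contract has USD notional and pays out in BTC.

def linear_pnl_usd(size_btc: float, entry_px: float, exit_px: float) -> float:
    return size_btc * (exit_px - entry_px)

def inverse_pnl_btc(notional_usd: float, entry_px: float, exit_px: float) -> float:
    # Long position: BTC received shrinks as BTC gets more expensive, and vice versa.
    return notional_usd * (1 / entry_px - 1 / exit_px)

# Alice's linear trade: 20 BTC of size, entry $10,000, exit $20,000
print(linear_pnl_usd(20, 10_000, 20_000))        # 200000.0 USD

# The nonlinearity of an inverse contract ($200,000 notional, entry $10,000):
print(inverse_pnl_btc(200_000, 10_000, 12_000))  # ~ +3.33 BTC on a $2,000 rally
print(inverse_pnl_btc(200_000, 10_000, 8_000))   #   -5.00 BTC on a $2,000 drop
```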
Since their inception, perpetual swaps have dominated trading volumes across a number of trading venues by several multiples over their spot counterparts. Take a look at 24hr volumes of exchange-traded BTC/USD on two of the leading exchanges (Binance and FTX):
Pros
No expiry: You can hold a position forever without having to worry about the mechanics of rolling a position over from one futures contract to the next.
Make (a lot) more with (a lot) less: You can trade upwards of 100x leverage on some of these exchanges. This means that you can get the upside of 100x your collateral amount. Put differently, if BTC/USD is trading at $10,000 and you were to use $10,000 to buy Bitcoin on the spot markets, you could only buy 1 BTC. However, if you decided to go 100x leverage on a perpetual swap, you could get the upside of 100 BTC for the same $10,000 collateral.
Liquidity: As the most traded product, perpetual swap markets have strong liquidity profiles, making it easier for buyers and sellers to participate.
Cons
Lose (a lot) more: With great power comes great responsibility — going 100x leverage gives you much less downside protection before your position starts getting liquidated.
Funding rate costs: While in theory you could stand to gain from receiving funding rate payments, generally speaking, the funding rate mechanics work against popular trades. Meaning if most people are long BTC and you also want to be long BTC, you are most likely paying a funding rate fee to all the shorts who are keeping the perpetual swap price in line. This ultimately lowers the returns for consensus positions.
https://mirror.xyz/0x0C23E0dE114d28112f52203cb9583B9826b05dDe/8W5T5qFJerprS92FCF9858HFU-juoqoLJFwdEaH2re4
Jan 2022
in the post shanav highlights that if you take crypto options volume (this includes centralized exchanges as well) as a % of spot volume, options only make up 2% of total volume traded. compare this to equity markets where options volume trades at a 35x multiple to spot volume. crypto derivative markets have a massive growth opportunity in front of them and the growth is particularly in defi.
in the rest of the post I will detail types of derivatives and their implementation and design in DeFi. along with innovations and what I am looking at next.
since perps have no expiry the concept of a funding rate is introduced to tether the price of the perp to the current spot price.
mark price refers to the current price of the perp
index price refers to the current spot price to tether to
negative funding rate shorts pay longs
positive funding rate longs pay shorts
funding rate is calculated by funding_period * (mark - index) / index
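a tiny sketch of that formula (simplified; real protocols clamp the rate and add premium and interest components):

```python
# Simplified sketch of the funding-rate formula quoted above.

def funding_rate(mark: float, index: float, funding_period: float = 1.0) -> float:
    return funding_period * (mark - index) / index

print(funding_rate(mark=101.0, index=100.0))   #  0.01 -> perp rich, longs pay shorts
print(funding_rate(mark=99.0, index=100.0))    # -0.01 -> perp cheap, shorts pay longs
```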
perps quickly have become the most liquid and favorite way to trade on centralized exchanges and are a hot topic to be built out in DeFi.
so far there have been about three different perp designs
orderbook + margin account
vAMM
embedded funding rates
a vAMM based model can be extremely advantageous: for a trader there is always guaranteed liquidity since you are trading peer to pool, and vAMMs can very easily be cross-margined with just about any asset since all of the trading is virtual.
in voltz example there are three users
fixed interest rate taker
variable interest rate taker
liquidity provider
if a fixed taker takes a trade that uses the LPs liquidity the LP is locked into the given swap at the rate the fixed taker took. but if a variable taker comes along and uses that same liquidity (the same rate) the positions can be netted out and the LP’s collateral can be freed and reused. in turn making the protocol much more capital efficient (there is more free liquidity to trade through).
even cooler this concept can be applied to most vAMMs. imagining recycling within a perp vAMM is even easier, if traders have opposite positions on the same asset you could settle their trades “off the curve”. additionally this might be an interesting way to integrate limit orders. a trader sets their limit price and if there is another trader that has an open position on the other side it settles. if not the limit is open until it can be filled by the vAMM.
in a way this makes a vAMM function like a pseudo orderbook with pricing based off of a curve.
these are all very raw thoughts, iterate, be wrong, and discuss
as referenced in the background of this post, options are only 2% of spot volume in crypto (note the majority of this 2% is centralized exchange volume not DeFi options), whereas in equity markets options volume trades at a 35x multiple to spot.
either DeFi and overall crypto options haven’t fully been figured out yet or no one wants to trade them.
I’ll take the former all day, right now it seems like DeFi options haven’t entirely taken off due to fragmentation (more on this in the liquidity section), the design, and getting large funds and market makers to trade on various chains.
we have seen over a 10x increase in usage of onchain option protocols. I expect this growth to continue, especially as products become more cleanly packaged, capital efficient, and composable.
I can see assets that volmex offers to be interesting and attractive to traders, however I think they struggle from a nascent market.
although this is not the fault of volmex. if there was a highly liquid onchain options market when they were designing their protocol they could have used that as a source rather than deribit.
as derivatives markets are built out further, I will be watching for volatility products.
liquidity is by far the most important part to design for when creating a derivatives project, there are just so many ways it can be fragmented.
over chains
over protocols
over strikes
over expiries
hegic works by having option writers function as liquidity providers to a pool in which they are locked for the options expiration
hegic was the first to do this, and they have seen considerable first mover advantage
hegic v2 introduced auto-exercising options, pools for both calls and puts, and zero-loss pools (when you provide liquidity you can optionally hedge your liquidity)
pods released a great pool based options model in which the black-scholes pricing model is built into the AMM, so all that needs to be inputted is the IV
right now each pool is a separate option series, but the pods team is looking at how multiple option series could be in a single pool (see section 7 future work)
premia has taken a lot of the options pool concepts to the next level, buyers can select their expiration date and strike price at a granular level and sellers (liquidity providers) can choose which markets they want to underwrite.
lyra functions with two pools, the collateral pool (writes the options) and the delta pool (hedges option writers’ delta). lyra also uses market driven IV which is then inputted to the black-scholes model to determine the option price
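to illustrate the pricing step these pool designs share, here's a minimal black-scholes call pricer that takes an IV as input. this is the textbook formula, not any protocol's actual on-chain implementation, and the numbers below are hypothetical:

```python
# Sketch: given an implied volatility (IV), price a European call with Black-Scholes.
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call_price(spot: float, strike: float, iv: float, t_years: float, r: float = 0.0) -> float:
    n = NormalDist()
    d1 = (log(spot / strike) + (r + 0.5 * iv**2) * t_years) / (iv * sqrt(t_years))
    d2 = d1 - iv * sqrt(t_years)
    return spot * n.cdf(d1) - strike * exp(-r * t_years) * n.cdf(d2)

# e.g. ETH at $3,000, $3,500 strike, 90% IV, 30 days to expiry
print(round(bs_call_price(3_000, 3_500, 0.90, 30 / 365), 2))
```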
all of the research that has gone into designing these protocols is very impressive but due to poor capital efficiency and often much higher prices compared to say deribit most of these protocols have not seen significant volume compared to centralized counterparts.
more on squeeth and power perpetuals later, but its important to know that cleanly packaging a payoff, in a perpetual, and composable fashion will always win out.
these vaults are somewhat of a win-win for all parties. retail has a clean UX, simple, and easy way to generate constant yield. sophisticated market makers can take advantage of the not very competitive option pricing and arb the prices across other option exchanges.
there has been a decent backlash to option vaults on twitter see
all I’ll say on this is option vaults are much more transparent in their onchain activities and structure, it’s just really about communicating that risk to the retail investors using the platform. if you are marketing the option premiums as risk free yield then ya, backlash is deserved.
across all option vault protocols there is about $500m of short dated options being sold each week by DOVs, so much that they are starting to affect the skew
some smart traders knew this was coming, but I assume most did not
if you take away anything from this post, know that your yield will always be compressed even if it is “sustainable”
so far we have seen 40% interest rate on stablecoins, 4-5 figure APYs on the newest yield farm, very high double digit yields on option vaults, and (likely) will see large swings in negative funding via algorithmic stablecoin mints. and this is where things start getting cool… DeFi products affecting the overall crypto market structure (who would have thought back in 2019 when the entire space was sub $1m TVL).
but high tvl vault projects decimating your yield isn’t the point of this post… so back to derivatives.
notice that funding on your favorite perp protocol is positive (longs pay shorts)
spot long an asset
short the asset on the perp protocol
earn the funding rate while being delta neutral
of course you’ll need to continually hedge your positions to keep your delta neutral
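a rough sketch of the trade described above with hypothetical numbers (ignoring fees, borrow costs, and liquidation risk):

```python
# Delta-neutral funding trade sketch: long spot, short the perp, collect funding.
# Hypothetical numbers; purely illustrative.

spot_qty_eth = 10.0          # long 10 ETH in the spot market
perp_short_eth = 10.0        # short 10 ETH of perp notional -> net delta ~ 0

def funding_income(perp_notional_usd: float, funding_rate: float) -> float:
    # With positive funding, longs pay shorts, so the short side collects it.
    return perp_notional_usd * funding_rate

eth_price = 3_000
notional = perp_short_eth * eth_price
# e.g. +0.01% funding collected every 8 hours, 3 times a day:
daily_income = 3 * funding_income(notional, 0.0001)
print(daily_income)          # 9.0 USD per day on $30,000 of notional
```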
as mentioned above a large influx in delta neutral backed stablecoins could push perp markets funding rates negative.
however historically the basis trade has been really profitable
Following a 2x long ETH-USDC on Perp and 2x short ETH-PERP in FTX would have yielded over 100% APR since the market was opened
hidden in this article you will also find some interesting finds relating to the open interest and price on perpetual protocol v1
if your timeline over the past few weeks has been 75% about squeeth like mine, then tweets around “everything being a perp” might have surfaced. here are a few of them
quick overview for MakerDAO and DAI is that DAI is an ETH margined USD perp
DAI holders are long DAI
Vault depositors are short DAI
DAI price is the mark
$1 is the index
and a final exercise to the reader from the above post
This is a collateralized zero coupon bond -- the original stablecoin
okay after that quick overview on how everything is a perp, what can perps learn from algorithmic stablecoins (this specifically applies to those built with an AMM for pricing)?
designing a perp market has the benefit of using an AMM for pricing; most vAMM designs use the constant product formula x*y=k, which has worked quite well.
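a minimal x*y=k vAMM sketch, purely illustrative (the pool is virtual, so it only quotes a mark price; positions and collateral are tracked elsewhere):

```python
# Toy constant-product virtual AMM for a perp market (not any specific protocol).

class VirtualAMM:
    def __init__(self, base_reserve: float, quote_reserve: float):
        self.base = base_reserve        # virtual ETH
        self.quote = quote_reserve      # virtual USD
        self.k = base_reserve * quote_reserve

    def mark_price(self) -> float:
        return self.quote / self.base

    def open_long(self, quote_in: float) -> float:
        """Trader 'buys' base with quote; returns the base position size received."""
        new_quote = self.quote + quote_in
        new_base = self.k / new_quote
        out = self.base - new_base
        self.base, self.quote = new_base, new_quote
        return out

amm = VirtualAMM(base_reserve=1_000, quote_reserve=3_000_000)
print(amm.mark_price())        # 3000.0
size = amm.open_long(30_000)   # a $30k long pushes the mark price up
print(round(size, 4), round(amm.mark_price(), 2))
```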
additionally almost all algorithmic stablecoins trade at a current value (the mark) and peg/target a rate of $1 (the index).
note: that this did not work well for FEI and is since removed from the protocol
as much as direct incentives did not really work for FEI, it could be interesting to try in a perp context. as mark deviates from the index you quadratically scale up/down the funding rate. doing so would hopefully attract arbitrage and thus maintain a tighter mark/index peg.
closing out this aside, I have a feeling that we will see algo stablecoin designs implemented in perps, and protocols described in the context of perps in the future.
Longing Squeeth means you are long gamma, and is similar to holding a perpetual at-the-money call option
Shorting Squeeth means you are short gamma, and is similar to selling a perpetual at-the-money straddle
1x Long ETH Exposure with an oSQTH:USDC LP
1.5x Long ETH Exposure with a oSQTH:ETH LP
Usage as a Volatility oracle
I am extremely excited to see where power perps take us in 2022 simply because I know that the list above is just scratching the surface.
this has already been a somewhat long post, so to end here is my shortlist of innovations I’ll be looking at in 2022.
opyn laid it out nicely
A 0-perp is a stablecoin
A 1-perp is a future
Any p-perp that is not 0 or 1 is a volatility oracle
Power perps can be traded against fixed-expiry power futures
A 2-perp (squeeth) is an excellent hedge for options and constant function market makers such as uniswap and curve
A 0.5 perp (sqrth) is a perfect hedge for a uniswap LP position and it's coming next!
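a toy sketch of the payoff intuition behind that list (normalized, hypothetical numbers; not opyn's implementation):

```python
# Toy "p-perp" index: the instrument tracks price**p, so p=0 is flat
# (stablecoin-like), p=1 is linear (future-like), and p=2 (squeeth) is convex,
# i.e. long gamma. Rebased to 1,000 at the starting price.

def p_perp_index(price: float, p: float, start_price: float = 1_000.0) -> float:
    return (price ** p) / (start_price ** p) * 1_000.0

for price in (800, 1_000, 1_200):
    print(price, [round(p_perp_index(price, p), 1) for p in (0, 0.5, 1, 2)])
# p=2 gains more on the upside (1440) than it loses on the downside (640):
# that asymmetry is the convexity that makes squeeth useful as a hedge.
```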
an increased focus on composability
I feel like many derivative designs have been optimizing for liquidity and the next step will be composability
I am looking at usage in other protocols, cross-margining, and (maybe) cross-chain margin
vaults for everyone
continue to create vault products that are accessible (but transparent) to all investors and allow them to participate in high payoff niche trading strategies
https://medium.com/m2p-yap-fintech/central-bank-digital-currency-cbdc-101-a-p-d6f9a2759c84
M2P Fintech, May 2022
Central Bank Digital Currencies (CBDC) have gained immense traction in recent years. Though the concept is new to India, several countries have successfully seized this opportunity to power their economy. In 2020, over 80% of central banks were researching the latent qualities of CBDC. But now, over 90% of countries are on the CBDC trail.
Why the spotlight on CBDC? Here are a few contributory factors.
· Surge in cryptocurrencies such as Bitcoin, stablecoins, and Libra
· Shift in customer preference towards contact-free payment options (due to COVID-19)
· Sharp increase in the tech-savvy population and ecommerce
· Need for trustworthy, stable, regulated digital currencies to manage threats from risky cryptos
Central banks worldwide are adopting digital currencies to gain deeper insight into money movement and prevent financial crimes. Financial institutions, monetary policy advisors, and tech enthusiasts are analyzing the economic and technical viability of digital currencies and their impact on fiscal policies.
The Central Bank Digital Currency (CBDC) is traditional money in electronic form, issued and regulated by a country’s central bank. Businesses, households, and financial institutions can use CBDCs to manage payments and savings. A nation’s monetary policies, central bank, and trade surpluses determine the supply and value of CBDC. Whether a CBDC uses distributed ledger or blockchain technology depends on its specific requirements.
CBDCs are neither cryptocurrencies nor equivalent to electronic cash. Instead, they exhibit features of both fiat currencies and cryptocurrencies.
Fiat currencies
Fiat currency is physical money issued, circulated, and backed by a country’s government for everyday use. Its value depends on the governance factors, Gross Domestic Product (GDP), and the relationship between demand and supply. Examples of Fiat currencies include US Dollar, Euro, Japanese Yen, British Pound, Indian Rupee, and Mexican Peso.
More secure, less volatile
Like physical currency, CBDCs are expected to have a legal tender status whose value can be stored on central ledgers associated with the national bank. This process makes CBDCs more secure and less volatile than other digital currencies.
The below table compares the features of CBDC against the spectrum of other digital currencies. (Source: Ernst & Young 2021)
The following are the core differences between a CBDC and a Cryptocurrency.
· Type of blockchain network
· Anonymity factor
· Use cases
CBDC is a centralized structure as it is operated by central banks’ blockchain networks. It can be accessed only by financial institutions with privileges.
On the other hand, Cryptocurrency is decentralized. It is hosted by the public in a permission-less/open blockchain network that anyone can access. With cryptocurrency, users enjoy anonymity, but CBDC promotes transparency and accountability.
CBDC can only be used for retail payments and other financial transactions; however, any form of hoarding or investment activity is forbidden. Cryptocurrency has no such limitations and can be used for both speculative purposes and payments.
CBDCs are highly versatile and customizable. They endow central banks and governments with the liberty to evaluate and develop CBDC models to suit their requirements.
For instance, the Monetary Authority of Singapore launched a payment system that improved the speed and reduced the cost of cross-border transactions and foreign currency exchange. And England is evaluating a model that would enable people to use digital currencies alongside cash and bank deposits rather than replacing them.
CBDC models can be broadly classified into two broad categories.
· Wholesale
· Retail
Wholesale CBDC Model
The wholesale CBDC model is the most popular digital currency proposal. It gives financial institutions and banks a robust platform for payment and settlement transactions. Although banks already have direct access to central bank money, this model provides additional efficiency to the existing wholesale financial systems.
This digital currency model does not apply only to money transactions but could also be used in asset transfers between two banks. For example, if two parties are involved in an asset transaction, the wholesale CBDC helps in instantaneous payment and delivery of the asset. Such interbank asset transactions come with a counterparty risk, which could be augmented in the RTGS payment system. Banks believe implementing this model would simplify cross-border transactions, which are usually complex and expensive, as payments are routed through diverse countries, regulations, standards, and technical infrastructures.
Retail CBDC Model
The retail CBDC model will enable households and businesses to own digital currencies and make payments from a wallet or smartphone account. It aims to make financial services easy for the population without access to private banking facilities. Implementation of retail CBDC could reduce the circulation of hard cash since it promotes the use of digital money on a daily basis. It eliminates third-party/intermediary risk and reduces the cost involved in printing and managing cash flow.
Retail CBDC comes in two variants based on access possibilities.
· Account-based access
· Token-based access
Account-based access
In the account-based variant, the transaction between the originator and beneficiary is approved based on the verification of the user identity. Then, the central bank would process the transaction, and funds would instantly transfer from the payer’s CBDC account to the payee’s account. In this model, the central bank would need to create an account for each user with an integrated digital identity system.
Token-based access
The token-based model is similar to regular cash transactions. It entails approval from the originator and beneficiary to transfer the funds successfully. But the transaction would be established by dint of public-private key pairs and digital signatures. This model ensures a high level of privacy as it does not require access to the user identity. On the other hand, there is a potential risk of losing access to the funds if the user forgets the private key.
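As an illustration of the signature-based flow (a sketch only, not any central bank's actual design), the following uses the third-party Python ecdsa package to authorize and verify a hypothetical transfer without revealing the holder's identity:

```python
# Illustrative sketch of token-based access: a transfer is authorized with a
# digital signature rather than an identity check. Requires the `ecdsa` package.
from ecdsa import SigningKey, SECP256k1

# The holder controls a private key; the funds are spendable by whoever can sign.
holder_sk = SigningKey.generate(curve=SECP256k1)
holder_vk = holder_sk.get_verifying_key()

payment = b"pay 100 units to recipient-public-key-hash"
signature = holder_sk.sign(payment)

# The system verifies the signature against the public key associated with the
# funds, without learning who the holder is. Losing the private key loses the funds.
print(holder_vk.verify(signature, payment))   # True
```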
Retail digital currency architecture plays a vital role in tailoring the design according to functionalities. The legal framework of claims and payments operation of the central bank/private institutions are fundamental components of the architecture. They are the foundation of three different retail CBDC architectures.
· Direct
· Indirect
· Hybrid
In all architectures, the central bank oversees the issuance and redeeming process. The variations lie in the structure of legal claims and in the records kept by the central bank. As a result, it has the potential to function on a variety of infrastructures with minimal effort.
Direct (1-Tier) Model
The 1-tier architecture enables end users to directly hold an account with the central bank, eliminating the need for intermediaries. From community banks to informal gig workers, everyone would have access to central bank services. Here the central bank runs the centralized retail ledger and thus maintains the records of all balances and transactions. As the central bank server is involved in all payments, this model is a single-tier design.
Indirect (2-Tier) Model
In the 2-tier model, commercial banks act as intermediaries between the central bank and the end customers. The CBDC claim is on the commercial banks (backed by the central bank). They would hold their reserves with the central banks and are obliged to offer CBDC to consumers on demand. The banks onboard customers and handle the retail payments while the central bank controls wholesale payments. The intermediaries take care of the KYC and regulate the retail payments.
Hybrid Model
Hybrid design is an amalgamation of both direct and indirect architectures. It gives an intermediate solution that facilitates direct claims on central banks collaborating with the private sector. This model allows financial institutions to segregate the CBDC from their balance sheet, which enhances portability. The legal framework helps the segregation of claims from the balance sheets of payment providers. This model gives the payment interface/service provider (or the intermediary) the responsibility of onboarding the customer, making KYC checks, and managing retail transactions.
The hybrid CBDC model is more complex than the indirect model, as the central bank maintains retail balances. However, it has the potential to offer better resilience than the indirect model and is simpler to operate than a direct CBDC. Hence, it is considered the most realistic CBDC model.
As of March 2022, around 87 countries are exploring CBDCs. Nations are either researching or testing the potential of official digital currencies.
Live CBDCs
Countries with live digital currencies are the Bahamas, Eastern Caribbean Union, and Nigeria. The names of the currencies are Bahamas Sand Dollar, DCash, and eNaira, respectively.
Pilot stage
Countries in the pilot stage are Jamaica, Uruguay, Sweden, Ukraine, Russia, South Africa, Saudi Arabia, United Arab Emirates, China, Thailand, Malaysia, and Singapore.
Research & development stage
Canada, Brazil, Switzerland, Turkey, Israel, Lebanon, India, Cambodia, Japan, and Australia are in the advanced research and development stage. The United States of America (Retail), Mexico, Peru, Chile, Kenya, and 40 other countries in Europe, South Asia, and Oceania are in the early research stage.
“The motivation for introducing a central bank digital currency may change as policymakers explore the issue. Simply introducing a complement to cash for retail transactions may not make much of a difference in the economy. On the other hand, using wholesale CBDCs in cross-border transactions has the potential to raise efficiency. Employing new digital tools for policy purposes could really alter the macroeconomic playing field. The bigger the step, the more thought it’s likely to require. Expect that to take time.”
- Johanna Jeansson, Former Bloomberg Economist.
According to reports from global financial institutions, CBDC has tremendous potential to make financial services cost-efficient, accessible, and swift.
As the retail CBDC model allows consumers to have direct access to the central bank funds, it would resolve inclusion issues in countries with large underbanked and unbanked populations. Consumers can save/deposit their money in safe central banks and avoid risky propositions.
In addition, CBDC can help governments track money movement, as all transactions are recorded on the digital ledger. Detecting and avoiding illicit activities will be easier.
In March 2020, the Bank of England released a discussion paper summarizing how adopting CBDC models could help banks maintain monetary and financial stability. According to the report, CBDCs pave the way for innovation and resiliency in payments apart from improving the availability and usability of central bank money.
First off, CBDC will give central banks extensive authority over the money flow and transactions. The banks could place restrictions on the types of transactions. Secondly, as the central bank could access user data and transaction information, privacy issues may arise. As an enormous number of public users will possess retail CBDCs, the model could be vulnerable to cyberattacks. Deposits and commercial loan issuance could shrink, and banks may lose an appreciable slice of their business. This could lead to a drop in commercial bank revenues and could affect the stock market and financial stability.
The latest budget session shed light on India’s approach toward CBDCs. The Reserve Bank of India (RBI) is developing a retail CBDC model using blockchain technology and aims to launch the digital currency by 2023. With the introduction of CBDC, the government aims to boost the digital economy, curb money laundering and other illicit activities, expand the horizons of international commerce, and more. CBDC has the potential to streamline currency management, curtail cash printing and logistical concerns, impel new opportunities, and simplify digital payments.
India, the third-largest fintech ecosystem, holds colossal growth potential for digital transactions. The underbanked and unbanked market with low penetration of financial services needs to be tapped into, besides the tech-savvy population. CBDC together with banks and Fintechs can drive inclusion, transform customer experience, and enable seamless digital transactions.
CBDC will certainly be a game-changer in tomorrow’s economy. Several countries have ascertained that this concept promotes an efficient and transparent financial system. Given the attention and time that governments have dedicated to the research and development, it is just a matter of time before the models become live products and change the dynamics of economics and geopolitics.
In 2014, two academic papers were published: one by Ferdinando Ametrano called “Hayek Money: The Cryptocurrency Price Stability Solution,” and one by Robert Sams titled “A Note on Cryptocurrency Stabilisation: Seigniorage Shares.”
Drawing on Friedrich Hayek’s of the Gold Standard, Ametrano argues that Bitcoin, because of its deflationary nature, cannot adequately perform the unit-of-account function that we require of a currency. Instead, he proposes a rules-based, supply-elastic cryptocurrency that “rebases” (i.e. changes the money supply pro rata across all token holders) according to demand.
Overshadowed by Bitcoin’s snowballing institutional adoption, DeFi’s sweltering summer, and Ethereum’s impending network upgrade, stablecoins have been on a tear of late, with a total market cap that has . This parabolic growth has caught the eye of powerful individuals outside of the cryptoverse, including, most recently, a .
Stablecoins in the first category—namely USDT and USDC, but also exchange-based tokens like BUSD—are centrally managed, backed by, and redeemable one-to-one for, U.S. Dollars. These stablecoins have the advantages of an assured peg and capital efficiency (i.e. no over-collateralization), but their permissioned, centralized nature means that users can be and the peg itself is dependent on the trustworthy behavior of the central entity.
The second category, multi-asset collateralized stablecoins, includes MakerDAO’s DAI and Synthetix’s sUSD. Both of these stablecoins are over-collateralized by cryptoassets, and both rely on price oracles to maintain the peg to the U.S. Dollar. Unlike centralized tokens like USDT and USDC, these can be minted permissionlessly, although in DAI’s case, it’s worth noting that permissioned, centralized assets like USDC can be used as collateral. Moreover, the over-collateralized nature of these stablecoins means that they are extremely capital-intensive, and the highly-volatile, hyper-correlated nature of crypto assets have rendered these stablecoins vulnerable to in the past.
Unlike the other two types of stablecoins, algorithmic stablecoins are neither redeemable one-to-one for U.S. dollars, nor are they currently backed by crypto-asset collateral(3). Finally, and perhaps most importantly, algorithmic stablecoins are often highly : demand is driven in large part—and critics might argue, exclusively—by market sentiment and momentum. These demand-side forces are transposed into the token supply, which in turn generates further directional momentum in what can eventually become a violent feedback loop.
For non-algorithmic stablecoins, network bootstrapping does not involve game-theoretic coordination; each stablecoin is (at least in theory) redeemable for an equal amount of U.S. dollars or other forms of collateral(5). By contrast, successful price stability for algorithmic stablecoins is not at all assured, since it is determined solely by collective market psychology. Haseeb Qureshi aptly: “these schemes capitalize on a key insight: a stablecoin is, in the end, a Schelling point. If enough people believe that the system will survive, that belief can lead to a virtuous cycle that ensures its survival.”
Let us now turn from abstract theory to the real world of algorithmic stablecoins, beginning with the largest yet simplest protocol in existence today: .
As noted earlier, Ampleforth is nearly identical to Ferdinando Ametrano’s proposed “Hayek Money.” The supply of AMPLs expands and contracts according to a deterministic rule based on the daily (TWAP) per AMPL: below the price target range (i.e. below $0.96), the supply contracts, and above it (i.e. above $1.06), the supply expands. Crucially, every single wallet “participates” proportionally in each supply change. If Alice holds 1,000 AMPLs before a rebase and the supply expands by 10%, Alice now holds 1,100; if Bob had 1 AMPL, he now holds 1.1 AMPLs.
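A minimal sketch of such a proportional rebase, using Alice's and Bob's numbers (simplified; the live protocol smooths supply changes over a reaction window):

```python
# Simplified proportional rebase as described above (illustrative only).

def rebase(balances: dict, twap: float, target: float = 1.0,
           lower: float = 0.96, upper: float = 1.06) -> dict:
    if lower <= twap <= upper:
        return balances                       # inside the band: no supply change
    factor = twap / target                    # e.g. TWAP $1.10 -> +10% supply
    return {holder: amount * factor for holder, amount in balances.items()}

wallets = {"alice": 1_000.0, "bob": 1.0}
print(rebase(wallets, twap=1.10))   # {'alice': 1100.0, 'bob': 1.1}
# Everyone's share of the network is unchanged; only the unit count moves.
```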
The network-wide “rebase” is what differentiates Ampleforth’s algorithmic model from the seigniorage shares models adopted by other protocols. While the Ampleforth does not provide a rationale for the single-token rebasing design as opposed to the multi-token approach, there would seem to be two primary justifications for this design decision.
The first is simplicity. Regardless of how well it works in practice, Ampleforth’s single-token model has an elegant simplicity that other algorithmic stablecoins cannot match. Second, Ampleforth’s single-token design purports to be the fairest algorithmic stablecoin model. In stark contrast to fiat monetary policy actions, which disproportionately benefit those individuals “closest” to the monetary source (the “Cantillon Effect”), Ampleforth’s design enables all token holders to retain the same network share after each rebase. Ametrano makes this exact point in his 2014 paper, where he details the “severe unfairness” of monetary policy actions and contrasts this with the comparative equity of “Hayek money.”
Such are the putative justifications for the Ampleforth model, which has been copied by other rebasing tokens like and . But before turning to the model’s flaws, we might first look at the year and a half of data on Ampleforth’s performance available to us. Since its inception in mid 2019 (just over 500 days), more than three quarters of Ampleforth’s daily rebases have been positive or negative—meaning, in other words, that the TWAP of AMPL has been outside of the target range at over 75% of the rebases since launch. To be sure, the protocol is still in its fledgling stages, so it would be premature to dismiss it on these grounds alone. Nevertheless, we will soon examine how a modified “seigniorage” stablecoin, Empty Set Dollar, has managed to stay over twice as stable as Ampleforth in its first months of existence.
Ampleforth’s defenders often shrug off the lack of stability; many of them would even” Their argument is that it would be sufficient for Ampleforth to be a portfolio-diversifying “uncorrelated reserve asset.” However, this argument is questionable. Take, for example, a cryptocurrency that rebases every day according to a random number generator. Like Ampleforth, this token would have a “distinct volatility footprint,” but it would certainly not be valuable for that reason alone. Ampleforth’s value proposition rests on its tendency to move toward equilibrium, a quality that would theoretically enable AMPL to become a price-denominating currency.
Just over a week old, is an overt attempt to revive , an algorithmic stablecoin project that raised over $100 million in 2018 with much fanfare, but never ended up launching. Like Basis, Basis Cash is a multi-token protocol that consists of three tokens: BAC (the algorithmic stablecoin), Basis Cash Shares (holders of which can claim BAC inflation when the network expands), and Basis Cash Bonds (which can be purchased at a discount when the network is in contraction and can be redeemed for BAC when the network exits its deflationary phase). Basis Cash is still in its early stages of development and has encountered some early development hitches; the protocol has yet to undergo a successful supply change.
However, another Seigniorage Shares-esque protocol, , has been live since September and has already navigated multiple expansion and contraction cycles. In fact, of ESD’s more than 200 supply “epochs” thus far (one every eight hours), nearly 60% have occurred when the TWAP of ESD is within the $0.95 < x < $1.05 range—meaning that ESD has been more than twice as stable as Ampleforth, albeit over its much shorter life span(6).
For his part, Ampleforth founder and CEO Evan Kuo has algorithmic stablecoin projects like Basis Cash because they “rely on debt marketplaces (ie: bonds) to regulate supply.” Exhorting people to stay away from these “zombie ideas,” Kuo argues that these algorithmic stablecoins are flawed because, like traditional markets, they “will always rely on lenders of last resort (ie: bailouts).”
However, Kuo’s argument is question-begging, since it assumes, absent any justification, that reliance on debt marketplaces (“bailouts”) is inherently dangerous. In reality, debt-financing is problematic in traditional markets because of moral hazard; business entities that are “too big to fail” can take non-penalized risks by socializing the cost of bailouts. Algorithmic stablecoins like ESD and Basis Cash do not have the same luxury that Fannie Mae and Freddie Mac had during the 2008 Financial Crisis. For these protocols, there is not a lender of last resort outside of the system to whom bailout costs can be transferred. It is entirely possible for ESD or Basis Cash to enter into a debt spiral, in which debt accumulates without willing financiers, and the protocol collapses(12).
In fact, Ampleforth also requires debt financing in order to avoid a death spiral. The difference is that this debt financing is hidden in plain sight, as it is simply spread across all network participants. Unlike with ESD and Basis Cash, it is impossible to participate in the Ampleforth system without also acting as an investor in the protocol. Holding AMPL while the network is in contraction is akin to bearing the network’s debt (“acting as central bank,” to use ), since AMPL holders lose tokens with each negative supply rebase.
From both first-principles reasoning and empirical data, we can conclude that the multi-token, “Seigniorage Shares”-inspired model has significantly more built-in stability than its single-token rebasing alternative. Indeed, Ferdinando Ametrano has his “first simplistic implementation” of Hayek Money from 2014, and he now favors a multi-token, seigniorage-based model in light of the problems outlined above.
and do exactly that. ESD v2 is still in the research and discussion phases, after which it will eventually be voted on by governance. If implemented, the upgrade would make several substantial changes to the current ESD protocol. Chief among them is the introduction of a “reserve requirement.”
This minting/redeeming mechanism is at the heart of the Frax network, since it utilizes a dynamic fractional reserve system. To mint one FRAX, a user must deposit some combination of Frax Shares (FXS) and other collateral (USDC or USDT) worth one dollar. The ratio of FXS to other collateral is determined dynamically by demand for FRAX (as demand rises, the proportion of FXS to other collateral increases). Locking up FXS to mint FRAX has deflationary effects on the FXS supply, so as more FXS is required to mint FRAX, demand for FXS will naturally increase as supply drops. Conversely, as Frax’s documentation , during contraction, “the protocol recollateralizes the system so that redeemers of FRAX receive more FXS and less collateral from the system. This increases the ratio of collateral in the system as a proportion of FRAX supply, increasing market confidence in FRAX as its backing increases.”
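A rough sketch of the fractional minting arithmetic described above, with hypothetical numbers (an illustration of the idea, not Frax's actual contract logic):

```python
# Toy fractional mint: $1 of stablecoin requires `collateral_ratio` dollars of
# collateral plus (1 - collateral_ratio) dollars of the share token, which is burned.

def mint_frax(frax_amount: float, collateral_ratio: float, fxs_price: float):
    usdc_needed = frax_amount * collateral_ratio
    fxs_value = frax_amount * (1 - collateral_ratio)
    fxs_burned = fxs_value / fxs_price
    return usdc_needed, fxs_burned

# At a hypothetical 85% collateral ratio and FXS at $5, minting 1,000 FRAX:
print(mint_frax(1_000, collateral_ratio=0.85, fxs_price=5.0))  # (850.0, 30.0)
```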
Nevertheless, it would be foolish to dismiss algorithmic stablecoins at this early stage. It would also be a mistake to forget how high the stakes truly are. In his 1976 tour de force, , Hayek writes: “I believe we can do much better than gold ever made possible. Governments cannot do better. Free enterprise, i.e. the institutions that would emerge from a process of competition in providing good money, no doubt would.” Though still in their nascency, algorithmic stablecoins might eventually serve as a blueprint for, and stepping stone to, Hayek’s vision of a flourishing market for money.
If you have come this far, you probably do not need this reminder, but the same mental models mentioned in apply here as well. If you haven’t done so, I invite you to read the Two mental models part in Section 3. If you have already read Section 3, this is just a TLDR reminder of those two mental models.
This article is part of the Stablecoin Primer series. If you are interested in reading the other articles, .
“Stablecoins are one of the three trillion dollar narratives in crypto” , founder of the algorithmic stablecoin issuer
The above should not surprise you at this point. In , we discussed that the total addressable market of the stablecoin industry could be as high as $57 trillion — that’s because stablecoins are not only safe havens from crypto’s volatility but also global reserve currency candidates that are much more accessible and advanced than the US dollar.
Layer 1 — Functions of money: In , we discussed how money is like a technology that has three main functions: unit of account, medium of exchange, and store of value. Building on that, in , via real life product market fit examples and use cases, we established how stablecoins successfully posses these functions in real life. Check
Layer 2 — Intrinsic factors: In and , we discussed the intrinsic factors of stablecoins, which we called Design Principles. By discussing how the mechanisms behind stablecoins like Tether’s USDT, Maker’s DAI, and Terra’s UST work, we answered the simple and sought-after question — “so how do these stablecoins actually work?” While stablecoin mechanisms are still under question, especially after , it is safe to say that certain mechanism types like fractional algorithmic stablecoins are proving to function better than the others. As different mechanism design experiments continue, we are getting closer to meeting all the requirements in this layer too.
Utility: As the founder of the Terra stablecoin (rip) Do Kwon , great stablecoin mechanisms need to be supported by even greater economies around them. In the fiat economy, we use money for so many purposes (e.g., transact, save, donate, invest, burn). Stablecoins need this type of utility too. Users need simple on/off ramp solutions to move their fiat into stablecoins and be able to use their stablecoins both in the fiat economy and in the blockchain economy. At the end of the day, stablecoin is a money business and what good does money have if it can’t be used?
Regulation: Haseeb Qureshi of Dragonfly Capital that “all the other avenues have to be exhausted before decentralized stablecoins have a chance of winning.” Along those lines, right now, most stablecoin usage is in terms of fiat-backed stablecoins (e.g., USDC). While this is thanks to their simple mechanism design, strong stability, and early market entry, fiat-backed stablecoins are fully permissioned and non-scalable. This means that if a regulator wants to censor the usage of a stablecoin (e.g., thinking that it poses a threat to other fiat monies), they can easily do so. Additionally, their scaling is dependent on how much fiat collateral is available to them. Decentralized stablecoins, however, provide much more flexibility to their users given they are fully blockchain based, permissionless, and non-custody taking. What this means is that increased regulation on fiat-backed stablecoins may actually lead to the increased adoption of decentralized stablecoins as people shy away from regulation.
Inflation of the US dollar: Even throughout the writing of the Primer, inflation in the US continued to increase. At , inflation eats into people’s savings in fiat money, making the US dollar a questionable store of value. But as the World’s reserve currency, wasn’t the US dollar supposed to be super stable? Maybe not. Stablecoin projects like and Frax’s aim to solve this problem by pegging their stable tokens’ value not to the US dollar but to the consumer price index (i.e., the real price of goods & services in the economy.) The point is, stablecoins have flexible designs and they will always follow the more stable alternative. Increased and continued inflation of the US dollar may lead to the adoption of such stablecoins, expediting stablecoins’ mainstream adoption.
Layer 4 — Trust as a function of time: Trust takes years to build and seconds to break. This applies to trust in money as well. When the US dollar was first established via the , not everyone immediately accepted the rule that 1 dollar should equal 416 grains of standard silver. It required years of consensus building for the US dollar to become engrained in the economy and hit the world stage. Now, the US dollar is not really backed by any hard asset but people still have faith in it because it has been working well for them for a long enough time.
Because of the difficulty of building trust, many forms of private money have come and gone. Similarly, many algorithmic stablecoins have lost their peg to 1 US dollar (e.g., UST, , ). Some advocate that a trustless way of building money is via overcollateralization (think crypto-backed stablecoins from Section 3). The logic is that, with this method, users do not need to trust any central issuer because they know that their stablecoins are backed by real collateral behind them. But then again, overcollateralization falls short as it is not scalable to meet the needs of billions of users. This makes algorithmic stablecoins potentially better suited as the stablecoin type that can reach mainstream adoption. And while an extremely difficult feat to accomplish, survival across long durations of time seems to be the last layer in stablecoins’ chain of adoption.
If you share the same excitement, let’s connect via .
All stablecoins imply a peg. Stablecoins generally peg to the US dollar (so each stablecoin trades at $1), but they sometimes peg to other major currencies or to the .
This is not to say that stablecoins are impossible. Stablecoins are just currency pegs, and currency pegs are certainly not impossible — there are . However, almost all large central banks have moved away from currency pegs. This is in part because they’ve realized pegs tend to be . History has taught us again and again, whether it be , the , or the infamous (when George Soros “broke the bank of England”), no currency peg can be maintained against .
If you assume currency markets are performing a , this implies every peg will eventually walk outside of its stable band and break. But the sun will also eventually swallow up the solar system, so screw it — we can call a peg stable if it lasts 20 years. Even in fiat years, that’s pretty good.
The final two points matter a great deal, because currency pegs are all about . If market participants cannot identify when a peg is objectively weak, it becomes easy to spread false news or incite a market panic, which can trigger further selling — basically, a death spiral. A transparent peg is more robust to manipulation or sentiment swings.
This is essentially what Tether purports to be, though they have not been recently audited, and Tether is actually a fractional reserve that does not hold all of the fiat it claims to. Other stablecoins like are trying to do the same thing, but with more transparency. is a similar scheme, except the collateral is gold instead of fiat. Nevertheless, it shares the same fundamental properties.
The first way is to simply have someone continually publish a price feed onto the blockchain. This is obviously vulnerable to manipulation, but this may be good enough if the publisher is trustworthy. The second way is to use a scheme, along the lines of . This is much more complex and requires a lot of coordination, but is ultimately less centralized and less manipulable.
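As a toy illustration of why the multi-reporter approach is harder to manipulate, a feed that takes the median over many independent reporters barely moves when a single publisher misbehaves (a sketch, not any particular scheme's implementation):

```python
# Toy price-feed aggregation: the median resists a single malicious reporter.
from statistics import median

def aggregate_price(reports: list) -> float:
    return median(reports)   # moving the median requires corrupting >50% of reporters

honest = [1.00, 1.01, 0.99, 1.00, 1.02]
with_attacker = honest + [5.00]          # one wildly wrong report
print(aggregate_price(honest))           # 1.0
print(aggregate_price(with_attacker))    # 1.005
```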
The first stablecoin to use this scheme was (collateralized with BitShares), created by Dan Larimer back in 2013. Since then, MakerDAO’s is widely considered the most promising crypto-collateralized stablecoin, collateralized by Ether. An interesting scheme proposed by Vitalik Buterin is using to issue stablecoins against loans with different tranches of seniority (the most senior tranches could act as stablecoins).
This idea is not completely novel — its roots can be traced to in the 70s. A privately issued, non-collateralized, price-stable currency could pose a radical challenge to the dominance of fiat currencies. But how would you ensure it remains stable?
Enter Seignorage Shares, a scheme invented by Robert Sams in 2014. Seignorage Shares is based on a simple idea. What if you model a smart contract as a central bank? The smart contract’s monetary policy would have only one mandate: issue a currency that will trade at $1.
For example, let’s say the coin is trading at $2. This means the price is too high — or put another way, the supply is too low. To counteract this, the smart contract can mint new coins and then auction them on the open market, increasing supply until the price returns to $1. This would leave the smart contract with some extra profits. Historically, when governments minted new money to finance their operations, the profits were called the seigniorage.
The most promising project in this category is , which builds upon Seignorage Shares by adding a first-in-first-out “bond” queue. They claim that this addition improves the stability properties of the protocol, and have performed several simulations to model various outcomes.
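A toy sketch of the central-bank-as-smart-contract idea (greatly simplified, and not the logic of any live protocol):

```python
# Toy Seignorage Shares policy: expand supply above $1, issue discounted bonds below it.

def policy_action(price: float, supply: float, elasticity: float = 0.5) -> str:
    if price > 1.0:
        new_coins = supply * (price - 1.0) * elasticity
        return f"mint and auction {new_coins:,.0f} coins (profit = seigniorage)"
    if price < 1.0:
        bonds = supply * (1.0 - price) * elasticity
        return f"sell {bonds:,.0f} bonds at a discount, burn the coins received"
    return "hold: price is at the $1 target"

print(policy_action(price=2.00, supply=1_000_000))  # expand supply
print(policy_action(price=0.90, supply=1_000_000))  # contract supply
```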
The chapter sets out the unique features of CBDCs, asking what their issuance would mean for users, financial intermediaries, central banks and the international monetary system. It presents the design choices and the associated implications for data governance and privacy in the digital economy. The chapter also outlines how CBDCs compare with the latest generation of retail fast payment systems (FPS, see ).2
Throughout the long arc of history, money and its institutional foundations have evolved in parallel with the technology available. Many recent payment innovations have built on improvements to underlying infrastructures that have been many years in the making. Central banks around the world have instituted real-time gross settlement (RTGS) systems over the past decades. A growing number of jurisdictions (over 55 at the time of writing)3 have introduced retail FPS, which allow instant settlement of payments between households and businesses around the clock. FPS also support a vibrant ecosystem of private bank and non-bank payment service providers (PSPs, see ). Examples of FPS include TIPS in the euro area, the Unified Payments Interface (UPI) in India, PIX in Brazil, CoDi in Mexico and the FedNow proposal in the United States, among many others. These developments show how innovation can thrive on the basis of sound money provided by central banks.
Yet further-reaching changes to the existing monetary system are burgeoning. Demands on retail payments are changing, with fewer cash transactions and a shift towards digital payments, in particular since the start of the Covid-19 pandemic (, left-hand and centre panels). In addition to incremental improvements, many central banks are actively engaged in work on CBDCs as an advanced representation of central bank money for the digital economy. CBDCs may give further impetus to innovations that promote the efficiency, convenience and safety of the payment system. While CBDC projects and pilots have been under way since 2014, efforts have recently shifted into higher gear (, right-hand panel).
Perhaps the most significant recent development has been the entry of big techs into financial services. Their business model rests on the direct interactions of users, as well as the data that are an essential by-product of these interactions. As big techs make inroads into financial services, the user data in their existing businesses in e-commerce, messaging, social media or search give them a competitive edge through strong network effects. The more users flock to a particular platform, the more attractive it is for a new user to join that same network, leading to a "data-network-activities" or "DNA" loop (see ).
Entrenchment of market power may potentially exacerbate the high costs of payment services, still one of the most stubborn shortcomings of the existing payment system. An example is the high merchant fees associated with credit and debit card payments. Despite decades of ever-accelerating technological progress, which has drastically reduced the price of communication equipment and bandwidth, the cost of conventional digital payment options such as credit and debit cards remains high, and still exceeds that of cash (, left-hand panel). In some regions, revenues deriving from credit card fees are more than 1% of GDP (right-hand panel).
The availability of massive amounts of user data gives rise to another important issue – that of data governance. Access to data confers competitive advantages that may entrench market power. Beyond the economic consequences, ensuring privacy against unjustified intrusion by both commercial and government actors has the attributes of a basic right. For these reasons, the issue of data governance has emerged as a key public policy concern. When US consumers were asked in a representative survey whom they trust with safeguarding their personal data, the respondents reported that they trust big techs the least (, left-hand panel). They have far more trust in traditional financial institutions, followed by government agencies and fintechs. Similar patterns are present in other countries (right-hand panel). The survey reveals a number of concerns, but the potential for abuse of data emerges as an important element. A later section of this chapter discusses data governance issues more fully.
Central banks are accountable public institutions that play a pivotal role in payment systems, both wholesale and retail. They supply the ultimate means of payment for banks (bank reserves), and a highly convenient and visible one for the public (cash). Moreover, in their roles as operators, overseers and catalysts, they pursue key public interest objectives in the payments sphere: safety, integrity, efficiency and access (see ).
Second, central banks provide the means for ensuring the finality of wholesale payments by using their own balance sheets as the ultimate means of settlement, as also reflected in legal concepts of finality (see ). The central bank is the trusted intermediary that debits the account of the payer and credits the account of the payee. Once the accounts are debited and credited in this way, the payment is final and irrevocable.
Central bank digital currencies should be viewed in the context of these functions of the central bank in the monetary system. Wholesale CBDCs are for use by regulated financial institutions. They build on the current two-tier structure, which places the central bank at the foundation of the payment system while assigning customer-facing activities to PSPs. The central bank grants accounts to commercial banks and other PSPs, and domestic payments are settled on the central bank's balance sheet. Wholesale CBDCs are intended for the settlement of interbank transfers and related wholesale transactions, for example to settle payments between financial institutions. They could encompass digital assets or cross-border payments. Wholesale CBDCs and central bank reserves operate in a very similar way. Settlement is made by debiting the account of the bank that has net obligations to the rest of the system and crediting the account of the bank that has a net claim on the system. An additional benefit of settlement in wholesale CBDCs is to allow for new forms of the conditionality of payments, requiring that a payment only settles on condition of delivery of another payment or delivery of an asset. Such conditional payment instructions could enhance the delivery-versus-payment mechanism in RTGS systems (see ).
One attribute of retail CBDCs is that they do not entail any credit risk for payment system participants, as they are a direct claim on the central bank (). A retail CBDC is akin to a digital form of cash, the provision of which is a core responsibility of central banks. Other forms of digital retail money represent a claim on an intermediary. Such intermediaries could experience illiquidity due to temporary lack of funds or even insolvency, which could also lead to payment outages. While such risks are already substantially reduced through collateralisation and other safeguards in most cases, retail CBDCs would put an end to any residual risk.
Retail CBDCs come in two variants (). One option makes for a cash-like design, allowing for so-called token-based access and anonymity in payments. This option would give individual users access to the CBDC based on a password-like digital signature using private-public key cryptography, without requiring personal identification. The other approach is built on verifying users' identity ("account-based access") and would be rooted in a digital identity scheme.15 This second approach is more compatible with the monitoring of illicit activity in a payment system, and would not rule out preserving privacy: personal transaction data could be shielded from commercial parties and even from public authorities by appropriately designing the payment authentication process. These issues are intimately tied to broader policy debates on data governance and privacy, which we return to in a later section.
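Purely as an illustration of what "token-based" access means in practice, the sketch below signs a payment instruction with a private key and verifies it against the corresponding public key, so that the ledger operator never needs to know who the key holder is. The message format and the choice of library (PyNaCl) are illustrative only and do not reflect any actual CBDC design.

```python
# Illustrative sketch only: token-based access means the holder proves control of a
# private key rather than an identity. Uses the PyNaCl library (pip install pynacl).
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# The user generates a key pair; the public (verify) key acts as a pseudonymous "address".
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# A payment instruction is authorised by signing it with the private key.
payment = b"pay 25.00 to address 7f3a...; nonce 42"
signed = signing_key.sign(payment)

# Whoever operates the ledger can check the signature against the public key,
# without ever learning who the key holder is.
try:
    verify_key.verify(signed)
    print("signature valid: payment can be processed")
except BadSignatureError:
    print("signature invalid: payment rejected")
```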
Wholesale CBDCs are intended for the settlement of interbank transfers and related wholesale transactions. They serve the same purpose as reserves held at the central bank but with additional functionality. One example is the conditionality of payments, whereby a payment only settles if certain conditions are met. This could encompass a broad variety of conditional payment instructions, going far beyond today's delivery-versus-payment mechanism in real-time gross settlement (RTGS) systems. In effect, wholesale CBDCs could make central bank money programmable, to support automation and mitigate risks. Further, wholesale CBDCs would be implemented on new technology stacks. This clean-slate approach would let wholesale CBDC systems be designed with international standards in mind to support interoperability.
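As a stylised sketch of such conditionality, the hypothetical settlement function below executes the cash and asset legs of a trade only if both can settle, mimicking a delivery-versus-payment rule. Ledger names and amounts are invented and do not correspond to any existing system.

```python
# Schematic sketch of a conditional (delivery-versus-payment) settlement rule:
# the cash leg settles only if the securities leg settles in the same step, and vice versa.
def settle_dvp(cash_ledger, securities_ledger, buyer, seller, price, quantity):
    """Atomically settle a trade: cash moves only if securities move."""
    if cash_ledger.get(buyer, 0) < price:
        return False  # buyer lacks wholesale CBDC balance: nothing settles
    if securities_ledger.get(seller, 0) < quantity:
        return False  # seller lacks the asset: nothing settles
    # Both conditions hold, so both legs are executed together.
    cash_ledger[buyer] -= price
    cash_ledger[seller] = cash_ledger.get(seller, 0) + price
    securities_ledger[seller] -= quantity
    securities_ledger[buyer] = securities_ledger.get(buyer, 0) + quantity
    return True

cash = {"bank_a": 1_000_000, "bank_b": 500_000}
bonds = {"bank_b": 10_000}
print(settle_dvp(cash, bonds, buyer="bank_a", seller="bank_b", price=950_000, quantity=10_000))
print(cash, bonds)
```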
State-of-the-art approaches in this domain are exemplified by Project Helvetia – a joint experiment by the BIS Innovation Hub Swiss Centre, SIX Group AG and the Swiss National Bank. This project demonstrates the feasibility of settling digital assets in central bank money. Two proofs-of-concept (PoCs) were compared: (i) issuing a novel wholesale CBDC (, top panel) and (ii) building a link between the new SIX Digital Exchange (SDX) platform and the existing RTGS central bank payment system, Swiss Interbank Clearing (SIC) (bottom panel). Both PoCs were found to be functionally feasible, and transfers were shown to be legally robust and final. Each PoC presents different practical and operational benefits and challenges.
For details of the underlying technology, see R Auer, R Böhme and A Wadsworth, "An introduction to public-private key cryptography in digital tokens", BIS Quarterly Review, March 2020, p 73; M Bech, J Hancock, T Rice and A Wadsworth, "On the future of securities settlement", BIS Quarterly Review, March 2020, pp 67–83. Arrangements for interoperability between domestic CBDCs are discussed in R Auer, P Haene and H Holden, "Multi-CBDC arrangements and the future of cross-border payments", BIS Papers, no 115, March 2021.
The analogy with the payment system is that the market stallholders in the public town square are like PSPs, each offering basic payment functionality with their particular bundle of services, such as banking, e-commerce, messaging and social media. Just as the market stallholders must stick to the standards laid down by the town authorities, these PSPs must adhere to various technical standards and data access requirements. These include technical standards such as application programming interfaces (APIs) that impose a common format for data exchange from service providers (see ). Together with data governance frameworks that assign ownership of data to users, these standards ensure interoperability of the services between PSPs so that they can work seamlessly for the user. Two instances of APIs are account information services (AIS) and payment initiation services (PIS). AIS allow users to "port" data on their transactions from one provider to another. For instance, a user who has accounts with two different banks can open the app of one bank to check the balances in the other. PIS allow a user to operate the app of one PSP to make an outgoing payment from the account of another.
An application programming interface (API, see ) acts as a digital communication interface between service providers and their users. In its simplest form, a modern payment API first takes a request from an authorised user (eg a user who wants to send a friend money through a mobile banking app). It then sends the request to a server to obtain information (eg the friend's bank account details or the sender's account balance). Finally, it reports the retrieved information back to the user (the money has been sent).
Payment APIs may offer software that allows organisations to create interoperable digital payment services to connect customers, merchants, banks and other financial providers. Examples include Mojaloop, an open source system, and the Unified Payment Interface (UPI) in India. For example, to send money to another user via an API, all that is required from the sender's perspective is the unique phone number of the recipient. Behind the scenes, the payment process follows three general steps (). In the first step, the phone number provided is used to identify and authenticate the unique recipient, as well as their bank connection, account details etc. The second step is agreement, in which the recipient's bank (or financial services provider) needs to agree to the transaction on the customer's behalf. During this second step, it is verified that the transaction satisfies rules and regulations (eg sufficient funds and compliance with know your customer (KYC) and anti-money laundering and combating the financing of terrorism (AML/CFT) standards). Once there is agreement, in a third step funds are transferred and made available to the recipient immediately. In all steps, cryptography ensures that the transaction is non-repudiable and that information is shared securely.
See Mojaloop Foundation, "Open Source Software for Payment Interoperability", accessed 11 May 2021; and D D'Silva, Z Filková, F Packer and S Tiwari, "The design of digital financial infrastructure: lessons from India", BIS Papers, no 106, December 2019. "Moja" is Swahili for "one", to underscore the aim of achieving interoperability in a single system.
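The three-step flow described above can be sketched schematically as follows. The function names and data structures are hypothetical and do not correspond to Mojaloop's or UPI's actual interfaces.

```python
# Hypothetical sketch of the three-step flow described above (identify, agree, transfer).
# None of these functions correspond to a real Mojaloop or UPI endpoint.
def lookup_recipient(phone_number, directory):
    """Step 1: resolve a phone number to the recipient's provider and account."""
    return directory[phone_number]           # e.g. {"provider": "bank_b", "account": "acc-42"}

def request_agreement(provider, account, amount, sender_checks_ok):
    """Step 2: the recipient's provider agrees to the transfer if checks pass."""
    return sender_checks_ok and amount > 0   # stands in for funds, KYC and AML/CFT checks

def transfer(ledger, sender_acct, recipient_acct, amount):
    """Step 3: move funds and make them available to the recipient immediately."""
    ledger[sender_acct] -= amount
    ledger[recipient_acct] = ledger.get(recipient_acct, 0) + amount

directory = {"+254700000001": {"provider": "bank_b", "account": "acc-42"}}
ledger = {"acc-7": 100.0, "acc-42": 5.0}

recipient = lookup_recipient("+254700000001", directory)
if request_agreement(recipient["provider"], recipient["account"], 20.0, sender_checks_ok=True):
    transfer(ledger, "acc-7", recipient["account"], 20.0)
print(ledger)  # {'acc-7': 80.0, 'acc-42': 25.0}
```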
Much as the local authorities preside over their town's marketplace, a central bank can provide the payment system with access to its settlement accounts. In the case of a retail FPS, the balance sheet of the central bank is, metaphorically speaking, a public space where the sellers of the payment services all interact. The central bank is best placed to play this role, as it issues the economy's unit of account and ensures ultimate finality (see ) of payments through settlement on its balance sheet. The central bank can also promote innovation in this bustling payments marketplace, where the network effects can be channelled towards achieving a virtuous circle of greater participation, lower costs and better services.
Whether retail CBDCs will play a similarly beneficial role will depend on the way that CBDCs frame the interaction between PSPs and their ancillary services. In a general sense, the public good nature of both CBDCs and retail FPS can be seen as resting on an open payment system built around the interoperability of the services offered by PSPs. Comparing cash, retail CBDCs and FPS along dimensions relevant for users and public policy reveals several similarities, but also differences.
Although CBDCs and FPS have many characteristics in common, one difference is that CBDCs extend the unique features and benefits of today's digital central bank money directly to the general public.17 In a CBDC, a payment only involves transferring a direct claim on the central bank from one end user to another. Funds do not pass over the balance sheet of an intermediary, and transactions are settled directly in central bank money, on the central bank's balance sheet and in real time. By contrast, in an FPS the retail payee receives final funds immediately, but the underlying wholesale settlement between PSPs may be deferred.18 This delay implies a short-term loan between the parties, together with the associated credit risk (): the payee's bank credits the payee's account in real time, while acquiring a corresponding claim on the payer's bank. In an FPS with deferred settlement, credit exposures between banks accumulate during the delay, for example over weekends. This exposure may be fully or partially collateralised – an institutional safeguard designed by the central bank.
Indeed, there are good arguments against a one-tier system fully operated by the central bank, ie a direct CBDC (, top panel).21 Direct CBDCs would imply a large shift of operational tasks (and costs) associated with user-facing activities from the private sector to the central bank. These include account opening, account maintenance and enforcement of AML/CFT rules, as well as day-to-day customer service. Such a shift would detract from the role of the central bank as a relatively lean and focused public institution at the helm of economic policy.
One possibility is an operational architecture in which the private sector onboards all clients, is responsible for enforcing AML/CFT regulations and ongoing due diligence, and conducts all retail payments in real time. However, the central bank also records retail balances. This "hybrid" CBDC architecture (, centre panel) allows the central bank to act as a backstop to the payment system. Should a PSP fail, the central bank has the necessary information – the balances of the PSP's clients – allowing it to substitute for the PSP and guarantee a working payment system. The e-CNY, the CBDC issued by the People's Bank of China and currently in a trial phase, exemplifies such a hybrid design.23
An alternative model is one in which the central bank does not record retail transactions, but only the wholesale balances of individual PSPs (, bottom panel). The detailed records of retail transactions are maintained by the PSP. The benefits of such an "intermediated" CBDC architecture would be a diminished need for centralised data collection and perhaps better data security due to the decentralised nature of record-keeping – aspects that have been discussed in several advanced economies.24 By reducing the concentration of data, such designs could also enhance privacy (see next section). The downside is that additional safeguards and prudential standards would be necessary, as PSPs would need to be supervised to ensure at all times that the wholesale holdings they communicate to the central bank accurately reflect the retail holdings of their clients.
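A stylised way to see the difference between the hybrid and intermediated architectures is in what the central bank itself records. The sketch below is illustrative only; the account labels and balances are invented.

```python
# Stylised sketch of what the central bank records under the two architectures above
# (labels and numbers are illustrative, not taken from any actual CBDC design).

# Hybrid CBDC: PSPs handle customers, but the central bank also keeps retail balances,
# so it could step in and honour claims if a PSP failed.
central_bank_hybrid = {
    "psp_1": {"alice": 120.0, "bob": 40.0},
    "psp_2": {"carol": 75.0},
}

# Intermediated CBDC: the central bank records only each PSP's wholesale balance;
# the retail detail stays with the PSPs and must be reconciled against the wholesale total.
central_bank_intermediated = {"psp_1": 160.0, "psp_2": 75.0}
psp_1_books = {"alice": 120.0, "bob": 40.0}

# The supervisory check mentioned in the text: the wholesale holdings reported to the
# central bank must equal the sum of the PSP's retail client balances.
assert central_bank_intermediated["psp_1"] == sum(psp_1_books.values())
```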
In addition to these operational considerations, the broader impact on financial intermediation is an important element in assessing the economic effects of CBDCs. Just like cash, CBDCs can be designed to maximise usefulness in payments without giving rise to large inflows onto the central bank's balance sheet. The design of CBDCs should further mitigate the systemic implications for financial intermediation by ensuring that commercial banks can continue to serve as intermediaries between savers and borrowers. While cash offers safety and convenience in payments, it is not widely used as a store of value. Today, consumers' holdings of cash for payment purposes are in fact minimal in comparison with sight deposits at commercial banks ().
Central banks have ample scope to ensure the smooth functioning of intermediation activities and possess the tools to achieve this objective (). One option is to remunerate CBDC holdings at a lower interest rate than that on commercial bank deposits.27 Just as cash holdings offer no remuneration, a central bank could pay zero interest, or in principle a negative interest rate. For CBDCs tied to an identity scheme (ie account-based CBDCs), any potential encroachment on private intermediaries could be further mitigated via caps that restrict the amount of CBDC held by households and businesses. Another option might combine caps and an interest rate policy, with CBDC balances below a given level earning a zero or low interest rate and balances above that level earning a negative interest rate. One caveat with hard caps is that households or firms that have reached their cap could not accept incoming payments, resulting in a broken payment process. To ensure that households and firms can accept incoming payments at all times, any funds in excess of a cap could be transferred automatically to a linked commercial bank deposit account – the so-called overflow approach.28 Caps, overflows and remuneration policies would not only limit the impact of a CBDC on credit intermediation in normal times, but they could also mitigate potential runs into the CBDC during market turmoil. Central banks might devise various ways of deterring "digital runs" from commercial banks to CBDCs in times of stress.29
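To make the tiering and overflow ideas concrete, the sketch below applies a two-tier remuneration schedule and routes incoming funds above a cap to a linked bank account. The cap and rates are arbitrary illustrative numbers, not policy proposals.

```python
# Illustrative sketch of the tiered-remuneration and "overflow" ideas described above.
# The cap, rates and balances are made-up numbers.
CAP = 3_000.0           # maximum CBDC holding per user
RATE_BELOW_CAP = 0.0    # tier 1: zero interest, like cash
RATE_ABOVE_CAP = -0.02  # tier 2: negative rate to discourage large balances

def annual_remuneration(balance):
    """Two-tier interest on a CBDC balance."""
    below = min(balance, CAP)
    above = max(balance - CAP, 0.0)
    return below * RATE_BELOW_CAP + above * RATE_ABOVE_CAP

def receive_payment(cbdc_balance, bank_deposit, amount):
    """Overflow approach: incoming funds above the cap spill into a linked bank account,
    so the payment never fails even when the cap has been reached."""
    headroom = max(CAP - cbdc_balance, 0.0)
    to_cbdc = min(amount, headroom)
    return cbdc_balance + to_cbdc, bank_deposit + (amount - to_cbdc)

print(annual_remuneration(5_000.0))          # -40.0: only the 2,000 above the cap is charged
print(receive_payment(2_900.0, 0.0, 500.0))  # (3000.0, 400.0): 400 overflows to the bank account
```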
Assuming that CBDCs are to be account-based, an important question is who should verify the identity of an individual seeking to join the network of CBDC users, and how this verification should be done. Digital ID schemes have already emerged in several countries, but their specific designs and the relative roles of the public and private sector differ substantially ().
In an alternative, nascent model of digital ID, an individual has ownership and control over their credentials. These can be selectively shared with counterparties, who can verify that the credentials belong to a valid issuer. In such a "federated" model, different attributes of each person are recorded and issued by different entities. A federated digital ID (see ) could potentially allow for identification alongside decentralised storage of data.
Any identification framework requires a high standard of cyber security. PSPs have been frequently targeted by cyber attacks, both before and during the Covid-19 pandemic (, left-hand panel). The rising incidence of major data breaches in recent years, in particular at financial institutions (right-hand panel), underscores the possibility that data or funds may be stolen. Such risks would be similar for CBDC payment services.
The globalisation of economic activity has required a commensurate evolution of cross-border online services. The massive growth of travel and remittances has led to rising demand for cross-border retail payment services.37 International tourism expenditures, for instance, have doubled over the past 15 years, while the number of parcels shipped across borders has more than tripled. In just one decade, global remittances rose by two thirds, to $720 billion in 2019 (, left-hand panel). Yet payment services do not work seamlessly across borders, as they are at times slow, expensive, opaque and cumbersome to use.
CBDCs could pave the way for innovations that improve international payments. They can make use of the fact that retail users have direct claims on central bank money to simplify the monetary architecture.38 However, design features matter for their overall impact in the cross-border context and whether CBDCs will serve the broader public interest. One potential concern is that the use of CBDCs across borders might exacerbate the risk of currency substitution, whereby a foreign digital currency displaces the domestic currency to the detriment of financial stability and monetary sovereignty. Indeed, a number of central banks see currency substitution – along with tax avoidance and more volatile exchange rates – as a key risk that they are addressing in their work on CBDCs (, centre panel).39
For these reasons, the risks of currency substitution from cross-border use of CBDCs may be limited and could be addressed largely through international monetary cooperation. The widespread international use of some currencies stems from other factors, such as the depth, efficiency and openness of a country's financial markets, trust in a currency's long-run value and confidence in the institutional and legal infrastructure. For instance, dollarisation is typically higher in countries with historically high inflation (, right-hand panel). A foreign currency is unlikely to gain a domestic foothold just because it is digital.
mCBDC arrangements would allow central banks to mitigate many of today's frictions by starting from a "clean slate", unburdened by legacy arrangements. There are three potential models. First, they could enhance compatibility for CBDCs via similar regulatory frameworks, market practices and messaging formats (, top panel). Second, they could interlink CBDC systems (middle panel), for example via technical interfaces that process end user-to-end user transactions across currency areas without going through any middlemen.
A broader stocktake of central bank research and design efforts finds that, out of 47 public retail CBDC projects, 11 feature a cross-border dimension (, left-hand panel). Responses to a survey of major central banks highlight that about one in four is considering incorporating features to enhance cross-border and cross-currency settlement in future CBDC designs (centre panel). Among the central banks that do, all three mCBDC arrangements are being considered. While a single mCBDC (model 3) provides the most benefits from a technological perspective, the preferred choice at present is the interlinking mCBDC arrangement (model 2) – possibly reflecting the reduced need for cooperation. Additionally, some central banks are also considering taking on an operational role in FX conversion (right-hand panel).
Application programming interface (API): a set of rules and specifications followed by software programmes to communicate with each other, and an interface between different software programmes that facilitates their interaction. See .
Central bank digital currency (CBDC): a digital payment instrument, denominated in the national unit of account, that is a direct liability of the central bank. See .
Cross-border and cross-currency payments: cross-border payments are those where the payer and payee reside in different jurisdictions. Many, but not all, of these are also cross-currency payments – that is, payments where the payer and payee are respectively debited and credited in different currencies. Payments within monetary unions or payments in a common invoice currency may be cross-border but not cross-currency. See .
Distributed ledger technology (DLT): the processes and related technologies that enable nodes in a network (or arrangement) to securely propose, validate and record state changes (or updates) to a synchronised ledger that is distributed across the network's nodes. See .
Data-Network-Activities (DNA) loop: the self-reinforcing loop between data, network externalities and activities, as generated on big techs' online platforms (social networks, e-commerce platforms and search engines), that allow different types of user to interact. See .
Fast payment system (FPS): a payment system in which the transmission of the payment message and the availability of final funds to the payee occur in real time or near-real time and on as near to a 24-hour and seven-day (24/7) basis as possible. See .
Financial inclusion: universal access to, and frequent use of, a wide range of reasonably priced financial services, in particular transaction accounts. See and .
Integrity: compliance with rules against unlawful action, including the adherence to rules against bribery and corruption, anti-money laundering and combating the financing of terrorism; as well as consistent and complete reporting. See .
Safety: the "safety" of different forms of money, in the context of their use as settlement assets, means the likelihood of the asset retaining its value to the holder, and hence its acceptability to others as a means of payment. See .
Ultimate finality: final settlement in central bank money. Finality is achieved when settlement of an obligation is legally irrevocable and unconditional. The choice of settlement asset is important as, even when the original payment obligation is fully extinguished (ie paid with finality), there can be both credit and liquidity risks for the payee associated with holding the resulting settlement asset. The related term "ultimate settlement" combines the concept of settlement being final with the concept of the settlement asset being the least risky possible. See .
Wholesale CBDC: a CBDC for use by financial institutions (wholesale transactions) that is different from balances in traditional bank reserves or settlement accounts. See .
See Group of central banks, , October 2020.
See Committee on Payments and Market Infrastructures (CPMI), , November 2016.
See M Bech, J Hancock and W Zhang, "", BIS Quarterly Review, March 2020, p 28.
See S Foley, J Karlsen and T Putniņš, "Sex, drugs, and bitcoin: how much illegal activity is financed through cryptocurrencies?", The Review of Financial Studies, vol 32, no 5, May 2019, pp 1798–853; M Paquet-Clouston, B Haslhofer and B Dupont, "Ransomware payments in the Bitcoin ecosystem", Journal of Cybersecurity, May 2019, pp 1–11.
In early June 2021 the estimated annualised electricity consumption of the Bitcoin network was roughly the same as that of the Netherlands. See Cambridge Bitcoin Energy Consumption Index, .
For a discussion of the risks to stablecoins' value backing, and potential use cases, see D Arner, R Auer and J Frost, "Stablecoins: risks, potential and regulation", Bank of Spain, Financial Stability Review, November 2020.
See J Frost, L Gambacorta, Y Huang, H S Shin and P Zbinden, "BigTech and the changing structure of financial intermediation", Economic Policy, vol 34, no 100, October 2019, pp 761–99.
See R McMorrow, "China tech groups given a month to fix antitrust practices", Financial Times, 13 April, 2021.
See F Restoy, "", FSI Occasional Papers, no 17, February 2021.
See J Stiglitz and J Rosengard, Economics of the Public Sector, fourth edition, New York, W W Norton & Company, 2015.
See Reuters, "Brazil antitrust watchdog questions Facebook's WhatsApp payment fees", 28 July 2020.
See CPMI and World Bank, Payment aspects of financial inclusion in the fintech era, April 2020.
See M Kutzbach, A Lloro, J Weinstein and K Chu, "How America Banks: household use of banking and financial services", FDIC Survey, October 2020; R Auer, J Frost, T Lammer, T Rice and A Wadsworth, "Inclusive payments for the post-pandemic world", SUERF Policy Notes, September 2020; World Bank, Findex.
See A Carstens, "", lecture at Princeton University, Princeton, 5 December 2019.
See C Kahn, "How are payment accounts special?" Payments innovation, symposium, Federal Reserve Bank of Chicago, October 2016.
See BIS, "", Annual Economic Report June 2020, Chapter III.
See A Carstens, "", speech at the Peterson Institute for International Economics, 31 March 2021.
In today's payment systems, both real-time and deferred net settlement are used. Examples of the latter include SNCE in Spain, IBPS in China and FPS in the United Kingdom, while examples of the former include TIPS in the euro area and BiR in Sweden. Among the CPMI members, 12 FPS use deferred net settlement while 15 use real-time settlement.
See C Boar and A Wehrli, "", BIS Papers, no 114, January 2021.
See A Carstens, "", Bank of Ireland Whitaker Lecture, Dublin, 22 March 2019.
The various CBDC architectures are described in R Auer and R Böhme, "", BIS Quarterly Review, March 2020, pp 85–100; R Auer and R Böhme, "", BIS Working Papers, no 948, June 2021.
Banks must know or estimate borrowers' solvency to price the associated risk. Public sector institutions may not have the same degree of relevant knowledge as local and specialised private lenders do. This is the core case for free markets, as presented in F Hayek, "The use of knowledge in society", American Economic Review, vol 35, no 4, 1945, pp 519–30.
See the discussion of the e-CNY project in R Auer, G Cornelli and J Frost, "", BIS Working Papers, no 880, August 2020.
See eg J Powell, "Letter to Congressman French Hill", 19 November 2019.
See BIS (2020), op cit; N Kocherlakota and N Wallace, "Incomplete record-keeping and optimal payment arrangements," Journal of Economic Theory, vol 81, no 2, 1998, pp 272–89; C Kahn and W Roberds, "Why pay? An introduction to payments economics", Journal of Financial Intermediation, vol 18, no 1, January 2009, pp 1–23.
See R Auer, C Monnet and H S Shin "", BIS Working Papers, no 924, January 2021.
See eg D Andolfatto, "Assessing the impact of central bank digital currency on private banks", The Economic Journal, vol 131, no 634, February 2021, pp 525–40; and J Fernandez-Villaverde, D Sanches, L Schilling and H Uhlig, "Central bank digital currency: central banking for all?", NBER Working Papers, no 26753, February 2020.
See U Bindseil, "Tiered CBDC and the financial system", ECB Working Paper Series, no 2351, January 2020.
See Bindseil (2020), op cit.
See M Bordo and R Levine, "Central bank digital currency and the future of monetary policy", Hoover Institution Working Papers, 2017. The authors advocate the introduction of CBDCs so that central banks can implement negative interest rate policies more effectively.
See CPMI and Markets Committee, , March 2018.
US data show that reports of identity theft have risen steadily over the last years. See Federal Trade Commission, "Consumer Sentinel Network Data Book 2020", February 2021.
Because of their digital, borderless nature, fully anonymous CBDCs could become a vehicle for illicit activity. Even with transaction limits, there is the potential for "smurfing", or laundering the proceeds of illicit transactions into many smaller transactions or accounts.
For a relevant discussion, see UK Department for Digital, Culture, Media and Sport, "The UK digital identity and attributes trust framework", February 2021.
See, for proposals of such semi-anonymous designs, Auer, Cornelli and Frost (2020), op cit; ECB, "Exploring anonymity in central bank digital currencies", In Focus, no 4, December 2019; Reuters, Technology News, "China's digital currency not seeking 'full control' of individuals' details – central bank official", 12 November 2019.
Switzerland's "" took place on 7 March 2021. The referendum was on the introduction of a digital ID for Swiss citizens, which would be provided by private companies. While 36% of voters backed the proposal, 64% rejected it.
See B Cœuré, "", speech at a conference on "The future of the international monetary system", Luxembourg, 17 September 2019.
See R Auer, C Boar, G Cornelli, J Frost, H Holden and A Wehrli, "", BIS Papers, no 116, June 2021.
See M Ferrari, A Mehl and L Stracca, "Central bank digital currency in an open economy", ECB Working Paper Series, no 2488, November 2020.
See G20 Finance Ministers and Central Bank Governors, Communiqué, Riyadh, 23 February 2020.
See CPMI, , July 2020.
As argued by Carstens (2021), op cit.
Enter stablecoins - cryptocurrencies with zero/minimal volatility. Though much has been said about stablecoins, their importance in achieving global scale crypto adoption cannot be overstated. Paraphrasing , stablecoins will serve as the first point of entry into the world of crypto for millions of future users. Ensuring their journeys are financially safe is essential to repelling negative regulatory attention and preventing a financial collapse from jeopardizing the industry’s prospects.
Taken for granted by the majority of users, stablecoins are like the internet - enablers of access to goods and services in a blockchain-enabled world. If one believes crypto will reshape global commerce and finance, it would be stupid to ignore the role stablecoins will play in that shift.
Virtually all stablecoins in existence are designed to stabilize their prices at or tightly around the value of 1 unit of a fiat currency, predominantly the US Dollar. This is not arbitrary. It is simply a reflection of the US Dollar’s dominance in the world’s financial markets. Apart from being the global reserve currency, it is the official currency in 10 countries (excluding the USA) and the de facto currency in many more. What makes the US Dollar so attractive to many foreign nations is its ability to preserve its purchasing power better than their own currencies.
To illustrate, Argentina suffered from rising inflation from the 1940s to the 1990s. At its peak in March 1989-1990, inflation was over , which translates to the Argentinian Peso losing ~1.5% of its value every day. Consequently, Argentinians would increasingly convert and store their earnings and wealth in the US Dollar to retain purchasing power. Simply put, inflation means ‘X units of currency can procure Y units of goods today and less than Y units of the same goods tomorrow’.
If fiat money is just worthless paper that has neither intrinsic value nor a claim against hard money, why does it continue to be used all over the world? Simply because of a collective belief that it must be worth something because others believe it is worth something. describes this as legitimacy and suggests ways in which legitimacy develops, two of which can explain the growth and survival of fiat money:
The concept of money’s value being derived from a common belief isn’t an entirely new phenomenon. Stone disks in Micronesia, known as (sounds familiar?), have been used as money, showing that value can be assigned to an object through consensus.
, the OG stablecoin network, was proposed and built on 17 foundational principles of an ideal free-market financial system. Of these, six in particular are desirable traits in a stablecoin (apart from stability, of course):
1) Decentralization
5) Endogenous information (only use price information from within the system)
6) Privacy
By design, PCV (protocol controlled value) is entirely under the control of TribeDAO, which can choose not to provide liquidity to FEI. (Yes, there are safeguards in the form of a 'Guardian' multisig which can veto malicious proposals, but it is not impossible for the DAO to reconstitute the Guardian itself. Further, the Guardian multisig is controlled by the Fei team, which also holds a substantial share of TribeDAO.) There is an actual instance where a proposal to offer 1:1 redemptions to all FEI holders was shot down by TribeDAO governance. A stablecoin must offer easy and permissionless exits to holders to inspire long-term confidence in the protocol. (Gyro Finance, another stablecoin protocol, has a unique design that combines the PCV model while empowering Gyro Dollar holders through governance rights.)
Although FRAX, the popular algorithmic stablecoin, claims to be the inventor of the fractional stablecoin, there was another before it. Sogur introduced the concept of the fractional reserve model in 2017. Here’s an excerpt from their whitepaper:
Terra’s founder Do Kwon argued that algorithmic stablecoins can thrive if they have organic demand for their usage. In an early draft of this essay, written two weeks before the UST crash, I wrote that I find this argument untenable even though algorithmic stablecoins are structurally no different from fiat currencies. Fiat currencies work because of a collective belief in their value, despite the absence of intrinsic value. History has shown on numerous occasions that fiat currencies (which inherently have ~100% adoption within their nations) are subject to bank runs, loss of public confidence and capital flight by foreign investors. Ask yourself, then, how an algorithmic stablecoin without the ability to enforce behaviour, and whose scale of adoption pales in comparison, will succeed at what nations with larger forex reserves and the ability to drive coordinated behaviour have failed to do.
in the last year options protocols have grown from around $85m to over $1b in tvl. similarly, perpetual swap trading platforms have seen quite the growth in trading volume, from far less than $1b to coming close to clearing $10b in a single day. value locked and volumes traded will only increase in 2022 with new platforms and products being released practically weekly (as i'm writing this , , , and all released or are releasing in the next few days).
there have been several good writeups on defi derivatives and their outlook for 2022; here is a particularly good one from .
a product initially developed by , perp swaps have no expiry and function similarly to a spot margin account, allowing for highly leveraged long/short trading.
also gives a very good explanation
using an orderbook and margin account mirrors a centralized exchange experience, and this is how protocols like work. trading on an orderbook is nice: you can easily set limit orders, you trade , and for the protocol trading is very capital efficient (each trade is settled once it can be matched with a counterparty).
the third design (which I have only seen and squeeth use) is embedding the funding rate into a parameter of the asset. known as in-kind funding, squeeth uses a to settle funding between longs and shorts without ever having to manage a cash payment. using in-kind funding is something that I think will become much more common, as management of funding was the largest limitation for perp composability. since funding is handled in-kind, squeeth can be easily traded as an ERC20, used to LP, used as collateral on other defi platforms, and the list goes on.
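here is a rough sketch of the in-kind funding idea: a normalization factor decays over time, so longs pay funding through a shrinking balance rather than a cash transfer. the flat daily rate and scaling below are made up for illustration and are not opyn's exact mechanism (which updates the factor from the spread between mark and index prices).

```python
# rough illustration of in-kind funding: instead of longs paying shorts in cash,
# a normalization factor slowly decays, shrinking every long position's value in place.
eth_price = 3000.0
norm_factor = 1.0
daily_funding = 0.005           # 0.5% per day, purely illustrative

def mark_price(eth, norm):
    return norm * eth ** 2 / 10_000   # scaling constant chosen only to keep numbers readable

for day in range(7):
    norm_factor *= (1 - daily_funding)   # longs pay funding "in kind" via the decay

# with ETH unchanged for a week, a long position is worth ~3.4% less:
print(mark_price(eth_price, 1.0), mark_price(eth_price, norm_factor))
```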
this does not entirely relate to perps (it's about interest rate swaps actually), but has increased the benefits of using a vAMM through a concept known as .
this might not be the best comparison as crypto options trade much than equities, but either way it's eye opening
the leading option protocol is , along with pool-based protocols like , , , and . there was almost zero use of these protocols until option vaults landed early this year (there is an entire section dedicated to this below).
there have also been a decent amount of products in the market that package risk or volatility in a simple index. has seen the most traction as a way to trade an asset's volatility index. trading on volmex is similar to trading on a prediction market like or : you supply collateral and receive two tokens, the IV index and its inverse. from there you can trade volatility directionally however you choose.
in their section of the docs, it's described that the volatility index is taken from averages across call and put options sourced offchain from . while this solution works, it's not very decentralized or native to DeFi.
power perpetuals can be used as a , also check out oracle-free derivatives from and
A good thread detailing in the options market
when option markets initially started to be designed on ethereum, the orderbook model was out of the question (it still is on ethereum, but products like on solana use it), and since AMMs worked so well for spot tokens many protocols began to develop an AMM or pool-based model for trading options. below are a few of the designs
looking at and total volume, they have cleared less all time than squeeth has in 2 weeks. nothing against either team (I actually really enjoy reading their docs), but squeeth has cleared more volume in a fraction of the time because it has done away with 3 of the 4 sources of liquidity fragmentation. there is no expiry, no need to select a strike price, and due to the built-in funding rate (thanks to the normalization factor) squeeth can be cross-protocol (see a list of integrations ).
in addition to composable perpetuals like squeeth and option based AMM formulas there has been significant research in using to form option payoffs.
if you are interested in this I suggest you read all ten articles from , starting with , followed by , , , , , , , , . following that, go and play around with , observe the volatility of each pool and visualize the payoff of your LPs
the top-tier team at took this a step further and developed an entirely new protocol which focuses on being a spot and derivative exchange through concentrated liquidity. if you are curious about how you can create a replicating portfolio from a constant product market, I suggest you read the following
having a spot and derivative exchange bundled in one unlocks tons of capital efficiency, allows for granular strike selection, automatically rolling expiries when the AMM rebalances (), and freedom to select your own quote and base asset.
but… there are always tradeoffs. replicating an option with concentrated liquidity only allows you to replicate selling options, and the premium comes from those using the spot exchange or from arbitrageurs. yield being derived only from swap fees has two consequences: (1) there are in which you would not earn yield, and (2) since yield is from swaps, your premium is paid out over the time to expiry rather than given upfront.
there are a few solutions to these: if you want to have a long call payoff you could a primitive position; if you want to have a somewhat guaranteed premium you could a primitive position
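to see why an LP position behaves like a sold option in the first place, here is a quick numerical sketch for the plain constant-product case (not concentrated liquidity); the deposit sizes are arbitrary.

```python
# quick sketch of why an AMM LP position looks like a sold option (plain x*y=k case,
# not concentrated liquidity). numbers are arbitrary.
from math import sqrt

def lp_value(price, k):
    # with reserves x*y = k and price p = y/x, the position holds x = sqrt(k/p)
    # of the risky asset and y = sqrt(k*p) of the numeraire, worth 2*sqrt(k*p) in total
    return 2 * sqrt(k * price)

def hold_value(price, x0, y0):
    # value of simply holding the initial reserves instead of LPing
    return x0 * price + y0

k = 100 * 10_000            # deposit 100 ETH and 10,000 USD at a price of 100
for p in (50, 100, 200):
    print(p, round(lp_value(p, k), 1), hold_value(p, 100, 10_000))
# the LP value (concave in price) lags the hold value whenever price moves either way,
# which is exactly the payoff of having sold optionality; swap fees are the premium.
```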
a somewhat common and basic options strategy to generate additional yield on an asset by forgoing potential upside is known as a covered call strategy. there are of on how this works and the risks associated. the basics are: you sell a far out-of-the-money option on a regular cadence, taking home the premium as yield and praying (or hopefully hedging) that prices do not rise enough for your calls to be exercised.
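a minimal sketch of the covered call payoff, with a made-up strike and premium:

```python
# minimal covered call payoff sketch: hold 1 ETH, sell one far out-of-the-money call,
# keep the premium. strike and premium are made-up numbers.
def covered_call_payoff(spot_at_expiry, strike, premium):
    # you keep the ETH (capped at the strike) plus the premium collected upfront
    return min(spot_at_expiry, strike) + premium

strike, premium = 4000.0, 30.0
for s in (2500.0, 3500.0, 4000.0, 5000.0):
    print(s, covered_call_payoff(s, strike, premium))
# below the strike you simply pocket the premium on top of your ETH;
# above it your upside is capped, which is the risk the text warns about.
```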
yield farming to earn 4-digit APYs is extremely lucrative, but not very sustainable. by selling risk through options you will have a much lower but more sustainable yield. these concepts are summed up nicely in by co-founder .
the ribbon team (and now a ) saw strategies like covered calls and yield vaults like as a perfect fit, and thus Option Vaults were formed.
option vaults are simple: a user deposits ETH and each week the ETH is used to collateralize 1-week expiry, 0.1-delta options via oTokens using , which are then sold to market makers via telegram and/or through .
or the quote tweets from .
check the date ^
the deribit insights thread aeto was looking for can now be found
another vault structure that is becoming more common is the basis trading vault. and are leading the way. the basis trade is pretty simple, especially when you are using perps.
uxd and lemma run strategies on decentralized perp markets ( and ) in a vault structure allowing anyone to invest. additionally, they mint a stablecoin against the position. in uxd's example, you deposit SOL and the vault shorts SOL perps on mango.
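a small sketch of the delta-neutral position behind such a vault, with invented numbers: the spot leg and the short perp offset each other, so what is left is mostly the funding collected.

```python
# sketch of the delta-neutral basis position behind a vault like uxd's:
# deposit SOL, short an equal amount of SOL perps, so price moves net out
# and the position mainly earns (or pays) the funding rate. numbers are illustrative.
def position_value(sol_amount, entry_price, current_price, funding_received):
    spot_leg = sol_amount * current_price
    perp_pnl = sol_amount * (entry_price - current_price)   # short perp gains when price falls
    return spot_leg + perp_pnl + funding_received

entry = 100.0
for price in (60.0, 100.0, 140.0):
    print(price, position_value(10, entry, price, funding_received=5.0))
# the value stays ~1005 regardless of price: stable enough in dollar terms to mint a
# stablecoin against, as long as funding stays positive or manageable.
```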
algorithmic stablecoins are another interesting DeFi concept, maybe I will detail them in a later post, but for now this piece from and , and FRAX's on seigniorage shares, should get you started
understanding how these stables pay the negative funding is important (it directly affects your coin's stability); generally this is paid through an , and if that's not enough, governance tokens may be auctioned off (similar to ).
note: all
the and act like a funding rate
on the RAI and reflexer side, the team has called , and if you read closely on the rai you can see how RAI's redemption rate is similar to a funding rate.
another side note , , are all good reads
you can read more about perps as stablecoins from a more technical side, written by opyn,
there have been plenty of iterations and new designs for AMMs for , and if you think of the mark and index of a perp as two separate, like-priced assets, you can start to see how implementing a stableswap-like curve might allow a perp to maintain a tighter peg.
the FEI stablecoin initially had the concept of : essentially, as FEI deviated from its peg of $1, a reward/penalty would take place for a mint/burn. if FEI is trading at $0.98, minters earn 2% and burners are hit with a 4% penalty, and vice versa. the trick here is that the farther FEI deviates from its peg, the exponentially higher the penalty. here is a (brutal) graph of FEI's price movement from its peg when adding in direct incentives.
ty bantg
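here is a sketch of reward/penalty scaling consistent with the numbers quoted above (2% reward and 4% penalty at a 2% deviation); the exact formula fei used on-chain may have differed, this only shows the shape.

```python
# sketch of the reward/penalty scaling implied by the numbers above: linear reward,
# quadratic penalty, so a 2% deviation gives a 2% reward and a 4% penalty.
# (the text maps these to minting and burning; the exact on-chain formula may differ.)
def peg_reward(price):
    deviation = max(1.0 - price, 0.0)      # how far below the $1 peg we are
    return deviation                        # 0.02 -> 2% reward

def peg_penalty(price):
    deviation = max(1.0 - price, 0.0)
    return min(100 * deviation ** 2, 1.0)   # 0.02 -> 4% penalty, growing quadratically, capped at 100%

for p in (0.99, 0.98, 0.95, 0.90):
    print(p, round(peg_reward(p), 4), round(peg_penalty(p), 4))
```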
power perpetuals are perhaps one of the most interesting research topics and products to come out of defi derivatives. the core concept is simple: a power perp (for example ETH^2) tracks the price of ETH squared. if ETH's price goes up 2x, ETH^2 goes up 4x; if ETH's price goes down, you lose less than you would with comparable leverage.
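the numbers behind that claim, ignoring funding (purely illustrative):

```python
# quick numbers behind the claim above: ETH^2 beats 2x leverage on the way up
# and loses less on the way down (funding ignored for simplicity).
def power_perp_return(eth_return):
    return (1 + eth_return) ** 2 - 1      # ETH^2 payoff

def leveraged_return(eth_return, leverage=2):
    return leverage * eth_return          # constant-leverage payoff

for r in (0.20, -0.20, 1.00):
    print(f"ETH {r:+.0%}: ETH^2 {power_perp_return(r):+.0%}, 2x {leveraged_return(r):+.0%}")
# ETH +20%: ETH^2 +44% vs 2x +40%; ETH -20%: ETH^2 -36% vs 2x -40%; ETH +100%: +300% vs +200%
```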
sidenote you should read:
squeeth or any power perpetual ( will be launching sol^2 soon) can be used for a lot of different strategies, here is an initial list
you can see a full list of use cases and articles on squeeth
if you want to think about use cases I suggest reviewing and iterating from there.
it can also be helpful to know that holding squeeth provides a similar payoff to holding an always-at-the-money call (shorting is like an at-the-money straddle). however, power perps are not limited to just the option space; they can also be to a perp swap and constant-leverage products (something like FLI).
on power perps
https://qcpcapital.medium.com/an-explanation-of-defi-options-vaults-dovs-22d7f0d0c09f
QCP, Dec 2021
Defi options vaults (DOVs) have been a phenomenon in the second half of 2021, capturing the interest of not just retail Defi investors but the largest institutional players as well. From scratch, DOVs have grown exponentially to become the dominant part of the $700 million Defi option TVL, with notionals trading in billions of dollars every month.
The beauty of DOVs lies in their simplicity. Investors simply ‘stake’ their assets into vaults, which deploy the assets into options strategies. Before DOVs, option strategies were only available to accredited investors through over-the-counter (OTC) trading or by self-execution on option exchanges like Deribit.
The strategies deployed thus far have been vanilla covered call and cash-covered put strategies, which provide the highest base yield available in Defi (on average 15–50%). On top of that, token rewards are distributed, providing an even higher yield for users.
In some instances, the collateral in the vault earns staking/governance yields as well, creating three sources of yield in a single vault. This triple layer of option premiums, token rewards and staking yield creates significantly high and (more importantly) sustainable yield that is unprecedented in Defi.
On the other side, market makers compete to buy these options from the vaults. They pay the premium for these options upfront and thus provide the high base yield.
1. DOVs bring high organic yield to Defi
The primary source of yield in Defi has been token rewards. While money market protocols, AMMs and more recently Protocol-Owned Liquidity (POL) protocols like OHM do provide some small base yield, there is critical dependence on token distributions to achieve high APYs.
The problem here is that the yield is largely synthetic and circular, heavily dependent on token price inflation. If the flood of new entrants into Defi reverses and token prices collapse, yields will flatten out across the board, putting an end to the virtuous cycle that we’ve seen in Defi.
The base yield from DOVs does not rely on token rewards at all. DOVs effectively monetise the high volatility of the underlying asset and inject this yield into Defi through the payment of option premiums. This also solves the problem of diminishing returns (or crowding out) from ever larger LP pools, as the base yield is sourced from a large external options market.
This real base yield is the missing key for long-term sustainability and scalability in Defi. Moving away from the ponzinomics of token creation and distribution (layer, rinse and repeat) and towards true value accrual from underlying market structures and trading volatility.
DOVs are effectively democratizing the element of the crypto space that financial institutions have been eyeing with envy, which is the implied volatility (IV) that is 10–20 times the IV in Tradfi instruments. Institutional players have been scrambling to offer this to Tradfi investors, structuring products on crypto assets with risk-adjusted returns that far outstrip anything currently available. DOVs are making this alpha accessible to every individual.
2. DOVs allow for scalable trading of non-linear instruments on Defi
Defi market structures have been able to manage delta-1 or linear instruments well. Spot trading, over-collateralized borrowing/lending and margined perpetual swap trading have been scalable, with effective liquidation mechanisms implemented through smart contracts. However, when it comes to non-linear instruments like options, Defi has hit a brick wall. Trading options on Defi orderbooks has not been a scalable endeavour.
To be fair, non-linear liquidation is a difficult problem to solve. Even centralised exchanges like Deribit manage non-linear liquidations with some difficulty. For the liquidation of large option portfolios, the delta (or spot risk) is managed first by executing a perp/futures position against the portfolio. The other greeks in the portfolio (non-linear risk) are then systematically liquidated over time with active involvement by the intermediary.
DOVs present an elegant solution to this problem. DOVs use a hybrid Defi model where investment, collateral management, price discovery and settlement occur on-chain while non-linear risk management is performed off-chain. In this model, all the elements that actually need to be trustless are executed onchain, yield is realised upfront and the whole process is fully transparent.
DOVs effectively solve for the sell-side of the problem, bringing in collateral from Defi investors and matching them with market makers who provide the high base yield. All option contracts traded through DOVs are fully collateralized which eliminates the need for liquidations altogether. The option contracts can then be tokenized and actively traded on Defi by RFQ or orderbook in a scalable manner, without the need for a liquidation mechanism.
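A quick way to see why full collateralisation removes the need for a liquidation engine: the worst-case payout of a covered call or cash-covered put never exceeds the collateral posted upfront. The sketch below checks this with arbitrary numbers.

```python
# Quick check of why fully collateralised vault options need no liquidation engine:
# the worst-case payout never exceeds the collateral posted upfront. Numbers are illustrative.
def call_payout(spot, strike):
    return max(spot - strike, 0.0)           # owed to the option buyer at expiry

def put_payout(spot, strike):
    return max(strike - spot, 0.0)

strike = 4000.0
for spot in (1000.0, 4000.0, 8000.0, 20000.0):
    covered_call_ok = call_payout(spot, strike) <= spot    # collateral: 1 ETH, worth `spot`
    cash_put_ok = put_payout(spot, strike) <= strike       # collateral: `strike` in stablecoins
    print(spot, covered_call_ok, cash_put_ok)
# both checks hold for any spot price, so the contract can always settle from its collateral.
```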
The solution sounds simple, but the implications are enormous. This is pure Defi innovation. Non-linear liquidations without an intermediary is a problem that Tradfi has never had to solve for. To be able to trade options on a large scale in a transparent, sustainable manner purely governed by smart contracts could fundamentally change the way financial products are structured and traded forever.
3. DOVs will be the cornerstone liquidity for Altcoin option markets
The crypto option market is dominated by BTC and ETH, which are the only coins offered on Deribit. The Deribit orderbooks serve as the core liquidity venue for BTC and ETH and this is reflected in OTC markets as well, where options outside of BTC and ETH are not nearly as liquid.
Many might not recall that even in early 2020, ETH options were wide and illiquid, hardly traded on Deribit. QCP was one of the main players that brought life to the ETH options market by injecting vol supply in huge size and crossing wide spreads that consequently led to more market makers coming in to provide bids and thicken the ETH orderbooks. Fast forward a year and a half to present day and the ETH vol market is arguably more liquid and as large as (if not larger than) the BTC vol market.
We have been trying for a while now to do the same for other coins/tokens with limited success. For a vol market to really come alive, it needs a scalable venue as well as critical liquidity injection to incentivise traders, investors and market makers to participate.
Ironically, we are seeing Defi succeed at bringing life to Altcoin options where Cefi has failed. DOVs have become the largest options trading venue for Altcoins, with sizable vaults in ALGO, LUNA, AAVE and AVAX, and more in the pipeline.
Functioning as both the venue and liquidity source for Altcoin options, DOVs are fast becoming the cornerstone liquidity for Altcoins. This liquidity will inevitably spread into Cefi. Exchanges will list more Altcoin vol books with confidence now that demand already exists, and OTC players will be able to tighten spreads and push Altcoin options products, having received inventory through the DOVs.
This is significant. For the first time, we are seeing Defi lead Cefi as the originator of liquidity and not just a layer built on top of Cefi powered by layers of token incentives.
More importantly, the advent of Altcoin vaults in DOVs will give holders of all kinds of coins/tokens a viable alternative source of high return, besides just hodling and staking. Investors, speculators, foundations and project treasuries clearly see the value in monetising their large coin inventories and have started to pour into the vaults. The demand for these Altcoin vaults has been incredible and will continue to grow exponentially.
As is the case with all financial instruments, the creation of healthy derivatives markets will also significantly improve spot liquidity. Complaints of poor liquidity in Altcoin spot markets might soon become a thing of the past.
What’s next in the evolution of DOVs?
The appeal of DOVs is the stunning simplicity of the model. But this is just the start.
To begin with, the current strategies offered by the vaults are just vanilla puts and calls, but these can and will become increasingly complex. Examples include more sophisticated option structures which better utilise the collateral and amplify the base yield and, in time, perhaps even exotic options like digitals and barriers offered through the vaults. The strategies could even move away from purely option-related instruments to include more complicated non-linear products, though of course that will come with additional risk. The sky’s the limit when it comes to DOVs!
Taking it one step further, the option contracts traded by the vaults can be tokenized and traded in secondary markets through orderbooks or RFQ. With large enough TVL and multiple vaults, particularly if the various DOV protocols are interoperable, DOVs could effectively function as a full Deribit exchange on Defi with a wide range of contracts and various types of structured products being actively traded in size. The realisation of this vision would be incredible, full blown derivatives markets trading purely through smart contracts!
Lastly, it is worth noting that DOVs present a dual Defi disruption. DOVs are not just disrupting how options and structured products are traded; they are also disrupting how asset management is conducted. One no longer needs to be an accredited investor with a minimum million-dollar investment to access institutional-grade trading strategies. Any investor with a single dollar is able to participate in DOVs and enjoy the supernormal returns from sophisticated strategies, with an optimal risk-return profile of their choice.
Most importantly, DOVs do away with the typical hedge fund 2/20 model. Investors are subject to minimal or even no fees, as all participants are given token incentives to reward both the investment and liquidity provision side of the vaults.
This is how token rewards should be structured, a mechanism to facilitate disintermediation as part of Defi innovation. Not as a primary source of yield. I would go so far as to say DOVs are the true expression of Defi values and will be the point of convergence between the current Defi community and the Tradfi world.
Solana Chain:
Katana Finance
Tap Finance
Polygon Chain:
Opium.Finance — Offering ETH and 1INCH vaults.
Avalanche Chain:
First mover and successful proof-of-concept for DOVs. Built on Ethereum with close to $200 million TVL mostly in WBTC and ETH. They also offer AAVE, AVAX and STETH. Ribbon uses Opyn and Airswap for settlement and Gnosis for on-chain auctions.
Up-and-coming DOV focused on multi-chain operability, it is currently available on Ethereum, Binance Smart Chain, Polygon, Fantom and Avalanche. Will also soon be on most EVM compatible chains like NEAR Aurora, Boba and Solana NEON among others. In addition to WBTC and ETH vaults, they’ve had a big push for Altcoins having launched a $10 million ALGO vault and a $2 million LUNA vault. They also currently offer ADA and BCH with BOBA, ROSE and NEAR in the works. Thetanuts manages settlements in-protocol and does not have third party dependencies.
: Currently operates on Ethereum with plans to provide services on Polygon in the future. Its passive strategies involve the staking of stablecoins and tokens or providing liquidity across multiple protocols to earn rewards.
— Offering SOL, BTC and ETH vaults. Planning vaults with convex structured products and also an impermanent loss hedge mechanism.
— Offering bull and bear spread vaults. Also offering a unique price oracle for settlement written and hosted by the protocol.