CryptoLocally Partners With MakerDAO to Increase the Accessibility of DeFi Around the World

At CryptoLocally, we are pleased to announce our partnership with MakerDAO, the protocol behind DAI, a decentralized stablecoin which can…

On Governance, safety and controlled removal of unwanted, centralised power

Decentralised governance is a vital part of ensuring that power rests with the right people: the users themselves — learn more here.


Ten Theses on Decentralized Network Governance

Based on my research over the past couple of years, I’ve put together a list of ten theses on decentralized network governance, including…

Continue reading on Medium »

How Decentralised Notifications can Revolutionize On-Chain Governance (Part II)

This article is Part 2 of a 2-part series on how EPNS as a service can improve the efficiency of on-chain governance, and hence eventually…

Continue reading on Ethereum Push Notification Service »

Inside the blockchain developer’s mind: The governance crisis

In order to achieve blockchain mass adoption, three fundamental problems must be solved. Let’s dive into the third one: governance.

This is Part 3 of a three-part series in which Andrew Levine outlines the issues facing legacy blockchains and posits solutions to these problems. Read Part 1 on the upgradeability crisis here and Part 2 on the vertical scaling crisis here.

Upgradeability, vertical scaling and governance: What all three of these issues have in common is that people are attempting to iterate on top of a flawed architecture. Bitcoin and Ethereum were so transformative that they have totally framed the way we look at these issues.

We need to remember that these were developed at a specific moment in time, and that time is now in the somewhat-distant past when blockchain technology was still in its infancy. One of the areas in which this age is showing is in governance. Bitcoin launched with proof-of-work to establish Byzantine fault tolerance and deliver the decentralization necessary to create a trustless ledger that can be used to host digital money.

With Ethereum, Vitalik Buterin was seeking to generalize the underlying technology so that it could be used not just to host digital money but also to enable developers to program that money. With that goal in mind, it made perfect sense to adopt the consensus algorithm behind the most trusted blockchain: proof-of-work.

Proof-of-work is a mechanism for achieving Byzantine fault tolerance, which is harder to prove than people like to pretend. It is not a governance system. Bitcoin doesn’t need a governance system because it is not a general-purpose computer. The reason general-purpose computers need a governance system is that computers need to be upgraded.

One needs no clearer proof than the magnitude of changes planned for Ethereum 2.0 and the aggressive advocacy for the adoption of the necessary hard forks. We are not the first to point out this problem. The founders of Tezos accurately forecast this problem, but they ultimately failed to deliver a protocol that meets the needs of most developers for the following reasons:

  1. The blockchain is written in a different language than the smart contracts.
  2. They introduced a political process where decision-making occurs off-chain.
  3. They failed to deliver on an on-chain explicit upgrade path.
  4. They failed to establish distinct classes that can act as checks and balances.

The smartness of smart contracts

Developers must be able to code up the behaviors they would like to see in the blockchain as smart contracts, and there must be an on-chain process for adding this behavior to the system through an explicit upgrade path. In short, we should be able to see the history of an upgrade just as we can see the history of a given token.
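As a rough illustration of what such an explicit, queryable upgrade path could look like, here is a minimal sketch. All names here are hypothetical and invented for the example, not taken from any actual protocol: an append-only registry whose log makes an upgrade's history inspectable the same way a token's transfer history is.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Upgrade:
    """One entry in a hypothetical on-chain upgrade history."""
    block_height: int
    contract_id: str      # the "user" contract being promoted to a "system" contract
    approved_by: tuple    # which stakeholder classes approved the promotion

@dataclass
class UpgradeRegistry:
    """Append-only log: upgrades are recorded, never rewritten,
    so the full history of any contract stays queryable."""
    log: list = field(default_factory=list)

    def record(self, upgrade: Upgrade) -> None:
        self.log.append(upgrade)

    def history(self, contract_id: str) -> list:
        return [u for u in self.log if u.contract_id == contract_id]

registry = UpgradeRegistry()
registry.record(Upgrade(100, "fee-market-v2", ("infrastructure", "development", "capital")))
registry.record(Upgrade(250, "fee-market-v2", ("infrastructure", "development", "capital")))
print(len(registry.history("fee-market-v2")))  # → 2
```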

The appropriate place for governance is in determining which smart contracts are made into “system” contracts based on whether they will increase the value of the protocol. The challenge is, of course, coming to a consensus on that value.

The most controversial point I will make is the critical need for algorithmically distinct classes that act as checks and balances on one another. While intuition might suggest that more classes make consensus more difficult, this is not the case.

First, if the upgrade candidates are already running as smart contracts on the mainnet, objective metrics can be used to determine whether the ecosystem would benefit from turning the “user” contract into a “system” contract. Second, if we were not trying to bundle upgrades into hard forks, they could be piecemeal and targeted. We would simply be trying to assess, in a decentralized manner, whether the system would be improved by a single change.

Checks and balances

It is commonly understood that in any economy, there are essentially three factors of production: land (infrastructure), labor and capital. Every major blockchain only recognizes one class: capital. In PoW chains, those who have the most capital buy the most ASICs and determine which upgrades can go through. In proof-of-stake and delegated proof-of-stake chains, control by capital is more direct.

In addition to being problematic on its face, the absence of any other classes to act as a check on capital has a paradoxical effect that leads to political paralysis. No group is homogenous. Classes, properly measured, create efficiency — not inefficiency — by forcing the members of a class to come to a consensus around their common interest. Without such pressure, subclasses (groups within a class) will fight among one another, leading to gridlock. Properly designed classes motivate their members to come to an internal consensus so that they can maximize their influence on the system relative to the other classes.

If we can codify individual classes representing infrastructure, development and capital, then upgrades that receive approval from all three classes must, by definition, add value to the protocol, as these three classes encompass the totality of stakeholders within any economy.
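The approval rule described above can be sketched in a few lines. This is an illustration of the idea only, not any protocol's actual voting logic; the class names and threshold are assumptions for the example:

```python
# Hypothetical sketch: an upgrade passes only if every class approves it,
# so any one class can act as a check on the other two.
CLASSES = ("infrastructure", "development", "capital")

def upgrade_passes(votes: dict, threshold: float = 0.5) -> bool:
    """votes maps class name -> fraction of that class approving (0.0..1.0).
    All three classes must clear the threshold; any one class can veto."""
    return all(votes.get(cls, 0.0) > threshold for cls in CLASSES)

print(upgrade_passes({"infrastructure": 0.8, "development": 0.7, "capital": 0.9}))  # True
print(upgrade_passes({"infrastructure": 0.8, "development": 0.3, "capital": 0.9}))  # False
```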

Such a governance system, when combined with a highly upgradeable platform, would be able to rapidly adapt to the needs of developers and end-users, and evolve into a platform that can meet the needs of everyone.

The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

Andrew Levine is the CEO of OpenOrchard, where he and the former development team behind the Steem blockchain build blockchain-based solutions that empower people to take ownership and control over their digital selves. Their foundational product is Koinos, a high-performance blockchain built on an entirely new framework architected to give developers the features they need in order to deliver the user experiences necessary to spread blockchain adoption to the masses.

Dash is evolving into a decentralized cloud cryptocurrency

Dash is transitioning into a decentralized cloud cryptocurrency by releasing a platform that supports blockchain-verified user data storage and a decentralized API.

Payments-focused cryptocurrency Dash is starting to release insights into its new platform, which enables data to be stored within the network in the form of decentralized cloud service.

The forthcoming Dash Platform has been developed from longstanding ideas to evolve the cryptocurrency’s functionalities — dating back to the announcement of “Dash Evolution” back in 2015.

Dash Platform will incorporate four features: a Dash Drive, a decentralized API, or DAPI, a username layer, or DPNS, and the Dash platform protocol, or DPP.

Speaking to Cointelegraph, Mark Mason outlined what exactly the company means by “turning Dash into a decentralized cloud.”

In Mason’s words, “Dash Platform is an application development platform that leverages the Dash masternode network and blockchain by transforming the p2p network into a decentralized cloud.”

Clients will be able to integrate their applications to the Dash Platform by using the distributed, decentralized application programming interface —  the DAPI. Meanwhile, the Dash Drive provides support by enabling these clients to send, store and retrieve application data as well as to query the blockchain through a simplified interface.

“One key advantage of DAPI is that it provides developers with the same access and security of a full node, without the cost and maintenance overhead,” Mason said.

For its initial MVP release, the Dash Platform will work as a Database as a Service, or DBaaS. To this end, it will use data contracts with custom data structures defined for the applications that store their data on the Dash masternode network. This data will, in turn, be notarized via Dash’s blockchain.
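To illustrate the data-contract idea, here is a hypothetical sketch: an application declares the document types it will store and the shape each must have, and the platform validates documents against that declaration. The field names and validation rules below are invented for the example and are not the actual Dash Platform Protocol schema.

```python
# Illustrative only: a "data contract" declaring one document type, "note",
# with its required fields and expected value types.
contract = {
    "note": {"required": ["message"], "properties": {"message": str}},
}

def validate(doc_type: str, doc: dict, contract: dict) -> bool:
    """Accept a document only if its type is declared in the contract,
    all required fields are present, and field types match."""
    spec = contract.get(doc_type)
    if spec is None:
        return False
    if any(name not in doc for name in spec["required"]):
        return False
    return all(isinstance(doc[k], t) for k, t in spec["properties"].items() if k in doc)

print(validate("note", {"message": "hello"}, contract))  # True
print(validate("note", {"message": 42}, contract))       # False: wrong type
```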

Ryan Taylor, the CEO of Dash Core Group, has summarized the driving idea behind the platform as being to combine the “user experience of a centralized solution with the decentralized benefits of a permissionless network like Dash.”

The platform’s cloud functionalities mean that all data on the network will sync across user devices — e.g. tablets, desktop and smartphones.

New human-readable usernames, rather than alphanumeric cryptographic addresses, will be supported via the Dash Platform Name Service, or DPNS, layer. Platform users will be able to create usernames on the layer, “friend” other platform users and accept friend requests — as well as transact DASH using these usernames.

Dash believes that moving away from complex cryptographic identifiers will spur more people to adopt cryptocurrency by incorporating familiar interfaces and processes into its decentralized system.

As previously reported, cryptocurrency can already be transacted with usernames within a number of existing closed wallet ecosystems, though Dash claims that its service is distinct as the username layer operates natively to the blockchain.

How Decentralized Notifications can Revolutionize On-Chain Governance (Part I)

This article is Part 1 of a 2-part series on how EPNS as a service can improve the efficiency of on-chain governance.

Continue reading on Ethereum Push Notification Service »

DEGO Protocol — Decentralized Finance with Sustainability

Yield Farming has recently been the focus of interest and discussion in the broader crypto community, initiating a new era for DeFi via…

Inspecting Tezos decentralization: 200+ public nodes, 1000+ in total

When it comes to arguing about Tezos decentralization, people usually put roll distribution first: “look, the top 5 entities own more than half of the stake.” More advanced critics also highlight attacks on the voting mechanism, i.e. how many entities can block or force a proposal (a value that actually changes over time).

However, it’s not that straightforward, because once you are in a proof-of-stake network there are not just rewards but also value at risk. At the end of the day, it is the risk/reward ratio that matters when it comes to economic incentives, and even then only if we assume all agents are rational!

Ideally, for each attack vector (and, strictly speaking, every proposal introduces a new vector) one should estimate reward/VaR considering all risks for each attacker class (there is more than one profile).
We leave that for a separate study; in this article, let us focus on another aspect of decentralization: the P2P layer.

Collecting peers and connections

In order to conduct a comprehensive analysis, we needed a high-quality data set.
Basically, we could just set max_connections in the node config to a relatively large value and use the /network/points RPC endpoint. However, as we found out, this output is rather polluted with nodes that have a different chain_id or that are not operating.
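As a sketch of that filtering step, here is what cleaning a peer list of the kind /network/points returns might look like. The entry shape below is simplified and assumed for the example; real responses carry more fields, and the addresses are placeholders.

```python
# Simplified, assumed shape of /network/points entries: [address, info].
# Real Tezos RPC responses include more fields than shown here.
points = [
    ["51.15.x.x:9732", {"state": {"event_kind": "running"}}],
    ["10.0.x.x:9732",  {"state": {"event_kind": "disconnected"}}],
    ["34.65.x.x:9732", {"state": {"event_kind": "running"}}],
]

def running_peers(points):
    """Keep only peers whose connection state says they are running."""
    return [addr for addr, info in points
            if info.get("state", {}).get("event_kind") == "running"]

print(running_peers(points))  # → ['51.15.x.x:9732', '34.65.x.x:9732']
```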

Moreover, we also wanted to try to build the network graph so we needed not only vertices (nodes) but also edges (connections). We didn’t get to do it precisely in the end, but we learned a lot about how P2P works in Tezos.

Tezos Handshaker

Anyway, we went deeper and wrote a simple P2P scanner that connects to the bootstrap nodes and queries their known peers, then tries to connect to those peers and query their connections, and so on. It worked great; however, we faced several limitations:

  • Obviously, we couldn’t query known peers from nodes that are not exposed to the internet (hidden nodes). That is basically fine, since we are mostly interested in public nodes;
  • Some nodes were probably rejecting our connections because they had reached their maximum connection count, or for other reasons. As a workaround, we scan repeatedly, although that still does not guarantee we are not missing anything;
  • The main problem is the way nodes respond to the request: they return no more than 50 results, of which 30 are the “best” (active connections sorted by time of establishment) and the remaining 20 are random (active or not).
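The scanner described above is essentially a breadth-first crawl over the peer graph. Here is a minimal sketch of that loop, with `query_peers` standing in for the real handshake plus peer request (which, as noted, may refuse connections or truncate replies, hence the repeated scanning):

```python
from collections import deque

def crawl(bootstrap, query_peers, max_rounds=3):
    """Breadth-first peer discovery: ask each reachable node for its known
    peers and keep expanding until no new addresses appear.
    query_peers(addr) is a stand-in for the real P2P handshake + peer query."""
    known, queue = set(bootstrap), deque(bootstrap)
    for _ in range(max_rounds):
        next_queue = deque()
        while queue:
            addr = queue.popleft()
            for peer in query_peers(addr):
                if peer not in known:
                    known.add(peer)
                    next_queue.append(peer)
        queue = next_queue
    return known

# Toy topology standing in for real peer responses:
topology = {"boot": ["a", "b"], "a": ["c"], "b": [], "c": ["boot"]}
print(sorted(crawl(["boot"], lambda n: topology.get(n, []))))  # → ['a', 'b', 'boot', 'c']
```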


If you are interested in how P2P layer works in Tezos, check out the SimpleStaking blog.

Another problem relates to determining whether a node belongs to a particular network, in our case mainnet. We can confidently distinguish public nodes, as they return a version string during the handshake, but we cannot be 100% sure about hidden nodes. All we can say is that if a particular hidden node is known by several public mainnet nodes, it is likely to be a mainnet node as well.

We are not sure why carthagenet/zeronet/other nodes occur in the lists of known peers of mainnet nodes. It is probably due to misconfiguration, to someone running several nodes on the same machine, or something similar.

Goals and objectives

Given the above problems and limitations, we had to decide what we could calculate and how. We have formulated several goals:

  1. Identify all public nodes as they are in essence the “center” of the network and have the greatest importance;
  2. Try to detect active hidden nodes using heuristics;
  3. Make geographical analysis of these two groups;
  4. Draw an approximation of the network topology.

In order to do that we used the following algorithm:

  1. Do iterative peer scanning in order to handle the max-connections issue and enumerate all random points;
  2. Finish the scan when the number of nodes stops growing for a sufficiently long period of time;
  3. Filter out nodes that do not belong to the mainnet;
  4. Assign a score to each hidden node, calculated as the number of public nodes that know that particular node;
  5. Filter out hidden nodes whose score is less than the average.
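Steps 4 and 5 above can be sketched as follows, using toy data rather than the real dataset:

```python
# Score each hidden node by how many public nodes report it (step 4),
# then keep only the nodes scoring at least the average (step 5).
def filter_hidden(peer_lists):
    """peer_lists: {public_node: set of hidden addresses it knows}."""
    scores = {}
    for peers in peer_lists.values():
        for addr in peers:
            scores[addr] = scores.get(addr, 0) + 1
    avg = sum(scores.values()) / len(scores)
    return {addr for addr, score in scores.items() if score >= avg}

peer_lists = {
    "pub1": {"h1", "h2"},
    "pub2": {"h1"},
    "pub3": {"h1", "h3"},
}
# h1 is seen by all three public nodes, h2 and h3 by one each,
# so only h1 survives the average cutoff.
print(sorted(filter_hidden(peer_lists)))  # → ['h1']
```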

Terms and conditions

In this article we will use the terms public node and hidden node. Nodes of both kinds initiate connections to others, but only public nodes accept incoming connections.
Bootstrap nodes are the default ones specified in the node config. This is actually a single hostname hiding a load balancer that routes requests to 27 nodes spread across the globe.


In this article:

  • We analyse only mainnet nodes;
  • The scanning method is stretched over time, so it is not possible to take a snapshot at a particular moment;
  • We rely only on the geographical location of the nodes and the connections between them;
  • We recognize that we may not have scanned the entire network, or may have included inactive nodes in the dataset.

Thus, it’s important to understand that our results DON’T fully characterize the system.

We will look at criteria for decentralization that determine how well the network can withstand a breakdown or an attack.

Tezos mainnet results


During the scan we discovered:

  • 6298 addresses in total;
  • 1679 presumably operating nodes;
  • 203 public nodes.

As you may notice, there are far more nodes in Tezos mainnet than the number of bakers. It is clear why the bakers should be decentralized (in all senses), but what about the other nodes? What are they?

Roughly speaking, while baker nodes ensure the valid state of the blockchain and actually “write” the data, the rest of the network provides decentralized access to that data (i.e., “reading”) and makes sure broadcast “write requests” reach the bakers.
This is just as important as block validation, because what is the point of a decentralized network if you cannot access it in a decentralized way?

In the next chapters we will analyze all (presumably) running nodes and the public nodes in isolation. Note that while we are pretty confident about the public nodes, there are certainly some deviations when we operate on the whole network. Still, we think it can give some interesting insights.

Geographical distribution

This is an intuitive criterion: the more continents, countries, jurisdictions and segments of the global network Tezos covers, the better.
Connectivity and network topology are also important, especially their dependence on transcontinental links and tier-1/2 operators, but we will examine that a bit later.

The heat map looks good, and although there are obviously countries with high concentrations of nodes, we will see later that these are mostly cloud provider data centers.


Tezos nodes are distributed across 56 countries and 193 regions.

Let’s take a look at each of the sub-criteria in detail.

Hosting providers

Before we move on to detailed statistics by country and region, let’s look at the distribution of nodes by hosting providers.

Not surprisingly, we see the prevalence of popular cloud hosts, but if you take into account the countries where those hosts are located, the numbers are not that big. For example, the top 3 cloud providers with data centers in the US (AWS, Google, Digital Ocean) host 300 Tezos nodes. The real question is how important those nodes are for the network in general; although we cannot answer that from the staking perspective, we can analyze the network topology based on our dataset.


Europe and the U.S. dominate, accounting for about two-thirds of all nodes.

Interactive map

Note the (decimal) logarithmic scale.


As for the regions of individual countries, we can see that there is a correlation with the location of data centers of the largest hosting providers.

Interactive map

It’s more interesting, we think, to see how Tezos is scattered around the planet. Use the zoom to see the names of settlements.

Tezos network topology

We investigate only the logical network topology. Unlike with the physical topology, we do not consider the physical distance between nodes, latency, or the speed of packet propagation in the underlying network (the Internet).


As was pointed out, the numbers can differ in reality, but the topology will likely remain the same.

Using nodes as graph vertices and known-peer connections as edges, we built a network graph and calculated its basic properties.
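These basic properties can be computed with plain breadth-first search. Here is an illustrative sketch on a toy graph, not the real 1000+ node dataset (where one would want something more efficient than all-pairs BFS):

```python
from collections import deque

def bfs_dist(adj, src):
    """Distances from src to every reachable vertex, by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def graph_metrics(adj):
    """Radius, average path length and center size of a small connected graph.
    The center is the set of vertices whose eccentricity equals the radius."""
    ecc, total, pairs = {}, 0, 0
    for u in adj:
        dist = bfs_dist(adj, u)
        ecc[u] = max(dist.values())
        total += sum(dist.values())
        pairs += len(dist) - 1
    radius = min(ecc.values())
    center = [u for u in adj if ecc[u] == radius]
    return radius, total / pairs, len(center)

# Toy graph: a hub with three spokes.
adj = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
print(graph_metrics(adj))  # → (1, 1.5, 1)
```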


Radius: 2 (the remaining values, average path length, center size and clustering coefficient, appeared in the original charts)

Here is a simplified interpretation of the results:

  • The radius and average path length are small, which is good for network synchronization and fast propagation; it also suggests that presumably every node can reach the network center directly or via a trusted peer, or is part of the center itself;
  • The center size is more than half of all presumably running nodes; supposedly this is a more robust estimation of the network size than the one we used;
  • The network is divided into three clusters varying in degree of connectivity; this is most likely a side effect of the way the scan is done, so let’s not give it much importance;
  • The clustering coefficient is low, which indicates that the Tezos graph is sparse.

Public nodes

Let’s take a closer look at the public nodes, we are particularly interested in how they are distributed across hosting providers and countries.

In theory, you can optimize the latency and improve connectivity using this information, e.g. in order to deal with endorsement misses or resolve other network issues.

Top countries and hostings

While the world’s largest cloud providers provide a highly reliable service, diversification will never hurt.

An interesting observation: half of Tezos’ public nodes run on Amazon, including all the bootstrap nodes.

Bootstrap nodes alternatives

There is a predefined set of peers (defined in the default configuration) that a new node initially connects to. These peers are called bootstrap peers; there are currently 27 of them, hidden behind load balancers. It is logical to assume that they are part of the center, and we will mainly care what proportion they make up and how far apart they are geographically.

The question that worries many people is what happens if the bootstrap nodes suddenly stop working?

As the graph shows, nothing terrible.

Further work

Using the results of this work, we will enrich our products with two features:

Stay tuned!

Originally published at on July 30, 2020.

Inspecting Tezos decentralization: 200+ public nodes, 1000+ in total was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.

Clear Governance Key for Enterprise Blockchain to Move Forward

Law expert believes a governance structure where risks and rules are thought through is the key for enterprise blockchain to move ahead.

A clear governance structure within a decentralized ecosystem is the key for enterprise blockchain to move away from uncertainty, said Mark Radcliffe, a partner at global law firm DLA Piper with extensive experience in blockchain governance, in an interview with Cointelegraph.

Freedom of decentralization and governance 

Radcliffe argues that blockchain is an industry that attracts highly individualistic people who are skeptical of authority. However, he believes collaborative frameworks will be essential for the success of blockchain implementation and tokenization, just as they have been for open-source software. He added that:

“Blockchain projects frequently say that they will just be a place where people can show up and do whatever they want, but we won’t put any restriction on that.  We don’t care what people do, we don’t care if we come or go, all that matters is that everyone has maximal individual freedom of choice.”

Radcliffe stresses that people need to move away from the idea that being on a blockchain means there is no need for governance. Building a governance structure that makes enterprises such as banks and insurance companies comfortable plays a key role in making blockchain work in the long run, according to Radcliffe.

Using the example of the Ethereum fork, Radcliffe pointed out that members of the community issued a software update that caused a hard fork of the Ethereum blockchain; for the nodes that adopted it, the fork “rolled back” the chain and returned the Ether to its original wallets. About 80% of the nodes adopted the software update; the remaining 20% did not, on the grounds that “code is law,” and became Ethereum Classic.

The DAO had no board of directors or officers, so participants had no one to ask for redress, which makes “on-chain governance” extremely “uncertain.” Radcliffe concluded that if enterprises are considering using blockchain to improve business efficiency, it is important to design a governance structure where the risks and rules are clear, in order to avoid the uncertainty of a new technology.

As Cointelegraph previously reported, decentralized mesh networks became a technological lifeline during a disaster, and decentralized governance could help people start learning how to make decisions and create together.

Existing Consensus Mechanisms in Decentralized Cloud Storage — I

In this series of articles we will introduce you to the different types of consensus methods that have been used by different…

Continue reading on Medium »

Decentralized Gaming Platform BetProtocol Integrates Band Protocol To Launch Esports & Sports…

BetProtocol, one of the leading betting protocols, providing technology for more than 12 operators, has strategically partnered and integrated…

Continue reading on Band Protocol »