NKN: The Primer
This article was originally published on The Daily Chain, 18th November 2019.
An Introduction to New Kind of Network, Censorship Resistance, and Cellular Automata
“The decisions we make about communication security today will determine the kind of society we live in tomorrow.”
Whitfield Diffie, co-inventor of public key cryptography, adviser to NKN
This article is a best attempt at a non-technical primer for NKN (New Kind of Network) from an NKN community member. It was the first crypto article I wrote, and I republish it here (with an adjustment to the section on the testnet, since it's grown so much) in anticipation of my forthcoming interview with NKN founder and OnChain co-founder, Yanbo Li.
“Why has this guy stuck a picture of a shell in here?” you may wonder. By the time you finish reading this article, the way you look at that shell, and indeed the whole universe around you, will never be the same again.
An epic effort went into the creation of the article, and if you enjoy it, please consider making a donation at one of the addresses found at the end.
NKN Mainnet launch is scheduled for the end of this month.
The scope and technology of NKN are so vast that covering it all in one article is nearly impossible. But I’ll give it a go anyway. Links to references, videos, and further reading are provided at the end.
NKN (New Kind of Network) is a massively distributed, peer-to-peer, self-evolving and scalable network which uses blockchain and a proof-of-relay mining algorithm to incentivize participation in a trustless network. NKN utilizes a breakthrough consensus mechanism called MOCA (Majority vOte Cellular Automata), which efficiently scales to millions of nodes.
Before we discuss further what NKN does and why it is revolutionary, we should briefly mention the two main problems NKN helps solve: (1) censorship and (2) scaling. Both problems exist on the Internet and in Bitcoin and other cryptocurrencies.
Like the continents and countries of the world, the Internet is broken up into regions. Internet backbone (Tier 1) providers, who work closely with government regulators, control the flow of traffic between these regions. These few large providers, numbering a dozen or so globally (there is no official count), enter peering agreements and transit agreements with each other, and with the smaller Tier 2 and Tier 3 networks, to remunerate one another for data transmitted. All of these providers can serve as ISPs (Internet Service Providers).
Internet backbone providers try to operate with settlement-free interconnection, also known as settlement-free peering. In other words, Tier 1 networks can exchange traffic with other Tier 1 networks without paying any fees for the exchange of traffic in either direction. However, peering is founded on the principle of roughly equal traffic between the partners, and so disagreements arise in which one partner disconnects the link in order to force the other into a payment scheme. These payment schemes are known as transit agreements. Negotiating transit agreements takes time, and the system of remuneration is far from perfect.
Tokenizing micro-payments on a blockchain with smart contracts, where peers relay traffic as a proof-of-work (proof-of-relay), could simplify or replace traditional transit and peering agreements and save millions of dollars in litigation and settlement.
Governments can censor Internet content at their discretion and even prevent citizens from accessing the Internet altogether, a power granted in part by the imbalance of power online, their relationship with Tier 1 providers, and the way in which traffic is routed on the Internet.
The government of Egypt shut down the four major ISPs on January 27, 2011 at approximately 5:20 p.m. EST. Evidently the networks had not been physically interrupted, as the Internet transit traffic through Egypt, such as traffic flowing from Europe to Asia, was unaffected. Instead, the government shut down the Border Gateway Protocol (BGP) sessions announcing local routes. BGP is responsible for routing traffic between ISPs.
Only one of Egypt’s ISPs was allowed to continue operations. The ISP Noor Group provided connectivity only to Egypt’s stock exchange as well as some government ministries. 
The Web too is rapidly centralizing into a handful of data silos such as Facebook, Amazon, Google, Netflix, Apple and Microsoft. Billions of users are now beholden to the mega-companies that dominate the online space.
In 2019 Internet users across the world enjoy little to no privacy, seldom control their own data, are routinely surveilled, and are commodities to the companies who monetize their private information.
No right of private conversation was enumerated in the Constitution. I suppose it never occurred to anyone at the time that it could be prevented.
Whitfield Diffie (co-inventor of public key cryptography, adviser to NKN)
As if all that weren’t bad enough, the erosion of Net Neutrality looks set to make matters worse. Put simply, the mega-companies that dominate the online space do deals with Tier 1 providers and secure themselves a “fast lane” for traffic, while the competition is left struggling in the slow lane.
Even Bitcoin, often dubbed the “Internet of Money”, has centralized into a handful of mining silos such as BTC.com, Antpool, Slush, F2pool and ViaBTC. The manufacture of the specialized hardware used to mine bitcoin is monopolized by a small cartel of vendors.
Since mining operations are centralized, the Bitcoin network relies on full nodes for censorship resistance. There are around 10,000 full nodes today. As of Q1 2019, Bitcoin’s proof-of-work mining consumed as much energy as some nation-states.
Does it matter if the Internet and Bitcoin are centralizing?
The answer lies in something called censorship resistance. Censorship resistance describes the property of a distributed (Bitcoin) or decentralized network (Internet) to withstand unauthorized modification, deletion or censorship by third parties. Censorship resistance can also be used to describe how easily people can participate in, and use, the network. If people cannot easily join the network it is not censorship resistant. By sharing data (or the blockchain) across many computers, a network’s resistance to censorship and deletion increases.
People in the Bitcoin community know the importance of censorship resistance, which is why many of them run non-mining full nodes to preserve blockchain data, at their own expense and without financial reward. Unlike mining nodes, full nodes are not incentivized by the network, so the game theory that keeps mining nodes honest does not strictly apply to them. Lacking a proper incentive also means the full node count has not increased in years. Today they number around 10,000 worldwide, roughly the same as in 2014, as revealed in this Jameson Lopp blog post from that year (Lopp is the creator of Statoshi, a fork of Bitcoin Core that analyzes statistics of Bitcoin nodes):
Recently I’ve been trying to quantify the strength of Bitcoin’s infrastructure with respect to its full nodes. I keep tabs on the number of full nodes via Bitnodes, which recently updated its crawling algorithm to be faster and more accurate. This update caused the number of reported nodes to drop by an order of magnitude, from more than 100,000 to fewer than 10,000 because it no longer counts nodes that do not accept inbound connections. Is this a cause for concern?
In 2016 the number of full nodes dropped below 5,000.
It is a surprising and not well-known fact that mining pools are not by design required to keep a copy of the blockchain, although they generally do, since it is in their own interest to validate transactions themselves before adding them to a block.
Is 10,000 nodes enough? In ‘The State of the Bitcoin Network’, Lopp continues:
While the network is quite healthy, we still desire more nodes in order to further decentralize the network, disperse trust, and make it more expensive for a malicious entity to conduct a successful Sybil attack.
Yet the number of full nodes is the same as it was five years ago in 2014, when Lopp made this comment. The hardware requirements for running a node are high, and the knowledge and dedication required to run one are non-trivial. Badly configured full nodes end up “leeching” network resources and hindering network performance. Paying full node operators has been rejected as a viable strategy, in part because it changes “the contract for distributing transaction fees” and does not guarantee non-malicious behavior.
Instead of seeking to pay node operators, which would be an incredible engineering challenge that might result in disenfranchising miners due to changing the contract for distributing transaction fees / newly minted coins, the Bitcoin Core developers will seek to make it less technically challenging and less resource intensive for a user to run a node.
Nick Szabo discusses the importance of node decentralization in his speech at Ethereum Devcon1 from November 2015. In this clip Szabo states the “most diverse nodes” are the “most independent nodes”, and the “best nodes”. Diversity of node location (across many countries and continents) is a better measure of censorship resistance than sheer volume of nodes.
With Szabo’s comments in mind we can look at bitcoin node distribution, and see that 40% of nodes are hosted on servers belonging to six companies based in only three jurisdictions: the USA, Canada and Germany.
Scaling (is more than just transactions per second)
Scaling in Bitcoin and cryptocurrency usually refers to how many transactions per second (TPS) the network can process. Because unconfirmed transactions and blocks (which contain already confirmed transactions) must be broadcast throughout the network, a large number of nodes (or peers) usually slows down network consensus times (the time required by the network to confirm that a block, or set of transactions, is valid), as messages take longer to propagate to all the peers.
Many cryptocurrencies and blockchain projects have therefore opted to reduce node count in order to reduce transaction propagation and consensus times, and so process more transactions. But every blockchain project which does so fails to fully leverage the greatest feature of blockchain, namely censorship resistance. Whether they employ “validator nodes”, “masternodes” or “staking nodes”, such designs almost always reduce node count, raise the barrier to entry, and ultimately weaken censorship resistance.
There does, however, exist a second and perhaps more important definition of scalability in blockchain: how large a network of nodes can grow while still reaching consensus fast enough to meet demand and sustain a high TPS.
Ideally someone would design a system which could support both definitions of scalability: a network of millions of nodes which can arrive at consensus in an instant, yet still capable of processing thousands of transactions per second; a network in which all nodes are equally incentivized, keep a copy of the chain, are able to mine blocks, and aren’t pooled into massive centralized silos; a network with a low barrier to entry, not requiring specialized hardware, yet possessing a useful proof-of-work that doesn’t consume enough energy to power a mid-sized nation-state.
Well, someone did.
New Kind of Network
Yanbo Li was that someone.
After more than 10 years of P2P/mesh network protocol development at Qualcomm and Nokia (where he worked with NKN co-founder Zheng “Bruce” Li), and with “profound experience in blockchain system architecture and development” as a co-founder of OnChain, the stage was almost set for the birth of NKN. But something was still missing.
Yanbo found it in a book: A New Kind of Science by Stephen Wolfram. More precisely, the sections concerning cellular automata. This is what led Yanbo to create New Kind of Network.
Cellular Automata are frameworks for modelling complex systems. In its simplest form a cellular automaton starts as a grid of cells. We begin by coloring the cells of the top row either white or black. The colors (or states) of the cells in the second row are determined by a set of simple rules. The rules say whether a cell will be white or black based on the color of its three neighboring cells in the row immediately above it. The colors of the cells in the third row are determined by the colors of its three neighboring cells in the second row, and so on.
Since a cell’s new color depends on the colors of three cells, each of which can be white or black, there are 2^3 = 8 possible neighborhood patterns. A rule assigns an output color to each of these 8 patterns, giving 2^8 = 256 possible rules, which Wolfram numbered Rule 0 to Rule 255.
Instinctively we would assume such simple rules would create simple and predictable patterns, and in the case of Rule 1 this assumption is borne out when we grow the pattern through successive generations of rows.
Now let’s take a look at Rule 50:
Again boring and predictable.
Now let’s try Rule 30:
The results here were shocking and unexpected, and fascinated Wolfram. Rule 30 creates a totally random and unpredictable pattern even as you grow it out. The result is so random, in fact, that Wolfram uses Rule 30 as a pseudo-random number generator for Wolfram Alpha. So random and irreducibly complex is it that it has been used in cryptography. Some have even considered its use as a proof-of-work, in place of prime number factorization.
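To make the mechanics concrete, here is a minimal sketch of an elementary cellular automaton in Python. The rule table and the single-black-cell starting row follow Wolfram's numbering convention described above; running it with Rule 30 reproduces the chaotic pattern, while Rule 50 produces the boring, regular one.

```python
def rule_table(rule_number):
    """Map each of the 8 three-cell neighborhoods to the rule's output bit."""
    return {tuple(map(int, f"{n:03b}")): (rule_number >> n) & 1 for n in range(8)}

def step(row, table):
    """Compute the next row; cells beyond the edges are treated as white (0)."""
    padded = [0] + row + [0]
    return [table[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

def evolve(rule_number, width, generations):
    """Grow the pattern from a single black cell in the middle of the top row."""
    table = rule_table(rule_number)
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(generations):
        row = step(row, table)
        rows.append(row)
    return rows

# Print Rule 30's first 10 generations as a triangle of black and white cells.
for r in evolve(30, 31, 10):
    print("".join("█" if c else " " for c in r))
```

Swapping `30` for `50` (or any other rule number from 0 to 255) shows at a glance which rules settle into repetition and which, like Rule 30, never do.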
The revelation in all this is that simple rules can create random and complex patterns. Even up until the 1980s, when Wolfram began his research, mathematicians had thought complex patterns could only be created by complex algorithms.
Nature and life can now be understood with elegance and simplicity. What appears complex is made from simple rules. Whether a pattern is predictable or unpredictable, or unpredictable for a while before becoming predictable, or completely random, it can always be explained with simple rules.
Game of Life
In fact a cellular automaton (CA) can display such complexity that mathematician John Conway devised one called the Game of Life, a 2D grid of square cells, each of which is in one of two possible states, alive or dead, (or populated and unpopulated, respectively). Every cell interacts with its eight neighbors, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:
- Any live cell with fewer than two live neighbors dies, as if by underpopulation.
- Any live cell with two or three live neighbors lives on to the next generation.
- Any live cell with more than three live neighbors dies, as if by overpopulation.
- Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.
With these simple rules the patterns in the Life cellular automaton “evolve” forever.
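The four rules above translate almost directly into code. Here is a minimal sketch of one Life generation in Python, representing the live cells as a set of coordinates (a common implementation choice, not the only one):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly three live neighbors
    # (birth or survival), or two live neighbors and is already alive (survival).
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# The "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))                          # the vertical phase
print(life_step(life_step(blinker)) == blinker)    # True: a period-2 oscillator
```

Notice that underpopulation and overpopulation never need to be spelled out: any cell whose neighbor count is not two or three simply fails the survival test and drops out of the set.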
Since its publication, Conway’s Game of Life has attracted much interest, because of the surprising ways in which the patterns can evolve. Life provides an example of emergence and self-organization. Scholars in various fields, such as computer science, physics, biology, biochemistry, economics, mathematics, philosophy, and generative sciences have made use of the way that complex patterns can emerge from the implementation of the game’s simple rules. The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that design and organization can spontaneously emerge in the absence of a designer. For example, cognitive scientist Daniel Dennett has used the analogy of Conway’s Life “universe” extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws, which might govern our universe.
To really appreciate the evolution of Game of Life you have to watch a video of the cellular automaton in action. Grab some popcorn.
Nodes in the NKN network act like cells in a cellular automaton (CA). Each node is connected to a number of ‘neighbors’, and network consensus forms as nodes react to the states of their neighbors until they reach consensus on a particular state. Suffice it to say the CA NKN uses are not the simple one- and two-dimensional CA of the examples above, but multi-dimensional CA that are always changing in NKN’s ‘self-evolving’ network. The mechanisms by which NKN does this are beyond my understanding and, fortunately, the scope of this article. For our purposes it is enough to say NKN uses many different CA rules that rapidly converge to consensus (so not Rule 30), rules which Wolfram generated images of (see below) while first getting to know the project with NKN Chief Technical Officer Dr. Yilun Zhang. Yanbo and Bruce met Dr. Yilun Zhang at a blockchain conference in San Francisco in 2018. With his research background in computational neuroscience and cellular automata, Yilun soon realized the potential and joined the team. Quoting from the article Stephen Wolfram (Creator of NKS) Tries to Understand NKN, which details this first meeting:
SW (Stephen Wolfram): If we start with 70% white, everything becomes white:
Notice that there are those triangles that stick out, as the system “decides” what the dominant color will be. So what happens if it’s really close to 50% black, 50% white? Here’s a case with 52% black:
At the end of the article Yilun explains how NKN’s routing is more efficient than Internet routing, and he describes use cases for NKN.
Making the Network Efficient
SW (Stephen Wolfram)
So if you want to prevent malicious nodes from disrupting the network, you need to randomize the route. This seems inefficient in terms of the shortest and fastest path to send a packet.
YZ (Yilun Zhang)
There are some efficiency and security tradeoffs. However, we can actually make NKN routing better than current Internet routing. Each link between NKN nodes knows its ping time, so from a given node, you can pick the node with the lowest latency.
In addition, you can create multiple concurrent NKN routes between sender and receiver. This way, you can even aggregate bandwidth of all the virtual paths. Recently we did a prototype of a web accelerator and achieved a 167% to 273% speed boost by doing so. And the bigger the file, the better the boost. It shows us that the bottleneck for web downloads is neither at the content server nor the user’s ISP, but rather in the middle of the default network routing path.
OK, so if all this works according to plan, what can you do with it? It seems like you could make a better version of something like Bit Torrent. Is that right?
We can enable a lot of applications to communicate directly without any centralized servers. Some of the low hanging fruit is instant messenger, web proxy and relays, live video streaming and sharing, dynamic Content Delivery Network (CDN), for example.
In principle, any application that requires user-to-user communication. Therefore we believe the potential of NKN is boundless, and we are really happy you and the Wolfram team can help us achieve our ambitious goal.
The key takeaways from this interview are:
1) NKN uses a novel packet routing protocol based on Chord DHT, which can be simulated and visualized as an overlay network with “chords” by Wolfram|One. This has general implications for all blockchain projects: protocol designers can now use the powerful tools of Wolfram|One to mathematically prove, simulate, and improve algorithms without burning thousands of dollars’ worth of cloud computing costs running large-scale testnets.
2) NKN is creating a new breed of consensus algorithms that are both extremely scalable and efficient, based on NKS principles in general and cellular automata rules in particular. The traditional approach of reaching consensus by competing hashing power is an interesting and surprising twist on Stephen Wolfram’s computational irreducibility principle, found in NKS and foreseen 30 years ago.
3) Stephen Wolfram believes by exploring the computational universe further through NKS principles and methods, we can discover even better consensus algorithms for NKN and the asynchronous case. Working together with NKN, we can improve the cutting edge of blockchain technology in general.
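The first takeaway mentions Chord DHT. The essential property of Chord is that each node keeps "finger" links at power-of-two distances around an identifier ring, so each hop can at least halve the remaining distance to the target key, and any lookup completes in O(log N) hops. A toy model of that jumping behavior (an illustration of the principle, not NKN's actual routing code):

```python
def chord_style_hops(distance):
    """Toy model of Chord-style routing on an identifier ring: each hop jumps
    by the largest power of two not exceeding the remaining clockwise distance,
    which is what the best matching finger-table entry achieves. The remaining
    distance at least halves every hop, so a lookup on a ring of N positions
    never takes more than log2(N) hops."""
    hops = 0
    while distance > 0:
        jump = 1 << (distance.bit_length() - 1)  # largest power of two <= distance
        distance -= jump
        hops += 1
    return hops

# Even across a ring a million positions wide, a lookup needs only a handful of hops.
print(chord_style_hops(1_000_000))
# On a 65,536-position ring, no lookup ever needs more than 16 hops:
print(max(chord_style_hops(d) for d in range(1, 2**16)))
```

This logarithmic hop count is what lets a DHT-routed overlay stay navigable as the node count grows by orders of magnitude.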
So as well as improving on the censorship resistance of traditional cryptocurrency consensus mechanisms, NKN also improves Internet routing.
the bottleneck for web downloads is neither at the content server nor the user’s ISP, but rather in the middle of the default network routing path.
Yilun explains how an NKN node in a network of 1,000,000 peers consumes only 50% more resources (bandwidth, CPU, RAM) than a node in a network of 10,000 peers.
The consensus cost in NKN scales as O(log N), so it can literally scale to any number of nodes.
If we increase to 1M nodes, it only consumes 50%+ more resources than now, which is nothing.
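The arithmetic behind that 50% figure follows directly from the stated O(log N) scaling: growing from 10,000 to 1,000,000 nodes multiplies per-node cost by log(1,000,000) / log(10,000) = 6/4 = 1.5.

```python
import math

def relative_cost(n_nodes, baseline_nodes):
    """Per-node consensus cost relative to a baseline network size,
    assuming cost grows proportionally to log(N)."""
    return math.log(n_nodes) / math.log(baseline_nodes)

# Growing the network 100x, from 10,000 to 1,000,000 nodes:
print(relative_cost(1_000_000, 10_000))   # 1.5, i.e. only 50% more per-node cost
```

Contrast that with a consensus scheme whose per-node cost grows linearly in N, where the same 100x growth in node count would mean a 100x increase in cost.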
When I asked in Discord which effect in nature (like the pattern of the Conus textile shell resembling Rule 30) displays the convergence NKN looks for in its CA, Yilun proposed spontaneous magnetization.
The circuit of nodes your traffic takes through the NKN network obfuscates your IP address, and the nodes relaying your traffic sign each other’s data in a ‘proof-of-relay’ to form a ‘signature chain’ that ensures the data you sent has not been tampered with. So as well as being censorship resistant, NKN is also tamper resistant.
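The idea of a signature chain can be sketched in a few lines. This is a toy illustration of the concept only, using plain hashes in place of real digital signatures, and it is not NKN's actual wire format: each relayer extends the chain with a value bound to both its own identity and everything that came before, so tampering anywhere breaks verification.

```python
import hashlib

def link(prev_digest, relayer_id):
    """Extend the chain by hashing the previous link together with the
    relayer's identity (a toy stand-in for a real digital signature)."""
    return hashlib.sha256(prev_digest + relayer_id.encode()).digest()

def signature_chain(payload, relayers):
    """Build the chain of digests as the payload hops across relay nodes."""
    digest = hashlib.sha256(payload).digest()
    chain = [digest]
    for node in relayers:
        digest = link(digest, node)
        chain.append(digest)
    return chain

def verify(payload, relayers, chain):
    """Recompute the chain: changing the payload, the relayer set,
    or their order breaks verification."""
    return chain == signature_chain(payload, relayers)

route = ["nodeA", "nodeB", "nodeC"]
chain = signature_chain(b"hello", route)
print(verify(b"hello", route, chain))   # True
print(verify(b"hell0", route, chain))   # False: payload was tampered with
```

In the real protocol, relayers use public-key signatures rather than bare hashes, so anyone can check who relayed the data as well as whether it was altered.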
Traffic relayed across the NKN network is paid for in NKN tokens, which facilitates micro-payments for bandwidth relayed. Since the system is trustless, Tier 1 providers and ISPs can benefit from NKN’s built-in metering when settling transit agreements.
The relay reward is a key part of NKN’s incentive scheme: the more data you relay, the more NKN tokens the network pays you. This creates competition as nodes become faster in order to collect more relay rewards, ultimately resulting in a faster and more robust network. Even though NKN also has block rewards, there are far too many nodes on the network to rely on block rewards alone as an incentive, and besides, someone needs to pay for the data.
Proof-of-relay and relay rewards are perfect instruments for measuring and tokenizing transit agreements between backbone providers and ISPs.
NKN has long since passed the theoretical whitepaper stage. It is running a fully open-sourced, public testnet, currently with 12,000 active consensus nodes, the most of any crypto network. Participants can convert testnet tokens (tNKN) to NKN tokens at (a minimum of) 5:1 when mainnet is released (June 2019). Check its progress at https://testnet.nkn.org/
Looking to the future we can also expect smart contract deployment on NKN, a class of application which relies heavily on censorship resistance in a distributed P2P network to function best.
Wrapping it up
Just like cellular automata themselves, NKN’s possible use-cases are too numerous to mention in this article, and many of them haven’t been thought up yet.
In the words of Dr. Yilun Zhang, “any application that requires user-to-user communication” [can now] “communicate directly without any centralized servers” and can benefit from NKN’s unrivaled censorship resistance, decentralization, speed and scaling: from dApps and smart contracts to P2P messaging systems, file-sharing applications, HTTPS and anonymizing proxies like Tor, IoT devices, or even social networks, the possibilities are endless. And every participant in the network is equally incentivized to run them.
NKN benefits not just end-users and developers building robust decentralized applications, but also companies who route traffic across the world and whose technological advancement is hindered by the inefficiencies of traditional peering and transit agreements, and all the bureaucracy and legal apparatus which goes with them. Just as Bitcoin eliminates the expenses and bureaucracy associated with policing a ‘trusted’ banking system by replacing it with a ‘trustless’ P2P electronic payment system, so too does NKN eliminate the expenses and bureaucracy associated with policing a ‘trusted’ telecom industry.
NKN is a technology that empowers people by freeing information and breaking down borders in a truly global network, and it does it by imitating the simple rules of life which bind our universe together.
Thanks to Yilun, Bruce, Allen, and Yanbo for sharing.
Shout out to Lukas, ChrisT, insider, lightmyfire and all the NKN community.
Appendix: The history of NKN core team
NKN Tech Talk: Million-node consensus with Dr. Yilun Zhang
A New Kind of Science — Stephen Wolfram
Computing a theory of everything | Stephen Wolfram
Inventing Game of Life — Numberphile
ACM A.M. Turing Award — Whitfield Diffie and Martin E. Hellman
Stanford Seminar — Cryptology and Security: the view from 2016
https://playgameoflife.com/ (thanks lightmyfire, for being a player, and recommending this)