Satoshi on Scaling

Gigamesh
16 min read · Jun 27, 2021

This article was originally published on The Daily Chain, 5th February 2020.

“SegWit does not reduce transaction size (when size means bytes). What it does is define a new concept as ‘size’”

Pieter Wuille, January 2018

Part 1: The Dispute
Part 2: Payment Channels

Part 3: SegWit
Part 4: Big Blocks
Part 5: VISA
Part 6: Nakamoto Vision
References

Part 1: The Dispute

I will preface this by saying that I learned a great deal over the last 48 hours about Satoshi’s ideas around scaling. Arthur van Pelt, I salute you.

I sincerely hope these documents raise debate. This article is not sponsored, nor should it be regarded as an endorsement of BTC, BCH or BSV. It is simply an attempt to understand what was in Satoshi’s mind when he (or she, or they) considered the future scaling of Bitcoin.

Twitter is not a good medium for referencing and organizing data, so I thought it would be educational to pool it all in one article.

It is quite remarkable that two individuals, on reading the same documents, can arrive at such differing conclusions. Had Satoshi only been more specific about his vision for scaling Bitcoin, such disputes would never occur.

The disagreement Arthur and I have is essentially two-fold:

  • The extent to which Satoshi believed in on-chain scaling by increasing block size
  • The extent to which Satoshi would approve Lightning Network (LN) as a high-frequency trades (HFT) payment channel

It is indisputable that Satoshi wanted both.

You can start reading Arthur’s “tweetstorm” laying out his views here.

Part 2: Payment Channels

Nakamoto high-frequency transactions

I did not know, despite my time spent researching cryptocurrency, that Satoshi laid the foundations for an HFT payment channel in the original 0.1 release of Bitcoin, a mechanism known as Nakamoto high-frequency transactions [1]. Without Arthur I might never have known this; I thank him, as it's important.

[1a]

What is described here is off-chain scaling. A transaction that can be updated by two or more parties many times before commitment to the blockchain.
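The wiki describes the mechanism as transaction replacement: the two parties keep exchanging new versions of one unconfirmed transaction, each with a higher sequence number, while nLockTime holds every version out of a block until the channel closes. Here is a minimal Python sketch of that replacement logic (the field and variable names are my own, purely illustrative):

    # Toy model of Nakamoto high-frequency transactions: one unconfirmed
    # transaction is replaced repeatedly off-chain, and only the final
    # version is committed. Not real Bitcoin serialization.
    from dataclasses import dataclass, field

    @dataclass
    class ChannelTx:
        n_lock_time: int        # block height before which the tx cannot confirm
        n_sequence: int = 0     # higher sequence number replaces lower
        balances: dict = field(default_factory=dict)

    def update(tx: ChannelTx, new_balances: dict) -> ChannelTx:
        """Both parties re-sign a replacement with a higher sequence number."""
        return ChannelTx(tx.n_lock_time, tx.n_sequence + 1, dict(new_balances))

    tx = ChannelTx(n_lock_time=210_000, balances={"alice": 50, "bob": 0})
    for amount in (5, 3, 12):   # many off-chain payments...
        tx = update(tx, {"alice": tx.balances["alice"] - amount,
                         "bob": tx.balances["bob"] + amount})
    print(tx)                   # ...one final state reaches the chain

Note that nothing at the consensus level forces a miner to honor the higher sequence number, and that is exactly the weakness the wiki goes on to describe.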

Unfortunately the design “was not secure: one party could collude with a miner to commit a non-final version of the transaction, possibly stealing funds from the other party or parties.”

Understandably, the ostensible similarities between Nakamoto HFT and Lightning Network might give rise to the idea that one is the natural evolution of the other.

Seeing references to nLockTime (which LN does use) may even serve as confirmation of that notion. However, closer examination of the development of payment channels in the wiki reveals that other solutions exist which don't rely on SegWit. Only one design in the figure below actually needs SegWit, and that is of course Poon-Dryja, better known as Lightning Network.

[1b]

Part 3: SegWit

Lightning Network relies on SegWit because SegWit fixes transaction malleability, a weakness whereby an attacker can change a transaction's hash (txid) by altering its signature data without invalidating it.

SegWit segregates and relocates signature data from the transaction inputs to its own structure at the end of the transaction, known as the witness.
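To see why relocating the signatures matters, consider how the txid is computed. A legacy txid hashes the whole transaction, signatures included; a SegWit txid hashes the transaction without the witness. A toy illustration in Python (the byte strings are simplified stand-ins, not real Bitcoin serialization):

    import hashlib

    def txid(serialized: bytes) -> str:
        """Bitcoin-style double SHA-256 of a serialized transaction."""
        return hashlib.sha256(hashlib.sha256(serialized).digest()).hexdigest()

    core = b"version|inputs|outputs|locktime"  # fields both parties agreed on
    sig_a = b"signature-encoding-A"            # two different but equally
    sig_b = b"signature-encoding-B"            # valid encodings of one signature

    # Legacy: signatures are hashed into the txid, so a third party who
    # re-encodes a signature silently changes the txid (malleability).
    print(txid(core + sig_a) == txid(core + sig_b))  # False

    # SegWit: the witness is excluded, so the txid stays stable no matter
    # how the signature is encoded.
    print(txid(core) == txid(core))                  # True

A stable txid is what allows LN to build chains of unconfirmed transactions that safely spend one another.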

It’s quirky. And there’s a lot of confusion about how it works.

A SegWit transaction is bigger than a Legacy Bitcoin (non-segwit) transaction, and also smaller than a Legacy Bitcoin transaction.

If everybody sends Legacy transactions then the 1MB block size cap still holds and the network will be congested.

If everybody sends SegWit transactions the blocks get bigger, up to 3.7 MB, and congestion is relieved (until, of course, that too becomes saturated).

Confused?

“SegWit does not reduce the transaction size, if you’re referring to the raw byte length of transactions.”

So begins the top answer on Bitcoin Stack Exchange to a question asking how the SegWit magic is done.

Turns out the magic is done by making the blocks bigger, but only if you send SegWit transactions.

“Instead it introduces block weight as a new metric that does not directly correspond to the raw byte length of transactions, but treats witness data as having less weight than other parts of the transaction.

The limit for Bitcoin blocks has been changed with the activation of segwit. Blocks used to be limited to 1,000,000 bytes (1MB). Since segwit they are limited to 4,000,000 weight units.

In calculating the weight of a transaction, bytes are weighed differently depending on whether they are part of the witness or not:

• A non-witness byte weighs four weight units.

• A witness byte weighs one weight unit.

This has the effect that a non-segwit transaction contributes exactly the same portion of the limit as before. E.g. the raw bytelength of a P2PKH transaction with one input and two outputs is 222 bytes, and it therefore weighs 888 WU, i.e. 222 B / 1,000,000 B = 888 WU / 4,000,000 WU. This means that for non-segwit transactions, the block weight limit has exactly the same effect as the blocksize limit had before, and it is backwards compatible.

However, for segwit transactions the weight is not a quadruple of the raw transaction size. E.g. the raw bytelength of a P2SH-P2WSH 2-of-3 multisig transaction with one input and two outputs is 409 bytes, but its weight is 868 WU, as a large portion of the transaction input is witness data. A segwit transaction will therefore take a smaller portion of the weight limit than its raw bytelength would suggest.

To allow easier comparison to legacy fee rates, the block weight is often expressed as “virtual size” in “virtual bytes” or “vbytes”. The virtual size is calculated by dividing the weight of a transaction by four and rounding up to the full integer. For non-segwit transactions, the raw bytelength and virtual size are equal.

In conclusion, the raw byte length of blocks can now exceed 1,000,000B, but the virtual size cannot exceed 1,000,000vB.”
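The arithmetic in that answer is easy to verify. A short Python sketch of the BIP-141 weight and virtual-size rules, using the two example transactions from the answer above (the 153/256 byte split is derived from the quoted 409-byte, 868 WU figures):

    import math

    def weight(non_witness_bytes: int, witness_bytes: int) -> int:
        """BIP-141 weight: non-witness bytes count 4 WU each, witness bytes 1 WU."""
        return non_witness_bytes * 4 + witness_bytes

    def vsize(non_witness_bytes: int, witness_bytes: int) -> int:
        """Virtual size: weight divided by four, rounded up."""
        return math.ceil(weight(non_witness_bytes, witness_bytes) / 4)

    # Legacy P2PKH, 1 input / 2 outputs: 222 raw bytes, no witness data.
    print(weight(222, 0), vsize(222, 0))      # 888 WU, 222 vB

    # P2SH-P2WSH 2-of-3 multisig, 1 input / 2 outputs: 409 raw bytes,
    # of which 256 must be witness data for the quoted 868 WU to hold.
    print(weight(153, 256), vsize(153, 256))  # 868 WU, 217 vB

Since fees are charged per vbyte, the 409-byte SegWit transaction occupies 217 vB of the limit while the 222-byte legacy transaction occupies 222 vB; per raw byte, the legacy transaction consumes roughly twice as much of the block's capacity.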

So people sending Bitcoin Legacy (non-SegWit) transactions are effectively penalized. They pay over twice as much in transaction fees per byte as SegWit users, even though they send the same amount of data (or actually a little less, since SegWit transactions are bigger than non-SegWit in raw bytes).

The block capacity is defined in weight, not size.

Segregation as a word has bad connotations in English: the practice of restricting a person's rights and privileges in society based on skin color, faith or ethnicity. Apartheid.

SegWit gives preference to its own transaction type and punishes non-SegWit transactions by charging them two to four times as much for the same amount of 1s and 0s, depending on how much of the SegWit transaction is witness data.

SegWit is like an airline that charges one person $10 to check in one piece of luggage, and another person $10 to check in two pieces of luggage.

Whether you think it is still relevant what Satoshi thought about the future of Bitcoin scaling or not, it seems unlikely that he would have approved.

The irony of the scaling debate is that SegWit makes blocks bigger, even though those who support it are so often negative about on-chain scaling.

There is a great deal of misinformation about SegWit. For example, this Beginner’s Guide on Binance seems to think the signature is being removed and that SegWit transactions are somehow smaller in bits and bytes. They are not.

A Beginner’s Guide to Segregated Witness (SegWit)

Another blunder, this time from Investopedia.

The recurring trope in Bitcoin is that the block stays small because it can't scale, and that SegWit transactions are smaller. Actually it's the opposite. SegWit transactions are larger in bits than Legacy, and the blocks get bigger only when you send them!

Speaking of SegWit.

Satoshi would 100% for sure not have approved SegWit. LN as it is implemented requires SegWit. A non-SegWit implementation of LN, that would be something Satoshi wouldn't really care about, it would just be another use case. The CEXs are doing layer-2 transactions now. The 1MB limit was TEMPORARY.

10 years ago network bandwidth was much less, also all the mitigations for spam were not in place yet. I would imagine that some sort of adaptive blocksize that adjusted, similar to how mining difficulty adjusts, would be the type of solution for blocksize he would prefer, but that is just a guess, we never discussed this specific issue.

When I say SegWit is an abomination, I speak from a technical point of view. That is all, and I will gladly debate anybody that claims that SegWit is an elegant design to solve the scaling problem. You just can't if you understand the tech, and the reason it becomes so convoluted is due to Blockstream's conflicts, which that video explains.

Blockstream said that 4MB blocks were not safe to do, I can find references to this if you don't believe me. SegWit allows up to 4MB blocks, this happens because there is "1MB" worth of space and SegWit tx only counts at 25%. This is a technical thing not subject to bias.

So on the one hand we are pounded with "4MB is unsafe and the blocksize has to stay at 1MB, SegWit is the only viable solution", yet SegWit allows 4MB blocks, which we were told is unsafe, and it just uses an accounting trick to stay at "1MB". Which is why I say that non-technical factors are at work, as the technical arguments don't hold up. I have no reason to make this up, KMD uses BTC after all. I just point out the technical reality.

Komodo Discord

This is the opinion of Komodo Founder and Lead Dev JL777, and below is the video he linked to:

Now, without wishing to digress over whether Satoshi would have approved SegWit, I believe it reasonable to assume he would prefer a solution more similar to his own design.

Payment Channels

Nakamoto used protocol-level opcodes (scripts) in Bitcoin to create his payment channel. For me, this is no more Layer 2 scaling than an Ethereum ERC20 contract is Layer 2.

On the other hand, LN relies on SegWit to function.

[1c]

What is Layer 2?

Regrettably, usage of the phrase Layer 2 has come to mean any form of off-chain scaling, as can be seen in the introduction of a 2019 paper [2] by Blockstream researchers describing a more recent technique for creating an HFT channel:

Bitcoin, and other blockchain based systems, are inherently limited in their scalability. On-chain payments must be verified and stored by every node in the network, meaning that the node with the least resources limits the overall throughput of the system as a whole. Layer 2, also called off-chain protocols, are often seen as the solution to these scalability issues…

Arguing over the correct usage of Layer 2 is not necessary to make my point. I simply wish to disambiguate. In my mind, there is a chasm between using protocol opcodes and scripts (Nakamoto), and creating LN channels enabled by SegWit.

I once had the pleasure of interviewing former Microsoft VP Mike Toutonghi who, as it happens, put multi-threading into the Windows 95 kernel and was the Chief Architect behind .Net. Here is an excerpt from that interview [3]:

Often compared with Ethereum smart contracts, Komodo's Custom Consensus (CC) protocols, on which Verus has expanded, represent a keystone in the Verus architecture. What are they?

“Custom Consensus protocols (they’re internally called Crypto Conditions), and really what they are is the ability to write new functions and opcodes into the bitcoin script.”

Bitcoin without SegWit

So is it possible to create an HFT payment channel without SegWit, and perhaps closer to Satoshi's original design?

If you recall, Nakamoto HFT was broken because “one party could collude with a miner” to commit a stale version of the transaction.

This precise issue has been addressed in BIP-68 [4].
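BIP-68 repurposes the previously unenforced nSequence field as a consensus-enforced relative lock-time: an input cannot be spent until the output it consumes has aged a given number of blocks, or units of 512 seconds. A sketch of the encoding as the BIP defines it:

    # BIP-68 nSequence encoding:
    #   bit 31 set -> relative lock-time disabled for this input
    #   bit 22 set -> value counts units of 512 seconds; clear -> blocks
    #   bits 0-15  -> the lock-time value itself
    SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31
    SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22
    SEQUENCE_LOCKTIME_MASK = 0x0000FFFF

    def encode_relative_locktime(value: int, time_based: bool = False) -> int:
        """Pack a relative lock-time (blocks, or 512-second units) into nSequence."""
        assert 0 <= value <= SEQUENCE_LOCKTIME_MASK
        return value | (SEQUENCE_LOCKTIME_TYPE_FLAG if time_based else 0)

    def decode_relative_locktime(n_sequence: int):
        if n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
            return None  # no relative lock-time enforced
        unit = "x512s" if n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG else "blocks"
        return n_sequence & SEQUENCE_LOCKTIME_MASK, unit

    print(decode_relative_locktime(encode_relative_locktime(144)))        # ~1 day in blocks
    print(decode_relative_locktime(encode_relative_locktime(169, True)))  # ~1 day in 512s units

With miners consensus-bound to honor these lock-times, colluding with a miner to sneak in a stale channel state no longer works.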

What's more, at least one protocol-level HFT channel has been designed to utilize BIP-68, making (what is to me) something much closer to Satoshi's original design.

[1d]

Decker-Wattenhofer duplex payment channels are joined by Decker-Russell-Osuntokun eltoo channels as another answer to off-chain HFT. Neither uses SegWit, and both are much closer to Satoshi's design than Lightning Network.

[1e]
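As I understand the Decker-Wattenhofer construction, replacement works by decrementing time-locks: each newer channel state carries a smaller BIP-68 delay than the state it replaces, so in a dispute the latest state can always confirm first. A toy Python sketch of that replacement rule (my own illustration, not the full invalidation-tree protocol):

    # Replace-by-timelock, Decker-Wattenhofer style: newer states get
    # strictly smaller relative lock-times, so the newest state matures
    # first and thereby invalidates all of its predecessors.
    MAX_DELAY = 144   # initial relative lock-time, in blocks
    STEP = 12         # decrement applied on every state update

    states = []
    for i, balances in enumerate([{"a": 50, "b": 0},
                                  {"a": 45, "b": 5},
                                  {"a": 30, "b": 20}]):
        states.append({"delay": MAX_DELAY - i * STEP, "balances": balances})

    # In a dispute, whichever state can confirm earliest wins:
    print(min(states, key=lambda s: s["delay"]))  # the latest state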

In fact Decker-Russell-Osuntokun eltoo channels (presented in April 2018) have advantages over LN, like not requiring a “punishment branch” (see above).

I will not presume to know why LN was picked over the other techniques. But I will presume to suggest Satoshi would have preferred one designed like his own. And they do exist.

On reflection, when the Bitcoin scaling issues became horrendous in 2017 and people were paying $20 or more to send a single transaction, it is rather extraordinary that instead of simply increasing the block size, the “less risky” approach was chosen: “segregating and relocating signature data from the transaction inputs to its own structure at the end of the transaction known as the witness.”

Typical bandwidth speeds have increased 2,000% since the 1MB cap was introduced by the late Hal Finney in 2010. Yet rather than leverage a far more powerful internet infrastructure, a decision was made to make a paradigm-shifting change to the workings of Bitcoin in order to compress those transactions.

Part 4: Big Blocks

Before continuing, I should stress that the arguments I put forward are not an endorsement of Bitcoin SV or Bitcoin Cash. I only wish to better understand Satoshi's intentions in the 2008–2010 period, remembering of course that he vanished in 2011, leaving the community to sort it out for themselves.

Firstly, we must remember that the whitepaper makes no mention of HFT, and in the original spec Bitcoin had an unlimited block size.

History shows us that when the network was spammed in 2010, Hal Finney introduced a 1MB block size cap to prevent the creation of super large blocks that would break consensus.

The problem is easy to understand. A block must propagate to 99% of the network's nodes in order to be finalized. If a block is too large to propagate before the next block is mined, problems start: blocks are orphaned and consensus breaks.
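As a rough illustration of the trade-off, here is a back-of-the-envelope calculation of my own, using the common simplification that blocks arrive as a Poisson process with a 600-second mean and that each relay hop re-downloads the full block:

    import math

    BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's target block time

    def propagation_seconds(block_mb: float, bandwidth_mbps: float, hops: int = 6) -> float:
        """Naive store-and-forward relay across a chain of peers."""
        return (block_mb * 8 / bandwidth_mbps) * hops

    def orphan_probability(prop_seconds: float) -> float:
        """Chance another block is found while this one is still propagating."""
        return 1 - math.exp(-prop_seconds / BLOCK_INTERVAL)

    for size_mb in (1, 8, 40, 128):
        t = propagation_seconds(size_mb, bandwidth_mbps=10)
        print(f"{size_mb:>4} MB block: ~{t:5.1f}s to propagate, "
              f"~{orphan_probability(t):.1%} orphan risk")

Bigger blocks mean longer propagation and a higher orphan rate, until faster connections pull the numbers back down; this is the whole bandwidth argument in miniature.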

Yet this cap was always intended to be a temporary fix, as recalled by Ray Dillinger, who reviewed Satoshi's code and worked alongside Hal, in this Bitcointalk forum post from 2015 [5].

Famously, back in October 2010, Satoshi presented a means of relieving the 1MB cap and raising block sizes to deal with the community's collective concerns about scaling.
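His suggestion amounted to a height-triggered constant change: ship the raised limit in software far in advance, and let it activate once a given block height is reached, by which time older versions are obsolete. A Python sketch of the idea (the trigger height of 115,000 is the example from his post; the raised limit is my placeholder):

    LEGACY_LIMIT = 1_000_000   # bytes: the 1MB cap
    LARGER_LIMIT = 8_000_000   # hypothetical raised cap, placeholder value
    TRIGGER_HEIGHT = 115_000   # example height from Satoshi's 2010 post

    def max_block_size(block_height: int) -> int:
        """Phased-in cap raise: old limit until the trigger height passes."""
        return LARGER_LIMIT if block_height > TRIGGER_HEIGHT else LEGACY_LIMIT

    print(max_block_size(100_000))  # 1000000: old limit still in force
    print(max_block_size(120_000))  # 8000000: raised limit active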

Part 5: VISA

2008

Satoshi's interest in bandwidth can be traced back to a 2008 cryptography@metzdowd mailing list post [6].

The message talks about scaling to 100 million transactions a day, a feat that would have required vastly more bandwidth than was available at the time, whether he was referencing on-chain scaling, off-chain scaling, or quite probably both.

Regardless of what he meant, he made no claim Bitcoin could scale to that “today”, and it's clear from reading that it's something he expected to happen with increasing bandwidth, over time.

If the network were to get that big, it would take several years, and by then, sending 2 HD movies over the Internet would probably not seem like a big deal.

Satoshi Nakamoto

Unusually, the great one makes a poor prediction here, and one that I think is of key importance within the scaling debate. This is what he got wrong (at least for now):

Only people trying to create new coins would need to run network nodes. At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node.

Only people trying to create new coins would need to run network nodes.

Non-mining Bitcoin full nodes are the slowest nodes on the network, by far, and a peer-to-peer network can only scale on-chain as fast as its slowest peers. Satoshi did not anticipate a network full of non-mining nodes. If only mining operators ran nodes, Bitcoin would scale on-chain much faster, because mining nodes have very fast connections.

Worryingly, Satoshi's message was removed from /r/bitcoin by Theymos, who was the moderator at the time. I will not speculate about the mod's motives for the removal, but deleting anything written by Satoshi is probably abhorrent to most of you reading this article.

2009

In a 2009 email to Mike Hearn, Satoshi claimed that Bitcoin could scale to the level of Visa's credit card network's daily Internet purchases, which at that time amounted to 15 million transactions a day.

I must confess I struggle with this sentence:

Bitcoin can already scale much larger than that with existing hardware for a fraction of the cost.

I'm not 100% sure on the timeline here, but I will give Arthur the benefit of the doubt and say that when these words were published, Nakamoto HFT had no obvious issues.

Does “Bitcoin can already scale” mean Bitcoin could process 15 million on-chain transactions a day at the time of writing? That was an impossibility due to the bandwidth restrictions of the time, and the claim must therefore refer to the Nakamoto HFT channel.

Or does “Bitcoin can already scale” mean that the protocol can scale with bandwidth improvements over time, much like his 2008 post?

If he meant it the first way, presupposing that Satoshi didn’t also believe in scaling with block size is a dangerous leap.

After all, if one envisages creating thousands of off-chain payment channels, it is only reasonable that blocks should be big enough to deal with many of them shutting down and finalizing to the chain simultaneously.

Interestingly, though, if he meant it the second way, that the protocol can scale on-chain with bandwidth improvements over time, then his claim about processing more transactions than Visa in 2009 is now, today, technically true.

More scalable than Visa

A 2019 article on Hackernoon tried to prove that big blocks don't scale and that Bitcoin is nowhere near Visa. Arthur of course cited it as part of his argument that big blocks are essentially a dead end.

The article, The Blockchain Scalability Problem & the Race for Visa-Like Transaction Speed [7], explains in clear detail what I have already mentioned about block propagation and bandwidth: a block must safely propagate to 99% of nodes before a new block is mined.

Using a simple formula, its author Kenny Li calculated how theoretical on-chain throughput has increased with the vastly greater bandwidth we now have compared to when the 1MB cap was placed in 2010. His calculations reveal that Bitcoin could now be capable of 188 TPS (transactions per second).

[7]
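I have not reproduced Li's exact formula here, but the shape of the calculation is simple: how many average-sized transactions fit into a block that still propagates within the ten-minute interval. A rough reconstruction of my own (the 350-byte average transaction size is an assumption):

    BLOCK_INTERVAL_S = 600  # Bitcoin's target block time, in seconds
    AVG_TX_BYTES = 350      # assumed average transaction size

    def tps(block_size_mb: float) -> float:
        """On-chain throughput if blocks of this size fill every interval."""
        txs_per_block = block_size_mb * 1_000_000 / AVG_TX_BYTES
        return txs_per_block / BLOCK_INTERVAL_S

    print(f"{tps(1):.0f} TPS at 1 MB")    # ~5 TPS, roughly Bitcoin today
    print(f"{tps(40):.0f} TPS at 40 MB")  # ~190 TPS, in line with Li's 188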

188 TPS doesn't come close to Visa's current 1,730 TPS. But the author has made a mistake, because in 2009 Satoshi tells us Visa processed 15 million Internet purchases a day, or roughly 173 TPS (15,000,000 transactions / 86,400 seconds).

Sadly, Visa's network has scaled better than Bitcoin since his remarks.

Ten years after his statement, and with no HFT channel to help, Bitcoin could theoretically scale beyond 2009 Visa using bigger blocks alone.

Goodbye Hal

While I don't regard it as being particularly helpful, this Hal Finney Bitcointalk post from late 2010 [8] has convinced Arthur that big-block scaling was a non-starter, despite Ray Dillinger's recollections (see above).

For me it reads as if Hal didn't think Bitcoin would scale on-chain or off-chain. The use case of digital cash issuance by banks speaks, to me, of a lack of confidence in the protocol's ability to scale. Indeed his final words sum this up for me:

I believe this will be the ultimate fate of Bitcoin, to be the “high-powered money” that serves as a reserve currency for banks that issue their own digital cash. Most Bitcoin transactions will occur between banks, to settle net transfers. Bitcoin transactions by private individuals will be as rare as… well, as Bitcoin based purchases are today.

For the sake of thoroughness I include it here for your perusal. As you read it I’d recommend you remember that it was Satoshi, and not Hal, who created Bitcoin. I believe Satoshi had greater confidence in his protocol’s future than the illustrious computer scientist.

Part 6: Nakamoto Vision

In conclusion, there is no evidence to suggest Satoshi wanted blocks kept small, and every reason to believe he wanted on-chain and off-chain scaling to grow in unison.

Lightning Network bears less similarity to Nakamoto HFT than many other designs do, designs Satoshi would likely approve over Segregated Witness and its new concept of “size”.

Scaling through both block size increases and off-chain payment channels seems by far the likeliest intent. Numbers provided by Kenny Li point to a theoretical ceiling of around 188 TPS of on-chain scaling with current Internet speeds. For those interested, that represents roughly a 40 MB block size, which would imply a 10 MB block size should be quite safe.

“10MB shouldn’t be any problems at all, and with actual effort toward solving, 32 or even 64MB would be doable with today’s Internet.”

JL777, Komodo Founder and Lead Developer

However, despite my conclusions (and Arthur’s) I urge readers to form their own opinions, and remain circumspect of those who claim 100 percent certainty when divining the wishes of Satoshi Nakamoto.


Gigamesh

The Immutable Network (DARA), founder. Immutable builds free blockchain products and platforms to fight censorship and stop data loss. Also a journalist/writer.