Vitalik Buterin is a co-founder of Bitcoin Magazine and has been involved in the Bitcoin community since 2011. He has contributed to Bitcoin as a writer, as the developer of a fork of bitcoinjs-lib, pybitcointools and multisig.info, and as one of the developers behind Egora. Vitalik's primary job now is as the main developer of Ethereum, a project which intends to create a next-generation smart contract and decentralized application platform, allowing people to build any kind of decentralized application imaginable on top of a blockchain.
Decentralization, n. The security assumption that a nineteen-year-old in Hangzhou and someone who is maybe in the UK and maybe not have not yet decided to collude with each other.
There has been a large amount of ruckus in the past week about the issue of mining centralization in the Bitcoin network. We saw a single mining pool, GHash.io, amass over 45% hashpower for many hours, and at one point even grow to become 51% of the entire network. The entire front page of the Bitcoin reddit was ablaze in intense discussion and a rare clash of complacency and fear; miners quickly mobilized to take their hashpower off GHash, and surprisingly clever strategies were used in an attempt to bring back the balance between the different pools, up to and including one miner with “between 50 TH/s and 2 PH/s” mining at GHash but refusing to forward valid blocks, essentially sabotaging all miners on the pool to the extent of up to 4%. Now, the situation has somewhat subsided, with GHash down to 35% network hashpower and the runner-up, Discus Fish, up to 16%, and it is likely that the situation will remain that way for at least a short while before things heat up again. Is the problem solved? Of course not. Can the problem be solved? That will be the primary subject of this post.
First of all, let us understand the problem. The purpose of Bitcoin mining is to create a decentralized timestamping system, using what is essentially a majority vote mechanism to determine the order in which certain transactions came as a way of solving the double-spending problem. The double-spending problem is simple to explain: if I send a transaction sending my 100 BTC to you, and then one day later I send a transaction sending the same 100 BTC to myself, those two transactions obviously cannot both be processed. Hence, one of the two has to “win”, and the intuitively correct transaction that should get that honor is the one that came first. However, there is no way to look at a transaction and cryptographically determine when it was created. This is where Bitcoin mining steps in.
Bitcoin mining works by having nodes called “miners” aggregate recent transactions and produce packages called “blocks”. For a block to be valid, all of the transactions it contains must be valid, it must “point to” (ie. contain the hash of) a previous block that is valid, and it must satisfy “the proof of work condition” (namely, SHA256^2(block_header) <= 2^190, ie. the double-hash of the block header must start with a large number of zeroes). Because SHA256 is a pseudorandom function, the only way to make such blocks is to repeatedly attempt to produce them until one happens to satisfy the condition. The 2^190 “target” is a flexible parameter; it auto-adjusts so that on average the entire network needs to work for ten minutes before one node gets lucky and succeeds; once that happens, the newly produced block becomes the “latest” block, and everyone starts trying to mine a block pointing to that block as the previous block. This process, repeating once every ten minutes, constitutes the primary operation of the Bitcoin network, creating an ever-lengthening chain of blocks (“blockchain”) containing, in order, all of the transactions that have ever taken place.
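The mining loop described above can be sketched in a few lines of Python. The header bytes and target here are toy stand-ins, not Bitcoin's real header serialization, and the target is set far easier than the network's ~2^190 so the sketch runs in milliseconds:

```python
import hashlib

def pow_hash(header: bytes) -> int:
    # Bitcoin-style proof of work: the double SHA256 of the block header,
    # interpreted as a big-endian integer
    inner = hashlib.sha256(header).digest()
    return int.from_bytes(hashlib.sha256(inner).digest(), "big")

def mine(header_base: bytes, target: int) -> int:
    # Repeatedly try nonces until the double-hash falls at or below the target
    nonce = 0
    while pow_hash(header_base + nonce.to_bytes(8, "big")) > target:
        nonce += 1
    return nonce

# Toy target: roughly one success per 256 attempts
TARGET = 2 ** 248
nonce = mine(b"example-header", TARGET)
assert pow_hash(b"example-header" + nonce.to_bytes(8, "big")) <= TARGET
```

Because SHA256 output is uniformly distributed, lowering the target makes valid nonces proportionally rarer; the network's difficulty adjustment is just this knob turned automatically.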
If a node sees two or more competing chains, it deems the one that is longest, ie. the one that has the most proof-of-work behind it, to be valid. Over time, if two or more chains are simultaneously at play, one can see how the chain with more computational power backing it is eventually guaranteed to win; hence, the system can be described as “one CPU cycle, one vote”. But there is one vulnerability: if one party, or one colluding group of parties, has over 50% of all network power, then that entity alone has majority control over the voting process and can out-compute any other chain. This gives this entity some privileges:
The entity can acknowledge only blocks produced by itself as valid, preventing anyone else from mining because its own chain will always be the longest. Over time, this doubles the miner’s BTC-denominated revenue at everyone else’s expense. Note that a weak version of this attack, “selfish mining”, starts to become effective at around 25% network power.
The entity can refuse to include certain transactions (ie. censorship)
The entity can “go back in time” and start mining from N blocks ago. When this fork inevitably overtakes the original, this removes the effect of any transactions that happened in the original chain after the forking point. This can be used to earn an illicit profit by (1) sending BTC to an exchange, (2) waiting 6 blocks for the deposit to be confirmed, (3) purchasing and withdrawing LTC, (4) reversing the deposit transaction and instead sending those coins back to the attacker.
This is the dreaded “51% attack”. Notably, however, even 99% hashpower does not give the attacker the privilege of assigning themselves an arbitrary number of new coins or stealing anyone else’s coins (except by reversing transactions). Another important point is that 51% of the network is not needed to launch such attacks; if all you want is to defraud a merchant who accepts transactions after waiting N confirmations (usually, N = 3 or N = 6), then if your mining pool has portion P of the network you can succeed with probability (P / (1-P))^N; at 35% hashpower and 3 confirmations, this means that GHash can currently steal altcoins from an altcoin exchange with 15.6% success probability – once in every six tries.
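The race described above is easy to put numbers on; this sketch uses the simplified (P/(1-P))^N estimate from the text rather than the fuller analysis in the Bitcoin whitepaper:

```python
def doublespend_success(P: float, N: int) -> float:
    # Simplified estimate: probability that a pool controlling fraction P of
    # hashpower out-races an honest chain with an N-confirmation head start
    return (P / (1 - P)) ** N

# GHash at 35% hashpower against a 3-confirmation exchange deposit
print(round(doublespend_success(0.35, 3), 3))  # 0.156, i.e. once in every six tries
```

Note how steeply the odds fall with extra confirmations: the same 35% pool against 6 confirmations succeeds only about 2.4% of the time, which is why exchanges handling large deposits wait longer.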
Here is where we get to pools. Bitcoin mining is a rewarding but, unfortunately, very high-variance activity. If, in the current 100 PH/s network, you are running an ASIC with 1 TH/s, then every block you have a chance of 1 in 100000 of receiving the block reward of 25 BTC, but the other 99999 times out of 100000 you get exactly nothing. Given that network hashpower is currently doubling every three months (for simplicity, say 12500 blocks), that gives you a probability of only 15.9% that your ASIC will ever generate a reward, and an 84.1% chance that the ASIC’s total lifetime earnings will be exactly nothing.
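That lifetime-earnings figure can be reproduced approximately. The sketch below models the ASIC's network share as decaying smoothly with the difficulty; the exact answer depends on modeling choices, so it lands near, though not exactly on, the 15.9% quoted above:

```python
import math

P0 = 1e-5        # per-block win probability today: 1 TH/s on a 100 PH/s network
DOUBLING = 12500  # blocks per hashpower doubling (~three months)

# Log-probability of never winning a block as the ASIC's share of the
# network decays; 20 doubling periods is effectively "forever"
log_never = sum(math.log(1 - P0 * 0.5 ** (t / DOUBLING))
                for t in range(20 * DOUBLING))
p_ever = 1 - math.exp(log_never)
print(round(p_ever, 2))  # roughly 0.16: most such ASICs earn nothing, ever
```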
A mining pool acts as a sort of inverse insurance agent: the mining pool asks you to mine into its own address instead of yours, and if you generate a block whose proof of work is almost good enough but not quite, called a “share”, then the pool gives you a small payment. For example, if the mining difficulty for the main chain requires the hash to be less than 2^190, then the requirement for a share might be 2^200 – roughly a thousand times easier. Hence, in this case, you will generate a share roughly every hundred blocks, receiving 0.024 BTC from the pool each time, and roughly one in a thousand of those shares will also be a valid block, giving the mining pool its reward of 25 BTC. The difference between the expected 0.00024 BTC per block that the miner receives and the 0.00025 BTC per block that the pool earns is the mining pool’s profit.
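The pool's margin in this example is just an expected-value calculation; the figures below are the hypothetical ones from the text (a 1 TH/s miner, shares ~1000x easier than blocks):

```python
BLOCK_REWARD = 25.0   # BTC
P_BLOCK = 1e-5        # miner's per-block chance of finding a full block
SHARE_FACTOR = 1000   # shares are ~1000x easier to find than blocks
POOL_PAYMENT = 0.024  # BTC the pool pays per share

p_share = P_BLOCK * SHARE_FACTOR          # one share roughly every 100 blocks
fair_value = BLOCK_REWARD / SHARE_FACTOR  # 0.025 BTC: a share's expected block value
miner_ev = p_share * POOL_PAYMENT         # 0.00024 BTC per block interval
fair_ev = p_share * fair_value            # 0.00025 BTC per block interval
print(round(fair_ev - miner_ev, 8))       # 1e-05 BTC: the pool's expected profit
```

The miner accepts a slightly lower expected value in exchange for 1000x lower variance, which is the whole trade pools offer.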
However, mining pools also serve another purpose. Right now, most mining ASICs are powerful at hashing, but surprisingly weak at everything else; often the only thing they have for general computation is a small Raspberry Pi, far too weak to download and validate the entire blockchain. Miners could fix this, at the cost of something like an extra $100 per device for a more decent CPU, but they do not – for the obvious reason that $0 is less than $100. Instead, they ask mining pools to generate mining data for them. The “mining data” in question refers to the block header, a few hundred bytes of data containing the hash of the previous block, the root of a Merkle tree containing transactions, the timestamp and some other ancillary data. Miners take this data, and keep incrementing a value called a “nonce” until the block header satisfies the proof-of-work condition. Ordinarily, miners would take this data from the block that they independently determine to be the latest block; here, however, the actual selection of the latest block is being delegated to the pools.
Thus, what do we have? Well, right now, essentially this:
The mining ecosystem has solidified into a relatively small number of pools, and each one has a substantial portion of the network – and, of course, last week one of those pools, GHash, reached 51%. Given that every time any mining pool, whether Deepbit in 2011 or GHash in 2013, reached 51% there has been a sudden massive reduction in the number of users, it is entirely possible that GHash actually got anywhere up to 60% of network hashpower, and is simply hiding some of it. There is plenty of evidence in the real world of large corporations creating supposedly mutually competing brands to give the appearance of choice and market dynamism, so such a hypothesis should not at all be discounted. Even assuming that GHash is in fact being honest about the level of hashpower that it has, what this chart literally says is that the only reason why there are not 51% attacks happening against Bitcoin right now is that Discus Fish, a mining pool run by a nineteen-year-old in Hangzhou, China, and GHash, a mining pool supposedly run in the UK but which may well be anywhere, have not yet decided to collude with each other and take over the blockchain. Alternatively, if one is inclined to trust this particular nineteen-year-old in Hangzhou (after all, he seemed quite nice when I met him), Eligius or BTCGuild could collude with GHash instead.
So what if, for the sake of example, GHash gets over 51% again and starts launching 51% attacks (or, perhaps, even starts launching attacks against altcoin exchanges at 40%)? What happens then?
First of all, let us get one bad argument out of the way. Some argue that it does not matter if GHash gets over 51%, because there is no incentive for them to perform attacks against the network since even one such attack would destroy the value of their own currency units and mining hardware. Unfortunately, this argument is simply absurd. To see why, consider a hypothetical currency where the mining algorithm is simply a signature verifier for my own public key. Only I can sign blocks, and I have every incentive to maintain trust in the system. Why would the Bitcoin community not adopt my clearly superior, non-electricity-wasteful proof of work? There are many answers: I might be irrational, I might get coerced by a government, I might start slowly inculcating a culture where transaction reversals for certain “good purposes” (eg. blocking child pornography payments) are acceptable and then slowly expand that to cover all of my moral prejudices, or I might even have a massive short against Bitcoin at 10x leverage. Those middle two arguments are not crazy hypotheticals; they are real-world documented actions of the implementation of me-coin that already exists: PayPal. This is why decentralization matters; we do not burn millions of dollars of electricity per year just to move to a currency whose continued stability hinges on simply a slightly different kind of political game.
Additionally, it is important to note that even GHash itself has a history of involvement in transaction-reversal attacks against gambling sites; specifically, one may recall the episode involving BetCoin Dice. Of course, GHash denies that it took any deliberate action, and is probably correct; rather, the attacks seem to be the fault of a rogue employee. However, this is not an argument in favor of GHash; quite the opposite, it is a piece of real-world empirical evidence for a common argument in favor of decentralization: power corrupts, and equally importantly power attracts those who are already corrupt. Theoretically, GHash has increased security since then; in practice, no matter what they do, this central point of vulnerability for the Bitcoin network still exists.
However, there is another, better, argument for why mining pools are not an issue: namely, precisely the fact that they are not individual miners, but rather pools which miners can enter and leave at any time. Because of this, one can reasonably say that Ars Technica’s claim that Bitcoin’s security has been “shattered by an anonymous miner with 51% of network power” is completely inaccurate; there is no one miner that controls anything close to 51%. There is indeed a single entity, called CEX.io, that controls 25% of GHash, which is scary in itself but nevertheless far from the scenario that the headline is insinuating is the case. If individual miners do not want to participate in subverting the Bitcoin protocol and inevitably knocking the value of their coins down by something like 70%, they can simply leave the pool, and such a thing has now happened three times in Bitcoin’s history. However, the question is, as the Bitcoin economy continues to professionalize, will this continue to be the case? Or, given somewhat more “greedy” individuals, will the miners keep on mining at the only pool that lets them continue earning revenue, individually saving their own profits at the cost of taking the entire Bitcoin mining ecosystem collectively down a cliff?
Even now, there is actually one strategy that miners can take, and have taken, to subvert GHash.io: mining on the pool but deliberately withholding any blocks they find that are actually valid. Such a strategy is undetectable, but with a 1 PH/s miner mining in this way it essentially reduces the profits of all GHash miners by about 2.5%. This sort of pool sabotage completely negates the benefit of using the zero-fee GHash over other pools. This ability to punish bad actors is interesting, though its implications are unclear; what if GHash starts hiring miners to do the same against every other pool? Thus, rather than relying on vigilante sabotage tactics with an unexamined economic endgame, we should ideally try to look for other solutions.
First of all, there is the ever-present P2P mining pool, P2Pool. P2Pool has been around for years, and works by having its own internal blockchain with a 10-second block time, allowing miners to submit shares as blocks in the chain and requiring miners to attempt to produce blocks paying out to all of the last few dozen share producers at the same time. If P2Pool had 90% of network hashpower, the result would not be centralization and benevolent dictatorship; rather, the limiting case would simply be a replica of the plain old Bitcoin blockchain. However, P2Pool has a problem: it requires miners to be fully validating nodes. As described above, given the possibility of mining without being a fully validating node, this is a serious barrier to adoption.
One solution to this problem is to have a mining algorithm that forces nodes to store the entire blockchain locally. A simple algorithm for this in Bitcoin’s case is:
def mine(block_header, N, nonce):
    o = []
    for i in range(20):
        o.append(sha256(block_header + nonce + str(i)))
    n = []
    for i in range(20):
        B = (o[i] // 2**128) % N
        k = o[i] % 2**128
        n.append(tx(B, k))
    return sha256(block_header + str(n))
Where tx(B, k) is a function that returns the kth transaction in block B, wrapping around modulo the number of transactions in that block if necessary, and N is the current block number. Note that this is a simple algorithm and is highly suboptimal; some obvious optimizations include making it serial (ie. o[i+1] depends on n[i]), building a Merkle tree out of the o[i] values to allow them to be individually verified, and maintaining two Merkle trees in each block, one storing transactions and the other storing all current balances, so the algorithm only needs to query the current block.
This approach actually solves two problems at the same time. First, it removes the incentive to use a centralized pool instead of P2Pool. Second, there is an ongoing crisis in Bitcoin about how there are too few full nodes; the reason why this is the case is that maintaining a full node with its 20GB blockchain is expensive, and no one wants to do it. With this scheme, every single mining ASIC would be forced to store the entire blockchain, a state from which performing all of the functions of a full node becomes trivial.
A second strategy is another cryptographic trick: make mining non-outsourceable. Specifically, the idea is to create a mining algorithm such that, when a miner creates a valid block, they always necessarily have an alternative way of publishing the block that secures the mining reward for themselves. The strategy is to use a cryptographic construction called a zero-knowledge proof, cryptographically proving that they created a valid block but keeping the block data secret, and then simultaneously create a block without proof of work that sends the reward to the miner. This would make it trivial to defraud a mining pool, making mining pools non-viable.
Such a setup would require a substantial change to Bitcoin’s mining algorithm, and uses cryptographic primitives far more advanced than those in the rest of Bitcoin; arguably, complexity is in itself a serious disadvantage, and one that is perhaps worth accepting to solve serious problems like scalability but not to implement a clever trick to discourage mining pools. Additionally, making mining pools impossible will arguably make the problem worse, not better. The reason why mining pools exist is to deal with the problem of variance; miners are not willing to purchase an investment which has only a 15% chance of earning any return. If pooling is impossible, the mining economy will simply centralize into a smaller set of larger players – a setup which, unlike now, individual participants cannot simply switch away from. The previous scheme, on the other hand, still allows pooling as long as the local node has the full blockchain, and thereby encourages a kind of pooling (namely, P2Pool) that is not systemically harmful.
Another approach is less radical: don’t change the mining algorithm at all, but change the pooling algorithms. Right now, most mining pools use a payout scheme called “pay-per-last-N-shares” (PPLNS) – pay miners per share an amount based on the revenue received from the last few thousand shares. This algorithm essentially splits the pool’s own variance among its users, resulting in no risk for the pool and a small amount of variance for the users (eg. using a pool with 1% of network hashpower, the expected standard deviation of monthly returns is ~15%, far better than the solo mining lottery but still non-negligible). Larger pools have less variance, because they mine more blocks (by basic statistics, a pool with 4x more mining power has a 2x smaller standard deviation as a percentage). There is another scheme, called PPS (pay-per-share), where a mining pool simply pays a static amount per share to miners; this scheme removes all variance from miners, but at the cost of introducing risk to the pool; that is why no mining pool does it.
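The variance figures above follow from treating block finds as a Poisson process; this sketch assumes ten-minute blocks and ignores BTC price volatility:

```python
import math

BLOCKS_PER_MONTH = 30 * 24 * 6  # ~4320 ten-minute blocks in a month

def relative_stdev(pool_share: float) -> float:
    # For a Poisson process, stdev/mean = 1/sqrt(expected number of blocks)
    expected_blocks = pool_share * BLOCKS_PER_MONTH
    return 1 / math.sqrt(expected_blocks)

print(round(relative_stdev(0.01), 3))  # ~0.152: a 1% pool's monthly returns swing ~15%
print(round(relative_stdev(0.04), 3))  # ~0.076: 4x the hashpower, half the relative stdev
```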
Meni Rosenfeld’s Multi-PPS attempts to provide a solution. Instead of mining into one pool, miners can attempt to produce blocks which pay to many pools simultaneously (eg. 5 BTC to one pool, 7 BTC to another, 11.5 BTC to a third and 1.5 BTC to a fourth), and the pools will pay the miner for shares proportionately (eg. instead of one pool paying 0.024 BTC per share, the first pool will pay 0.0048, the second 0.00672, the third 0.01104 and the fourth 0.00144). This allows very small pools to only accept miners giving them very small rewards, allowing them to take on a level of risk proportionate to their economic capabilities. For example, if pool A is 10x bigger than pool B, then pool A might accept blocks with outputs to them up to 10 BTC, and pool B might only accept 1 BTC. If one does the calculations, one can see that the expected return for pool B is exactly ten times what pool A gets in every circumstance, so pool B has no special superlinear advantage. In a single-PPS scenario, on the other hand, the smaller B would face 3.16x higher risk compared to its wealth.
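The proportional payout in Rosenfeld's scheme is straightforward to verify with the numbers above (the pool names here are placeholders):

```python
# Hypothetical multi-PPS block from the example: 25 BTC split across four pools
outputs = {"pool1": 5.0, "pool2": 7.0, "pool3": 11.5, "pool4": 1.5}
SINGLE_POOL_PAYMENT = 0.024  # what one pool alone would pay per share

total = sum(outputs.values())
per_share = {pool: round(SINGLE_POOL_PAYMENT * btc / total, 8)
             for pool, btc in outputs.items()}
print(per_share)  # {'pool1': 0.0048, 'pool2': 0.00672, 'pool3': 0.01104, 'pool4': 0.00144}
```

Since each pool's payment is scaled by exactly its share of the block's outputs, the miner's total expected revenue per share is unchanged, while each pool's exposure shrinks to match its size.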
The problem is, to what extent is the problem really because of variance, and to what extent is it something else, like convenience? Sure, a 1% mining pool will see a 15% monthly standard deviation in its returns. However, all mining pools see something like a 40% monthly standard deviation in their returns simply because of the volatile BTC price. The difference between 15% standard deviation and 2% standard deviation seems large and a compelling reason to use the largest pool; the difference between 42% and 55% not so much. So what other factors might influence mining pool centralization? Another factor is the fact that pools necessarily “hear” about their own blocks instantly and everyone else’s blocks after some network delay, so larger pools will be mining on outdated blocks less often; this problem is critical for blockchains with a block time of ten seconds, but in Bitcoin the effect is less than 1% and thus insignificant. A third factor is convenience; this can best be solved by funding an easy-to-use open-source make-your-own mining pool solution, in a similar spirit to the software used by many small VPS providers; if deemed important, we may end up partially funding a network-agnostic version of such an effort. The last factor that still remains, however, is that GHash has no fee; rather, the pool sustains itself through its connection to the ASIC cloud-mining company CEX.io, which controls 25% of its hashpower. Thus, if we want to really get down to the bottom of the centralization problem, we may need to look at ASICs themselves.
Originally, Bitcoin mining was intended to be a very egalitarian pursuit. Millions of users around the world would all mine Bitcoin on their desktops, and the result would be simultaneously a distribution model that is highly egalitarian and widely spreads out the initial BTC supply and a consensus model that includes thousands of stakeholders, virtually precluding any possibility of collusion. Initially, the scheme worked, ensuring that the first few million bitcoins got widely spread among many thousands of users, including even the normally cash-poor high school students. In 2010, however, came the advent of mining software for the GPU (“graphics processing unit”), taking advantage of the GPU’s massive parallelization to achieve 10-100x speedups and rendering CPU mining completely unprofitable within months. In 2013, specialization took a further turn with the advent of ASICs. ASICs, or application-specific integrated circuits, are specialized mining chips produced with a single purpose: to crank out as many SHA256 computations as possible in order to mine Bitcoin blocks. As a result of this specialization, ASICs get a further 10-100x speedup over GPUs, rendering GPU mining unprofitable as well. Now, the only way to mine is to either start an ASIC company or purchase an ASIC from an existing one.
The way the ASIC companies work is simple. First, the company starts up, does some minimal amount of setup work and figures out its plan, and starts taking preorders. These preorders are then used to fund the development of the ASIC, and once the ASICs are ready the devices are shipped to users, and the company starts manufacturing and selling more at a regular pace. ASIC manufacturing is done in a pipeline; there is one type of factory which produces the chips for ASICs, and then another, less sophisticated, operation, where the chips, together with standard parts like circuit boards and fans, are put together into complete boxes to be shipped to purchasers.
So where does this leave us? It’s obvious that ASIC production is fairly centralized; there are something like 10-30 companies manufacturing these devices, and each of them accounts for a significant level of hashpower. However, I did not realize just how centralized ASIC production is until I visited this unassuming little building in Shenzhen, China:
On the third floor of the factory, we see:
What we have in the first picture are about 150 miners of 780 GH/s each, making up a total of about 120 TH/s – more than 0.1% of total network hashpower – all in one place. The second picture shows boxes containing another 150 TH/s. Altogether, the factory produces slightly more than the sum of these two amounts – about 300 TH/s – every single day. Now, look at this chart:
In total, the Bitcoin network gains about 800 TH/s every day. Thus, even adding some safety factors and assuming the factory shuts down some days a week, what we have is one single factory producing over a quarter of all new hashpower being added to the Bitcoin network. Now, the building is a bit large, so guess what’s on the first floor? That’s right, a fabrication facility producing Scrypt ASICs equal to a quarter of all new hashpower added to the Litecoin network. This projects an image of a frightening endgame for Bitcoin: the Bitcoin network spending millions of dollars of electricity every year only to replace the US dollar’s mining algorithm of “8 white guys” with a few dozen guys in Shenzhen.
However, before we get too alarmist about the future of mining, it is important to dig down and understand (1) what’s wrong with ASICs, (2) what’s okay with CPUs, and (3) what the future of ASIC mining is going to look like. The question is a more complex one than it seems. First of all, one might ask, why is it bad that ASICs are only produced by a few companies and a quarter of them pass through one factory? CPUs are also highly centralized; integrated circuits are being produced by only a small number of companies, and nearly all computers that we use have at least some components from AMD or Intel. The answer is, although AMD and Intel produce the CPUs, they do not control what’s run on them. They are general-purpose devices, and there is no way for the manufacturers to translate their control over the manufacturing process into any kind of control over its use. DRM-laden “trusted computing modules” do exist, but it is very difficult to imagine such a thing being used to force a computer to participate in a double-spend attack.
With ASIC miners, right now things are still not too bad. Although ASICs are produced in only a small number of factories, they are still controlled by thousands of people worldwide in disparate data centers and homes, and individual miners, each usually with less than a few terahashes, have the ability to direct their hashpower wherever they need. Soon, however, that may change. In a month’s time, what if the manufacturers realize that it does not make economic sense for them to sell their ASICs when they can instead simply keep all of their devices in a central warehouse and earn the full revenue? Shipping costs would drop to near zero, shipping delays would go down (a one-week shipping delay corresponds to ~5.6% revenue loss at current hashpower growth rates) and there would be no need to produce stable or pretty casings. In that scenario, it would not just be 25% of all ASICs that are produced by one factory in Shenzhen; it would be 25% of all hashpower run out of one factory in Shenzhen.
When visiting the headquarters of a company in Hangzhou that is involved, among other things, in Litecoin mining, I asked the founders the same question: why don’t you just keep the miners in-house? They provided three answers. First, they care about decentralization. This is simple to understand, and it is very fortunate that so many miners feel this way for the time being, but ultimately mining will be carried out by firms that care a little more about monetary profit and less about ideology. Second, they need pre-orders to fund the company. Reasonable, but solvable by issuing “mining contracts” (essentially, crypto-assets which pay out dividends equal to the output of a specific number of GH/s of mining power). Third, there is not enough electricity and space in their warehouses. The last argument, as specious as it seems, may be the only one to hold water in the long term; it is also the stated reason why ASICminer stopped mining purely in-house and started selling USB miners to consumers, suggesting that perhaps there is a strong and universal rationale behind such a decision.
Assuming that the funding strategies of selling pre-orders and selling mining contracts are economically equivalent (which they are), the equation for determining whether in-house mining or selling makes more sense is as follows:
On the left side, we have the costs of in-house mining: electricity, storage and maintenance. On the right side, we have the cost of electricity, storage and maintenance externally (ie. in buyers’ hands), shipping and the penalty from having to start running the ASIC later, as well as a negative factor to account for the fact that some people mine at least partially for fun and out of an ideological desire to support the network. Let’s analyze these figures right now. We’ll use the Butterfly Labs Monarch as our example, and keep each ASIC running for one year for simplicity.
Internal electricity, storage, maintenance – according to BFL’s checkout page, internal electricity, storage and maintenance cost $1512 per year, which we will mark down to $1000 assuming BFL takes some profit
External electricity – in Ontario, prices are about $0.10 per kWh. A Butterfly Labs Monarch will run 600 GH/s at 350 W; normalizing this to per-TH, this means an electricity cost of $1.40 per day or $511 for the entire year
External storage – at home, one can consider storage free, or one can add a convenience fee of $1 per day; hence, we’ll say somewhere from $0 to $365
External maintenance – hard to quantify this value; for technically skilled individuals who enjoy the challenge it’s zero, and for others it might be hard; hence, we can say $0 to $730
Shipping cost – according to BFL, $38.
Revenue – currently, 1 TH/s gives you 0.036 BTC or $21.60 per day. Since in our analysis hashpower doubles every 90 days, the effectiveness of the ASIC halves every 90 days; this gives the ASIC an effective 122 days of life, or $2562 of revenue
Shipping time – according to my Chinese sources, one week
Hashpower doubling time – three months. Hence, the entire expression for the shipping delay penalty is $2562 * (1 - 0.5^(7/91)) = $133.02
Hobbyist/ideology premium – currently, a large portion of Bitcoin miners are doing it out of ideological considerations, so we can say anywhere from $0 to $1000
Thus, adding it all up, on the left we have $1000, and on the right we have $511 + $38 + $133 = $682, plus up to $1095 and minus up to $1000. Thus, it’s entirely ambiguous which one is better; errors in my analysis and the nebulous variables of how much people value their time and aesthetics seem to far outweigh any definite conclusions. But what will happen in the future? Fundamentally, one can expect that electricity, storage and maintenance would be much cheaper centrally than with each consumer, simply due to economies of scale and gains from specialization; additionally, most people in the “real world” are not altruists, hobbyists or admirers of beautiful ASIC coverings. Shipping costs are above zero, and the shipping delay penalty is above zero. Thus it seems that the economics roundly favor centralized mining…
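For checkability, here is the same arithmetic in a few lines; all dollar figures are the rough estimates from the list above:

```python
# In-house: electricity + storage + maintenance (BFL's $1512/yr, marked down)
internal_cost = 1000.0

# Sold-to-consumer side, per the estimates above
external_electricity = 511.0  # ~$0.10/kWh at ~583 W per TH/s, for one year
shipping = 38.0
lifetime_revenue = 2562.0
# One week of shipping delay out of a ~13-week (91-day) doubling period
delay_penalty = lifetime_revenue * (1 - 0.5 ** (7 / 91))

external_base = external_electricity + shipping + delay_penalty
print(round(external_base))  # ~682, before the $0-$1095 of extra consumer-side
                             # costs and the up-to-$1000 hobbyist discount
```

Since the $0-$1095 of uncertain costs and the up-to-$1000 ideology discount dwarf the ~$318 gap between the two sides, the sign of the comparison really is undetermined, as the text says.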
… except for one potential factor: heat. Right now, ASICs are still in a rapid development phase, so the vast majority of the cost is hardware; the BFL miner used in the above example costs $2200, but the electricity costs $511. In the future, however, development will be much slower; ultimately we can expect a convergence to Moore’s law, with hashpower doubling every two years, and even Moore’s law itself seems to be slowing. In such a world, electricity costs may come back as the primary choke point. But how much does electricity cost? In a centralized warehouse, quite a lot, and the square-cube law guarantees that in a centralized environment even more energy than at home would need to be spent on cooling, because all of the miners are in one place and most of them are too deep inside the factory to have exposure to cool fresh air. In a home, however, if the outside temperature is less than about 20°C, the marginal cost of electricity is zero; all electricity spent by the miner necessarily eventually turns into “waste” heat, which then heats the home and substitutes for electricity that would be spent by a central heater. This is the only argument for why ASIC decentralization may work: rather than decentralization happening because everyone has a certain quantity of unused, and thereby free, units of computational time on their laptop, decentralization happens because many people have a certain quantity of demand for heating in their homes.
Will this happen? Many Bitcoin proponents seem convinced that the answer is yes. However, I am not sure; it is an entirely empirical question whether or not electricity cost is less than maintenance plus storage plus shipping plus shipping delay penalty, and in ten years’ time the equation may well fall on one side or the other. I personally am not willing to simply sit back and hope for the best. This is why I find it disappointing that so many of the core Bitcoin developers (though fortunately not nearly all) are content to consider the proof of work problem “solved”, or argue that attempting to solve mining specialization is an act of “needless re-engineering”. It may prove to be, or it may not, but the fact that we are having this discussion in the first place strongly suggests that Bitcoin’s current approach is very far from perfect.
The solution to the ASIC problem that is most often touted is the development of ASIC-resistant mining algorithms. So far, there have been two lines of thought in developing such algorithms. The first is memory-hardness – reducing the power of ASICs to achieve massive gains through parallelization by using a function which takes a very large amount of memory. The community’s first attempt was Scrypt, which proved to be not resistant enough; in January, I attempted to improve Scrypt’s memory-hardness with Dagger, an algorithm which is memory-hard to compute (to the extent of 128 MB) but easy to verify; however, this algorithm is vulnerable to shared-memory attacks where a number of parallel processes can access the same 128 MB of memory. The current state-of-the-art in memory-hard PoW is Cuckoo, an algorithm which looks for length-42 cycles in graphs. It takes a large amount of memory to efficiently find such cycles, but a cycle is very quick to verify, requiring 42 hashes and less than 70 bytes of memory.
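The asymmetry that makes Cuckoo attractive, namely that finding a cycle needs lots of memory but checking one is trivial, can be illustrated with a toy verifier. This is a simplified sketch, not the real algorithm: the genuine Cuckoo derives edges with siphash, whereas here `hashlib.blake2b` and a hypothetical `nonce_to_edge` derivation stand in for it:

```python
# Toy illustration of why Cuckoo-style proofs are cheap to verify: the
# proof is just a short list of nonces; the verifier hashes each nonce
# to a graph edge and checks that the edges close into a single cycle.
import hashlib
from collections import defaultdict

def nonce_to_edge(header: bytes, nonce: int, n_nodes: int):
    """Derive one bipartite edge from a nonce (hypothetical derivation)."""
    h = hashlib.blake2b(header + nonce.to_bytes(8, "little")).digest()
    u = int.from_bytes(h[:8], "little") % n_nodes    # node in partition U
    v = int.from_bytes(h[8:16], "little") % n_nodes  # node in partition V
    return ("U", u), ("V", v)

def is_single_cycle(edges) -> bool:
    """Check that the edges form exactly one cycle covering every edge."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    # In a cycle, every vertex touches exactly two edges.
    if any(len(nbrs) != 2 for nbrs in adj.values()):
        return False
    # Walk the cycle; it must return to the start in exactly len(edges) steps.
    start = edges[0][0]
    prev, cur, steps = None, start, 0
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        prev, cur, steps = cur, nxt, steps + 1
        if cur == start:
            break
    return steps == len(edges)

# A hand-made 4-edge cycle U0-V0-U1-V1-U0 verifies; a broken set does not.
cycle = [(("U", 0), ("V", 0)), (("V", 0), ("U", 1)),
         (("U", 1), ("V", 1)), (("V", 1), ("U", 0))]
print(is_single_cycle(cycle))      # True
print(is_single_cycle(cycle[:3]))  # False
```

In the real scheme the verifier would derive the 42 edges from the 42 proof nonces via `nonce_to_edge` and run the same cycle check, which is why verification costs only 42 hashes and a handful of bytes of state.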
The second approach is somewhat different: create a mechanism for generating new hash functions, and make the space of functions that it generates so large that the kind of computer best suited to processing them is by definition completely generalized, i.e. a CPU. This approach gets close to being “provably ASIC resistant” and thus more future-proof, rather than focusing on specific aspects like memory, but it too is imperfect; there will always be at least some parts of a CPU that will prove to be extraneous in such an algorithm and can be removed for efficiency. However, the quest is not for perfect ASIC resistance; rather, the challenge is to achieve what we can call “economic ASIC resistance” – building an ASIC should not be worth it.
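A toy version of this idea: let a seed select a random program out of a large space of mixing operations, so that hardware optimized for any one fixed circuit gains little. This is purely an illustrative construction of the concept, not any actually proposed algorithm:

```python
# Toy "random hash function family": the seed deterministically picks a
# random sequence of operations, and the chosen program is then applied
# to the input. A different seed yields a different program.
import hashlib
import random

MASK64 = 0xFFFFFFFFFFFFFFFF

OPS = [
    lambda x, k: (x + k) & MASK64,                 # 64-bit add
    lambda x, k: x ^ k,                            # xor
    lambda x, k: ((x << (k % 63 + 1)) |
                  (x >> (64 - (k % 63 + 1)))) & MASK64,  # rotate left
    lambda x, k: (x * (k | 1)) & MASK64,           # multiply by odd constant
]

def derive_program(seed: bytes, length: int = 64):
    """Deterministically pick a sequence of (operation, constant) pairs."""
    rng = random.Random(seed)
    return [(rng.randrange(len(OPS)), rng.getrandbits(64)) for _ in range(length)]

def family_hash(seed: bytes, data: bytes) -> int:
    """Run the seed-selected program over the input, folded with blake2b."""
    x = int.from_bytes(hashlib.blake2b(data).digest()[:8], "little")
    for op_index, k in derive_program(seed):
        x = OPS[op_index](x, k)
    return x
```

The point of the construction is that an ASIC would have to implement all the operations and the dispatch between them, which is essentially what a CPU already does; the sketch leaves out everything (memory access patterns, branching, floating point) that a serious design of this kind would also randomize.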
This is actually surprisingly likely to be achievable. To see why, note that mining output per dollar spent is, for most people, sublinear. The first N units of mining power are very cheap to produce, since users can simply use the existing unused computational time on their desktops and only pay for electricity (E). Going beyond N units, however, one needs to pay for both hardware and electricity (H + E). If ASICs are feasible, as long as their speedup over commodity hardware is less than (H + E) / E, then even in an ASIC-containing ecosystem it will be profitable for people to spend their electricity mining on their desktops. This is the goal that we wish to strive for; whether we can reach it or not is entirely unknown, but since cryptocurrency as a whole is a massive experiment in any case it does not hurt to try.
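The breakeven condition above can be made concrete with a short sketch, plugging in the hardware and electricity figures used earlier in the article as an example:

```python
# Sketch of the "economic ASIC resistance" condition: hobbyists who already
# own hardware pay only electricity E, while dedicated miners pay hardware
# plus electricity H + E. Desktop mining stays profitable as long as the
# ASIC speedup factor stays below (H + E) / E.

def asic_breakeven_speedup(hardware_cost: float, electricity_cost: float) -> float:
    """Maximum ASIC speedup under which desktop mining remains profitable."""
    return (hardware_cost + electricity_cost) / electricity_cost

# E.g. with $2200 of hardware and $511 of electricity over the same period
# (the figures quoted earlier), ASICs would need more than about a 5.3x
# speedup over commodity hardware before desktop mining stops paying.
print(asic_breakeven_speedup(2200, 511))
```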