
[email protected]

EVERYTHING IS CONNECTED — DISTRIBUTED WEALTH ON THE INTERNET

HANNES ROLLIN

Interweaving a Close Examination of the Bitcoin Digital Cash Project

Abstract. Everything is connected — not merely people and machines, but the concepts of distributed wealth as well. Digital cash, distributed storage, open source systems and online voting all depend on one another and influence each other, while none of these can be understood without basic knowledge of networks, cryptography and economy...

1. Instead of an Introduction: Does Humor Belong to Science?

A number of years ago, the American musician and poet Frank Zappa publicly posed a question to himself and to the philosophically inclined fraction of his listeners: Does humor belong to music? This question, or more likely the examples Zappa employed to deliberate it, introduced a period of intense headache for legal censors, who were obviously not paid for their sense of irony. The question was never a regular question for Zappa, but a working hypothesis he put forward and pursued to verify, and in his relentless spirit he himself settled the issue with the seminal Bobby Brown, which was, and unambiguously so, generally thought of as funny in its total lack of self-irony. Of the lyrical ego, I mean.

And now I pose the same question in the realm of serious science, where the censorship is hard-wired in the minds of the key characters. Or have you ever giggled1 reading a scientific paper? I daringly adhere to scientific methodology, but I am sick of scientific writing. When I want to tell you, formerly known as the reader, something, I will announce that with the keyword I, not, as is usually done, with a third-person reference to this author or a transference of responsibility via the first person plural. Adding to this, I will of course be sure to underpin my brilliance with extensive use of Latin, French and German terms, as if there were no precise English counterpart. What I will not do, I have to apologize, but this is a matter of personal aesthetics, is write in an easy linear fashion. I do interrupt others, so I may well interrupt myself.

If you come without formal academic training (in humor-eradication), you might be puzzled by the number of literature references, such as [17], scattered, or rather cluttered, throughout the text. There is a simple formula:

Date: June 27, 2011.

1Try [82].


The more references, the better and more deeply conceived the scientific merits of the present paper are. Hence, logically, I quoted a bulk of papers without ever reading more than the abstract (which is a smart word for a banal summary, and it does not very well conceal the fact that most papers do not get very concrete after that), and often not even that. Never mind, they all do that. Quoting and citing is still considered chic in the academic world; in some artificial environments quoting rules are rather strict, as if Google never came to existence and literature research was still conducted in dusty underground archives composed of rotten books and magazines. If you come across a [cite] tag, I merely forgot to insert a link to some scientifically empowered confirmation of an opinion of mine. Perhaps there was none.

Examples, calculations and straying elaborations I will imprison in shrunken paragraphs like this one, hoping to relax the eye and ease the flow of reading. Yes, mark this as a revolutionary moment. Not many scientists at all have tried to be easy to read, never mind accessible to a larger population. Nevertheless, I cannot completely relinquish maths and code when discussing the matters I will discuss, but you will certainly not need a Master's degree in number theory, although that may help at times. Code examples will come as ad hoc invented pseudocode, which means mostly that the simple part is on my side — I choose functions as high-level as I desire, and implementation issues I utterly ignore. Do not hack that code into an existing Visual Studio project: It might run.

It is a common feature of scientific papers that the authors get undressed and uncover their motivations, which I, try as you might, will not do. This convention is just another dogma I will not bow down to, for I had no part in creating it. The rest of the paper is structured as follows: Nay, never will I return to that kind of indulging scientificism. — There are great things happening out there in the digital world, which is not so far away as to be properly called an outside opposed to our mental inside. We are digital. As world-wide primary energy production has de facto reached its, yes, all-time maximum, bumping incomprehensibly along some plateau, and as banks and governments have evolved into recipients of severe (and well-deserved) mistrust, tearing down into the vortex the entire idea of protecting and nurturing central authorities, more and more participants of the mighty Internet, from single users to giant grid hosts, come together and form new networks, Internets within the Internet, intentional communities of the digital kind, for a variety of reasons as plentiful as the variety of these networks itself.

What, exactly, are those new types of networks, which are not that new anyway? Who are the driving and pushing forces behind them? Where is all that leading? And why is it that precisely the most courageous and advanced projects entail the most brutal downfall? I am talking here of the digital cash experiment Bitcoin [14, 15, 16, 17, 18], which is not perceived by many as an experiment (which it just is), since it involves real money, and real money can, as we all have experienced too many times, turn adults, who never wanted to grow up, into really serious zeitgenossen;


anyway, Bitcoin was possibly the key trigger event for the writing of this very document, since it comprises a number of critical features, unheard of in this combination: Almost total lack of central control, maximum anonymity of transactions, maximum openness of the concept, self-organization in an ur-democratic sense, yet connected to the ordinary money market and attempting to achieve world dominance as a means of payment.

When pondering digital cash, you naturally stumble upon related issues such as cryptography, distributed computing, open source software, online voting and legal, eh, problems. Thus, we will elaborate on all that. Yet have a glance just one layer deeper, and you become aware that technical and scientific progress cannot be abstracted from one's weltanschauung, just as your weltanschauung cannot be understood without taking into account the zeitgeist, and, better hidden but much more importantly, your menschenbild. What is, in your belief, a man or a woman able to do? How much protection does he need from himself? How well will she become what she is meant to be? What is the nature of punishment? Of the state? When it comes to money, creation and participation, you cannot separate technical discussions from morals and ethics anymore. No, you can't.

In true accordance with these my words, I will now lay down my premises, such that you may understand the filter through which I perceive the world. I believe that, in principle, everyone wants to develop optimally, and, even stronger, everyone knows in a dark and half-conscious manner where his personal path should and would go. Hence, I am undoubtedly part of the suspicious milieu, suspicious to the ordinary power circles, of course, where the firm conviction is held that the government is best which governs least. For if, I might redundantly add, you regard and treat people as essentially dumb and unreliable and self-destructive, you unwittingly render them thus, as every father and mother will have to confirm. Yet I am not such a blind follower of the market economy as to maintain, among other credos, that success is a result of sensible selection criteria like quality, which it of course is not; simply remember the painful years of evolution of Microsoft's Windows operating system — it was market leader long before it could catch up with reliable operating systems such as Linux and MacOS.

In the real world, obviously, matters become tremendously complex, since, for instance, people are situated at a great variety of stages measured on the scale of humanity's development itself. Some guys are barely medieval in their attitude, while others, sometimes hilariously so, appear utterly self-controlled and conscious and could immediately populate some sophisticated Star Trek utopia. Hence the power of (distributed) intentional communities — I cannot help but believe in the power of truth, consciousness and cunning tactics. Those evil forces, which, by the way, are always within ourselves and projected or induced or attracted to the outside, can grant sorts of instant satisfaction, but they lack the property of informing you with a sense of achievement in the moment of your death.


2. Peers in a World of Crowds, Grids and Clouds

There are a number of movements toward distributed computing and storage, which can, and not unexpectedly so, I presume, be divided into bottom-up and top-down movements. Top-down examples (I will compactly summarize crowdsourcing, cloud computing and grid computing) are entirely conceived and devised by large companies as another means of increasing income, viz. concentrating ever more wealth and influence in the hands of a few. Bottom-up examples, in contrast, are to be seen as spontaneous eruptions of grass-roots movements, often introduced by a handful of people acting as citizens, not as members of businesses or organizations, and if these projects hit the spirit of many in their capacity to fill a void, they are rapidly adopted by great numbers of men and women, giving the whole thing a mostly self-organizing and basically unpredictable character, as is the case with Bitcoin. Bottom-up approaches are generally implemented as peer-to-peer (P2P) networks, circumventing ordinary centralized structures, which always need some trusted third party nobody really likes to trust. Banks. Global companies. Governmental organizations. Anyhow, let us begin with commercial modes of collaboration to understand where the big boys are heading.

2.1. Crowdsourcing. Ever been in need of telephone support and surprisingly connected to a call center on another continent? Well, this is outsourcing in the digital age. The argument goes like this: When telephone calls between any two places on earth are cheap and practically without perceivable delay, call centers do not need to be located in close proximity to the customer. Just move them to the place where labor is cheapest, and even better yet, leave the set-up of these call centers to the locals to further externalize costs. India is a good place, since its IT infrastructure is alright, at least in the big cities, of which there is a number, and India provides multitudes of English-speaking and digitally enabled young adults, you may say [cite].

As data transfer rates grow and bandwidth prices slip to the basement, companies also start to think about moving (manual) data processing to the folks to whom they refer in a, I must append, derogatory manner as crowds. The same is true for professional attempts at financial exploitation of social networks — want to be a Facebook spammer? Note that crowdsourcing refers to the transfer of labor not to another company, but to masses of individuals, which reminds me somehow of the early industrial age. Back then, the foundation of our current proud (yet decaying) digital societies was built by way of extremely cheap labor under precarious circumstances. Now, exposing their view of human nature, many firms utilize crowdsourcing to get projects done that have been profoundly split into small particles to be worked on by single participants, who are neither allowed an overview of the whole nor equipped with responsibility and freedom of decision. This could be different, though. As I will explain in the section on open source,


people are indeed able to self-organize around a more or less well-defined common ground and thereby achieve rather complex goals — often without being paid. Yet the increasing intrusion of businesses into the open source sector is not entirely desirable (for me), since assimilation and commercialization of the community's work is always waiting around the corner.

It appears to the learned eye that crowdsourcing aims at a true globalization, nay, unification of (digital) labor markets. Of course, businesses have an interest in buying labor at the cheapest price still guaranteeing an acceptable level of quality (is that so?), and this is, on the other hand, an opportunity for information workers with low income requirements, for whatever reason — be they self-sufficient gardeners in upstate New York or Malaysian students.

2.2. Grid Computing. Grid computing is a whole different story. Computing grids are basically computer networks specifically designed and constructed for high performance (in terms of speed, reliability, security...), for a certain pre-defined task (climate simulation, physical experiments), by a certain coalition of resourceful participants, mostly governments, global companies and universities. Currently, many grids are bypassing the usual internet infrastructure (as is the case with CERN's Large Hadron Collider) for a number of reasons, most of which are closely related to the immense bandwidth requirements and to the poor performance of the transmission control protocol (TCP) under high performance circumstances.

TCP is the basis of all ordinary internet communication. Yet, in the realm of high performance computing, the rules are different. High performance computing usually employs giant bandwidths and even vaster amounts of data — much is sent over long periods of time. This is measured via the bandwidth-delay product, which in turn imposes a lower bound on the buffer size.

Imagine two workstations in a grid connected with extravagant 100 GB/sec optics, which is a realistic number, and a delay of half a second. The product is 50 GB, which is the minimum buffer required on both sides to prevent data losses.
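For the calculator-averse, the same arithmetic as a tiny Python sketch (the numbers are the hypothetical ones from the example above, not measurements):

def bandwidth_delay_product(bandwidth_bytes_per_sec, delay_sec):
    # Minimum buffer, in bytes, each side must keep for data in flight.
    return bandwidth_bytes_per_sec * delay_sec

GB = 10**9
print(bandwidth_delay_product(100 * GB, 0.5) / GB)  # 50.0, i.e. 50 GB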

The next problem with TCP is that ordinarily extensive CPU and memory operations are performed on both sides; therefore the probability of packet loss and time-outs is comparatively high. Finally, a few bulk servents2 share tremendous bandwidth, which is quite contrary to, say, private peer-to-peer networks. There are attempts at improved versions of TCP such as HighSpeed TCP, and a couple of novel protocols [62].

A common approach to integrate the omnipresence of the internet with the enormous capabilities of a grid is a so-called three-tier architecture: An intermediate secure server leads the negotiations between a client (normally

2Servent is an artificial word composed of server and client to clarify that an entity plays both roles. We will meet that term again in the section on peer-to-peer networks — for obvious reasons.


represented by a web browser) on one side and the supercomputers on the other side. Once, in the mid-90s, the term grid computing promised computing power for everyone, but no commercial provider has appeared to this day. If you are interested in further details, please concern yourself with recent lecture notes such as [4] and the forum [50]. There is an old-school, home-user and possibly low-budget version of grid computing, notably its older, yet smaller, brother: cluster computing [96]. A cluster is just a number of computers plugged together (via LAN, Ethernet or Internet) to act as an integrated whole, that is, as one machine. A little while ago, it appeared trendy to outperform conventional supercomputers with clusters made of second-hand workstations.

There is, however, a less strict definition of grid, namely any connection of computers for the purpose of a distributed solution of tasks that might otherwise be too slow or even computationally infeasible to solve, such as SETI@home [108], IBM's smallpox research [110] and GIMPS [45]. In my opinion, the term grid is ill-conceived in these instances, for they are built upon a classic client-server architecture with little to zero communication between the clients [cite]. Rather, I would propose to move SETI@home et al. to the category of crowdsourcing, since this is what happens, with only two differences: 1. Instead of human labor, these projects capture (supposedly unused) CPU labor; and 2. mostly, clients don't get paid — this is likely to change completely if people at large realize the cumulative power of their computers: The 4.5M contributors of SETI@home add up to 15 Teraflops, which outruns IBM's premium supercomputer ASCI White by 25% (the Bitcoin network exceeds 120,000 Teraflops — money is an incentive). Alas, whenever money is involved, people feel intrigued to cheat; preventing this demands the smartest of researchers and developers [36]. I would not be too harsh with hackers and exploiters, for they could be regarded as well-paid colleagues whose job it is to detect weak points and thus enlarge our understanding of networks and cryptography.

2.3. Cloud Computing. Now, cloud computing has received plenty of media attention in recent years, because cloud is a cool metaphor for some remote part of the Internet you can perceive but cannot really grasp. There is a funky lab at Berkeley, the Reliable Adaptive Distributed Systems Laboratory (RAD lab, [97]), which gets even more funky when you look at the list of its founders, including such splendid names as Google, Microsoft, IBM, Facebook, Amazon and Siemens. What is the common denominator of these companies? All of them have expanded to a ridiculous size, make ludicrous amounts of money, have incredibly over-provisioned data centers scattered all over the planet and do not know when it's enough.

There you have the key enabler of the whole cloud computing hype: spare capacity of the big fish. Essentially, it's the SETI@home idea on the level of Google and Amazon. A relatively recent survey by RAD lab [5] defines cloud computing as a combination of Software as a Service (SaaS) and Utility


Computing minus private clouds (clusters, basically), and highlights the advantages both for users and cloud hosts, the user's bonuses being:

(1) There are no visible hardware constraints, hence there is seemingly no need to plan ahead, which is a great buffering device for fluctuating demand — you don't have to build large data centers like Google to serve peak demand, nor do you have to disappoint potential customers who cannot be admitted at those times.

(2) Cloud users do not need to commit to 'real' hardware: They can start virtually any IT business from an iPad, so to speak, and leave computation and storage to 'the cloud' (meaning, of course, to that data-hungry bunch of Google, Microsoft, Amazon etc.).

(3) Pay as you go: You pay only for what you use in terms of CPU cycles, disk storage and bandwidth, whereby efficient and concise use is encouraged. Nice.

(4) Cost associativity: This means merely that 1000 hours of one machine cost pretty much the same as one hour of 1000 machines. Also nice, if you are tightly scheduled and liquid at the moment.

A few cloud providers are already there and, not very surprisingly, names begin to repeat: Amazon EC2 (low-level access for the developer), Microsoft Azure (mainly .NET-bound) and Google AppEngine (caged and over-managed). RAD lab [97] presents a primitive formula that helps to decide whether to move your operations from your own data center (you got one, don't you?) to 'the cloud'. There are a number of obstacles to overcome3, but tremendous means are directed towards tackling them.

The most fun I had was when I learned that the fastest and most reliable method to transfer data Terabyte-wise over large distances is — FedEx. Yes. This is trivially explained by the fact that FedEx sends 10 TB just as fast as 100 TB or 1000 TB — the more data, the more compelling the shipping option becomes. Jim Gray [49] experimented a bit and found only one disk failure in 400 trials of sending hard disks by conventional mail.

This is actually excellent. Imagine you send three identical disks via three distinct services, each with a failure probability of p = 1/400; then the probability of all three disks being corrupted amounts to p^3. Now, if you repeat this process n times, the number of total failures is binomially distributed with probability p^3 and expected value np^3. When is this expected value greater than one? Precisely if n > 1/p^3 = 6.4 × 10^7, that is, in 64 million shippings you expect one complete failure of all disks. If that is still too soon, send four disks. Of course, if this becomes general practice with sensitive data, the bad guys will start disguising themselves as UPS drivers or just buy UPS.
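The same back-of-the-envelope calculation as a Python sketch, assuming independent disk failures with p = 1/400 and three redundant copies per shipment:

p = 1 / 400                 # failure probability of one shipped disk
copies = 3
p_all_lost = p ** copies    # probability that all copies of one shipment are corrupted
print(p_all_lost)           # 1.5625e-08
print(1 / p_all_lost)       # 64,000,000 shipments until one total loss is expected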

To be entirely just, as I sometimes strive to be, I have to mention at least one open source cloud computing system targeted at an academic audience:

3Availability of service, data lock-in, data confidentiality and auditability, data transfer bottlenecks, performance unpredictability, scalable storage, bugs in large distributed systems, scaling quickly, reputation fate sharing, software licensing [97].


Eucalyptus [88], which is implemented as an Infrastructure as a Service (IaaS), meaning not much more than its inherent ability to give developers complete access to virtual machines combining various, perhaps distributed, physical resources. Yes, virtual machines (VMs) will have the time of their life in the forecasted cloudy days.

In the eyes of its pursuers and advocates, cloud computing is fit to lift computing into the realm of basic utilities, joining water, electricity, gas and telephony there [21], and scientists are, naturally, intrigued when considering the giant computing power available for cheap money through simple interfaces, especially since popular (expensive) software such as Matlab and Mathematica nowadays has built-in cloud capabilities [42].

We have seen a steady evolution from simplistic client-server architecture via cluster computing and grid computing to, finally, the holy grail of cloud computing, and in contrast to the evolution of species, all of these techniques co-exist, partly in awkward ways, in the very same ecosystem known as the Internet.

It should be clear by now, and may even further be clarified by a direct 'grid vs. cloud' comparison such as [41], that the good old days of personal computing power (which is, after all, empowering the single citoyen) are bound to end in favor of those silly tablets depending on remote services that come under equally silly names. Ahh. They removed the keyboard, I sometimes muse, not merely for aesthetic reasons, but to maximize the consumer-producer ratio.

2.4. Peer-to-Peer Networks. Peer-to-peer networks, as intentional connections of users, or servents, having the same rights and being both clients and servers, have been around for a long time now, their first popular field of employment being multiplayer games, one of the main drivers of innovation anyhow. The basic structure is always the same and very easy to understand: There is a logon site4, where a fresh user gets information about all or (better) a reasonable fraction of the currently online users and their IP addresses. He then, leaving the logon site to its own devices, directly connects to the addresses he is equipped with and does whatever the network encourages: sharing files, anonymously trading digital cash or shooting one another. When the user leaves the network, the logon site is usually informed and marks his status as offline (you know Skype [10]?).
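To make the pattern concrete, here is a minimal sketch in Python. LogonSite and Peer are invented names, the sampling size is arbitrary, and the actual networking (sockets, protocols, NAT traversal) is left out entirely:

import random

class LogonSite:
    def __init__(self):
        self.online = {}                    # user -> IP address

    def register(self, user, ip, sample=50):
        # Hand the newcomer a reasonable fraction of the currently online peers.
        peers = random.sample(list(self.online.items()),
                              min(sample, len(self.online)))
        self.online[user] = ip              # mark the user as online
        return peers

    def logout(self, user):
        self.online.pop(user, None)         # mark the user as offline

class Peer:
    def __init__(self, name, ip, logon_site):
        self.name, self.ip, self.site = name, ip, logon_site
        self.neighbours = logon_site.register(name, ip)

    def run(self):
        for user, ip in self.neighbours:
            # Connect directly to (user, ip) and do whatever the network
            # encourages: share files, trade digital cash, shoot one another.
            pass

    def leave(self):
        self.site.logout(self.name)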

The advantages are instantly clear:

(1) There is no single point of failure (excepting the logon site): When John's computer in San Diego crashes during a peer-to-peer gaming session, his fellows in Denmark and Canada can continue shooting each other as if nothing had happened.

(2) No ordinary central repository or high-performance server needed: CPU, disk and bandwidth demand of the logon server is marginal and can be handled by a cheap shared server in many cases.

4Have many, since this is the most exposed part of any peer-to-peer network.


(3) No central authority necessary. Here it becomes interesting: Due to its distributed and inherently self-organizing nature, anyone who desires to can start a simple peer-to-peer network and manage it independently. Anyone can, more or less, depending on the originator, enter and leave the network at will.

(4) You can, as is a pretty recent development, treat servers or grids as members of a peer-to-peer network. There are shy attempts at research in that direction, which goes by the name peer-to-peer computing [43], but I will not concern myself with that.

As with every promising technology, we do not get away without a few serious challenges:

(1) Peer-to-peer networks are able to penetrate the remotest corners of the Internet, thus an arbitrary peer-to-peer network can easily grow to a tremendous size. If the communication needs or data generation of the network are high, we have a problem.

(2) Lack of centralization and hierarchy implies equal distribution of power in a political sense — how can we achieve fair networks?

(3) Continually entering and leaving users constantly change the network topology (who is connected to whom), and thus change the content and load of the network, making routing non-trivial.

Most peer-to-peer networks base their communication on flooding, which means that user requests are transmitted over the entire network — Babylonian conditions! This is overcome, in protocols modelled after Gnutella v0.4 [46], by adding a time-to-live value to each query, which is decreased on each hop; but 1. the time-to-live value must be sufficiently high to ensure that you find your mp3 file in the network, thus peer-to-peer network sizes are restricted to less than 100,000 [106]; and 2. some networks need a maximum of confirmations from users (such as Bitcoin). In the optimal case, where no query is posted to a node that has already received that query (the attentive reader immediately recognizes the network topology: a tree!), there are 2(n − 1) messages, namely the edges of the tree counted twice, passed back and forth between n nodes. This poses serious limitations on the Bitcoin project's ambitions, as I will perhaps explain later.
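A sketch of such TTL-limited flooding in Python; the topology is just a dictionary of neighbour lists, and all names are invented:

def flood(network, node, query, ttl=7, seen=None):
    # Forward the query to all neighbours, decreasing the time-to-live per hop.
    if seen is None:
        seen = {node}
    if ttl == 0:
        return seen
    for neighbour in network[node]:
        if neighbour not in seen:      # in the optimal (tree) case each node
            seen.add(neighbour)        # receives the query exactly once
            flood(network, neighbour, query, ttl - 1, seen)
    return seen

network = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(flood(network, "A", "where is my mp3?", ttl=2))  # {'A', 'B', 'C', 'D'}, in some order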

Some networks designate super nodes at logon time, elected for their superior resources, which are utilized as miniature servers for their immediate surroundings. Thereby, a secondary hierarchy is added (to the otherwise flat network) and scalability, which here refers to enlarging the network, is increased. You get an even finer hierarchy if you link the number of connections to, say, the bandwidth or CPU speed of each peer. But nevertheless, a hierarchically organized network is usually dominated by a few highly connected members, who are vulnerable to malicious attacks, requiring sophisticated distributed recovery methods still under research [67].

There is interesting research [83] comparing peer-to-peer networks with complex adaptive systems (CAS), which are used in biology and the like to model the behavior of collections of simple autonomous agents interacting in simple manners (like ants), yet showing complex and often unpredictable behavior as a whole.


When this CAS model is utilized to construct peer-to-peer frameworks, developers might be able to embed desirable global behavior such as adaptation, self-organization and resilience without actually coding those properties — keyword: swarm intelligence. The alternative is uncomfortably complex routing protocols [63].

How do you know when to create a peer-to-peer network? There are three simple conditions to check [11] (sorry for calling the participants nodes):

(1) Is there a number of nodes with some pair-wise dependencies?
(2) Does each node depend on many, or better most, nodes?
(3) Can those dependencies be reduced to at most a few 'give' and 'take' modes?

If you answered all questions with yup, you are ripe to set up a peer-to-peer network. In April 2001, an open source project called JXTA [60], promoted by Sun Microsystems, was launched, which provides open peer-to-peer protocols. If you are more of an experimenter, you could also try out Anthill [6], a CAS-based peer-to-peer framework.

2.5. Peer-to-Peer as a Tool of Revolution. Yet the most interesting perspectives open up when you regard peer-to-peer as a direct, and threateningly direct, counter-design to cloud computing. The latter aims at further centralization of (computing) power, hence political power, while the former empowers the people. And believe me: Peer-to-peer networks will not remain restricted to filesharing and gaming. As fast as you can spell 'Bitcoin', the message went around the world that computing power implies political power.

Financial systems, political systems, knowledge systems and legal systems have always been an illusory fiction, or semi-conscious agreements kept alive by the confidence of a majority, as we are occasionally reminded by minor breakdowns. But since these systems went digital, there is no longer a reason not to present better systems — and enter them into competition; we have the means right before our eyes: It is peer-to-peer technology.

Nevertheless, mere computing power is not enough — the will to achieve a political goal is the pivot, as for instance Bitcoin's desire to replace all conventional currencies.

3. An Unscrupulously Quick Introduction to Cryptography

As soon as the data in peer-to-peer systems is a little more sensitive than prosaic highscores and the like, cryptography becomes a major issue, meaning all the techniques entailed by the need to communicate in the presence of some real bad guys, who are, for phonetic reasons, I presume, in cryptographic literature generally named Eve, having a special taste for eavesdropping, network weaknesses and immense computing power5. Communication then always happens between two persons called Alice (A) and

5You could employ Google AppEngine to crack Google's mainframe access, couldn't you? Remark: Footnote humor is underestimated.


Bob (B), also a convention since times immemorial. My short account is mainly inspired by Ron Rivest's (to whom we owe the 'R' in RSA) accessible lecture notes [101] and Goldwasser's more recent book-sized lecture notes [48] on cryptography, both from MIT.

3.1. Cryptographic Concepts. Strictly speaking, cryptography is a part of discrete mathematics, yet I am most interested in its application to the digital environment. It begins with fear. And, as everyone informed about successful attacks will agree, you cannot be too paranoid when it comes to computer security. Hence, you verbalize your fears, resulting in some security policy, which is then realized by existing or newly developed cryptographic concepts, formally known as security mechanisms. These should be implemented so as to guarantee a sufficient degree of confidentiality, integrity and availability of the system, as desired. I may inform you that security mechanisms do not only involve number theory and fancy algorithms, but also social aspects (tell everyone how to pick a safe password and when to change it) and physical aspects (tape your backup USB stick underneath your desktop).

As no security system is eternally and unalterably failsafe, it is most important, though at times underestimated, at times shamefully neglected, to integrate some auditing or logging tool. When Eve was in there, we want to know precisely how she managed to do that and precisely what she did. If some dimwitted script kids crashed our server with a variant of a distributed denial of service (DDoS) attack — imagine two hundred children evoking the breakdown of an ice cream vendor by all wanting to be served first — we want to retrace the times and places of the originators, etc. [101, 1st lecture].

Unlike Shannon [109] and other early, mostly mathematically inclined cryptographers, who presumed Eve to have unlimited computational power, which is just hilarious, modern cryptography merely expects Eve to operate at the upper region of the state of the art, and a little above to compensate for future hardware development.

This compensation has generally been underestimated, although Moore's law and similar statements indicated (temporary — on a finite planet) exponential growth of computation speed. For that reason, once widely accepted algorithms have become obsolete, including MD2, MD4, MD5, Panama Hash, DES, ARC4, SEAL 3.0, WAKE, WAKE-OFB, DESX (DES-XEX3), RC2, SAFER, 3-WAY, GOST, SHARK, CAST-128, Square.

Thus, an encryption system is deemed safe if it is computationally infeasible (meaning annoyingly slow) to break it. At the same time, nothing is gained if encryption is not fast and efficient. This is the conflict that has to be endured. The process of encryption itself is intuitively clear: Some useful message m is, by way of an encryption function f, turned into a ciphertext c, the latter being absolutely useless to Eve. More formally [48]:

(1) It must be hard for Eve to reconstruct m from the ciphertext.
(2) It must even be hard for Eve to reconstruct mere parts of m from the ciphertext.


(3) It must even more so be hard for poor Eve to detect or extract simple yet interesting facts about message traffic.

(4) All of the above must hold with high probability.

3.2. Cryptographic Hash Functions. Hash functions are the solution to a whole family of apparently unrelated questions:

(1) How do I find a long text in a database when I don't want to perform a tedious full text search? (indexing)

(2) How do I save a login password on the local hard disk without revealing it to Eve, who might have installed a trojan? (password hashing)

(3) How do I know that Alice's message to Bob hasn't been changed on the way to Bob? (message authentication)

And many more. Hash functions, of which the cryptographic ones are a famous subset, achieve all that with a surprisingly simple feature: A hash function h takes a string of almost any length (restricted by concrete implementations) and maps it to a fixed-length bitstring.

h : {0, 1}^(<2^64) −→ {0, 1}^256

This h, which could be SHA-256, the hash function used by Bitcoin, maps all binary strings of fewer than 2^64 digits to a 256-bit string. Thus, to answer question (1): if you save a text to the database, also save its hash value in a dedicated column. When you search for a text, simply search for its hash value. This is also known as the dictionary problem, the primordial ooze of hashing.

Question (2) is equally quickly solved: When the user enters his password, it is hashed and compared to the value in the password file, where the password has been saved in its hashed (hence compressed) version.

Question (3) is no more complicated: Alice just appends a hashed version of the message to the message itself. Bob then merely hashes the message and compares it with the hash value provided by Alice (which is known as a message authentication code, or MAC)6.
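The three answers, sketched with Python's hashlib (SHA-256 throughout). For real password storage you would of course use a salted and deliberately slow hash, and the keyed variant of footnote 6 is shown here via the standard HMAC construction:

import hashlib, hmac

# (1) Indexing: store the digest next to the text, search by digest.
text = "a long text stored in the database"
index_key = hashlib.sha256(text.encode()).hexdigest()

# (2) Password hashing: store h(password); at login, hash the attempt and compare.
stored = hashlib.sha256(b"correct horse battery staple").hexdigest()
def login(attempt):
    return hashlib.sha256(attempt.encode()).hexdigest() == stored

# (3) Message authentication: a MAC over the message with a shared private key k.
key = b"shared private key"
mac = hmac.new(key, b"message from Alice to Bob", hashlib.sha256).hexdigest()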

Hash functions should distribute every possible message evenly over the output space, and little changes in the message m should result in massive changes of the hash h(m). There are two important formal concepts related to these desired features:

• Weak collision resistance. For a given hash value h(m1), it should be hard to find another message m2 such that h(m1) = h(m2) (if this is not the case, Eve may find an access-granting password without ever knowing yours...7). Hard, in this case, I might again redundantly add, is usually defined in terms of feasibility. The fastest method to find m2 should be key space search, otherwise known as brute force attack, where Eve successively tries out all possible input messages.
• Strong collision resistance: For a given hash function h, it should be hard to find m1 and m2 such that h(m1) = h(m2).

6Disclaimer: This simplified method relies upon Eve not having knowledge of the hash function. If she had, she might just catch the message, produce her own, append the hash of the new message and send that to Bob. A workaround is to set MAC = h(m||k), that is, to append a shared private key to the message before hashing.

Practically all known hash functions are derived via the so-called Merkle-Damgård construction [81]. This means we start off with a compression function g, which takes two blocks of, say, 256 bit and muddles them into one block of 256 bit, such that each bit is mixed with every other (the infamous avalanche effect).

g : {0, 1}^256 × {0, 1}^256 −→ {0, 1}^256

Then the hash function is just an iteration of these: split the message m into blocks of 256 bit, do some padding if the last block is too short, and execute g in an iterated fashion:

f(m) = g(m_k, g(m_{k−1}, g(m_{k−2}, . . . , g(m_3, g(m_2, m_1)))))

(assuming k blocks). This way of quick hashing is, sadly, not nearly as secure as was believed [29]. Yet all the scientists could come up with was hotfixing Merkle-Damgård to suit their (updated, I suppose) definition of security. There is not yet a good monolithic hash function around; maybe you or I should propose one.

3.3. Cryptographic Challenges. You can use cryptographic hash functions to construct so-called cryptographic challenges; that is, you present an answer (the hash) to the user and let her compute the question (a message that is hashed to the desired hash, namely a collision). Since the fastest way to do that is (allegedly) key space search, you can give a pretty exact guess of how many operations are needed on average, can't you?

Suppose a 10-bit hash, for didactic purposes; hence any input is mapped to a 10-bit string, of which there are 2^10 = 1024. Now, if we assume, as is generally done to make matters handy, the range of the hash function to be uniformly distributed with respect to its domain (meaning: a set of uniformly distributed strings of the key space is hashed to a uniformly distributed set in {0, 1}^10), then you can conclude that one out of 1024 strings of the key space gives the desired hash (p = 1/1024). Define the (binomially distributed) random variable X as the number of collisions in n trials. Thus

P(X > 0) = 1 − P(X = 0) = 1 − (1 − p)^n

If you want at least one collision with a probability greater than 50%, you need to compute

7Adobe strangely constructed its PDF encryption such that not the password, but the hash of the password, is used for encryption. Thereby, I was able to crack several 64-bit-encrypted PDFs by key space search. Nowadays, longer keys are (generally) used.


P(X > 0) > 0.5 ⇐⇒ n > log 0.5 / log(1 − p) ≈ 710

Thus, you need to invoke the hash function at least 710 times before the probability of a collision exceeds 50%. Note that we have said nothing explicitly about the key space! Our tacit premise, however, was an infinite key space. The number of hash function calls thus depends not on the key space size, but entirely on the hash size. The greater the hash size, the smaller p and the greater, again, n becomes.

And generalized: To find a k-bit collision in one trial, we expect a probability of p = 1/2^k. The expected value of X is np, and if we want this expected value to be greater than 1, we need n > 2^k trials. Thence, the average effort is exponential in the size of the required collision.
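The calculation above, packed into a few lines of Python, where k is the size of the desired (partial) collision in bits:

import math

def trials_for_collision(k, prob=0.5):
    # Hash evaluations needed until a collision occurs with the given probability;
    # log1p keeps the precision for tiny p = 1/2**k.
    p = 1 / 2 ** k
    return math.ceil(math.log(1 - prob) / math.log1p(-p))

print(trials_for_collision(10))   # 710, as in the 10-bit example above
print(trials_for_collision(20))   # roughly 0.69 * 2**20, about 7.3e5 calls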

Modern hash functions give strings of 256 bit or more, which makes it infeasible to generate a collision. Thus, people are generally content with a partial collision, meaning: Give me a string whose hash corresponds to the first, say, 64 bit of this hash here; for instance, I want a hash with 64 leading zeros. This technique was proposed by Back [7] as a proof-of-work to fight e-mail spam: If every sender has to perform such a proof-of-work as a kind of payment to the mail server, no script can send thousands of mails just like that. Sadly, it has been proven that proof-of-work proves not to work [71].

This is mostly due to economic factors: Spammers buy robot time and calculate their expected gain with respect to the number of sent e-mails and average hits (profitable recipients, that is). If you want to block those activities, proof-of-work has to be more costly than the expected gain, requiring enormous barriers for all the average e-mail users [ibid.]. The second reason is that computing power varies greatly, from second-hand cell phones to game developer workstations (and your private cluster, you may add).
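Still, the mechanism itself is simple enough to sketch in Python: a hashcash-style proof-of-work that searches for a nonce such that SHA-256(message || nonce) starts with a given number of zero bits. The 64 leading zeros mentioned above are far out of reach of a laptop, so the example settles for a modest 20:

import hashlib

def proof_of_work(message, zero_bits=20):
    # Any digest below 'target' has at least 'zero_bits' leading zero bits.
    target = 1 << (256 - zero_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(message + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce           # expect on the order of 2**zero_bits attempts
        nonce += 1

print(proof_of_work(b"mail from Alice to Bob"))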

3.4. Symmetric Encryption (Shared Private Key Systems). Symmetric encryption means just that your encryption function f, the shared private key, is its own inverse:

f = f^−1

Therefore, if Alice sends a ciphertext c = f(m) to Bob, he can simply decrypt it by computing

f(f(m)) = f^−1(f(m)) = m

and the other way round. A presumably didactically useful, but nevertheless impractical, example is the one-time pad, where Alice and Bob have agreed beforehand on a list of shared private keys, which are used one by one such that none is used twice. This is one of the few constructions allowing complete security — in theory. In practice, you wouldn't want to apply one-time pads, for at least three reasons.

(1) How do you share the pad? Encrypt with another one-time pad? Ha, ha. The only way for Alice to communicate something privately to Bob is to whisper it in his ear.


(2) If more than two fellows share a pad, the entire system of secret communication is temporarily disclosed if one of the pads is stolen by Eve before anyone notices it. This happened several times in WWII, when allied forces managed to capture German submarines, including the pad.

(3) How do you ensure that Eve didn't corrupt the encrypted message? Maybe you just didn't use the correct code page in your pad?

Despite all these disadvantages, which hold for any shared private key system, there is one strong advantage: Symmetric encryption is supremely helpful if no communication is involved. If you want to conceal some files from the eyes of your boss, who has admin access to your system, or you want to encrypt your digital wallet file on your USB stick (which I strongly suggest), you can simply do that with one of many strong symmetric encryption algorithms8. They are pretty secure, fast and easy to use, and you effortlessly find open source libraries such as Crypto++ [31], pre-implemented functions in .NET and ready-to-use encryption applications.
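For the bare f = f^−1 idea (not for your wallet file, please; use one of the vetted algorithms from footnote 8 for that), here is a toy one-time pad in Python, where the pad is the shared private key and XOR is its own inverse:

import os

def xor_pad(data, pad):
    # XOR each byte with the pad; applying the same function twice restores the data.
    assert len(pad) >= len(data), "the pad must be at least as long, and never reused"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = os.urandom(len(message))               # agreed on beforehand, used exactly once
ciphertext = xor_pad(message, pad)           # Alice encrypts: c = f(m)
assert xor_pad(ciphertext, pad) == message   # Bob decrypts: f(f(m)) = m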

3.5. Asymmetric Encryption (Public Key Systems). The plain idea here is that the encryption function f is unequal to its inverse9:

f ≠ f^−1

and both functions f and f^−1 should be easy to construct together but hard to invert. The term public key systems comes from the following nifty construction: Both Alice and Bob construct a pair of keys for themselves; for instance, Bob designs a public key f_B, which he publishes, and a private key f_B^−1, which he keeps secret (maybe even symmetrically encrypted). Alice does the same. Now, if Alice wants to send a message to Bob, she just encrypts it with Bob's public key f_B, which is publicly available:

f_B(m) is sent to Bob

Observe the elegant fact that only Bob has the means, namely f_B^−1, to decrypt messages encrypted with his public key:

f_B^−1(f_B(m)) = m

But public key encryption offers yet another nice feature: authentication, meaning basically a digital signature. Maybe encryption is not an issue, but ownership of the message is (which might be a work of art published online). Then Alice could just encode her message, read carefully, with her own private key f_A^−1. Now, everyone is able to decrypt that message with the use of Alice's public key:

8AES (Rijndael), RC6, MARS, Twofish, Serpent, CAST-256, IDEA, Triple-DES (DES-EDE2 and DES-EDE3), Camellia, SEED, RC5, Blowfish, TEA, XTEA, Skipjack, SHACAL-2

9Hence asymmetric.


f_A(f_A^−1(m)) = m

But when you have decrypted Alice's message thus encrypted, you know for sure that it had been initially encrypted by Alice and by Alice alone10. The best of all is: You can combine both ways to encrypt and sign your message, in either order. Imagine Alice sends

f_A^−1(f_B(m))

to Bob. Anyeve can get as far as f_B(m), since f_A is public, but not further: f_B^−1 is Bob's private property, well protected on a hardware-encrypted smartcard. Thus, only Bob can get to m:

f_B^−1(f_A(f_A^−1(f_B(m)))) = f_B^−1(f_B(m)) = m

Some suggest to first encrypt and then sign (to prevent unnecessary computation when the signature is invalid); others like to sign first and then encrypt, so as to make the signature invisible — it depends on the context. There are a number of famous public key schemes, for example RSA, DSA, ElGamal, Nyberg-Rueppel (NR), Rabin, Rabin-Williams (RW), LUC, LUCELG, DLIES (variants of DHAES), ESIGN, yet RSA is probably the most famous one.

RSA relies on the fact that it is way easier to multiply two large prime numbers than to factor their product when the primes are unknown. If you devise a (probably probabilistic) algorithm achieving this factoring in polynomial time, well, a tremendous amount of software will have to be reimplemented — the entire Internet, it appears, is built upon public keys tossed this way and that.
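To see the machinery with numbers, here is a toy RSA key pair in Python with absurdly small primes, only to show how f_B and f_B^−1 are easy to construct together yet (for large primes) hard to separate; do not use anything like this in earnest:

p, q = 61, 53              # tiny primes, for illustration only
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent 2753 (modular inverse, Python 3.8+)

def f_public(m):           # Bob's public key f_B
    return pow(m, e, n)

def f_private(c):          # Bob's private key f_B^-1
    return pow(c, d, n)

m = 65
assert f_private(f_public(m)) == m   # encrypt, then decrypt
assert f_public(f_private(m)) == m   # sign, then verify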

Yeah, and that's about everything there is to say about cryptography. In less than five pages. If you bravely digested all of this section, or if you didn't have to read it because you knew all that already (but then you would probably not be reading this), well, then you are perfectly equipped for the mind-boggling challenges that you will stumble upon in the next section, which finally introduces some of those dangerously enabling applications of peer-to-peer technology.

4. Distributed Wealth in a Distributed World

4.1. Open Source Development. As with many things under the skies, we have some vague feeling of knowing what exactly open source software is, but as soon as we are compelled to clarify our view, we have to admit: We do not know that much. So, under that premise, what, actually, is open source software? Who is using it, who is making it, what are the obstacles and opportunities? A good starting point is a recent metastudy (study of

10Warning: Eve could still hack the public key directory, put her own public key there,named ’Alice’, and publish under Alice’s name henceforth. Nothing’s ever that simple.


studies) by Crowston et al. [30], from which I drew most of the resources cited in this section.

First of all, as the name implies, it is software that comes with its own source code, such that anyone may examine, steal, modify or redistribute the hard work of others — to various degrees, depending on the respective license (and there is a veritable zoo of licenses, from the laisser-faire BSD license to self-propagating copyleft licenses such as the GPL, which intend to perpetuate the open source ideology par force) — which appears, possibly, strange to someone educated in free market ideology and survival of the richest. Open source software is often free of charge. Supposedly 87% of US businesses use software of that peculiar kind [120], and most of you do equally, for instance

• Linux
• Apache Web Server
• Mozilla Firefox
• OpenOffice
• The Gimp
• Compilers/Interpreters such as for Perl, Python, C (gcc)
• sendmail, bind
• Eclipse
• Blender
• eGroupware, openCRX
• Google's Android

And many more. An incredible amount of open source software is to be found out there. You cannot even subtract it from the current electromagnetic civilization — it came to my ears that some space satellites run a Linux of sorts11. The advantages are obvious: Cheap or free software, often no company involvement (although that particular movement is gaining momentum rapidly — again, for obvious reasons; keywords: crowdsourcing, cleanwashing), open discussions on security and usability, the comfortable feeling of belonging to the good ones.

Due to the inbuilt slowness of research, caused by the urge for utmost precision, combined with the speed and flexibility of open source development (reportedly exponential growth of the number of projects [47]), not much is scientifically known about the very people behind those numerous projects [105]. There are, as of 2007, more than 800,000 programmers involved [117], mostly in the United States and, with a much higher percentage of national populations, in the UK, France, Germany and, not to forget, Finland, homeland of Linus Torvalds. Note: Open source contributors come from rich countries. Programming is either their hobby (which is, clearly, an understatement, for open source programmers regularly achieve world-wide impact, not a common property of a hobby) or their employer is resourceful enough to pay them for doing open source development

11http://www.linuxjournal.com/article/7767


— you understand that the employer must be way above the ordinary fight for economic survival.

In my personal opinion, the open source world came into existence as a countermovement against overpriced (and occasionally poor) software; that is, the first popular open source projects were clones of already existing commercial software. The reason: You need only a few computers, operated by guys with guts and brains, to rebuild just about any software, while physical products require exceedingly complex fabrication, international collaboration and unimaginable amounts of energy and money — ever tried to rebuild a Xeon processor or a BMW?

As is laid out in detail in [30], most developers are first drawn to open source by a personal need for some software, be it for financial reasons (there would be far fewer 3D modellers if we didn't have Blender and everyone interested had to buy or download (a ripped buggy version of) 3D Studio Max or Cinema 4D) or functional reasons. Most contributors stay only a short while, but others make a kind of career, including a certain sequence of steps:

(1) Bug reports and feature requests. These first make the newbie known to the core development team (CDT); besides, anyone can do that.

(2) Bug fixes. If the contributor is better known and has been involved for a time, he may successfully propose bug fixes.

(3) Feature contribution. When you have successfully fixed some bugs, you may well propose new features or functionality, which is implemented after careful examination by CDT members.

(4) In the long run, you may become part of the CDT, responsible for releases and coordination of contributing members. However, this last step into the inner circle hinges completely on personal appointment by the CDT12.

How are open source projects organized? I mean, internationally dispersed nerds have to be held together somehow. Most projects use sites like SourceForge13. Large-scale projects such as Firefox and Linux mostly have their own infrastructure, some relying on Linus Torvalds's git, a fast version control tool (things get highly non-trivial when several developers work on the same file in different locations).

But all the complexities can be boiled down to just three core functionalities an open source hub must possess:

12It appears to be way easier to start your own open source project than to enter the CDT of an existing one. Something is not quite right here.

13SourceForge at sourceforge.net is part of Geeknet, which is listed on the stock exchange, and reportedly hosts around 300,000 projects, such as the audio editor Audacity and the online game engine Arianne RPG.


(1) A reliable, secure and version-safe code repository (hosted on a central server such as GitHub14 or stored in a distributed peer-to-peer network; see the section on distributed storage below).

(2) Some tools for communication and coordination (you can get away with mailing lists and wikis).

(3) A bug tracking database, according to the principle 'find bugs once' [58].

Despite all the obviously successful projects, doubts might come to your mind, questions whether quality can be guaranteed under such, say, unsafe circumstances. Well, if you have not read the introduction, you should do that. Otherwise, I merely remind you that humans are possibly much more clever and fair than their reputation would tell.

Big companies are infamous for their long-term planning of software projects (which turn out having to be refactored multiple times anyhow), but software is not an industrial product you can specify beforehand and then produce with an army of specialists according to those mind-killing specifications. There is a recent German book entitled 'Software entwickeln mit Verstand' (software development with brains) [35], where it is neatly explained that, as software itself is a problem-solving tool, software development is an ever new problem-solving process; thus developers have to create preliminary representations, play around with them, show them to the users, and then, now better understanding the problem, improve their representation, play around with that... Software development is a cyclic process which can never, I repeat: never, be planned or specified in advance, since the result is basically unknown. It is not a production process. It is a quest.

And now, I presume, you will have the beginnings of an understanding of the beauty of the open source concept, as it is built around the humble search for an unknown solution. More popular projects have vast numbers of contributors and testers, where testing essentially means usage of a pre-release (Raymond put it right: 'Release early, release often.' [99]) followed by bug reports and feature requests. A small CDT, often fewer than a handful of developers, keeps it all together. And maintenance, contrary to huge software companies, is not merely eternal bugfixing and the releasing of dubious 'service packs', but reinvention, a continual re-thinking and re-adaptation of where the project is moving.

There is empirical evidence that open source developers are much more actively inclined to reuse code, from single lines to entire classes and DLLs, than commercially employed developers [52]. A Microsoft-sponsored report found distributed coding contributing no more failures to the final project than localized development [12]. Other researchers [111] define a mean developer engagement (MDE) as an indicator of agility (a development paradigm forcing ever-increasing development speed) and found this MDE to stabilize at high values for years for most projects.

¹⁴ github.org ostensibly hosts more than 2M repositories. That enterprise gives you free repositories for open source and charges proprietary projects. Rightly so!

So, research is slowly beginning to acknowledge what everyone who switched from Internet Explorer to Firefox has experienced: Most open source software projects are more stable, faster and provide a better look-and-feel than their centralistically conceived commercial counterparts. Besides, many open source projects close a gap that has been left by the industry because there was no profit in it.

One more thing: The open source movement anticipates a major shift in the perception of innovation, which was largely producer-driven throughout the industrial era (consisting mainly of a linear process: research, development, production, diffusion) — but in the digital world, other rules govern. I spoke of cyclic processes. If we could manage to subdivide a project into sufficiently many orthogonal [58] modules, just about any digital project can be realized [54]; not only software projects, but also digital works of art, grassroots think tanks and entire revolutions. Yep.

Talking of revolutions, it comes to my mind that open source projects are organized comparably to oligarchies or even dictatorships, where one or a few have absolute power. If these are brilliant, fine, but otherwise the founding fathers may themselves hinder the project's lift-off. Why not develop a generic open source management system, such that version control, bug tracking and communication are integrated with democratic features — I mean, electing the core development team (designing mandates) and voting on central changes (direct democracy)? If all that is implemented in a peer-to-peer fashion, the technological and sociopolitical effects of such a system should not be underestimated...

4.2. Online Voting. Most researchers, at least to my limited knowledge, study online voting as a part of e-governance, denoting the implementation of 'democratic intent' [98] into some government homepage, hehe. While a couple of countries have installed highly secured voting computers at the polling stations, only a single country — Estonia — has adopted a system that allows voters to vote from their PC at home [23], authenticated by an ID card and a suitable card reader. While some argue it is unfair (those poor ones without Internet access!) and insecure, others call it a hallmark of democratic participation, depending on their career interests, I guess. Some even dare to call online voting a 'stillborn voting technology' [112], or, a bit more intelligently, 'pajama voting' [38].

Nevertheless, I am not so much interested in governments and their doings. I mean, you can vote for a lot of things, can't you? You can elect a core developer team of an open source project, you can vote on candidate changes in an (does it exist yet?) open arts project, you can vote for (or against) a protocol change of the Bitcoin project, etc. While in the open source discussion the technical aspect is over-emphasized, the discourse on online voting sometimes lacks such a foundation.


Protocol changes of the Bitcoin project are, as far as I know, not subject to democratic elections, but a matter of the small Round Table of approved contributors. Change that!

I give a condensed version of Rivest's desiderata for an online voting protocol [101]:

(1) One (authorized) person, one vote.
(2) Votes are anonymous.
(3) Verification of the count (this is not possible for ordinary elections, remember Gore vs. Bush).
(4) No voting receipts (prevents selling of votes).
(5) There is a deadline.
(6) Everyone can be a candidate.
(7) Voting system authenticates itself to the voter.
(8) System is efficient, scalable, and robust.

Try to disconnect your mind from the imagination of polling stations and secure servers in some CIA basement. Imagine instead how a voting system might look on the heterarchy of a peer-to-peer network. More questions arise:

• Who determines timing of elections? The king? The system?
• Where, precisely, is the voting system located?

Of course, these questions have to be pondered anew for any distinct voting system. For a (democratic) open source management system, you could define election intervals on project setup. When these settings have to be changed later, a majority decision may be in order. The voting system is of course located everywhere, that is, each vote is counted on every participant's computer. Thus, the uniqueness of the votes can easily be checked.

I may remark that the problem of unique votes is similar, very similar, to the problem of double spending in digital cash systems (you know how to copy a text file?). As I will explain later, peer-to-peer networks are especially well suited to fraud detection of that kind.

Voting systems normally demand that the voters are known by their real identity — real in the sense of the specific environment (who are you really, then?) — but the votes themselves remain secret. Your identity could be the login name of the open source management system. Weighting your vote by the number (and maybe rating) of your approved contributions would serve as an incentive not to start multiple accounts. Multiple identities are really a problem for voting systems, but real contributions are a human proof-of-work that is hard to fake if the user data is saved and verified distributedly.

A (first) peer-to-peer voting procedure:

(1) The system generates a voting key pair K_0, K_0^{-1} and announces the beginning of elections.
(2) Each user with identity id_n gives a vote v_n, which could contain the numbers of the candidates for the core development team, and appends a fixed-length random number r_n (the so-called salt¹⁵) to his vote: v_n||r_n.
(3) The local client of user id_n automatically encrypts the vote with K_0, then signs it with K_n^{-1}.
(4) The encrypted and signed vote with identity attached,
    K_n^{-1}(K_0(v_n||r_n)) || id_n,
is spread across the network.
(5) After some time, most users have a (hopefully complete) list of encrypted and signed votes on their computer.
(6) When the deadline is due, all the local client programs strip the votes of their identity and signature¹⁶ (by applying the corresponding public keys and then deleting the identities) and randomize the new list: K_0(v_1||r_1), K_0(v_2||r_2), ... is anonymous.
(7) Now, again on each participating computer, the votes are decrypted via K_0^{-1}, the private key of the system, and the random numbers are removed. The votes are binned, counted and compared. When a vast majority agrees upon the results, new authorization schemes are automatically spread across the network.

The observant reader has, of course, noted the unclarified point: Where do K_0 and K_0^{-1} come from, and where is K_0^{-1} saved? These questions are not easily solved. One possibility is to advise (who?) several clients to generate parts of the key pair, those being copied for redundancy (this could be achieved with MDS codes, which I introduce in the next section). The public key is assembled immediately, but creation of K_0^{-1} must somehow be coupled to a majority key request.

You see, there is yet work to do! Advanced multi-party protocols are introduced, for instance, in [48].

¹⁵ If two votes are the same, they are encrypted to the same ciphertext, since K_0 is deterministic. We don't want to reveal information about the votes before counting, thus the salt. A similar technique is used to make password hashing more secure.

¹⁶ During this process, unauthorized and doubly voting users are busted — naturally, a majority is needed to verify this and impose sanctions. At the same time, everyone gets feedback that his vote was counted.
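To make the counting stages tangible, here is a minimal and intentionally insecure sketch in Python of steps (2) to (7) above, with the signature parts left out. A toy RSA key pair with tiny hard-coded primes stands in for K_0 and K_0^{-1}, and 'spreading across the network' is just a local list; all of this is illustration under stated assumptions, not a usable protocol.

    import random
    from collections import Counter

    # Toy stand-in for the system key pair (K_0, K_0^{-1}).
    # Tiny hard-coded primes, purely illustrative; a real system would use
    # proper public-key cryptography and distributed key generation.
    p, q = 1009, 1013
    N = p * q
    phi = (p - 1) * (q - 1)
    e = 65537                      # public exponent, plays the role of K_0
    d = pow(e, -1, phi)            # private exponent, plays the role of K_0^{-1}

    SALT_SPACE = 1000              # fixed-length salt r_n in [0, 999]

    def encrypt_vote(vote):
        """Steps (2)+(3): append a random salt r_n and encrypt v_n||r_n with K_0."""
        salt = random.randrange(SALT_SPACE)
        m = vote * SALT_SPACE + salt       # v_n || r_n packed into one small integer
        return pow(m, e, N)                # K_0(v_n || r_n)

    def decrypt_and_strip(cipher):
        """Step (7): decrypt with K_0^{-1} and remove the salt."""
        return pow(cipher, d, N) // SALT_SPACE

    # Step (4): each identity broadcasts its encrypted vote (signature omitted here).
    ballots = {f"id_{n}": encrypt_vote(v)
               for n, v in enumerate([1, 2, 1, 1, 3, 2, 1])}

    # Step (6): strip the identities and shuffle, so the list is anonymous.
    anonymous = list(ballots.values())
    random.shuffle(anonymous)

    # Step (7): every participant decrypts, bins and counts locally.
    print(Counter(decrypt_and_strip(c) for c in anonymous))   # e.g. Counter({1: 4, 2: 2, 3: 1})

Note how the salt makes identical votes encrypt to different ciphertexts; the omitted signature step and the distributed generation of the private exponent are exactly the open questions raised above.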

4.3. Distributed Storage. There are two trivial ways of distributing data within a (peer-to-peer) network, both flawed:

(1) Everyone locally saves what he desires — this is typical for filesharing systems. Works well for mainstream content, but marginal and rarely requested items are possibly undiscoverable (or transferable only at darn slow rates).

(2) Everyone locally saves everything. Excellent for highly critical content in small networks to ensure resilience and robustness, but you wouldn't want to do that within huge networks or with masses of data. The strength of distributed storage is precisely the relief of the single users.

These examples shall highlight the major conflict of distributed storage: We want both availability (as in 2.) and convenience (as in 1.). The common approach is to introduce generative communication, where 'processes cooperate and compete for the use of shared resources' [20].

As you might or might not know, the Bitcoin network makes extensive use of shared resources, since, first, the entire transaction history of all bitcoins is stored on the network, and, second, every available node is (automatically) coerced to participate in the collective verification of ongoing transactions¹⁷. Third, new bitcoins are 'created' by cryptographic proof-of-work [86], which persuades people with appropriate means to invest in fast hardware, sometimes resulting in entire mining pools. These pools are supposed to become the super nodes of the future, guaranteeing fast verification of transactions (for which they are paid). Thinking about scalability, however, produces a strangely piercing headache. More about that later.

¹⁷ You actually have to kill the Bitcoin client if you want your CPU cycles for your own purposes; just exiting is not enough.

Italian researchers have pondered how to extend existing distributed protocols such as Linda (central repositories accessed by various clients) and Lime (for mobile clients) towards peer-to-peer contexts [20]. This is brand new matter. The major keywords are:

• Context transparency: Servents do not know where precisely some datum is located, it's just there.

• Replicable data: The network replicates data to improve availability and resilience (resistance against failures and attacks). Anyhow, this kind of auto-replication must be restricted to prevent extreme data spread as in 2. above.

A way to do that are the famous maximum distance separable (MDS) codes, where (given integers k < n) the data block to be stored is split into k parts. Then, via MDS magic (for example, ancient Reed-Solomon codes [100]), these k parts are incorporated into n coded packages, distributed on n nodes, such that, here comes the trick, any k out of these n packages are sufficient to reconstruct the k parts of the actual data. Then, the system is resilient against up to n−k nodes breaking down. Naturally, single repairs are cheaper than complete recovery [34]. (A small sketch of this k-out-of-n idea follows after this list.)

For instance, if you use a (128, 8) code, the data block D is split into D1, D2, ..., D8 and coded into C1, C2, ..., C128. Now, only the Ci are saved on the local hard disks of 128 users. If any user wants D, he has to contact just eight users of the 128 mentioned ones (each of whom only transfers 1/8 of D). Even in the spectacular case of 120 random nodes going offline at the same time, the remaining eight, whoever they are, can together rebuild D (if desired) or, as should be immediately done, rebuild D1, ..., D8 and compute a new MDS code to be handed around. In this fashion, quick disaster recovery is possible.

• Scope restriction: To alleviate the load of each servent, he should be able to limit his scope (in different ways for different actions), meaning the depth of connections he is concerned with.

I talked of the time-to-live (TTL) concept before. I think it is sensible to give each servent the sovereignty to define TTL values for each incoming and outgoing activity. For instance, if your TTL equals 3 for a specific incoming activity (say, transaction verification in the Bitcoin network), all requests that passed more than three peers before reaching you are immediately discarded. As of now (June 2011), Bitcoin servents are implicitly forced to serve and flood the entire dataspace, which is annoying even in this early stage of the experiment.
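To make the k-out-of-n trick above concrete, here is a minimal sketch in Python of the classic Reed-Solomon construction over a small prime field: the k data symbols define a polynomial, the n packages are evaluations of that polynomial at distinct points, and any k evaluations pin the polynomial down again via Lagrange interpolation. The field size, the (32, 8) parameters and the byte-per-symbol handling are illustrative assumptions, not what a production system would use.

    # Minimal (n, k) MDS sketch: systematic Reed-Solomon over the prime field GF(P).
    # Any k of the n coded packages suffice to rebuild the k data symbols.
    import random

    P = 2**13 - 1        # small prime; every byte symbol fits comfortably

    def lagrange_eval(points, x):
        """Evaluate at x the unique polynomial of degree < len(points)
        that passes through the given (xi, yi) points, over GF(P)."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    def encode(data, n):
        """Packages 1..k carry the k data bytes themselves; packages k+1..n
        carry extra evaluations of the same polynomial (the redundancy)."""
        k = len(data)
        nodes = list(zip(range(1, k + 1), data))
        return [(x, lagrange_eval(nodes, x)) for x in range(1, n + 1)]

    def decode(packages, k):
        """Rebuild the k data bytes from ANY k surviving packages."""
        return bytes(lagrange_eval(packages[:k], x) for x in range(1, k + 1))

    D = b"distribu"                            # k = 8 data bytes
    packages = encode(D, n=32)                 # a (32, 8) code instead of (128, 8), for brevity
    survivors = random.sample(packages, 8)     # any 8 of the 32 packages survive
    assert decode(survivors, k=8) == D

Since packages 1 to k simply carry the data itself (a systematic code), no decoding is needed in the common case; the extra packages only pay off when nodes disappear.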

4.4. A Short Story of Money. There we are, finally, at the heart of all concerns — money! No, I'm kidding. Money concerns me less than, for instance, the fulfillment of my destiny. But this is a different topic. Nevertheless, there is a reason why money is attracting so much attention. Disclaimer: I do not know much about economics — but who does, anyway? Printing money to pay my own debts is something even I could have come up with. If they had asked me, however, I think I would have proposed qualitative easing, that is, coupling the dollar to primary energy availability and the amount of newly printed (or destroyed) money to the development of that energy availability, since, in my humble opinion, money is, instead of being just a conveniently accepted convention, stored energy, or rather: Money is the promise of stored energy.

Think about that. Whenever you buy something, you buy either energy directly (gas, firewood, electricity, human and animal labor, food), or you buy things that incorporate energy in their making, transport, the construction of the corresponding factories (chip factories nowadays cost several billion dollars — translate that to energy!) and the, sometimes, tremendous education of the researchers and constructors behind those things. And the more advanced a thing you want to possess, the more energy is incorporated therein [89]. Think of hexa-core processors.

OK. Now, central banks are infamous for their urge to issue new money every year. They say it's important for economic growth. What do they mean by that? Actually, as many of you may be aware, they don't just give the money away, ah, but they give credits — at first to ordinary banks, who, on the next step, give credits to ordinary enterprises and people. For a majority of debtors to be able to pay off their debt, which amounts, thanks to interest, to a much higher sum than they were initially supplied with, there must be a considerable increase of wealth on average, which is of course due to an increase in labor and production, since wealth is not equal to the amount of dollars in circulation, but to the amount of goods and services available.


Next step. Goods and services must increase steadily. How do they do that? You can increase efficiency, which is profitable when you are still pretty inefficient, but this gets fairly costly later on. No, most importantly, energy flows have to increase! Thus, very much simplified (but true nevertheless), the rate of inflation (newly printed money, essentially) mirrors the hope, I say: hope, of future development of the energy flows (primary energy, that is mainly oil, coal, natural gas and nuclear power, in this order). Don't put too much confidence in renewables, for they cannot sustain a consumer society [115].

I know I'm moving on thin ice now, but the verifiable truth is: Oil production, amounting to 60% of the world's energy supply and 98% of the world's fuels used in transportation (gasoline, diesel, kerosene...), has been basically flat since 2005, bumping a bit and never again reaching the maximum of June 2008, when, what a coincidence, the (maybe last) financial crisis hit¹⁸.

Some say that, to prevent a threat of hyperinflation, currencies should again be coupled to gold reserves. Gold reserves, my! What is the value of gold, really? You cannot eat it, you cannot drive your car with it, you cannot connect it to the Internet... Its only value, in my opinion, is its indestructibility and its beauty¹⁹. What happens if your money is a piece of paper corresponding to a fraction of the gold reserves? In theory, you have almost no inflation, since printing new money demands acquisition of new gold, which is a slow business, one hears.

The core problem is: If energy supply increases, since, for instance, a new kind of energy source is brought into business, then goods and services grow and grow, but the gold reserves won't grow that fast, money gets scarce, and you have a dreadful deflation. On the other hand, if no one comes up with cold fusion in time and oil production, as many geologists propose²⁰, goes down, economic performance decreases — less production, less labor, etc. — and we experience a dreadful inflation without increased amounts of money!

But all this is so obvious that something must be utterly wrong, or why did nobody implement real energy flows into money printing policies?

4.5. Digital Cash. If you are new to these matters, you may think to yourself: Alright, we have had digital cash for years now — online banking, credit card transactions, PayPal...

¹⁸ If you are interested in a scientifically sound introduction, check out richardheinberg.com and theoildrum.com. If you are more keen on intellectual sarcasm concerning our blindness in these matters, follow kunstler.com.

¹⁹ OK, some report to have healed bone cancer and suicidal depression with (literally) homeopathic doses of gold, and I clearly perceive the mythic and archetypal halo that makes gold so precious and desirable. But, fellow alchemists, I have to remind you: Aurum nostrum non est aurum vulgi (our gold is not the common gold). Gold is a symbol of something else, something probably unknowable. As long as we are unaware of that, we run after gold like maniacs.

²⁰ Visit the Association for the Study of Peak Oil at http://www.peakoil.net.


Then I would have to tell you: well, although all of these systems allow money transactions in some digital and remote fashion, they are far away from the properties of cash. Online banking and the like are based upon the assumption that you have a regular bank account; if you haven't, you are pretty much stigmatized and unable to book a flight online or to use PayPal. Furthermore, ordinary online money transactions lack a distinguishing feature of regular cash: Anonymity. If you want to sell a stolen car or buy drugs, you naturally expect used and unsorted dollar bills as a means of payment, for credit card transactions are directly linked to your bank account, which again is directly connected to your identity as a citizen of a particular state. Hence, no anonymity.

The first, to my knowledge, who fiercely pursued the idea of anonymous digital cash was David Chaum [27] in 1985. Yet, he still couldn't disconnect his mind from the central role of banks. Nowadays, there are at least five different kinds of digital cash systems and four kinds of digital cash itself, provided by a surprising number of enterprises, led probably by the Russian WebMoney (an answer to the Russian banking collapse in 1998), which provides 200.000 cash-in terminals in and around Moscow (yes, you don't need a bank account!) and reports more than 12M accounts, agents and customers in 8000+ cities and 70 countries, and 59.000 places where you can fund a z-purse, WebMoney's digital wallet. Constance Wells [123] gives a neat summary of the economic and legal issues thereof.

Somehow, I don't like the idea of WebMoney, which is replicating the old banks in a one-to-one fashion, hence I will, for the rest of this essay, restrict my scope to Bitcoin, a novel, fast-growing peer-to-peer digital cash project already mentioned, first described in 2008 by Nakamoto [86]. To begin with, I merely assemble desirable properties of digital cash, gained as a union of [48, 79, 101], and elaborate their implementation in the Bitcoin project.

• Token-based (not account-based).
This is clearly the case with Bitcoin. You don't have, in contrast to WebMoney, for instance, a Bitcoin account, but you have a digital wallet, literally a file named wallet.dat. Though being extremely convenient, this also bears the risk of wallet theft. Thus, have your wallet encrypted, at least, and stored on several devices.

• Anonymous.
Bitcoin is as anonymous as digital cash can be. Payer and payee know only each other's public keys and IP addresses, and no one else knows more, since no central authorities — banks, governments, enterprises — are involved.

• Easy to use.
Actually, you just enter the amount and payee's Bitcoin address (a public key of sorts) into the Bitcoin client, press pay, and off you go.

• Transferable between users.
Transfer and regular payment are not distinguished — welcome to the peer-to-peer world!


• Portable.
Yes, you can copy or move your wallet.dat to your iPhone or USB stick or what-you-like.

• Infinite duration (until destroyed).
If you have a decent backup policy, your wallet might last as long as the digital age. And you have, if I got that right, the possibility to destroy your own coins. But why would anyone do that?

• Divisible (transaction of fractions of bitcoins).
Bitcoin has that, too. There is a fine inbuilt divisibility (up to 8 decimal places), and floating point precision allows for many more, although there is still a lively debate about that in the forum [15].

• Non-repudiation (you cannot withdraw your money once you have paid).
The coins are 'physically' removed from your wallet and added to the payee's wallet. If you now replace your wallet with an older version and try to spend those coins again, this is known as double spending.

• No double spending.
A community effort guarantees this. A set of new transactions is flooded through the network, and each node that received such a block tries to find a proof-of-work: a nonce (number used once) that, appended to the transaction block and the hash of the previous block, is hashed to a string with a certain number of leading zeros. One or a few nodes will find this nonce first and broadcast it to all nodes, who accept the block if no transaction in that block has been doubly spent. Then, the hash of the accepted block is used as input for the next proof-of-work. (A minimal sketch of this nonce search follows after this list.)

Reportedly, the accumulated CPU power of all the honest participants makes it close to impossible for some bad guys to redo all those proofs-of-work, the more so the longer the system works and the more powerful the honest part of the network is. But this redoing would be necessary to achieve double spending!

The difficulty of the proof-of-work (the number of leading zeros required) is automatically adapted according to the speed of block generation, naturally, to compensate for hardware development (which can be quite surprising) and the fluctuating (cumulative) CPU power of the network [86].

• Secure transfer.
Public key encryption. If a coin did not reach its designated receiver, it counts as not spent. To ensure secure payment, the transaction can be secured with an escrow mechanism: The money is transferred, but locked, and as soon as the ordered drugs or books reach the payer, he sends a key to the payee to unlock the money, so to speak.

• (Un-)traceability.
Some want payments to be traceable, Rivest for instance [101], to capture drug retailers and 'terrorists' and wallet thieves; others argue that, while people do plot crimes behind closed doors, nobody would argue for forcing the populace to keep their doors open. A sovereign citizenship has its price [123]. Bitcoin transactions are practically untraceable. In principle, IP addresses can be retrieved from the logs, which again lead to the culprit with considerable effort of cyberforensics. But try going to the police, telling them your bitcoins were stolen! The focus of peer-to-peer philosophy rests upon individual responsibility and the honesty of a vast majority. If you don't share that view at least a bit, you are definitely in the wrong place.

• Cannot be forged.
Yes, good question, how do those bitcoins come into existence anyhow? They are user-generated. If you do a certain proof-of-work, this is itself a new coin. The difficulty is high, and you must precisely calculate the speed (measured in hash function invocations per second) per energy consumption ratio of your computing device. It turned out that certain OpenCL-enabled nVidia graphics cards are excellent in this respect. Amazingly, the price of these graphics cards went up sharply in the last months [cite].

This notion is brilliantly conceived, I believe. Discouraging forgery by allowing minting! Wow.

• Control of cash amount.
It will be hard to believe for the newcomer, but the total number of bitcoins is a priori fixed at 21M. I imagine a line like

#define MAX_COINS 21000000

somewhere in the source code, but I couldn't find it yet. You see, we (the bitcoiners) have a few problems here. First, as I explained in connection with gold, deflation is imminent, disabling bitcoins as a general means of payment and inviting speculators, who, together with geeks and downright exploiters, comprise the set of early adopters of the Bitcoin project. If bitcoins gain value just like that, people think, we must buy now and we must buy much. Then, as the value increases, a few big boys sell large amounts of bitcoins, get rich, cause the value to fall, and many ordinary people also sell their bitcoins, at a loss. Now, the speculators come back into business, buy bitcoins, the price rises...

The second problem, in my opinion, is that this maximum number of coins is pre-specified and can thus spontaneously be changed (increased!) by the core development team. Who are they? Are they to be trusted?

• Small transaction fees.
Great hopes are put into digital cash to make micro-payments feasible, both in terms of transaction speed and transaction fees. In general, Bitcoin transactions are free of charge, but rather slow, and you can buy quick processing of your transactions from those mining pools mentioned elsewhere. Thus, speed and the desire for small transaction fees contradict each other.

• Universal.
Bitcoins are universal in the sense that you can always redeem your coins and get real dollars or euros for a small conversion fee. This redemption (what a word!) is at the moment processed by just one Japanese company, which determines the exchange rate in an authoritarian style. This is not good. I strongly recommend that many independent companies participate in the conversion business — for two reasons: 1. A single marketplace for bitcoins is an easy target for government restrictions. 2. A single marketplace is an easy target for hackers.

Bitcoins are not universal in the sense that they are (accidentally?) linked to the major ordinary (flawed) currencies, which will go on to cause so much pain. And bitcoins flip my notion of an energy-linked means of payment: They are symbolic of wasted energy (and not of available energy).

• Fast and scalable²¹.
Speed matters. Visa processes several thousand transactions per second, and as far as I can see, Bitcoin aspires to move into that realm. But if a transaction is broadcast (I prefer the term flooded) to the entirety of the nodeship, we get two problems: Imagine a certain second when 1000 transactions are attempted in a network of 100.000 nodes²² (I am simplifying matters vastly). Then, first, each node is entrusted with 1000 proofs-of-work (which sucks if you use your computer for more than mere websurfing — you a gamer? Multimedia artist? Programmer? Webserver?).

Second, 1000 blocks are transmitted to and from 100.000 nodes, so we have a lower bound of 200.000.000 transmissions (probably many more due to packet loss and complicated network topology), which amounts to, assuming an average delay of 10 ms and strictly sequential handling, a waiting time of almost 24 days (see the back-of-envelope sketch after this list)! And I didn't even take bandwidths and processing times into consideration.

This is a major drawback of the current Bitcoin verification procedure (which is otherwise incredibly clever). It could, maybe, hopefully, be overcome by introducing scope restriction, which I explained in the section on distributed storage. Then, each node could define its horizon and thus limit the maximum of traffic and computation it is involved in (though precautions must be taken to prevent network disintegration — we don't want the Chinese government to run a disconnected mining pool). And, luckily, there is a number of monster computers out there running only Firefox.

Maybe we can learn from researchers who deal with peer-to-peer support for massively multiplayer games [68] — they face scalability issues as well.

• No adverse social effects (Rivest).
Bitcoin allows generation of wealth, if you will, and a few advantages can make you part of a novel elite:

(1) You are clever and educated enough to understand all of the above, which alone makes you part of a small elite of, say, a few million worldwide.
(2) You can invest in fancy new hardware.
(3) You have access to a reliable and cheap (or even free) supply of electricity.

As you can see, Bitcoin magnifies the already immense inequality between the rich and educated, living within a highly integrated digital infrastructure, on the one hand, and all the rest on the other hand. Within these elites, however, a power shift is possible from the ordinary power circles towards individuals with sharp minds.

The second problem is the total exclusion, in contrast to WebMoney, of all those who do not possess a bank account. If Bitcoin or one of its descendants is to compete with physical cash, then a way of directly converting money to bitcoins must be found — such as those cash-in terminals in Moscow.

²¹ A true global digital cash must be usable by hundreds of millions to be universal.

²² At the end of June 2011, there were ca. 9.000 transactions per 24 hours (one transaction every 10 seconds).

• Unit-of-value freedom (Matonis).
The economist and libertarian Jon W. Matonis wrote in 1995 an interesting short essay on 'digital cash and monetary freedom' [79]. To the list of desired properties digital cash should have, he adds unit-of-value freedom. What the hell is that?

OK, we all know that you can buy and sell digital cash for dollars, and giving you fewer dollars for a bitcoin than you can buy one for is an incentive to stick with bitcoins. But how, exactly, is the price of a bitcoin determined? They say, by demand and supply, but that doesn't tell me much.

I read it thus: Bitcoins are (still) illusionary tokens traded in all major currencies, a toy of speculators who failed in real life, but not a currency in its own right. A currency of the future must be backed (not by debt, though) — Matonis's suggestions are '...equity mutual funds, commodity funds, precious metals, real estate, universal merchandise and/or services, and even other units of digital cash. Anything and everything can be monetized' [ibid., emph. mine]. You already know my favourite backing: Energy supply.

It works like this: The amount of cash is coupled to another item (energy, WebMoney, gold...), preferably one with economic importance, and the price is determined by the market. Don't get me wrong: I do have a sense for the beauty of the number 21.000.000.

• Competition.
There are a few digital cash systems around, see [123] for a summary. The trick, then, is to start hundreds of digital cash systems and let them compete. This is an incentive for enterprises as well as grassroots movements such as Bitcoin to think hard about fast, reliable, secure, convenient and economically sound digital cash systems, and we have already acquired a feeling that these properties are partly contradictory. Bitcoin is a beginning, but not the end. Imagine quantum money [84]!

• Confidence.
I shall point out that confidence in a means of payment is not the result of a rational analysis of its specific security and economic aspects. Confidence, rather, is an effect of habit and a long history of mostly good experiences. You do put your credit card into just about any slot with a Visa sticker nearby, don't you? The adoption of digital cash is, for now at least, not built upon confidence, but upon a painfully perceived void in the digital economy. Bitcoin, WebMoney et al. fill a gap. This gap is now closing, favoring online poker rooms, commercial animal porn sites and small-scale dealers of weapons. This, together with the (not quite unexpected) volatility of its value, makes Bitcoin a touchy subject for ordinary folks.

But this could change! If the dreadful scalability issue is solved and a sensible monetary backing is found, Bitcoin v.2 promises to be a revolutionary tool not only for e-commerce, but also for ordinary money transactions, such as paying your rent, buying a car or a newspaper. It is apt to outperform Western Union in its ability for free worldwide money transfers.
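As announced in the 'No double spending' item, here is a minimal hashcash-style sketch in Python of the nonce search and its verification. It is not the actual Bitcoin code: difficulty is expressed as a number of leading zero hex digits instead of a 256-bit target, and the 'block' is just a string; treat it as an illustration of the mechanism only.

    import hashlib
    from itertools import count

    DIFFICULTY = 4   # leading zero hex digits required; a stand-in for the real target

    def proof_of_work(prev_hash, transactions):
        """Search for a nonce such that SHA-256(prev hash + transactions + nonce)
        starts with DIFFICULTY leading zeros. Expensive on purpose."""
        for nonce in count():
            digest = hashlib.sha256(f"{prev_hash}{transactions}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * DIFFICULTY):
                return nonce, digest          # broadcast (nonce, digest) to the network

    def verify(prev_hash, transactions, nonce):
        """Anyone can check the work with a single hash invocation."""
        digest = hashlib.sha256(f"{prev_hash}{transactions}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * DIFFICULTY)

    # Chain two toy blocks: the hash of the accepted block feeds the next proof-of-work.
    nonce1, hash1 = proof_of_work("0" * 64, "A pays B 5; C pays D 2")
    nonce2, hash2 = proof_of_work(hash1, "B pays E 1")
    print(verify(hash1, "B pays E 1", nonce2))    # True

The asymmetry is the whole point: finding the nonce costs many hash invocations, while verifying it costs exactly one.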
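And, as referenced in the 'Fast and scalable' item, here is the back-of-envelope calculation behind the 'almost 24 days' figure, under the same vastly simplifying assumptions (every block travels to and from every node, 10 ms per transmission, everything handled strictly one after another):

    transactions = 1000
    nodes = 100000
    delay = 0.010                                  # assumed 10 ms per transmission

    transmissions = 2 * transactions * nodes       # every block to and from every node
    total_seconds = transmissions * delay          # pretending it all happens sequentially
    print(transmissions)                           # 200000000
    print(total_seconds / 86400)                   # about 23.1 days

In reality transmissions overlap massively, so this is not a realistic latency estimate; it merely illustrates the raw message volume that full flooding generates.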


5. Accountability vs. Privacy and Speed vs. Reliability

After all that, you have deserved some idle chat. I have written at some (compressed) length about four major applications of peer-to-peer technology: Open source systems, online voting, distributed storage (and computing) and digital cash, taking a close look at the Bitcoin project. The amount of desired privacy varies in these systems and, generally speaking, has to be balanced with the amount of necessary accountability. More precisely expressed, privacy and accountability are the extreme ends of the same axis [19].

The same is true for speed and reliability. We have the mathematical and computational means to make a network arbitrarily reliable, but at a high price: a hefty slowdown (try not to think of hardware costs); compare Bitcoin's scalability drawbacks. I tried to organize the 'big four' in a coordinate system:

[Figure: the 'big four' arranged in a coordinate system whose axes run from Privacy to Accountability and from Speed to Reliability: Distributed Storage, Open Source Management, Digital Cash, E-Voting.]

The 'big four' all have the potential to influence one another:

• Open source projects can realize digital cash systems, distributed storage networks and online voting.

• Digital cash can be used to conveniently pay for distributed storage and for remote open source collaborators of rare talent, and to bribe online voters in realtime.

• Online voting is a suitable way to elect core development teams of open source projects, to alter the monetary backing or transaction protocols of a grassroots digital cash environment, and to elect 'ministers' in distributed storage networks, who could play a special role in the administration of those networks.

• Distributed storage, lastly, serves as a failsafe repository for open source projects, digital cash transaction histories and online voting logs. Distributed computation can verify file integrity, code versioning and Bitcoin transactions — but don't exaggerate!

As you may have recognized, there are a lot of open questions. In terms of security and network protocols, the topics of this essay repeatedly touched the fringe of the scientifically known world. Moreover, to be able to use these concepts in their full might, you should delve deeply into politics and economics. At the peak of specialization, generalists are needed again to integrate those innumerable tiny splinters of knowledge out there. Thanks for reading, and keep it on!

6. Acknowledgements

My acknowledgements go to Oliver Kommnick, who sparked my interest in this fascinating intersection of society and technology; to Satoshi Nakamoto, inventor of Bitcoin, whose paper [86] is a wearisome read; to the vibrant Bitcoin community [15]; and to all those scientists, programmers and experimenters out there, trying to make this world a place full of funny little machines.

7. P.S.

For the law enforcement and malicious hacker dudes — here is how to shut down Bitcoin; any one of these would do:

(1) Shut down all logon servers (there are not many)

(2) Shut down all exchange servers (there is just one as of June 2011)

(3) Shut down the Internet

(4) Shut down electricity

To circumvent (1), users would have to manage IP addresses manually, sending e-mails like 'Hey Doug, what's your IP today?', which is just a dreadful prospect. I know of no protocol that copes with dynamically changing IP addresses without a logon server. (2) is the Achilles' heel. No digital cash is of any use if it can neither be bought nor redeemed. The End.


References

[1] Most papers are freely available via scholar.google.com.

[2] Mark S. Ackerman: Privacy in Pervasive Environments: Next Generation Labeling Protocols, 2004.
[3] Alessandro Acquisti et al.: Countermeasures Against Government-Scale Monetary Forgery, 2009.
[4] Giovanni Aloisio et al.: Grid Computing on the Web Using the Globus Toolkit, 2010.
[5] Michael Armbrust et al.: Above the Clouds: A Berkeley View of Cloud Computing, 2009.
[6] Ozalp Babaoglu et al.: Anthill: A Framework for the Development of Agent-Based Peer-to-Peer Systems, 2001.
[7] Adam Back: Hashcash — A Denial of Service Counter-Measure, 2002.
[8] Rajesh Balan et al.: mFerio: The Design and Evaluation of a Peer-to-Peer Mobile Payment System, 2009.
[9] Endre Bangerter et al.: A Cryptographic Framework for the Controlled Release of Certified Data, 2010.
[10] Salman A. Baset et al.: An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol, 2006.
[11] D. Bertolini et al.: Designing Peer-to-Peer Applications: an Agent-Oriented Approach, 2010.
[12] Christian Bird et al.: Does Distributed Development Affect Software Quality? An Empirical Case Study of Windows Vista, 2009.
[13] Peter Bisson et al.: The Global Grid, McKinsey Quarterly, 2010.
[14] http://bitcoin.org/
[15] http://forum.bitcoin.org/
[16] http://www.bitcoinmoney.com/
[17] http://bitcoinwatch.com/
[18] http://bitcoinweekly.com/
[19] Mike Burmester et al.: Accountable Privacy, 2003.
[20] Nadia Busi et al.: Towards a Data-Driven Coordination Infrastructure for Peer-to-Peer Systems, 2010.
[21] Rajkumar Buyya et al.: Cloud Computing and Emerging IT Platforms: Vision, Hype and Reality for Delivering Computing as the 5th Utility, 2009.
[22] Bogdan Carbunar et al.: Conditional E-Payments with Transferability, 2011.
[23] Alec Charles: The Electronic State: Estonia's New Media Revolution, 2009.
[24] Aw Yoke Cheng et al.: Risk Perception of the E-Payment Systems: A Young Adult Perspective, 2011.
[25] Xiangguo Cheng et al.: A New Approach to Group Signature Schemes, 2011.
[26] Brian Chess et al.: Software Security in Practice, 2011.
[27] David Chaum: Security Without Identification: Transaction Systems to Make Big Brother Obsolete, 1985.
[28] N.M. Mosharaf Kabir Chowdhuri et al.: A Survey of Network Virtualization, 2008.
[29] Jean-Sebastien Coron et al.: Merkle-Damgard Revisited: How to Construct a Hash Function, 2007.
[30] Kevin Crowston et al.: Free/Libre Open Source Software Development: What We Know and What We Do Not Know, 2010.
[31] http://www.cryptopp.com/
[32] Kamalika Das et al.: A Local Asynchronous Distributed Privacy Preserving Feature Selection Algorithm for Large Peer-to-Peer Networks, 2010.
[33] Todd Davies et al.: Online Deliberation, 2009.
[34] A.G. Dimakis et al.: A Survey on Network Codes for Distributed Storage, 2011.
[35] Jorg Dirbach et al.: Software entwickeln mit Verstand, Book, 2011.
[36] Wenliang Du et al.: Uncheatable Grid Computing, 2010.
[37] Cynthia Dwork et al.: Pricing via Processing or Combatting Junk Mail, 1993.
[38] Jeremy Epstein: Internet Voting, Security and Privacy, 2011.
[39] Mandana J. Farsi: Digital Cash, Master's Thesis, 1997.
[40] Qinyuang Feng et al.: Voting Systems with Trust Mechanisms in Cyberspace: Vulnerabilities and Defenses, 2009.
[41] Ian Foster et al.: Cloud Computing and Grid Computing 360-Degree Compared, 2009.
[42] Armando Fox: Cloud Computing — What's in It for Me as a Scientist?, 2011.
[43] Leonidas Galanis et al.: Processing Queries in a Large Peer-to-Peer System, 2010.
[44] Alan Gelb et al.: Cash at Your Fingertips: Biometric Technology for Transfers in Resource-Rich Countries, 2011.
[45] The Great Internet Mersenne Prime Search, http://www.mersenne.org/prime.htm.
[46] The Gnutella Protocol Specification, v0.4, http://dss.clip2.com/GnutellaProtocol04.pdf, 2000.
[47] R.A. Ghosh: Economic Impact of Open Source Software on Innovation and the Competitiveness of the Information and Communication Technologies (ICT) Sector in the EU, 2006.
[48] Shafi Goldwasser et al.: Lecture Notes on Cryptography, 2008.
[49] Jim Gray: Distributed Computing Economics, 2008.
[50] http://www.gridforum.org/
[51] Christian Grothoff: An Excess-Based Economic Model for Resource Allocation in Peer-to-Peer Networks, 2003.
[52] Stefan Haeflinger et al.: Code Reuse in Open Source Software, 2008.
[53] Jorg Helbach et al.: Code Voting with Linkable Group Signatures, 2008.
[54] Eric van Hippel et al.: Open, Distributed and User-Centered: Towards a Paradigm Shift in Innovation Policy, 2010.
[55] Christopher D. Hoffman: Encrypted Digital Cash Transfers: Why Money Laundering Controls May Fail Without Uniform Cryptography Regulations, 1997.
[56] Tad Hogg et al.: Multiple Relationship Types in Online Communities and Social Networks, 2008.
[57] Xiangpei Hu et al.: Are Mobile Payment and Banking the Killer Apps for Mobile Commerce?, 2008.
[58] Andrew Hunt et al.: The Pragmatic Programmer, Book, 2008.
[59] A.M. Anisul Huq: Can Incentives Overcome Malicious Behavior in Peer-to-Peer Networks?, 2009.
[60] http://www.jxta.org/ and http://spec.jxta.org/v1.0/docbook/JXTAProtocol.html
[61] Information Technology Laboratory: Secure Hash Standard, 2008.
[62] Suresh Jaganathan et al.: A Study of Protocols for Grid Computing Environment, 2011.
[63] A.D. Joseph et al.: Tapestry: An Infrastructure for Fault-Tolerant Wide-Area Location and Routing, 2001.
[64] Sam Joseph: NeuroGrid: Semantically Routing Queries in Peer-to-Peer Networks, 2010.
[65] Ari Juels et al.: Security of Blind Digital Signatures, 1997.
[66] V. Kalaichelvi et al.: Secured Single Transaction E-Voting Protocol: Design and Implementation, 2011.
[67] Pedram Keyani et al.: Peer Pressure: Distributed Recovery from Attacks in Peer-to-Peer Systems, 2010.
[68] Bjorn Knutsson et al.: Peer-to-Peer Support for Massively Multiplayer Games, 2004.
[69] Maximilian Kogel: Towards Software Configuration Management for Unified Models, 2008.
[70] Kaoru Kurosawa et al.: Universally Composable Undeniable Signature, 2010.
[71] Ben Laurie et al.: 'Proof-of-Work' Proves Not to Work, 2004.
[72] Chris Lesniewski-Laas et al.: Whanau: A Sybil-Proof Distributed Hash Table, 2010.
[73] Ralf Lindner et al.: Electronic Petitions and the Relationship between International Contexts, Technology and Political Participation, 2008.
[74] Zhangye Liu et al.: P2P Trading in Social Networks: The Value of Staying Connected, 2010.
[75] Stefan Lucks: Design Principles for Iterated Hash Functions, 2004.
[76] Anna Lysyanskaya et al.: Group Blind Digital Signatures: A Scalable Solution to Electronic Cash, 1998.
[77] Scott D. Mainwaring et al.: From Meiwaku to Tokushita! Lessons for Digital Money Design from Japan, 2008.
[78] Ronald J. Mann: Adopting, Using and Discarding Paper and Electronic Payment Instruments: Variations by Age and Race, 2011.
[79] Jon W. Matonis: Digital Cash and Monetary Freedom, 1995.
[80] Sarah Meiklejohn et al.: ZKPDL: A Language-Based System for Efficient Zero-Knowledge Proofs and Electronic Cash, 2010.
[81] Ralph W. Merkle: Method of Providing Digital Signature, 1979.
[82] Peter Bro Miltersen: Universal Hashing, Lecture Note, 1998.
[83] Alberto Montresor et al.: Towards Adaptive, Resilient and Self-Organizing Peer-to-Peer Systems, 2010.
[84] Michele Mosca et al.: Quantum Coins, 2009.
[85] Daniel A. Nagy: On Digital Cash-Like Payment Systems, 2007.
[86] Satoshi Nakamoto: Bitcoin: A Peer-to-Peer Electronic Cash System, 2008.
[87] V.D. Nandavadekar: D-Commerce — A Way for Business, 2010.
[88] Daniel Nurmi et al.: The Eucalyptus Open-Source Cloud-Computing System, 2009.
[89] Howard T. Odum: Environment, Power and Society for the 21st Century, Book, 2001.
[90] H. Oros et al.: A Secure and Efficient Offline Electronic Payment System for Wireless Networks, 2010.
[91] Saurabh Panjwani et al.: Usably Secure, Low-Cost Authentication for Mobile Banking, 2010.
[92] Abishek Parakh et al.: Online Data Storage Using Implicit Security, 2009.
[93] Haejung Park: Various Aspects of Digital Cash, Master's Thesis, 2008.
[94] Chris Peikert et al.: Lower Bounds for Collusion-Secure Fingerprinting, 2003.
[95] Adrian Perrig et al.: SAM: A Flexible and Secure Auction Architecture Using Trusted Hardware, 2002.
[96] G.F. Pfister: In Search of Clusters, Book, 1998.
[97] UC Berkeley Reliable Adaptive Distributed Systems Laboratory, http://radlab.cs.berkeley.edu/.
[98] Rathee et al.: E-Governance: Promises and Challenges, 2011.
[99] E.S. Raymond: The Cathedral and the Bazaar, 1998.
[100] I. Reed et al.: Polynomial Codes Over Certain Finite Fields, 1960.
[101] Ron Rivest: Lecture Notes on Cryptography, 1997.
[102] A.W. Roscoe et al.: Reverse Authentication in Financial Transactions, 2010.
[103] Timothy Roscoe et al.: Transaction-Based Charging in Mnemosyne: A Peer-to-Peer Steganographic Storage System, 2010.
[104] Vipin Saxena et al.: A Data Mining Technique for a Secure Electronic Payment Transaction, 2010.
[105] W. Scacchi: Understanding the Requirements for Developing Open Source Software Systems, 2002.
[106] Rudiger Schollmeier et al.: Routing in Mobile Ad Hoc and Peer-to-Peer Networks. A Comparison, 2010.
[107] Jean-Marc Seigneur et al.: Trust Enhanced Ubiquitous Payment without too Much Privacy Loss, 2004.
[108] SETI@home: The Search for Extraterrestrial Intelligence Project, http://setiathome.berkeley.edu/.
[109] C.E. Shannon: A Mathematical Theory of Communication, 1948.
[110] The Smallpox Research Grid, http://www-3.ibm.com/solutions/lifesciences/research/smallpox.
[111] Diomidis Spinellis et al.: Evaluating the Quality of Open-Source Software, 2008.
[112] Charles Steward III: Voting Technologies, 2011.
[113] Marc Stiegler et al.: Introduction to Waterken Programming, 2010.
[114] Domenico Talia et al.: How Distributed Data Mining Tasks Can Thrive as Knowledge Services, 2010.
[115] Ted Trainer: Renewable Energy Cannot Sustain a Consumer Society, Book, 2007.
[116] Gregory D. Troxel et al.: Enabling Open-Source Cognitively-Controlled Collaboration Among Software-Defined Radio Nodes, 2008.
[117] B. Vass: Migrating to Open Source: Have No Fear, 2007.
[118] Girraj K Verma: Probable Security Proof of a Blind Signature Scheme over Braid Groups, 2011.
[119] Matthew Wall et al.: Picking Your Party Online — an Investigation of Ireland's First Online Voting Advice Application, 2009.
[120] S. Walli et al.: The Growth of Open Source Software in Organizations. A Report, 2005.
[121] Janna-Lynn Weber et al.: Usability Study of the Open Audit Voting System Helios, 2009.
[122] Peng Weibing: Research on Money Laundering Crime under Electronic Payment Background, 2011.
[123] Constance J. Wells: Digital Currency Systems: Emerging B2B E-Commerce Alternative During Monetary Crisis in the United States, 2011.
[124] Dominic Widdows et al.: Semantic Vectors: A Scalable Open Source Package and Online Technology Management Application, 2008.
[125] Lizhen Yang et al.: Cryptanalysis of a Timestamp-Based Password Authentication Scheme, 2001.
[126] Lamia Youseff et al.: Toward a Unified Ontology of Cloud Computing, 2008.
[127] Yuliang Zheng: Digital Signcryption, 1997.
[128] Yingwu Zhu: Measurement and Analysis of an Online Content Voting Network: A Case Study of Digg, 2010.
