A home on the Internet — Ludovic Courtès — Peer-to-Peer Economic Models
  • Choosing Reputable Servents in a P2P Network [cornelli02:choosing]
    Fabrizio Cornelli, Ernesto Damiani, Sabrina De Capitani di Vimercati, Stefano Paraboschi, Pierangela Samarati (Università di Milano, Italy, Università di Brescia, Italy, Politecnico di Milano, Italy), Proceedings of the 11th World Wide Web Conference, May 2002
  • Disseminating Trust Information in Wearable Communities [schneider00:disseminating]
    Jay Schneider, Gerd Kortuem, Joe Jager, Steve Fickas, Zary Segall (Wearable Computing Group, Department of Computer and Information Science, University of Oregon, Eugene, USA), 2nd International Symposium on Handheld and Ubiquitous Computing (HUC2K), September 2000

    This 5-page document is more of a position paper.

  • Small Worlds in Security Systems: an Analysis of the PGP Certificate Graph [capkun02:sw-pgp]
    Srdjan Capkun, Levente Buttyán, Jean-Pierre Hubaux (Swiss Federal Institute of Technology Lausanne (EPFL), Switzerland), Proceedings of the Workshop on New Security Paradigms, 2002

    The paper studies the characteristics of the 2001 PGP certificate graph and shows that it has small-world properties, namely a small directed characteristic path length, a high clustering coefficient, and slower-than-logarithmic scaling of the path length. The authors then propose a model of this certificate graph that is more realistic than previously proposed models. See hubaux01:quest for an application of these results in mobile ad hoc networks.
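
    For the curious, the two graph metrics mentioned above are straightforward to compute on small graphs. Here is a minimal Python sketch (plain BFS over adjacency sets, run on a toy graph of my own, not the PGP data):

```python
from collections import deque

def characteristic_path_length(adj):
    """Mean shortest-path length over all reachable ordered pairs (BFS)."""
    total, count = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != src:
                total += d
                count += 1
    return total / count

def clustering_coefficient(adj):
    """Mean fraction of a node's neighbor pairs that are themselves linked."""
    coeffs = []
    for u in adj:
        nbrs = list(adj[u])
        k = len(nbrs)
        if k < 2:
            continue  # clustering is undefined for fewer than 2 neighbors
        links = sum(1 for i in range(k) for j in range(k)
                    if i != j and nbrs[j] in adj[nbrs[i]])
        coeffs.append(links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# Toy graph: a triangle (0, 1, 2) plus a pendant node 3.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

    On the real PGP graph one would of course use a dedicated graph library, but the definitions are exactly these.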

  • Free Riding in BitTorrent is Cheap [locher06:bitthief]
    Thomas Locher, Patrick Moor, Stefan Schmid, Roger Wattenhofer, Proceedings of the Fifth Workshop on Hot Topics in Networks (HotNets V), November 2006

    A demonstration of how weaknesses in BitTorrent's protocol and its lack of suitable cooperation incentives can be exploited. Part of the exploits stems from the lack of diversity among the implementations actually in use, which makes exploiting simple weaknesses very viable: "Interestingly, during all our tests, our client was not banned by any of the trackers and could thus gather a lot of peers."

  • The Tragedy of the Commons [hardin68:tragedy]
    Garrett Hardin (University of California, Santa Barbara, USA), appeared in Science, 1968

    The famous article by Hardin. While making great use of rhetoric, the paper opposes a number of rights that many people regard as fundamental, some of which are promoted by the Universal Declaration of Human Rights (e.g., the freedom to make decisions within a family, including about its size). Hardin's basic assumption is the following: "It is a mistake to think that we can control the breeding of mankind in the long run by an appeal to conscience." This dubious (in my opinion) statement serves as the rationale for an appeal to centralization (and to state-enforced private property, for that matter). At least, experience has shown that it does not translate so well to the wealth of cooperative activities that have taken place over the Internet: free software, encyclopedias, file sharing, distributed computations, etc.

  • CORE: A Collaborative Reputation Mechanism to Enforce Node Cooperation in Mobile Ad Hoc Networks [michiardi02:core]
    Pietro Michiardi, Refik Molva (Eurecom, France), Proceedings of the Sixth IFIP TC6/TC11 Joint Conference on Communications and Multimedia Security, September 2002

    This paper details a relatively generic reputation mechanism targeted mostly at packet forwarding and route discovery (i.e., mechanisms where observation of a node's behavior is almost instantaneous). A terminology issue is that the authors use the term ``cooperation enforcement'', although cooperation cannot actually be enforced since users belong to different administrative domains and, as such, are free not to cooperate; they merely have a strong incentive to cooperate. The authors distinguish between first-hand and second-hand reputation (which they call, respectively, ``subjective reputation'' and ``indirect reputation'', unlike lai03:incentives and buchegger03:rumor). They generalize observation of a node's behavior into a watchdog mechanism. CORE aims to address primarily selfishness attacks (i.e., non-cooperation). It does so by rating a node's behavior and considering non-cooperation a kind of misbehavior. It allows only positive ratings to be exchanged among nodes, in order to avoid slander (buchegger03:rumor proposes a better solution to this problem). The authors sometimes seem to assume that a single policy is used by all nodes, such as applying a single rating algorithm, making the same decisions as a function of the rating, etc. The paper proposes an application of CORE to route discovery and explains that behavior observation takes place either at request time or at reply time. The authors fail to explain how exactly node misbehavior is detected: by spying on neighboring communications? By trusting a node that says it is willing to cooperate?
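
    A toy sketch of the positive-ratings-only idea (the moving-average factor, the weight given to indirect ratings, and the method names are my own illustrative choices, not CORE's actual formulas):

```python
class CoreStyleReputation:
    """Toy sketch: first-hand ratings come from watchdog observations;
    indirect ratings are accepted only when positive, mirroring CORE's
    defense against slander."""

    def __init__(self, indirect_weight=0.5):
        self.first_hand = {}   # node -> rating in [-1, 1]
        self.indirect = {}     # node -> rating in (0, 1]
        self.w = indirect_weight

    def observe(self, node, cooperated):
        """Watchdog observation, folded in with an exponential moving average."""
        old = self.first_hand.get(node, 0.0)
        self.first_hand[node] = 0.8 * old + 0.2 * (1.0 if cooperated else -1.0)

    def receive_report(self, node, rating):
        """Only positive indirect ratings are kept; negative reports
        (potential slander) are silently dropped."""
        if rating > 0:
            self.indirect[node] = min(rating, 1.0)

    def rating(self, node):
        return self.first_hand.get(node, 0.0) + self.w * self.indirect.get(node, 0.0)

    def should_serve(self, node):
        return self.rating(node) >= 0.0
```
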

  • Verifying Self-Organized Storage with Bilinear Pairings [oualha07:verifying]
    Nouha Oualha, Suna Melek Önen, Yves Roudier (Eurecom, France), June 2007

    The authors propose a novel storage auditing protocol for cooperative stores. The protocol allows data owners to delegate verification to third-party verifiers, without disclosing to the verifiers the data being checked. Concretely, a data owner personalizes its data as it replicates it on some data holder, such that the personalized version of the data is bound to the holder it was sent to. At the same time, it assigns a number of verifiers to that data holder and provides them with the meta-data needed to perform the verification (challenges). The verification takes place between one of the assigned verifiers and the data holder; verifiers can provide the holder with credentials asserting that they were delegated verification rights by the data owner. The paper includes a comparison with other challenge-and-audit algorithms such as that of lillibridge03:p2p-backup. It is unclear how verification results could be used (e.g., in a reputation system), or how they could be transferred to the data owner and, most importantly, authenticated by the data owner.
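
    The pairing-based construction itself is beyond a short sketch, but the generic hash-based challenge/response auditing style it is compared against (à la lillibridge03:p2p-backup) can be illustrated. In this hypothetical sketch of mine, the owner precomputes nonce/digest pairs and hands them to a delegated verifier, who then never needs the data:

```python
import hashlib
import os

def make_challenges(data, n=4):
    """Owner side: precompute (nonce, expected-digest) pairs that can be
    handed to a delegated verifier."""
    challenges = []
    for _ in range(n):
        nonce = os.urandom(16)
        challenges.append((nonce, hashlib.sha256(nonce + data).hexdigest()))
    return challenges

def holder_response(stored_data, nonce):
    """Holder side: answering requires possession of the full data."""
    return hashlib.sha256(nonce + stored_data).hexdigest()

def audit(challenge, response):
    """Verifier side: compare against the owner's precomputed digest."""
    _, expected = challenge
    return response == expected
```

    Note the key limitation that motivates the paper's fancier scheme: each precomputed pair is usable only once, so the owner must provision a finite supply of challenges in advance.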

  • Incentives for Cooperation in Peer-to-Peer Networks [lai03:incentives]
    Kevin Lai, Michal Feldman, John Chuang, Ion Stoica (U.C. Berkeley, USA), Proceedings of the Workshop on Economics of Peer-to-Peer Systems, 2003

    This paper evaluates the efficiency, in terms of overall system "score", of different implementations of a reciprocative decision function, i.e., a function that decides whether or not to cooperate with a peer based on that peer's prior behavior (i.e., its level of collaboration). Experiments show that the use of private history records (as in GNUnet grothoff03:gnunet-eco, where each peer maintains records of its previous interactions with peers) does not scale to large populations and is inefficient under high turnover. The use of shared history (i.e., reputation) does scale to large populations, but it is much harder to implement and is subject to collusion. The authors suggest the use of subjective reputation as a means to reduce the effect of collusion (peer A weights peer B's opinions based on how much it trusts B). The paper also addresses the problem of whitewashers, i.e., the reuse of identities when those are zero-cost, aka ``Sybil attacks'' douceur02:sybil. The authors evaluate two solutions, one where each newcomer (or stranger) has to suffer at least one defection (thus lowering the overall collaboration level), and an adaptive one which uses an estimate of strangers' cooperativeness. The latter proves to be quite effective, showing that zero-cost identities do not preclude cooperation. See also this discussion on gnunet-developers.
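
    A minimal sketch of what a reciprocative decision function with an adaptive stranger policy could look like (my simplification; the thresholds and the history encoding are illustrative, not the paper's exact policy):

```python
def decide(history, peer, stranger_stats):
    """Toy reciprocative decision: cooperate with a known peer iff it has
    given at least as much as it has taken from us; for an unknown peer
    (a stranger), cooperate iff past strangers cooperated at least half
    the time, or unconditionally if no stranger has been seen yet."""
    if peer in history:
        given, taken = history[peer]   # what `peer` gave us / took from us
        return given >= taken
    cooperated, total = stranger_stats
    return total == 0 or cooperated / total >= 0.5
```
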

  • Peer-to-Peer Resource Trading in a Reliable Distributed System [cooper02:resource-trading]
    Brian F. Cooper, Hector Garcia-Molina, Revised Papers from the First International Workshop on Peer-to-Peer Systems, 2002

    Discusses resource trading for cooperative systems where contributors provide resources to build a fault-tolerant system.

  • Accountability [dingledine01:accountability]
    Roger Dingledine, Michael J. Freedman, David Molnar (Reputation Technologies, Inc.), Peer-to-Peer: Harnessing the Power of Disruptive Technologies, March 2001

    A very well-written, very extensive discussion of the accountability problem in decentralized systems.

  • The Small World Problem [milgram67:sw]
    Stanley Milgram, appeared in Psychology Today, 1967
  • Stimulating Cooperation in Self-Organizing Mobile Ad Hoc Networks [buttyan03:stimulating]
    Levente Buttyán, Jean-Pierre Hubaux (Laboratory for Computer Communications and Applications, Swiss Federal Institute of Technology, Lausanne, Switzerland), appeared in ACM/Kluwer Mobile Networks and Applications, October 2003

    This paper addresses cooperation of mobile ad hoc nodes as far as packet forwarding is concerned. They propose to have each node maintain a nuglet counter "which is decreased when the node wants to send a packet as originator, and increased when the node forwards a packet" and where the "value of the nuglet counter must remain positive", so that cooperation is enforced. However, this solution relies on the installation of a single point of trust, namely via the introduction of a "tamper-resistant" module, where the authors "assume that the security modules are manufactured by a limited number of trusted manufacturers" and that their behavior "cannot be modified".
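
    The nuglet counter rules quoted above are easy to model; a toy sketch (ignoring the tamper-resistant module and all protocol details, with a per-hop cost as my own simplification):

```python
class NugletCounter:
    """Toy model of the nuglet counter: forwarding a packet earns one
    nuglet, originating a packet costs nuglets, and the counter must
    never go negative -- so a node must forward before it can send."""

    def __init__(self, initial=0):
        self.nuglets = initial

    def forward_packet(self):
        self.nuglets += 1          # reward for cooperating

    def send_own_packet(self, hops):
        # Assumed cost model: one nuglet per hop the packet will traverse.
        if self.nuglets < hops:
            raise ValueError("not enough nuglets: must forward packets first")
        self.nuglets -= hops
```
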

  • Samsara: Honor Among Thieves in Peer-to-Peer Storage [cox03:samsara]
    Landon P. Cox, Brian D. Noble (Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA), Proceedings 19th ACM Symposium on Operating Systems Principles, October 2003

    Samsara is an extension to Pastiche addressing decentralized resource accounting by providing it with mechanisms that give peers cooperation and fairness incentives. This paper is a must-read. Samsara relies on making local resource allocations symmetric (which is different from having symmetric storage relationships throughout the whole network, since the latter imposes much stronger constraints on storage location) by means of claims: for each block a node stores for another node, it receives a storage claim (which can be thought of as a capability), a right to use as much storage on the node in question. Concretely, claims are data blocks that are stored on the storage-using node and can be neither guessed nor forged by it; if A stores 512 bytes on B, then A shall store a 512-byte claim computed by B (which in effect is equivalent to granting B the right to store 512 bytes on A). Claims can be computed on the fly by nodes (i.e., nodes do not have to store their own claims): the contents of a claim are a function of the location of the block it was issued for and a secret passphrase. In times of resource shortage, nodes may forward claims to other nodes, thus freeing some space locally; however, forwarding weakens data reliability. Each node periodically challenges the peers storing its belongings (actual data or claims) by asking them for a hash of a random set of the objects they claim to be storing. Instead of having a strict deadline for the grace period (as in Elnikety's paper), Samsara has a gradated one which is tolerant to transient failures: data blocks are deleted probabilistically, so that nodes suffering from transient failures may still be able to restore their data from live copies, while nodes chronically disconnected may turn out to have lost too many data blocks.
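
    The "computed on the fly" property of claims can be sketched with a keyed hash; this only illustrates the idea (using HMAC-SHA256 as the generator is my choice, not the paper's exact construction):

```python
import hashlib
import hmac

def claim_bytes(passphrase, location, size):
    """Sketch of a Samsara-style storage claim: an unguessable block,
    recomputable on the fly from the issuer's secret passphrase and the
    claim's location, so the issuer need not store it."""
    out = b""
    counter = 0
    while len(out) < size:
        msg = location.to_bytes(8, "big") + counter.to_bytes(8, "big")
        out += hmac.new(passphrase, msg, hashlib.sha256).digest()
        counter += 1
    return out[:size]

def challenge_response(blocks):
    """Holder's answer to a periodic challenge: a hash over the queried
    objects, asserting they are all still held."""
    h = hashlib.sha256()
    for b in blocks:
        h.update(b)
    return h.hexdigest()
```

    The issuer can verify a challenge response by regenerating the claims locally and hashing them the same way, which is exactly why it never needs to keep the claim blocks around.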

  • Portable Reputations with EgoSphere [bonawitz04:egosphere]
    Keith Bonawitz, Chaitra Chandrasekhar, Rui Viana (MIT Laboratory for Computer Science), 2004

    Student project!

  • A Game Theoretic Model of a Protocol for Data Possession Verification [oualha07:game]
    Nouha Oualha, Pietro Michiardi, Yves Roudier (Eurecom, France), Proceedings of the IEEE International Workshop on Trust, Security, and Privacy for Ubiquitous Computing (TSPUC), June 2007

    The paper provides a game-theoretic evaluation of a data possession verification protocol. The idea is to allow users of a cooperative storage/backup service to audit contributors, i.e., to check whether they honor their storage promises, in a way similar to what lillibridge03:p2p-backup, cox03:samsara and others did. The evaluation aims to (i) validate the existence of cooperation equilibria, and (ii) help tune the cooperation incentives (rewards/punishments). The limitation of such modeling is that it assumes that the payoff is directly observable by each party, which may not be the case, e.g., with local trust economic models such as grothoff03:gnunet-eco and with long-term service provision.

  • Allocating Resources in Storage Cooperatives with Pseudo-Currencies [turner04:allocating]
    David A. Turner, Daniel M. Havey, John Ewart (California State University, USA), Proceedings of the International Conference on Computer Science and its Applications, July 2003

    Some sort of a position paper proposing a pseudo-currency-based solution for trading storage and bandwidth resources in a storage cooperative. The basic idea is to allow users to create their own "money" (currency unit) and then have them exchange money in whatever currency they want to deal with. In the end, this is reminiscent of Samsara's storage claims cox03:samsara, which can be thought of as peer-specific currency units.

  • A Reputation-Based Approach for Choosing Reliable Resources in Peer-to-Peer Networks [damiani02:reliable]
    Ernesto Damiani, Sabrina De Capitani di Vimercati, Stefano Paraboschi, Proceedings of the 9th ACM Conference on Computer and Communications Security, November 2002
  • The Strength of Weak Ties [granovetter73:strength]
    Mark S. Granovetter, appeared in American Journal of Sociology, May 1973

    A study of the connections among groups of people. Weak ties here are similar to the ``shortcuts'' referred to in hubaux01:quest. The so-called ``Granovetter diagrams'' used at http://erights.org/ and in miller06:robust are named after the paper's author.

  • An Excess-Based Economic Model for Resource Allocation in Peer-to-Peer Networks [grothoff03:gnunet-eco]
    Christian Grothoff (Department of Computer Sciences, Purdue University, USA), appeared in Wirtschaftsinformatik, June 2003

    This enlightening paper describes the decentralized, trust-based economic model implemented in GNUnet. This model is excess-based: when resources are available (i.e., most of the time), it serves all resource allocation requests (this is how the system is bootstrapped); in times of resource shortage, however, decisions are made based upon the prior behavior of peers. Basically, as in real life, each node has an opinion on the other nodes it has been communicating with. This opinion is represented by a trust level (i.e., an integer). For each query that is sent to a peer, the sender specifies the "amount of trust" it is willing to "pay" for this query to be served (in other words, the query's priority). When serving the query, the server may decrease its trust level for the sender by the priority it gave. Once the client's query has been answered, the client may increase its trust in this server, which in effect is equivalent to granting storage rights to the server. Peers have no interest in having erroneous trust levels (i.e., either too low or too high). This model is simple yet efficient, provided the peers constituting the network do not change too often (which may not be the case in mobile networks). See also lai03:incentives and this thread.
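
    A toy model of the excess-based policy described above (the class and method names and the integer bookkeeping are illustrative, not GNUnet's actual code):

```python
class ExcessBasedPeer:
    """Sketch of an excess-based economy: with idle capacity, every
    request is served for free; under load, a request is served only if
    the sender has enough accumulated trust to pay the priority it
    offered, and that amount is then debited."""

    def __init__(self):
        self.trust = {}            # peer -> non-negative trust level

    def handle_query(self, peer, priority, under_load):
        if not under_load:
            return True            # excess capacity: serve for free
        if self.trust.get(peer, 0) >= priority:
            self.trust[peer] = self.trust.get(peer, 0) - priority
            return True            # charge the offered priority
        return False               # insufficient trust: drop the query

    def reward(self, peer, amount):
        """Called when `peer` answered one of our queries."""
        self.trust[peer] = self.trust.get(peer, 0) + amount
```
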

  • The Small-World Phenomenon: An Algorithmic Perspective [kleinberg00:sw-algo]
    Jon Kleinberg (Department of Computer Science, Cornell University, Ithaca, NY, USA), Proceedings of the 32nd ACM Symposium on Theory of Computing, 2000

    Very nice paper that tries to answer the question:

    Why should arbitrary pairs of strangers be able to find short chains of acquaintances that link them together?
    The author proposes a family of node distribution models and demonstrates that there exists a unique model within this family for which decentralized ``routing'' algorithms (relying only on node-local information) are ``effective''. The model is a k-dimensional grid where each node knows all of its immediate neighbors, called local contacts (e.g., 4 neighbors when k=2), plus a few long-range contacts that are distributed probabilistically as a function of their distance to the node.

    The author shows that the unique model within this family for which decentralized search algorithms are effective is the one where this probability is inversely proportional to the kth power of the distance to the node. In this situation, long-range contacts are homogeneously distributed over the various distances to the node.
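
    The resulting distribution is easy to sample; a sketch for nodes on a k-dimensional grid, using lattice (Manhattan) distance:

```python
import random

def long_range_contact(node, nodes, k=2, rng=random):
    """Pick one long-range contact for `node` with probability
    proportional to d(node, v) ** -k, the exponent for which greedy,
    node-local routing on a k-dimensional grid finds short paths."""
    def dist(a, b):                # lattice (Manhattan) distance
        return sum(abs(x - y) for x, y in zip(a, b))
    candidates = [v for v in nodes if v != node]
    weights = [dist(node, v) ** -k for v in candidates]
    return rng.choices(candidates, weights=weights)[0]
```

    Each node would draw its few long-range contacts this way; greedy routing then simply forwards a message to whichever contact is closest to the destination on the grid.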

  • The Sybil Attack [douceur02:sybil]
    John R. Douceur (Microsoft Research), Revised Papers from the First International Workshop on Peer-to-Peer Systems (IPTPS), 2002

    The paper that introduced the well-known ``Sybil attack'' against systems that use self-managed names. See also marti03:identity for a critical view on the issue.

  • The Effect of Rumor Spreading in Reputation Systems for Mobile Ad-hoc Networks [buchegger03:rumor]
    Sonja Buchegger, Jean-Yves Le Boudec (IBM Zurich Research Laboratory, EPFL-DSC, Switzerland), Proceedings of WiOpt `03: Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, March 2003

    A nice paper on building a reputation system essentially for packet forwarding in MANETs. The paper contains a good section about related work in MANETs. The authors propose a representation of belief based on a Bayesian approach: roughly, a belief is represented by the two parameters of a Beta distribution, which capture the level of confidence in every possible reputation value. They propose a simple decision-making process based on the reputation estimate of the peer at hand. The authors then expose the perils of merging other peers' opinions (referred to as second-hand opinions, where first-hand opinions are essentially the same as what lai03:incentives refers to as private history records) and propose a solution for merging Bayesian models that accounts for these threats. In particular, they propose to discard second-hand opinions that deviate significantly from one's first-hand opinions. Simulation results show that:

    • Using second- or even third-hand opinions is always more efficient (i.e., misbehaving nodes are detected faster by all nodes) than using only first-hand opinions, even in the presence of liars. This confirms the results in lai03:incentives.
    • Detection improvement thanks to the use of second-hand opinions (compared to using only first-hand opinions) increases as the network grows.
    • Use of second- or third-hand opinions reinforces one's original opinions because that opinion gets reflected in later reports from other nodes. Thus, the authors suggest that peers could exchange only ``deltas'', thereby avoiding this amplification effect. However, this does not seem to have a significant impact on efficiency.
    The simulation assumes that all non-misbehaving nodes run the same reputation algorithm. The ``Future Work'' section also makes this assumption when it describes possible system-wide attacks that result from the use of a single reputation algorithm.
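
    A minimal sketch of the Beta-based belief representation and the deviation test for second-hand reports (the threshold and merge weight are illustrative constants of mine, not the paper's):

```python
class BetaReputation:
    """Toy sketch of a Bayesian reputation: the belief about a peer is a
    Beta(alpha, beta) distribution over its probability of misbehaving.
    First-hand observations update the counts; a second-hand report is
    merged only if its estimate is close enough to our own."""

    def __init__(self):
        self.alpha, self.beta = 1.0, 1.0       # Beta(1, 1): uniform prior

    def observe(self, misbehaved):
        """First-hand observation: bump the matching count."""
        if misbehaved:
            self.alpha += 1
        else:
            self.beta += 1

    def estimate(self):
        """Expected misbehavior probability: E[Beta(a, b)] = a / (a + b)."""
        return self.alpha / (self.alpha + self.beta)

    def merge_report(self, rep_alpha, rep_beta, threshold=0.25, weight=0.1):
        """Deviation test: discard a second-hand (alpha, beta) report whose
        estimate strays too far from ours; otherwise fold it in lightly."""
        their = rep_alpha / (rep_alpha + rep_beta)
        if abs(their - self.estimate()) > threshold:
            return False                        # likely a liar: discard
        self.alpha += weight * rep_alpha
        self.beta += weight * rep_beta
        return True
```
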

  • Identity Crisis: Anonymity vs. Reputation in P2P Systems [marti03:identity]
    Sergio Marti, Hector Garcia-Molina (Stanford University), IEEE Conference on Peer-to-Peer Computing, September 2003

    The main contribution of this paper is an evaluation of the impact of the identity management method on a reputation system. The two identity management methods compared are self-managed identities (as in GNUnet and most other fully decentralized P2P systems, e.g., using self-signed certificates à la OpenPGP) and a login server-based technique (where a trusted authority provides participants with an identity). The authors compare those methods against purely random node selection, following two criteria:

    1. The method's efficiency, defined as the ability to reduce as much as possible the number of documents a user must examine before finding the correct document for their query.
    2. The method's effectiveness, defined as "ability to locate an answer, given that one exists somewhere on the network".
    Summary of the results:
    • With self-managed IDs, the initial reputation has to be zero in order to maximize efficiency, while an initial reputation greater than zero improves the efficiency of the reputation mechanism in the central ID case; for self-managed IDs, this is more or less a conclusion of the Sybil attack.
    • In the login scenario, choosing a non-zero selection threshold (i.e., the minimum level of trust required to select a document) significantly increases efficiency. In the self-managed case, efficiency is worse than in the login case and changing the threshold does not make any difference; this is a consequence of the Sybil attack: since users can change identities, "remembering which nodes have lied in the past is of no use".
    • As the percentage of malicious nodes increases, or as the number of documents in the "subversion set" increases, the login scenario performs better than the self-managed scenario, but the self-managed scenario is orders of magnitude more efficient than random selection.
    The mechanisms simulated here are actually not "true" reputation mechanisms since reputation information is not shared among participants. In Section 6.4, they outline an evaluation where nodes would actually exchange reputation information; this idea is similar to that used in lai03:incentives.

  • Security and Trust in Mobile Interactions: A Study of Users Perceptions and Reasoning [kindberg04:trust]
    Tim Kindberg, Abigail Sellen, Erik Geelhoed (Consumer Applications and Systems Laboratory HP Laboratories, Bristol, UK), MobiComp 2004, 2004
(made with skribilo)