Adversarial and Incentive-compatible Content Search
Searching for content is one of the primary operations in any file-sharing system. The problems that plague web search engines are also relevant in P2P, yet very few researchers are actively working on this topic, and we believe the problems are significantly harder to address in a decentralized environment.
We seek to address pollution, spamming, and related attacks on keyword-based content search in P2P. Moreover, doing so should not come at the cost of the scalability of search, the number of discovered matches, or the response time.
We seek to avoid pollution by incorporating trust into our system. Various trust mechanisms have been proposed. Credence, for example, places a reputation value on the objects themselves (the files to be shared) rather than on the peers. The stated reasoning is that objects are permanent, whereas the reputation of peers keeps changing with subsequent actions as time passes.
In our social network, where the primary emphasis is on social relations and interpersonal friendships (or the lack thereof), we do not believe that such an approach, i.e., a reputation system for objects rather than peers, will be effective. We seek to build relationships between peers; in fact, we want them to utilize existing real-world relationships. In an environment such as ours, the distinction between friend and foe, or to put it another way, between trustworthy and untrustworthy peers, is of paramount importance.
We feel the drawback of an object-oriented approach is that new objects have no reputation. A person- and friendship-oriented approach does not have this drawback, but is more complex to implement (see the Survey on Freeriding and Trust in P2P).
One important point regarding trust that is sometimes ignored when implementing 'virtual trust' is that trust is not transitive (e.g., see Supporting Trust in Virtual Communities). Just because X trusts Y and Y trusts Z does not necessarily mean that X would trust Z. The problem is usually one of context: X cannot be sure in which context Y trusts Z. Approaches have therefore been suggested to incorporate context or categories into trust systems (e.g., see Towards a Generic Model of Trust for Electronic Commerce). However, translating too rigidly or literally from the sociological and psychological domains is, we feel, impossible, and even if it were possible, it would lead to high dimensionality and complication. A set of categories or contexts would therefore need to be chosen such that trust between peers is transitive within each individual category and can be applied across categories where possible.
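The idea of restricting transitivity to individual categories can be sketched in code. The following is a minimal illustration, not any published algorithm; the class and category names, the multiplicative discounting along a chain, and the hop limit are all our own assumptions.

```python
# Illustrative sketch: context-scoped trust, where transitivity is applied
# only within a single category, never across categories.
# All names (Peer, CATEGORIES) and the discounting rule are assumptions.

CATEGORIES = {"file_sharing", "recommendations", "routing"}

class Peer:
    def __init__(self, name):
        self.name = name
        # direct trust: category -> {peer_name: value in [0, 1]}
        self.direct = {c: {} for c in CATEGORIES}

    def set_trust(self, other, category, value):
        self.direct[category][other.name] = value

    def trust(self, target, category, network, depth=2):
        """Trust in `target` for one category, propagated transitively
        along at most `depth` hops, but never across categories."""
        if target.name in self.direct[category]:
            return self.direct[category][target.name]
        if depth == 0:
            return 0.0
        # take the best path through a directly trusted intermediary,
        # discounting multiplicatively along the chain
        best = 0.0
        for name, t_xy in self.direct[category].items():
            mid = network[name]
            best = max(best, t_xy * mid.trust(target, category, network, depth - 1))
        return best
```

Under this sketch, if X trusts Y at 0.9 for file sharing and Y trusts Z at 0.8 in the same category, X derives a trust of 0.72 in Z for file sharing, but zero for routing, where no path exists.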
One suggested approach equates trust in a peer with the degree of friendship with that peer, the bandwidth exchanged over a period of time, and other factors. It should be noted, however, that this approach eliminates any basis for estimating malicious behavior on the part of a peer. Stemming from the context of social networking, it seems to assume that your friends obviously will not cheat you, and that you need not worry about malicious strangers because you assign them a low trust value anyway, as they are not your friends.
The degree of friendship alone does not answer the problem. Just as users on the Internet accept certificates without even reading their contents, it is quite possible that people start adding friends indiscriminately, or at least without a suitable degree of caution. If and when these peers engage in malicious behavior, such as file pollution, the above approach gives a peer no way to identify in advance whether to engage with them. There should instead be a reputation mechanism that maintains a history of peers' past behavior, so that a peer can rely on work already done by other peers when making judgments about future transactions. While we realize that trust is ultimately subjective, the system would be much stronger if a peer did not rely solely on individual knowledge and judgment but also utilized knowledge gathered by the 'society' in which it exists.
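One simple way to combine individual judgment with societal knowledge is a weighted blend, where community reports are discounted by how much the querying peer trusts each reporter. This is a hypothetical sketch of the idea, not a scheme from any of the cited papers; the function name, the `alpha` parameter, and the averaging rule are our own assumptions.

```python
# Hypothetical sketch: blend a peer's own experience with reports gathered
# from the 'society', weighting each report by trust in its reporter.
# The blending weight `alpha` and all names are illustrative assumptions.

def combined_reputation(own_experience, reports, alpha=0.5):
    """
    own_experience: our own score for the peer in [0, 1], or None if we
                    have never interacted with it.
    reports: list of (reporter_trust, reported_score) pairs from others.
    alpha: weight given to personal experience when we have any.
    """
    if reports:
        total_w = sum(w for w, _ in reports)
        social = (sum(w * s for w, s in reports) / total_w) if total_w else 0.0
    else:
        social = 0.0
    if own_experience is None:
        return social  # a newcomer relies entirely on society
    return alpha * own_experience + (1 - alpha) * social
```

For example, a peer with a glowing personal impression (0.9) of a 'friend' would still see its combined score pulled down if highly trusted reporters have observed that friend polluting files.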
The essence of this argument is that there must be a mechanism by which peers can determine in advance whether a particular peer can be trusted, and whether that peer deserves to be added (or kept) as a friend or dropped as a foe.
Furthermore, and more generally, there are major issues in the design of a reputation mechanism for decentralized environments that need to be examined (e.g., see Managing Trust in Peer-to-Peer Systems Using Reputation-Based Techniques by Ooi et al. for details).
The critical issue for a trust mechanism in a decentralized environment is integrity. How can it be known that particular bits of reputation information have not been manipulated by self-serving malicious peers? The research challenge is to design an algorithm that best approximates trust levels, and that remains robust against attacks in which a large number of peers run malicious versions specifically crafted to subvert it.
As Aberer et al. suggest in Managing Trust in a Peer-2-Peer Information System, in a reputation mechanism where complaints are lodged, a malicious peer could easily lodge a complaint against its victim in order to appear virtuous. How is it determined who is telling the truth? Is such a system even deployable? These and many other issues need to be thought through.
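A rough intuition behind complaint-based detection, loosely in the spirit of Aberer and Despotovic, is that a cheater tends to score high on both counts: it receives many complaints from its victims and it files many retaliatory complaints to look virtuous, whereas an honest victim receives few. The metric below is only an illustrative simplification of that intuition, not the paper's actual algorithm.

```python
# Illustrative simplification of complaint-based suspicion, loosely in the
# spirit of Aberer & Despotovic; the exact metric here is an assumption.

def suspicion(complaints_received, complaints_filed):
    """A peer that both receives and files many complaints looks
    suspicious; an honest victim has a near-zero received count, so
    its retaliatory-looking filings alone do not condemn it."""
    return complaints_received * complaints_filed
```

Note that this heuristic still cannot, by itself, decide who is telling the truth in a single he-said-she-said pair of complaints; it only becomes informative once enough independent complaints accumulate.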
Aameek Singh and Ling Liu present TrustMe: Anonymous Management of Trust Relationships in Decentralized P2P Systems, where the focus is on a secure message-level protocol in addition to a trust model, while maintaining the anonymity of requestor and provider peers. The anonymity, it is argued, is needed to prevent peers that request or provide trust values of other peers from being attacked.
Random peers are chosen by one or more bootstrap servers to maintain the trust values of particular peers, and a public key infrastructure is used to maintain the integrity of the trust messages. However, a question arises: if the network is unstructured, how can it be ensured that a trust query, i.e., a query a peer makes about the trust value of another peer, will definitely reach the peer(s) responsible for holding that particular peer's trust value?
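By contrast, structured overlays sidestep this routing question by deriving the responsible peers deterministically from the target's identifier, so every querier computes the same trust holders. The sketch below illustrates that alternative under our own assumptions (function names, SHA-1 hashing, replica count); TrustMe itself assigns holders via the bootstrap server instead.

```python
# Illustrative sketch: deterministic trust-holder assignment, as a
# structured overlay might do it. All names (trust_holders, NUM_HOLDERS)
# are assumptions; this is not the TrustMe protocol.

import hashlib

def trust_holders(peer_id: str, all_peers: list, num_holders: int = 3):
    """Map peer_id onto num_holders deterministic positions in the
    sorted peer list; any querier with the same membership view
    computes the same holders, so a trust query knows where to go."""
    ring = sorted(all_peers)  # canonical order, independent of input order
    holders = []
    for replica in range(num_holders):
        digest = hashlib.sha1(f"{peer_id}:{replica}".encode()).hexdigest()
        holders.append(ring[int(digest, 16) % len(ring)])
    return holders
```

The price of this determinism is that an attacker also knows who holds a victim's trust value, which is precisely the exposure TrustMe's anonymity is designed to avoid.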
Furthermore, in this system the bootstrap server acts as a "Certification Authority" which, upon the arrival of a new peer on the network, selects from its cache a set of peers to serve as that peer's trust holders. How the caching mechanism works, and how such a centralized certification authority can be integrated into a decentralized environment without entailing the risks associated with centralization, is not discussed.
-- C. Ziegler & G. Lausen, Analyzing Correlation between Trust and User Similarity in Online Communities
-- K. Walsh & E. Sirer, Thwarting P2P Pollution Using Object Reputation
-- H. Lee & K. Kim, An Adaptive Authentication Protocol Based on Reputation for Peer-to-Peer Systems
-- A.A. Rahman & S. Hailes, Supporting Trust in Virtual Communities
-- B. Ooi et al., Managing Trust in Peer-to-Peer Systems Using Reputation-Based Techniques
-- K. Aberer & Z. Despotovic, Managing Trust in a Peer-2-Peer Information System
-- A. Singh & L. Liu, TrustMe: Anonymous Management of Trust Relationships in Decentralized P2P Systems
-- N. Christin et al., Content Availability, Pollution and Poisoning in File Sharing Peer-to-Peer Networks
-- T. Repantis & V. Kalogeraki, Decentralized Trust Management for Ad-Hoc Peer-to-Peer Networks
-- S.K. Lam & J. Riedl, Shilling Recommender Systems for Fun and Profit