
Anonymity and Trust


Trust in Online Communities

One of the difficult things about online communities is figuring out who to trust. What that means exactly varies depending on the community. For most communities it’s some variant of “I trust you not to waste my time”. On a discussion site like Slashdot or Reddit, trusting a user means you’re willing to spend a few seconds reading their comments or a few minutes writing a reply. In online games, it may mean a more significant commitment, like trusting that someone isn’t going to, say, leave in the middle of a several-hour quest because it’s past their bedtime or do something tactically unwise at a critical moment.  In an e-commerce site like Amazon.com, trusting another user means spending money based on their recommendation.

For some communities, such as open source projects, trust can have significant real world consequences that affect many people. Even there, though, trust isn’t usually as dramatic as trusting someone with your credit card number or your nuclear missile launch codes; it’s more about whether the other person is helping the goals of the community and not poisoning the well. Just one bad apple can have a surprisingly big impact.

If you know a person in real life, then the decision is of the same sort as all the other trust decisions in your life, and you can use your real-world relationship to bootstrap your online relationship.  However, outside of communities that are explicitly structured around real-life relationships (e.g. Facebook, LinkedIn, Google+), most users will frequently be interacting with people they don't already know from some other context.

In a community where users are pseudonymous, rather than strictly anonymous (in other words, users have long-lived identities even if we don't know who they are in the real world), the process of establishing trust is much the same as in real life. Maybe you trust someone you barely know in small matters, and then in greater matters once they’ve proven themselves in the small ones. Supposing that someone’s help has saved you N minutes of time across all your previous interactions, and you only engage in a new interaction when, in the worst case, it would waste less than N minutes of your time, then you’ll always come out net ahead. We aren’t usually that explicit about the cost-benefit analysis of our interactions, and wasted time isn’t the only potential downside of an interaction, but there’s usually at least some score-keeping element to our interactions with other users.

The problem with this whole process of users earning each other’s trust is that it’s way too slow. All but the smallest communities are too large for a user to have such an ongoing interaction with everyone. More often, you’ll spend much of your time interacting with people who you’ve never met before and will never see again.

(Another way to put it is that participating in an online community is like playing iterated prisoner's dilemma against an ever-changing stream of opponents.)

The situation isn’t as hopeless as it appears. Even if you haven’t directly interacted with some user, odds are good that someone has. What is needed is some convenient mechanism to use the prior experiences of other users to form an initial impression of a user you haven’t interacted with yet.

Introducing Reputation Systems

Many online communities have some sort of built-in mechanism to do just this (broadly referred to as a reputation system), but these tend to be rather ad hoc, and it turns out they're hard to get right.

For every complex problem there is an answer that is clear, simple, and wrong. (H. L. Mencken)

The problems that arise tend to be of two kinds.  Either the reputation system doesn't provide the right kind of information users need to make informed decisions, or it is so easy to manipulate that users have no confidence in what it says.  Let's look at some examples:

Friend count

One naive approach is the friend count.  A user’s reputation is the number of people that have given some kind of positive endorsement of that user.  (A more sophisticated variant is to also allow users to give each other negative ratings, but we'll set that topic aside and come back to it later.)

[Figure: count.png]

Unfortunately, friend counts cease to be meaningful as soon as someone creates a thousand dummy accounts in order to vote himself king of the Internet. Some communities protect themselves against that by making it harder to create new accounts or by only allowing users to rate other users with whom they’ve entered into some real-world transaction, but those approaches only work in some contexts.  Otherwise, any system that treats all users as equivalent can be expected to fail in the same way. We need some way to discount the ratings of those dummy accounts.

[Figure: count-sybil.png]
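
To make the failure mode concrete, here is a minimal sketch of a friend-count system (the account names and the loop of a thousand dummy accounts are purely illustrative, not taken from any particular site):

    from collections import defaultdict

    endorsements = defaultdict(set)  # target user -> set of users who endorsed them

    def endorse(endorser, target):
        endorsements[target].add(endorser)

    def friend_count(user):
        return len(endorsements[user])

    # A couple of legitimate endorsements...
    endorse("alice", "bob")
    endorse("carol", "bob")

    # ...and a Sybil attack: one attacker registers a thousand dummy accounts.
    for i in range(1000):
        endorse("sybil_%d" % i, "mallory")

    print(friend_count("bob"))      # 2
    print(friend_count("mallory"))  # 1000 -- "king of the Internet"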

Web of trust

A web of trust is one way to divide a population of users into ones you trust and ones you don't. In a web of trust, I explicitly trust myself, and then anyone that I like, or that is liked by someone I like, etc… out to a certain number of hops. (In a more sophisticated variant, trust is not binary, but rather decreases with the number of hops outward.) Everyone else gets a trust rating of zero.

[Figure: example-weboftrust.png]

Notice how reputation is now subjective. Instead of every user having a globally-agreed-on reputation value, reputation now depends on who you ask.  Each user has their own web of trust.
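
A minimal sketch of that idea, assuming the trust graph is just a dict mapping each user to the set of users they like (the names and the two-hop limit are illustrative, not anything prescribed above):

    from collections import deque

    def web_of_trust(graph, me, max_hops=2):
        """Return the set of users `me` trusts: `me`, anyone `me` likes,
        anyone they like, and so on, out to `max_hops` hops."""
        trusted = {me}
        frontier = deque([(me, 0)])
        while frontier:
            user, hops = frontier.popleft()
            if hops == max_hops:
                continue
            for liked in graph.get(user, set()):
                if liked not in trusted:
                    trusted.add(liked)
                    frontier.append((liked, hops + 1))
        return trusted

    graph = {
        "alice": {"bob"},
        "bob": {"carol"},
        "carol": {"dave"},
    }
    print(web_of_trust(graph, "alice"))  # {'alice', 'bob', 'carol'}; dave is 3 hops out

Everyone outside the returned set gets a trust rating of zero, and every user gets a different set, which is exactly the subjectivity described above.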

A nice property of a web of trust is that it’s no longer enough for an attacker to create dummy accounts to stuff ballots; now he/she must fool someone I trust (whether directly or indirectly) into trusting that dummy account -- a more difficult job.

Unfortunately, a single bad user in a web of trust can undermine its integrity. They could, for instance, establish a trust relationship with every single user in the system, thereby causing every user in the system to be trusted and rendering the distinction between trusted and untrusted users meaningless. We don’t want the consequences of trusting the wrong person to be quite so drastic.

PageRank

PageRank is a reputation system that weights the influence of entities in a sensible way. Used famously by Google as a way of rating documents that link to each other (i.e. web pages), it can be applied just as well to the network of relationships in a social network.

There are several ways to compute PageRank, but the easiest to explain is the random surfer. Given some directed graph, the PageRank of a given node is the probability that a random traversal of the graph will be visiting that node at some particular time. There are some rules that take into account dead ends and random jumps, which cause the traversal to return to some chosen set of explicitly trusted nodes. To compute PageRank, we just perform many random traversals of the graph. The PageRank of a node is then the number of visits to that node divided by the total number of node visits.

In PageRank, a given link contributes to the target’s reputation in proportion to the source node’s PageRank divided by that node’s number of outlinks.

In other words:

  • A link from a node is worth more in proportion to that node’s PageRank.
  • A link from a node is worth less in proportion to the number of outgoing links that node has.
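
Written as a formula, the standard uniform-teleport version of this is (the damping factor d isn't mentioned above, so treat the notation here as general background rather than any particular implementation's definition):

    PR(v) = \frac{1 - d}{N} + d \sum_{u \in \mathrm{In}(v)} \frac{PR(u)}{\mathrm{outdeg}(u)}

where In(v) is the set of nodes linking to v, outdeg(u) is the number of u's outgoing links, N is the total number of nodes, and d is the probability of following a link rather than jumping. Replacing the (1 - d)/N term with a distribution concentrated on a chosen set of trusted nodes gives the subjective variant described next.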

If the set of trusted nodes is the same as the set of all nodes, then the PageRank calculation will produce a single "neutral" score for each node.  However, one can also use a subset or a single node as the trusted set to produce a subjective score, where only nodes that are connected in some way to the trusted set will have a score greater than zero.
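
Here is a minimal random-surfer sketch with an explicit trusted set (the graph, the 0.85 follow probability, and the step count are all illustrative assumptions, not parameters taken from anywhere above):

    import random
    from collections import Counter

    def random_surfer_pagerank(graph, trusted, steps=100000, follow_prob=0.85):
        """Estimate each node's PageRank as the fraction of steps a random walk
        spends there.  The walk jumps back to a random trusted node whenever it
        hits a dead end, and with probability (1 - follow_prob) on any step."""
        visits = Counter()
        trusted = list(trusted)
        node = random.choice(trusted)
        for _ in range(steps):
            visits[node] += 1
            outlinks = graph.get(node, [])
            if not outlinks or random.random() > follow_prob:
                node = random.choice(trusted)     # random jump back to the trusted set
            else:
                node = random.choice(outlinks)    # follow a random outgoing link
        return {n: count / steps for n, count in visits.items()}

    graph = {
        "alice": ["bob", "carol"],
        "bob": ["carol"],
        "carol": ["alice"],
        "mallory": ["sock1", "sock2"],  # a cluster nobody trusted links to
    }
    # With only "alice" in the trusted set, mallory's cluster is never visited
    # at all, so its nodes end up with a score of zero.
    print(random_surfer_pagerank(graph, trusted={"alice"}))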

To return to the example I gave before: suppose some user I trust (directly or indirectly) creates a trust link to every single user. The value of each individual link will then be very small. They might be able to do some mischief, but at least there is an upper bound on the amount of mischief, and that bound is proportional to how well connected they are to me.

Negative Reputation

What PageRank doesn’t do is account for negative links. In online communities, we may care a great deal about encouraging good behavior, but we also care about discouraging bad behavior. PageRank doesn’t provide a mechanism to do that. Under PageRank, a person could have a high rating because they are universally well liked, or they might have a high rating because they are very popular with some group and despised by everyone else; we can’t tell the difference.

We wish that PageRank were a predictor of quality, but it's actually a measure of popularity.  (Not that we want to single out PageRank -- this is true of any reputation system that only deals in positive ratings and not negative ratings.)

How should we account for negative reputation?  The intuition behind PageRank is the idea that if A has a favorable rating of B, and B has a favorable rating of C, then A would probably also have a favorable rating of C.  Trust is a transitive relation, and that's what allows PageRank to rate entities that are many hops away from any explicitly trusted entity.

[Figure: transitive-trust.png]

We can piggyback on this transitive chain of positive ratings and extend PageRank thus: if A has a favorable rating of B and B has a negative rating of C, then A would probably also have a negative rating of C.  (Such an approach was described by Guha et al. in Propagation of Trust and Distrust, from the WWW2004 conference.)  This is something we do in real life -- if our friends don't like someone, we will develop a bias against that person as well.  Sometimes this negative bias is undeserved; if one's friends consistently dislike the wrong people, it may be time to find better friends.

(The phrase “dislike the wrong people” is an uncomfortable one because it implies that there are right people to dislike. Perhaps the word has too many connotations. It may sound unduly harsh when applied to people we may know in real life, but I think most of us wouldn’t have any qualms about indicating a preference not to interact with, say, a spammer in an online forum.)

That gives us multi-step propagation of positive reputation, and a single step of propagation for negative reputation.  That's pretty good, but can we do better?  Is there a way to do a multi-step propagation of negative reputation?

The most obvious approach is wrong.  If A dislikes B, and B dislikes C, there isn’t much we can say about whether A would dislike C.  A really shouldn’t draw any conclusions based on the opinions of the distrusted entity B.

Here's one that does work.  If A dislikes B and C likes B, then we can predict that A wouldn't like C (If you like someone I don't like, then I don't like you).  This is also something that people do instinctively.  If someone respects someone you don't respect, then you tend to respect that person a little less.  (This phenomenon should be familiar to anyone who's spent much time on Facebook during election season.)

The nice thing about this reverse-propagation of negative reputation through positive backlinks is that it punishes people who are flippant about who they like.  If you use reputation system ratings as a way of giving a virtual high-five to everyone you see (possibly in the hope that they will do the same), then you'll find that your own reputation suffers accordingly.
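
Here is a toy sketch of both rules together (one forward step of distrust through people I trust, plus the reverse step through positive backlinks); it is only an illustration of the two inference rules, not the Guha et al. method or the Pariah algorithm, and all names are invented:

    def predicted_opinions(me, likes, dislikes):
        """One round of propagation for `me`.  `likes` and `dislikes` map each
        user to the set of users they explicitly like or dislike."""
        trusted = set(likes.get(me, set()))
        distrusted = set(dislikes.get(me, set()))
        # Forward rule: if I trust B and B dislikes C, predict that I dislike C.
        for b in trusted:
            distrusted |= dislikes.get(b, set())
        # Reverse rule: if I dislike B and C likes B, predict that I dislike C too.
        snapshot = set(distrusted)
        for c, liked in likes.items():
            if c != me and liked & snapshot:
                distrusted.add(c)
        return trusted, distrusted

    likes = {"alice": {"bob"}, "dave": {"spammer"}}
    dislikes = {"bob": {"spammer"}}
    print(predicted_opinions("alice", likes, dislikes))
    # ({'bob'}, {'spammer', 'dave'}): bob's dislike of the spammer propagates to
    # alice, and dave is marked down for vouching for the spammer.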

As far as I know, Pariah is the only reputation system with this particular feature, which is why I wrote it in the first place.  In its original form, it was basically a variation on PageRank with a couple of extensions to deal sensibly with negative links.  (You can read about implementation specifics here.)  Though our current algorithm bears little resemblance to PageRank, from an end user's point of view it behaves similarly enough to our earlier algorithm that the differences aren't worth going into here.

For the sake of completeness, it's worth mentioning another way to propagate reputation in our previous example: we could predict that C wouldn't like A (if you don't like my friends, then I don't like you).  I'm a little reluctant to build that into Pariah, because I'd rather not punish users for giving negative ratings.  You should always be free to say, "the Emperor has no clothes" without reprisal, in my opinion (though I remain open to the possibility of being convinced otherwise).

Dimensions

We’ve talked about good and bad reputation, but so far we've glossed over an important point.  How does positive reputation relate to negative reputation, and how do we present them to the user in a coherent, meaningful way?

One option is to treat reputation as a signed number.  Certainly, our language of "positive" and "negative" suggests just that, but I think it's a mistake (and a common one).  One problem that arises is how to aggregate multiple reputation values.  If they're all positive or all negative, you can just add them together.  What about some positive and some negative?  Should equal amounts of positive and negative reputation neutralize each other, leaving a reputation of zero?  Should negative reputation be weighted more heavily than positive?

We can sidestep many of these sorts of questions if we think of reputation as inherently multi-dimensional instead of trying to cram it all into a single number.  This not only avoids a host of design problems, it also provides a more nuanced view of reputation to the user.  For instance, it makes it possible to clearly distinguish a user with a huge amount of positive reputation and an equal amount of negative reputation from a user with zero reputation of either kind.
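
For example (the names and numbers here are invented), keeping the two dimensions separate preserves exactly the distinction a single signed score throws away:

    from dataclasses import dataclass

    @dataclass
    class Reputation:
        trust: float = 0.0
        distrust: float = 0.0

    controversial = Reputation(trust=50.0, distrust=50.0)  # loved and hated
    unknown = Reputation()                                 # nobody has an opinion

    # Collapsed into one signed number, the two users look identical...
    print(controversial.trust - controversial.distrust)  # 0.0
    print(unknown.trust - unknown.distrust)              # 0.0
    # ...but as two-dimensional values they are clearly very different.
    print(controversial)  # Reputation(trust=50.0, distrust=50.0)
    print(unknown)        # Reputation(trust=0.0, distrust=0.0)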

(This notion of reputation as a two-dimensional value is another idea that was introduced to me by the paper Propagation of Trust and Distrust by Guha et al.)

In addition to trust and distrust, I believe there is room for a third dimension. In the previous section, I referred to punishing a user for liking disreputable users (if you like someone I dislike, then I dislike you). We want to discourage users from being flippant about who they like, but this isn’t bad behavior in the same way that, say, trolling or spamming or harassment are bad behavior. On the other hand, we don't want to ignore it entirely.

My solution is to have a separate kind of negative reputation which I call "gullibility". Unlike trust and distrust, it isn’t determined by other users directly rating a user as gullible; rather, a user accrues gullibility by highly rating other users who themselves have either a high distrust or a high gullibility score.
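
A rough sketch of that accrual rule (the scores, the simple summation, and the names are invented for illustration; this is not Pariah's actual formula):

    def gullibility(user, likes, scores):
        """A user accrues gullibility for each user they rate highly who has a
        high distrust or gullibility score.  `scores` maps users to dicts with
        'distrust' and 'gullibility' values in [0, 1]."""
        total = 0.0
        for liked in likes.get(user, set()):
            s = scores.get(liked, {})
            total += max(s.get("distrust", 0.0), s.get("gullibility", 0.0))
        return total

    scores = {"spammer": {"distrust": 0.9, "gullibility": 0.0}}
    likes = {"dave": {"spammer"}, "alice": {"bob"}}
    print(gullibility("dave", likes, scores))   # 0.9 -- dave vouched for a spammer
    print(gullibility("alice", likes, scores))  # 0.0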

If I encounter a user with a high gullibility score, it isn’t a hint that I ought to run away or ignore that person; rather, it’s an indication that if I were to identify that person as a friend, I would be indirectly improving the reputation of users I don’t like, as well as harming my own reputation, and I should weigh the matter carefully.

In Pariah, we use a three-dimensional reputation value, with the dimensions of trust, distrust, and gullibility.  This may not be the final word on the many-faceted concept of reputation, but we think it strikes a reasonable balance: enough information to be useful, but not so much that it confuses users.

Where do we fit in?

So, to sum up: Designing a reputation system that is both useful and trustworthy isn't particularly difficult, but it requires some careful thought to avoid common mistakes.  (Producing an algorithm that's computationally efficient when dealing with very large social networks is trickier, which is why we think most customers are better off using our system than writing their own in-house.)

If it isn't so hard to do, why isn't everyone building and deploying useful and attack resistant reputation systems in communities that clearly need them?  Why, in 2013, do people still say, "I quit playing <insert the name of absolutely any online game here> because I can't stand the other players"?

This was no less baffling in 2003 or so, when I first asked myself the question.  There's an implied opportunity in questions of this sort, which is why, after dragging my heels for a decade, I decided to leave my day job, find a co-founder, and start Metamocracy.

Our product, Pariah, is a reputation system designed to be useful and trustworthy, and to compute reputation values quickly and efficiently on-demand.

Pariah computes a user A's reputation of user B by running a clever graph algorithm that computes the probability of a random walk reaching user B starting at user A.  This is similar to PageRank, but the rules of the random walk and the method of computing probabilities are very different.  In addition, our random walk has special rules to handle the case of traversing a negative link which allow us to model negative reputation and gullibility in the way we describe above.

You can read more about Pariah here, or just skip straight to the User Guide.

For more information on the design of reputation systems in general, we have a collection of design patterns and other assorted notes that describe many of the issues discussed above in greater detail.

And, of course, if you're interested in using Pariah in your own online game or community, please contact us at info@metamocracy.com.