There’s a truism that bothers many (though not economists): if a good or service has value to some and can be produced by someone else at a cost below that value, there will be a market. This disturbs people because it is as true for areas of dubious morality (sexual transactions) and outright immorality (human trafficking and slavery) as it is for lawn mowing and automobiles.
Likewise for online activities, as I’ve documented many times here. You can buy Twitter followers, Yelp reviews, likes on Facebook, votes on Reddit. And, of course, Wikipedia, where you can buy pages or edits, or even (shades of The Sopranos) “protection”.
Here is an article that reports at some length on large scale, commercialized Wikipedia editing and page management services. Surprised? Just another PR service, like social media management services provided by every advertising / marketing / image management service today.
Well, not quite everything is for sale. But every measure on which decisions of value depend (e.g., book purchases, dating opportunities, or tenure) can and will be manipulated.
And if the measure depends on user-contributed content distributed on an open platform, the manipulation often will be easy and low cost, and thus we should expect to see it happen a lot. This is a big problem for “big data” applications.
This point has been the theme of many posts I’ve made here. Today, a new example: citations of scholarly work. One of the standard, often highly-valued (as in, makes a real difference to tenure decisions, salary increases and outside job offers) measures of the impact of a scholar’s work is how often it is cited in the published work of other scholars. Thomson ISI has been providing citation indices for many years. ISI is not so easy to manipulate because — though it depends on user-contributed content (articles by one scholar that cite the work of another) — that content is distributed on closed platforms (ISI only indexes citations from a set of published journals whose editorial boards protect their reputation and brand by screening what they publish).
But over the past several years, scholars have increasingly relied on Google Scholar (and sometimes Microsoft Academic) to count citations. Google Scholar indexes citations from pretty much anything that appears to be a scholarly article and is reachable by the Google spiders crawling the open web. So, for example, it includes citations in self-published articles, or e-prints of articles published elsewhere. Thus, Google Scholar citation counts depend on user-contributed content distributed on an open platform (the open web).
And, lo and behold, it’s relatively easy to manipulate such citation counts, as demonstrated by a recent scholarly paper that did so: Delgado López-Cózar, Emilio; Robinson-García, Nicolás; Torres-Salinas, Daniel (2012). “Manipulating Google Scholar Citations and Google Scholar Metrics: simple, easy and tempting.” EC3 Working Papers 6, 29 May 2012. Available at http://arxiv.org/abs/1212.0638v2.
Their method was simple: they created some fake papers that cited other papers, and published the fake papers on the Web. Google’s spider dutifully found them and increased the citation counts for the real papers that these fake papers “cited”.
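The vulnerability is easy to see if you sketch what a naive open-web citation counter does. This is a hypothetical illustration, not Google Scholar’s actual pipeline: the counter simply tallies every crawled document that references a paper, so injecting fake “papers” into the crawl inflates the count.

```python
# Illustrative sketch (assumed, not Google's real system): a naive
# citation counter over an open corpus tallies each crawled document
# that cites a paper, with no check on the citing document's legitimacy.
from collections import Counter

def count_citations(corpus):
    """corpus: list of crawled documents, each a list of cited paper titles."""
    counts = Counter()
    for doc in corpus:
        for cited in set(doc):  # each citing document counts once per paper
            counts[cited] += 1
    return counts

# A legitimate corpus: two real articles cite "Real Paper A".
crawled = [
    ["Real Paper A", "Real Paper B"],
    ["Real Paper A"],
]
print(count_citations(crawled)["Real Paper A"])  # 2

# An attacker uploads six fake papers that all cite "Real Paper A";
# the spider cannot tell them from real ones, so the count jumps.
crawled += [["Real Paper A"] for _ in range(6)]
print(count_citations(crawled)["Real Paper A"])  # 8
```

The point of the sketch is that the cost of the attack is just the cost of publishing text to the open web, which is nearly zero.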
The lesson is simple: for every measure that depends on user-contributed content on an open platform, if valuable decisions depend on it, we should assume that it is vulnerable to manipulation. This is a sad and ugly fact about a lot of new opportunities for measurement (“big data”), and one that we must start to address. The economics are unavoidable: the cost of manipulation is low, so if there is much value to doing so, it will be manipulated. We have to think about ways to increase the cost of manipulating, if we don’t want to lose the value of the data.
Here is a recent article about high school students manipulating their Facebook presence to fool college admissions officers. Not terribly surprising: the content is (largely) created and controlled by the target of the background searches (by admissions, prospective employers, prospective dating partners etc) so it’s easy to manipulate. We’ve been seeing this sort of manipulation since the early days of user-contributed content.
People mining user-contributed content should be giving careful thought to this. Social scientists like it when they can observe behavior, because it often reveals something more authentic than simply asking someone a question (about what they like, or what they would have done in a hypothetical situation, etc). Economists, for example, are thrilled when they get to observe “revealed preference”: the choices people make when faced with a real resource allocation problem. It could be that I purchased A instead of B to fool an observer, but doing so is costly (I bought and paid for a product I didn’t want), and as long as those costs are sufficiently salient, it is more likely that we are observing preferences untainted by manipulation.
There are costs to manipulating user-contributed content, like Facebook profiles, of course: some amount of time, at the least, and probably some reduced value from the service (for example, students say that during college application season they hide their “regular” Facebook profile, and create a dummy in which they talk about all of the community service they are doing, and how they love bunnies and want to solve world hunger: all fine, but they are giving up the other uses of Facebook that they normally prefer). But costs of manipulating user-contributed content often may be low, and thus we shouldn’t be surprised if there is substantial manipulation in the data, especially if the users have reason to think they are being observed in a way that will affect an outcome they care about (like college admissions).
Put another way, the way people portray themselves online is behavior and so reveals something, but it may not reveal what the data miner thinks it does.
Actually, I didn’t see this one coming, but I wish I had: scholarly authors reviewing their own papers by suggesting themselves (via “sybils”) as their own article reviewers (referees)! A lovely case of online information manipulation in response to (fairly intense) incentives to increase one’s publication count.
How could an editor be dumb enough to send an article back to its author for review? The trick is simple (though it shouldn’t be that hard for editors to see through, and apparently checking is becoming more commonplace: so what will be the next clever idea as this particular arms race escalates?). Submit to a journal that asks authors to suggest potential reviewers. (Many journals do this — one hopes the editor selects some reviewers from an independent list, not just from the author’s suggestions!) Then submit a name and university affiliation with a false email address, one pointing to a mailbox you control. Bingo: if the editor selects that reviewer, you get to write the review.
To reduce your chances of getting caught, you can suggest a real and appropriate reviewer, but provide an innocuous-looking false email address (some variant on his or her name @gmail, for example).
Via The Chronicle of Higher Education.
Yelp, the user-contributed local business review site, has a well-known set of manipulation incentive problems. First, businesses might want to write overly positive reviews of themselves (under pseudonyms). Second, they might want to write negative reviews of their competitors. Third, they might want to pay Yelp to remove negative reviews of themselves. This last has received a lot of attention, including a class-action suit against Yelp alleging that some of its salespeople extort businesses into paying to remove unfavorable reviews.
Yelp has always filtered reviews, trying to remove those it suspects are biased, whether too positive or too negative. But of course it makes both Type I and Type II errors, and some of the Type II errors (filtering out valid reviews) may be at the root of some of the extortion claims (or not).
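The trade-off is inherent to any filter built on an imperfect signal. Here is a hypothetical sketch (the scoring and threshold are invented for illustration, not Yelp’s actual method) showing how a threshold rule on a “suspicion score” produces both kinds of error: fakes that slip through, and genuine reviews wrongly removed.

```python
# Hypothetical sketch (assumed scores, not Yelp's real filter): any
# threshold on a noisy suspicion score makes both kinds of error.
def filter_reviews(reviews, threshold):
    """Keep reviews whose suspicion score is below the threshold."""
    kept = [r for r in reviews if r["suspicion"] < threshold]
    filtered = [r for r in reviews if r["suspicion"] >= threshold]
    return kept, filtered

reviews = [
    {"text": "Great tacos!",          "suspicion": 0.2, "fake": False},
    {"text": "Best. Place. Ever!!!",  "suspicion": 0.7, "fake": False},  # enthusiastic but real
    {"text": "Amazing, five stars",   "suspicion": 0.9, "fake": True},
    {"text": "Terrible, avoid",       "suspicion": 0.4, "fake": True},   # subtle fake slips through
]

kept, filtered = filter_reviews(reviews, threshold=0.6)
type1 = [r for r in kept if r["fake"]]          # fakes that got through
type2 = [r for r in filtered if not r["fake"]]  # valid reviews wrongly removed
print(len(type1), len(type2))  # 1 1
```

Lowering the threshold shrinks one error class but grows the other, which is why some valid reviews will always end up on the filtered list, whatever Yelp does.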
Yelp has now made a rather simple change that I suspect will be quite favorable: it is making all filtered reviews visible (on a separate page; see http://mashable.com/2010/04/06/yelp-extortion-claims/). This transparency, it hopes, will let users see that it is even-handed in its filtering, and that its errors are not themselves biased (or influenced).
Embracing transparency is a strategy that seems to work more often than not in this Web 2.0 age of the Internet. I think it will here. Most folks will never bother to look at the filtered-out reviews, and thus will rely on the very reviews Yelp thinks are most reliable. Those who do look, if Yelp is indeed being even-handed, will probably find the filtering interesting, but will ignore the filtered reviews when choosing which business to frequent. The main risk to Yelp is that imitators will be better able to reverse-engineer its filtering formulae.
(Scott Adams, http://dilbert.com/strips/, 8 May 2009)
I am here engaging in what I usually describe as the “manipulation” subspecies of “pollution”. I am doing this to participate in the Amazon Rank project to Google-bomb Amazon.
Apparently on 12 April 2009, Amazon removed books it deemed to have “adult content” from its sales rankings. Because of the way its systems work, this now means those books are not found in standard searches. Example: use the main search box to query “Lady Chatterley’s Lover”. I just did this and did not get a hit on the book by D. H. Lawrence until #8, and that was an edition available through Amazon’s used-bookseller partners, not a new copy available from Amazon itself. Search on D. H. Lawrence’s name, and his most popular book (LCL) does not come up in the first 16 entries (an audio CD edition pops up at #15).
So an angry blogger started a Google-bombing campaign to make a search on Amazon Rank turn up a critical definition (follow the link). As a political matter, I support this particular manipulation.