Category Archives: ICD

Everything (of value) is for sale

There’s a truism that bothers many (except economists): if a good or service has value to some, and someone else can produce it at a cost below that value, there will be a market. This is disturbing to many because it holds as true for areas of dubious morality (sexual transactions) and outright immorality (human trafficking and slavery) as it does for lawn mowing and automobiles.
Likewise for online activities, as I’ve documented many times here. You can buy Twitter followers, Yelp reviews, likes on Facebook, and votes on Reddit. And, of course, there’s Wikipedia, where you can buy pages or edits, or even (shades of The Sopranos) “protection”.
Here is an article that reports at some length on large-scale, commercialized Wikipedia editing and page-management services. Surprised? It’s just another PR service, like the social media management provided by every advertising / marketing / image management firm today.

New article characterizing crowdsourcing on the web

There is an article in the new issue of CACM on “Crowdsourcing systems on the World-Wide Web”, by Anhai Doan, Raghu Ramakrishnan, and Alon Y. Halevy. In it they offer a definition of crowdsourcing systems, characterize them along nine dimensions, and discuss some of the dimensions as challenges.
It’s a useful review article, with many examples and a good bibliography. The characterization in nine dimensions is clear and I think mostly useful.
I’m particularly pleased to see that they have given prominent attention to the incentive-centered design issues on which I (and this blog) have focused for years. Indeed, they define crowdsourcing systems in terms of four incentive problems that must be solved (distinguishing them from, say, crowd-management systems that address only some of the questions). They define a crowdsourcing system as “A system that enlists humans to help solve a problem defined by the system owners, if in doing so it addresses four fundamental challenges:

  • How to recruit and retain users?
  • What contributions can users make?
  • How to combine user contributions to solve the target problems?
  • How to evaluate users and their contributions?”

The first and second are the “getting stuff in” (contribution) problem about which I write: how do you get people to put in effort to contribute something for the good of others? The fourth is the quality incentive problem, which I usually separate into “getting good stuff in” (positive quality) and “keeping bad stuff out”.
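The four challenges read as a minimal design checklist for any crowdsourcing system. As an illustration (the framing and method names here are mine, not the authors’), a system that claims to be crowdsourcing should be able to answer all four questions:

```python
from abc import ABC, abstractmethod


class CrowdsourcingSystem(ABC):
    """Illustrative interface mirroring the four challenges in
    Doan, Ramakrishnan & Halevy. Method names are my own invention."""

    @abstractmethod
    def recruit(self) -> list:
        """Challenge 1: how to recruit and retain users."""

    @abstractmethod
    def contribution_types(self) -> list:
        """Challenge 2: what contributions users can make."""

    @abstractmethod
    def combine(self, contributions: list):
        """Challenge 3: how to combine contributions to solve the target problem."""

    @abstractmethod
    def evaluate(self, user, contribution) -> float:
        """Challenge 4: how to evaluate users and their contributions."""
```

A system addressing only three of the four (say, one with no evaluation step) would fit the authors’ narrower categories rather than their definition of crowdsourcing.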

What do ICDers do when they grow up?

Here is a nice article in Wired about Googlenomics, featuring my co-author and friend Hal Varian. This describes a number of ways that Google has combined vast data mining resources with economics to do some incentive-centered design.
Hal is a microeconomist par excellence, who has made important contributions to both theory and empirical work. He was one of the first economists to take the study of the Internet and related phenomena seriously.
He was my colleague at Michigan, and involved in some of the early meetings in which a group of us developed the plan to create the School of Information. The year we launched, however, he departed for Berkeley, where a year later he was dean of their new School of Information (called SIMS at the time, but since renamed). For the past few years he has been on leave from Berkeley to be the Chief Economist at Google.

“Web science”: recognizing integral role of human behavior

Sir Tim Berners-Lee and his colleagues have been advocating for a field of “web science” for several years. They describe it as a multidisciplinary, systems science needed to understand and engineer the future Web.
I’ve not fully grokked what they are proposing: the research questions they suggest seem a bit vague, and I don’t quite see what defines this “science” other than a set of topics (the semantic web prominent among them, natch) that interests this group of people. I’m not saying I think it’s not science; I’m just not sure what field definition they are proposing (such that, for example, many universities could start offering courses, or even creating departments, of web science). It feels like the space of current interest to a particular research center (and indeed, there is a Web Science Research Institute).
They have published another manifesto (also available from the WSRI site) (there have been several in the past few years), this time in the Communications of the ACM. Maybe I’m just paying more attention, but I think I’m starting to get parts of it. In any case, one thing seems clear to me: incentive-centered design (ICD) fits comfortably in their framework. It’s a subset of their very ambitious agenda, but I think it’s clearly a central piece of it. Put another way, they are saying some of the same things our ICD group has been saying independently over the last several years.
For example,

We show there is significant interplay among the social interactions enabled by the Web’s design….However, the study of the relationships among these levels is often hampered by the disciplinary boundaries that tend to separate the study of the underlying networking from the study of the social applications.

I agree, and this has been part of the ICD manifesto from the beginning. It is rather vague, but here’s a clearer statement of the common starting point: “It is the interaction of human beings creating, linking, and consuming information that generates the Web’s behavior.”

They suggest, as do we, that this inquiry should rely (among other things) on the sciences of motivated behavior, such as economics and psychology. However, I think there is one way in which we diverge: for the most part, I have seen these authors saying that computer scientists and web engineers need to understand how people behave so they will understand the consequences of web design decisions. But I have not really seen much evidence that they recognize the role of incorporating motivated human behavior directly into the design loop, which is the essence of ICD: designing incentives or motivations for the humans who will be interacting with and over the web in order to obtain desired consequences. What we propose is a more central recognition of the malleability of human behavior, and of the social (or commercial) value to be gained from designing for that malleability.
On the other hand, one large area of research (and not the only one) that the “web science” promoters claim, but that is not inside the boundaries of what we call ICD, is a micro-behavioral science of understanding and explaining observed macro web phenomena. For example, they point out that social network analysts (like my SI colleague Lada Adamic, U Mich’s Mark Newman, or HP Labs’ Bernardo Huberman) rarely explore or test the underlying human behavior that generates the macro phenomena they observe and characterize.

Economics meets social psychology on incentive theory

In another June 2008 American Economic Review article, Ellingsen and Johannesson introduce a standard concept from social psychology into a standard economic model of incentives, and find that it helps explain some well-known empirical puzzles.
This is not at all the first article in the economics literature that explores the role of social motivations, and the authors provide a good discussion of prior work.
In “Pride and Prejudice: The Human Side of Incentive Theory”, Ellingsen and Johannesson add two motivational premises to the standard principal-agent model: people value social esteem, and the value they experience depends on who provides the esteem: they value esteem more from those whom they themselves esteem.
Their main result is to show how an incentive that otherwise would have a positive effect on behavior can have a negative effect for some people because of what the incentive tells the agent about the principal. For example, they suggest this as an explanation for “the incentive intensity puzzle that stronger material incentives and closer control sometimes induce worse performance” (p. 990).
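A stylized version of the mechanism can make the puzzle concrete (this is my simplification for illustration; the symbols are mine, not the paper’s). Suppose the agent’s payoff includes an esteem term whose weight depends on the agent’s perception of the principal:

```latex
% u_A : agent's payoff; e : effort; w(e) : material incentive;
% c(e) : cost of effort; E(e) : esteem earned by effort;
% \lambda_A(\hat{\theta}_P) : weight the agent places on esteem from a
% principal of perceived type \hat{\theta}_P.
u_A \;=\; w(e) \;-\; c(e) \;+\; \lambda_A(\hat{\theta}_P)\, E(e)
```

A high-powered material incentive can signal that the principal believes the agent responds only to money. That lowers the agent’s esteem for the principal, hence lowers $\lambda_A$, and total effort can fall even though the material term $w(e)$ rises, which is one way to read the incentive intensity puzzle.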

ICD introductory readings from on high

Students often ask me what they can read to learn about ICD. I’ve not had a terribly good answer to that. On the one hand, the foundations — especially mechanism design in economics, and game theory, and engineering design theory, and social psychology — are ancient (well, a few decades old), and have very rich literatures. But I haven’t seen (haven’t really searched for) good intros. And, these are the building blocks of ICD, but the particular area in which we focus — incentive-centered design for information systems — and the particular multi-disciplinary approach we take — is rather new. I don’t know that folks have written any good overviews yet.
However, three quite nice articles just appeared in the American Economic Review that are a step in the right direction. They are focused on mechanism design and microeconomics (not social psychology, computation theory, nor specifically applications to information system design). But they are accessible, short, and written by giants in the field; in fact, they are revised versions of the Nobel lectures given by the three laureates recently cited for creating the foundations of mechanism design theory: Leonid Hurwicz, Eric Maskin, and Roger Myerson.
Maskin’s overview, “Mechanism Design: How to Implement Social Goals”, doesn’t require any math. He introduces implementation theory, “which, given a social goal, characterizes when we can design a mechanism whose predicted outcomes (i.e., the set of equilibrium outcomes) coincide with the desirable outcomes” (p. 567).
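In symbols (this is the standard textbook statement of implementation, not a quotation from the lecture):

```latex
% A mechanism (S, g) endows each agent i with a strategy set S_i and
% maps strategy profiles s \in S = S_1 \times \cdots \times S_n to
% outcomes via g : S \to X.  It implements the social choice rule
% f : \Theta \to X if, in every state \theta, the equilibrium outcomes
% coincide with the desired ones:
g\bigl(\mathrm{Eq}(S, g, \theta)\bigr) \;=\; f(\theta)
\qquad \text{for all } \theta \in \Theta .
```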
Myerson’s article, “Perspectives on Mechanism Design in Economic Theory”, begins to introduce some of the basic modeling elements of the theory, so it has a bit more math, but it’s not heavy going for anyone who has had an intermediate microeconomics class. He introduces some of the classic applications from economics: bilateral trade with adverse selection (hidden information), and project management with moral hazard (hidden action).

Presentation at Yahoo! Research on user-contributed content

Yahoo! Research invited me to speak in their “Big Thinkers” series at the Santa Clara campus on 12 March 2008. My talk was “Incentive-centered design for user-contributed content: Getting the good stuff in, Keeping the bad stuff out.”
My hosts wrote a summary of the talk (that is a bit incorrect in places and skips some of the main points, but is reasonably good), and posted a video they took of the talk. The video, unfortunately, focuses mostly on me without my visual presentation, panning only occasionally to show a handful of the 140 or so illustrations I used. The talk is, I think, much more effective with the visual component. (In particular, it reduces the impact of the amount of time I spend glancing down to check my speaker notes!)
In the talk I present a three-part story: UCC problems are unavoidably ICD problems; ICD offers a principled approach to design; and ICD works in practical settings. I described three main incentives challenges for UCC design: getting people to contribute; motivating quality and variety of contributions; and discouraging “polluters” from using the UCC platform as an opportunity to publish off-topic content (such as commercial ads, or spam). I illustrated with a number of examples in the wild, and a number of emerging research projects on which my students and I are working.

ICD: A 5-step program?

Bob Gibbons presented an intriguing framework for an incentive-centered design program during a talk he gave to our STIET seminar on 31 January. He wasn’t thinking about information system design problems, but organizational design. But his fundamental concern was the same: an organization’s performance depends on the incentives and how agents respond to them. Like us, he took an explicitly multidisciplinary perspective.

Before I summarize his framework, I’ll mention that his multidisciplinary lens is somewhat different from what my group of colleagues and students and I usually use. Our focus so far has been on the interaction between economics (rational choice theory) and computation (information processing). We’ve talked for a while about broadening that, and a few of us are starting to integrate social psychology perspectives as well (especially to think about non-monetary and intrinsic incentives). We also see a role for personality psychology, and maybe cognitive psychology.

Bob, like me, starts from the foundation of economics. (By the way, Bob was hired by MIT as an assistant professor in Economics while I was there as a grad student, so we got to know each other a bit — as he reminded me the other day, we played basketball together. He’s now a full professor in the business economics group of the Sloan School of Management at MIT.) But then he moves into politics and complex systems perspectives. In particular, he relies on March’s approach to control and hierarchies in organizations, and on Winter’s work that predicts path dependence.

On to Bob’s 5-step program:

  1. The formal is flawed
  2. The relational is required
  3. The formal and relational interact
  4. Institutional design
  5. Building & changing relationships

These suggest a way of explaining why cross-disciplinary approaches are important for ICD, and a framework for moving forward. I can barely do these justice in a short note; Bob gave a detailed 80-minute talk with this outline. I’ll try:

Formal is flawed. Formal models that rely on a few narrowly drawn incentive instruments are incapable of doing a very convincing job of describing complex incentive problems. For example, for the standard price model, Bob poses this challenge: “Find an employee with fabulous incentives created solely by a formula.” Formulae are not enough: there are too many measurement problems, contingencies, etc.

Relational is required. A quote from Leamer (2007) summarizes the point better than I can: “Most exchanges take place within the context of long-term relationships that create the language needed for buyer and seller to communicate, that establish the trust needed to carry out the exchange, that allow ongoing servicing of implicit or explicit guarantees, that monitor the truthfulness of both parties, and that punish those who mislead.” A point that Bob made is that most organizational interactions are more like long-term repeated games than they are like one-shot strategic interactions. As a general matter, long-term relationships can lead to a wide variety of outcomes (cf. the Folk Theorem). And, another general implication of repeated games is that the “shadow of the future” is pivotal, so investing in and respecting the relationship is crucial.
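The “shadow of the future” point has a crisp textbook form in the repeated prisoner’s dilemma (my illustration, not part of Bob’s talk): under a grim-trigger strategy, cooperation is sustainable only if players weigh future payoffs heavily enough.

```python
def grim_trigger_threshold(R: float, T: float, P: float) -> float:
    """Minimum discount factor sustaining cooperation under grim trigger.

    R: mutual-cooperation payoff, T: temptation payoff from defecting,
    P: mutual-defection (punishment) payoff, with T > R > P.
    Cooperating forever pays R/(1-d); defecting once, then being
    punished forever, pays T + d*P/(1-d).  Cooperation is an
    equilibrium when R/(1-d) >= T + d*P/(1-d), which rearranges to
        d >= (T - R) / (T - P).
    """
    return (T - R) / (T - P)


# With the classic payoffs T=5, R=3, P=1, the threshold is 0.5:
# players must value the future at least that much for the
# relationship to hold.
```

A longer shadow of the future (higher d) sustains cooperation under a wider range of temptations, which is the formal version of “investing in and respecting the relationship is crucial.”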

Formal and relational interact. The way that relations are structured can have a strong (even dispositive) impact on the effectiveness of the formal incentives. For example, because of the shadow of the future, it can make sense to include a subjective bonus in a compensation plan (that is, the principal says something like “trust me to honestly assess your performance and pay you an ex post bonus based on it” — trust because the performance is not verifiable by a court and thus not contractible).

(Intermediate summary: Some prices can be chosen, but not the right ones because of gap between performance goals and contractible measures. Relationships help, but not enough. Reliance on relationships affects the desired structure of formal incentives.)

Institutional design. Cyert and March (1963): An organization “is basically a coalition without a generally shared, consistent set of goals. Consequently, we cannot assume that a rational manager can treat the organization as a simple instrument in his dealings with the external world. Just as he needs to predict and attempt to manipulate the ‘external’ environment, he must predict and attempt to manipulate his own firm.” And here’s where politics, authority and control come in: Pfeffer (1981): “it is necessary to understand who participates in decision making, what determines each player’s stand on the issues, what determines each actor’s relative power, and how the decision process arrives at a decision.” Gibbons’ conclusion: Choose the formal to facilitate the relational. He didn’t spend much time on this idea, but one of his examples is that the best allocation of control for spot conditions may not be best for relational decisions.

Building and changing relationships. The driving point here is that seemingly similar organizations experience persistent performance differences. Bob explains this as a consequence of path dependence. The paths may differ in (among other things?) the extent to which they rely on formal and on relational incentives. He suggests a stylized, extreme case, in which a concave possibilities frontier between a very controlled firm and a very decentralized firm is traced by relational restructurings, whereas primary reliance on formal incentives carves out a path convex and far inside the frontier. Where the firm ends up, then, depends on the mix of formal and relational incentive structures it employs. (See his slides 49 and 50.)

This is all a bit vague, largely because I don’t have a good grasp of the ideas yet. I’ll follow Bob’s work and see if I can make it more concretely useful for the ICD research programme.

Bridging the behavioral – computational gap

The Washington Post today ran a fluffy piece about academics studying social computing. It appears the author, Monica Hesse, mostly wanted to poke fun at the silly academics, but she raised an issue at the beginning that is squarely in the space in which we incentive-centered design folks have been playing:

Who will own the study of the social networking sites? Is it computer science or behavioral science? Is it neuropsychology or artificial intelligence?

A big part of what ICD is about is bridging the gap between behavioral and computer sciences (including psychology and artificial intelligence). We’ve been pioneering that here at Michigan since the late 1990s, and we’re getting traction (and there are now good people at most other universities trying to do something similar, though not under the ICD label, which we just started promoting a couple of years ago).
Our basic theme is that the performance of modern information systems depends critically on the behavior and choices of humans interacting with the system, and in particular, using the system to interact with each other. So the humans are smart “devices” and part of the system. But humans are autonomous and motivated: they can’t be programmed. Necessarily, good design and management increasingly require bringing the sciences of motivated behavior to bear.

CAPTCHA farms in the courts

As soon as a screen is developed to protect a valuable activity, the incentive is on the table to work around it. Screening works by demanding a test or task that is more costly for the undesirables to perform (the technical requirements are a bit more subtle than this). If it is too costly to perform as well as a desirable, the undesirables are identified and can be blocked (or charged a different price, etc.)
Incentive designs spawn incentive designs (I also wrote about this in May 2006). If the service or product or information the undesirables want is sufficiently valuable, it is worth their investing in circumventing the screen, driving the cost of performing as well as a desirable low enough to pass.
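The screening logic can be made concrete with a toy model (my illustration; the numbers are arbitrary): a screen separates types when passing it costs desirables less than entry is worth to them, but costs undesirables more than entry is worth to them. Circumvention attacks the second inequality.

```python
def screen_separates(value_good: float, cost_good: float,
                     value_bad: float, cost_bad: float) -> bool:
    """A screen separates types if desirables find it worth passing
    (value_good > cost_good) while undesirables do not
    (value_bad < cost_bad)."""
    return value_good > cost_good and value_bad < cost_bad


# A CAPTCHA as intended: trivial for humans, expensive for bots.
print(screen_separates(value_good=10, cost_good=0.1,
                       value_bad=5, cost_bad=100))   # separates

# A CAPTCHA farm drops the bots' effective cost below the value of
# entry, and the screen no longer separates.
print(screen_separates(value_good=10, cost_good=0.1,
                       value_bad=5, cost_bad=0.01))  # fails
```

Note the circumvention doesn’t need to make the screen free for the undesirables, only cheaper than the value of what’s behind it.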
CAPTCHAs, developed by Luis von Ahn and his colleagues at Carnegie Mellon, are one such screen for keeping undesirables — in this case software bots — out of certain valuable information services (like free webmail accounts), or, in Ticketmaster’s case, for keeping them from robotically buying large numbers of hot tickets.
Ticketmaster has sued RMG (preliminary injunction) for its business selling PurchaseMaster software, which allegedly enables ticket brokers to score large numbers of desirable tickets in the first few minutes the events go on sale. One of Ticketmaster’s protections against bots is a standard CAPTCHA. RMG, in its defense, has publicly stated that it is using one of the now standard low-cost ways of circumventing the CAPTCHA: the bots are hiring low-wage humans (in India in this case) to break the CAPTCHAs, so the bots can get on with their business. (The Matrix is coming.)

RMG answered Ticketmaster’s Captchas — the visual puzzles of distorted letters that a customer must type before buying tickets — not with character recognition software, he said, but with humans: “We pay guys in India $2 an hour to type the answers.” (NY Times, 16 Dec 2007)
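A back-of-envelope calculation shows why this circumvention pays. Only the $2/hour wage comes from the article; the solve rate is my assumption:

```python
wage_per_hour = 2.00     # from the NY Times quote above
solves_per_hour = 200    # assumed: a practiced worker, roughly 18s per CAPTCHA

cost_per_captcha = wage_per_hour / solves_per_hour
# About one cent per CAPTCHA broken; if a scalped ticket nets even a
# few dollars of markup, the screen-breaking cost is negligible.
print(f"${cost_per_captcha:.2f} per CAPTCHA")
```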

Another way bots get humans to do their CAPTCHA work for them is with porn bribes: set up a site giving free access to porn as long as the human solves a CAPTCHA or three, and feed visitors the CAPTCHAs thrown up by other sites to block the bots’ entrance.