Psychology of motivation

Today I attended a continuing education class for librarians at our university on “Motivation in the Classroom”. It was taught by two psychology professors (now emeriti) who specialized throughout their careers in the psychology of teaching. I thought this might be an opportunity to learn more about psychological theories of motivation, and to get some ideas for new ways to provide incentives in design to motivate people interacting with technology, or with each other through technology.
The assigned reading was a nice introduction: “Motivation in the Classroom”, by Barbara K. Hofer, in W. McKeachie and M. Svinicki, McKeachie’s Teaching Tips: Strategies, Research, and Theory for College and University Teachers, 12th ed. (Houghton Mifflin, 2005).
Hofer writes:

Researchers typically consider three indices of motivation: choice, effort, and persistence; achievement is an outcome of these variables. Accordingly, students who are motivated to learn choose tasks that enhance their learning, work hard at those tasks, and persist in the face of difficulty in order to attain their goals.

She then suggests that many human beings have a fundamental need for autonomy and self-determination (citing Deci & Ryan, 2000), and thus, by offering meaningful opportunities for choice and by otherwise supporting autonomy, we might increase their motivation.
She cites Ryan & Deci (2000) on research indicating that providing external rewards (extrinsic motivation) may diminish intrinsic motivation by undermining self-determination, though she thinks that on balance recent research supports a mix of intrinsic motivation and external rewards.
She then discusses expectancy theory. Citing Wigfield & Eccles (2000), people

direct their behavior toward activities that they value and in which they have some expectancy of success.

In a loose sense, motivation is the product of these two factors (if one is absent, the motivation is zero).
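This multiplicative structure can be made concrete with a toy sketch (the function name and the [0, 1] scaling are my own illustration, not anything from Hofer or Wigfield & Eccles):

```python
def motivation(value: float, expectancy: float) -> float:
    """Toy expectancy-value model: motivation as the product of
    how much the person values the activity and their perceived
    chance of success (both taken here to lie in [0, 1])."""
    return value * expectancy

# The key property: if either factor is zero, motivation is zero,
# no matter how large the other factor is.
print(motivation(0.9, 0.0))  # 0.0 -- highly valued, but hopeless
print(motivation(0.0, 0.9))  # 0.0 -- easy to succeed at, but worthless
print(motivation(0.9, 0.5))  # 0.45 -- both factors present
```

The design implication is that raising either factor alone is not enough; an interaction must both matter to the user and look achievable.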
Motivated behavior is directed towards goals. Hofer identifies two types of goals especially pertinent to teaching: mastery and performance goals. Mastery goals concern a desire to learn and understand the material. Performance goals concern a desire to perform well relative to one’s peers (e.g., if the course is graded on a curve). It’s not immediately obvious to me how to use this distinction in incentive-centered design, but she suggests various implications for teaching to students of one type versus the other.
When individuals experience an unexpected outcome, they seek to explain it by making attributions about the probable causes. Hofer cites Weiner (2001) to suggest that attributions can be characterized along three dimensions: locus, stability, and responsibility, referring respectively to whether the cause is internal or external, whether it is stable, and whether it is controllable. When someone explains a negative outcome with internal, controllable attributions (“I know I didn’t prepare adequately for the test”) she is likely to be motivated to do what it takes to do better next time. If she uses a stable, uncontrollable cause (“I will never understand statistics”) she will probably be less motivated to change. Thus, for example, providing system feedback that guides users to attribute problems to internal, controllable causes is more likely to motivate them to improve.
Hofer also briefly discusses social motivations, drawing on social psychology. For example, she suggests people want to be socially responsible and form social relationships with peers (citing Patrick et al. 1997; Wentzel & Wigfield, 1998).
Hofer then offers a number of recommendations specific to teaching, which are too specific to that context for me to include here, but which exemplify ways in which these psychological principles of motivation can be incorporated in design.

Bridging the behavioral – computational gap

The Washington Post today ran a fluffy piece about academics studying social computing. It appears the author, Monica Hesse, mostly wanted to make fun of the silly academics, but she raised an issue at the beginning that is squarely in the space in which we incentive-centered design folks have been playing:

Who will own the study of the social networking sites? Is it computer science or behavioral science? Is it neuropsychology or artificial intelligence?

A big part of what ICD is about is bridging the gap between behavioral and computer sciences (including psychology and artificial intelligence). We’ve been pioneering that here at Michigan since the late 1990s, and we’re getting traction (and there are now good people at most other universities trying to do something similar, though not under the ICD label, which we just started promoting a couple of years ago).
Our basic theme is that the performance of modern information systems depends critically on the behavior and choices of humans interacting with the system, and in particular on humans using the system to interact with each other. So the humans are smart “devices” and part of the system. But humans are autonomous and motivated: they can’t be programmed. Necessarily, good design and management increasingly require bringing the sciences of motivated behavior to bear.

CAPTCHA farms in the courts

As soon as a screen is developed to protect a valuable activity, the incentive is on the table to work around it. Screening works by demanding a test or task that is more costly for the undesirables to perform (the technical requirements are a bit more subtle than this). If it is too costly for them to perform as well as a desirable, the undesirables are identified and can be blocked (or charged a different price, etc.).
Incentive designs spawn incentive designs (I also wrote about this in May 2006). If the service or product or information the undesirables want is sufficiently valuable, it is worth it to them to invest in circumventing the screen, driving the cost of performing as well as a desirable low enough to pass.
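The logic of a screen and its circumvention can be sketched as a toy cost comparison (all names and numbers here are my illustrative assumptions, not a model from the post):

```python
def attempts_task(task_cost: float, prize_value: float) -> bool:
    """An agent bothers to perform the screening task only when
    the prize behind the screen is worth more than the cost of
    performing the task at a desirable's level."""
    return prize_value > task_cost

# Illustrative numbers: the task (say, solving a CAPTCHA) is nearly
# free for a human but prohibitively costly for a bot on its own.
human_cost, bot_cost, prize = 0.01, 100.0, 1.0
print(attempts_task(human_cost, prize))  # True: desirables get through
print(attempts_task(bot_cost, prize))    # False: undesirables screened out

# Circumvention (e.g. outsourcing the task to low-wage solvers)
# lowers the bot's effective cost below the prize value:
outsourced_cost = 0.02
print(attempts_task(outsourced_cost, prize))  # True: the screen fails
```

The screen works only while the cost gap between desirables and undesirables stays larger than the value of what is being protected, which is exactly the gap the circumvention industry attacks.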
CAPTCHAs, developed by Luis von Ahn and his colleagues at Carnegie Mellon, are one such screen for keeping undesirables — in this case software bots — out of certain valuable information services (like free webmail accounts). Or, in the case of Ticketmaster, from robotically buying large numbers of hot tickets.
Ticketmaster has sued RMG (preliminary injunction) for its business selling PurchaseMaster software, which allegedly enables ticket brokers to score large numbers of desirable tickets in the first few minutes the events go on sale. One of Ticketmaster’s protections against bots is a standard CAPTCHA. RMG, in its defense, has publicly stated that it is using one of the now standard low-cost ways of circumventing the CAPTCHA: the bots are hiring low-wage humans (in India in this case) to break the CAPTCHAs, so the bots can get on with their business. (The Matrix is coming.)

RMG answered Ticketmaster’s Captchas — the visual puzzles of distorted letters that a customer must type before buying tickets — not with character recognition software, he said, but with humans: “We pay guys in India $2 an hour to type the answers.” (NY Times, 16 Dec 2007)

Another way bots hire humans to do their CAPTCHA work for them is with porn bribes: set up a site giving free access to porn as long as the human solves a CAPTCHA or three, and feed them the CAPTCHAs that other sites throw up to block the bots’ entrance.

UCC: Hyperlinking the world’s books

Last year Kevin Kelly wrote a long New York Times Magazine article about Google Books and other massive-scale digitization projects. The Google Books project, for example, is working on scanning over 10 million university (and New York Public) library books in just a few years. One of the main sites is the University of Michigan Digitization Project, at which Google is working on scanning all 7 million volumes.
In the middle of his article, Kelly writes about the opportunities for user-contributed content to operate on these “universal library” digital collections:

In recent years, hundreds of thousands of enthusiastic amateurs have written and cross-referenced an entire online encyclopedia called Wikipedia. Buoyed by this success, many nerds believe that a billion readers can reliably weave together the pages of old books, one hyperlink at a time. Those with a passion for a special subject, obscure author or favorite book will, over time, link up its important parts. Multiply that simple generous act by millions of readers, and the universal library can be integrated in full, by fans for fans.

In addition to a link, which explicitly connects one word or sentence or book to another, readers will also be able to add tags, a recent innovation on the Web but already a popular one.

When books are digitized, reading becomes a community activity. Bookmarks can be shared with fellow readers. Marginalia can be broadcast. Bibliographies swapped. You might get an alert that your friend Carl has annotated a favorite book of yours. A moment later, his links are yours. In a curious way, the universal library becomes one very, very, very large single text: the world’s only book.

To build successful user-contributed content projects on top of digitized book collections, we must attend to the incentive issues, hopefully learning lessons from first-generation UCC projects. For example, as soon as one can annotate, how long will it be before someone starts annotating books with ads offering college students pre-written term papers related to that book? And of course, even sooner we will see Viagra ads.
What about the trustworthiness of annotations? What motivations are provided to encourage people to write (good) annotations at all, and why should they share these with others rather than keep a private collection of marginal notes?