Category Archives: Networks

ICD for home computer security

Ph.D. student Rick Wash and I are applying incentive-centered design (ICD) tools to the problem of home computer security. Metromode (an online magazine) recently published an article featuring our project.

One of the major threats to home computers is viruses that install bots, creating botnets. A bot is a program that uses the computer’s resources to perform tasks on behalf of the bot owner. Most commonly, bots become spam-sending engines, so that spammers can send mail from thousands of home computers, making it harder to block the spam by originating IP (and also saving them the cost of buying and maintaining a server farm). Bots, of course, may also log keystrokes and try to capture bank passwords and credit card numbers.
The problem is crawling with incentives issues. Unlike first-generation viruses, bots tend to be smarter about avoiding detection. In particular, they watch the process table and limit themselves to using CPU cycles when other programs are not using many. That way, a normal home user may see no evidence of the infection: the computer does not noticeably slow down (though while the user is away from the machine the bot may be running full tilt sending out spam). So the bot doesn’t harm its host much, but it harms others: it spreads spam and the bot virus itself, and may engage in other harmful activity such as denial-of-service attacks on other hosts. This is a classic negative externality: the computer owner has little incentive (and often little relevant knowledge) to stop the bot, but others suffer. How do we get home computer users to protect their machines better?
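A minimal sketch (my own illustration, not code from any actual bot) of this load-aware throttling idea; it is the same cycle-scavenging trick that benign volunteer-computing clients use:

    import time
    import psutil  # third-party library that reports system utilization

    IDLE_THRESHOLD = 20.0  # percent; an assumed cutoff for "the owner is busy"

    def do_unit_of_work():
        # placeholder for whatever the background task computes or sends
        pass

    while True:
        # cpu_percent(interval=1.0) samples system-wide CPU use over one second
        if psutil.cpu_percent(interval=1.0) < IDLE_THRESHOLD:
            do_unit_of_work()  # machine looks idle: quietly use the spare cycles
        else:
            time.sleep(5)      # owner is active: back off and stay unnoticed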
We are developing a “social firewall” that integrates with standard personal firewall services to provide users additional benefits (motivating them to use the service), while simultaneously providing improved security information to the firewalls employed by other users.
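As a purely hypothetical sketch of the general idea of pooling firewall observations (the class names, threshold, and addresses here are invented for illustration and are not our actual design):

    class CommunityBlocklist:
        """Toy in-memory stand-in for a shared service that aggregates reports."""
        def __init__(self):
            self.reports = {}  # source IP -> number of users reporting it

        def report(self, ip):
            self.reports[ip] = self.reports.get(ip, 0) + 1

        def blocklist(self, min_reports=3):
            # An IP reported by several independent users is treated as suspect.
            return {ip for ip, n in self.reports.items() if n >= min_reports}

    class SocialFirewallClient:
        """Wraps a local personal firewall: reports what it blocks and imports
        the community blocklist in return (the user's side of the bargain)."""
        def __init__(self, shared):
            self.shared = shared

        def on_local_block(self, ip):
            self.shared.report(ip)           # contribute the local observation

        def extra_rules(self):
            return self.shared.blocklist()   # benefit: others' observations

    shared = CommunityBlocklist()
    clients = [SocialFirewallClient(shared) for _ in range(4)]
    for c in clients[:3]:
        c.on_local_block("203.0.113.7")      # three users see the same attacker
    print(clients[3].extra_rules())          # fourth user now gets it pre-blocked

The point of the toy example is the incentive structure: each user contributes an observation at essentially no cost and in exchange receives protection derived from everyone else’s observations.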
We don’t have any papers released on this new system yet, but for some of the foundational ideas, see “Incentive-Centered Design for Information Security”, ICEC-07.

Volunteer grid computing projects

Most people have heard of SETI@Home, the volunteer distributed grid computing project in which computer owners let software run on their machines when they are idle (especially at night), helping to search through electromagnetic data from space for communications from extraterrestrials. But this is only one of many such projects; over a dozen are described in “Volunteer Computer Grids: Beyond SETI@home” by Michael Muchmore, many of them devoted to health applications.
Why do people donate their computer cycles? At first glance, why not? These programs, most of which run on BOINC (the Berkeley Open Infrastructure for Network Computing), are careful to use only CPU cycles not in demand by the computer owner’s software, so the donated cycles are free, right? Well, sort of, but it takes time to download and install the software, there is some risk of infecting one’s machine with a virus, many users may perceive some risk that the CPU demands will infringe on their own use, and so on. Most users will believe there is some cost.
With certain projects, volunteers may get some pleasure or entertainment value out of participating: for example, the search for large Mersenne primes is exciting to those who enjoy number theory, and searching for alien intelligence probably provides a thrill to many.
I suspect a related motivation is sufficient for most volunteers: the projects generally have a socially valuable goal, so people can feel they are helping make the world a better place, at a rather small cost to themselves. For example, there are projects to screen cancer drugs, search for medications for tuberous sclerosis, and help calibrate the Large Hadron Collider (for physics research). As Muchmore writes, “a couple of the projects—Ubero and Gómez—will pay you a pittance for your processing time. But wouldn’t you feel better curing cancer or AIDS?”
These projects appear to attract a lot of volunteers. Muchmore reports estimates of participation ranging from one to over five million computers at any given moment. According to the BOINC project, volunteers are generating about 400 teraflops of processing power, more than the roughly 280 teraflops that the largest operational supercomputer can provide.
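(A rough sanity check, using round numbers of my own rather than figures from the article: a million machines active at once, each contributing on the order of 0.4 gigaflops of spare capacity, works out to 1,000,000 × 0.4 GFLOPS ≈ 400 teraflops.)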

ICD with a twist

In a comment on Felten’s blog article about false congestion as an incentive to send less traffic, Jim Horning reminded me of a classic article by Coffman and Kleinrock about incentives in scheduling computer resources:

E. G. Coffman and L. Kleinrock. “Computer scheduling methods and their countermeasures.” In AFIPS Conference Proceedings, volume 32, pages 11–21, 1968.

Coffman and Kleinrock argue that users will adapt to any scheduling rule implemented. Therefore, they argue, a designer attentive to incentives would decide which new behavior she wants users to adopt, and then implement a scheduling rule to which that behavior is the best countermeasure. That’s a very apt and clever way to express the principle of incentive-centered design!
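A toy illustration (my own sketch, not an example from the Coffman and Kleinrock paper): suppose the designer wants users to submit work in short pieces, so she installs a non-preemptive shortest-job-first rule. Splitting a long job into small pieces is then the user’s best countermeasure, which is exactly the behavior the designer wanted.

    # Non-preemptive shortest-job-first, and a user who "counters" it by
    # splitting one long job into many short ones.

    def sjf_finish_time(jobs, user):
        """Run jobs shortest-first; return when `user`'s last job completes."""
        t, finish = 0, 0
        for owner, length in sorted(jobs, key=lambda j: j[1]):
            t += length
            if owner == user:
                finish = t
        return finish

    other = [("B", 3), ("B", 5)]

    one_big   = other + [("A", 10)]      # A submits a single 10-unit job
    ten_small = other + [("A", 1)] * 10  # A splits it into ten 1-unit jobs

    print(sjf_finish_time(one_big, "A"))    # 18: A waits behind B's shorter jobs
    print(sjf_finish_time(ten_small, "A"))  # 10: A's short pieces jump the queue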

Creating false congestion to selectively discourage sending on the Internet?

My first significant foray into research on incentive-centered design was my work with Hal Varian, Liam Murphy, and others in the early to mid 1990s on incentives for congestion control in the Internet. Ed Felten, in his popular “Freedom to Tinker” blog, has brought up one of the key issues we in the networking community debated back then, and the issue is still valid today!
