Category Archives: Security

ICD for home computer security

Ph.D. student Rick Wash and I are applying ICD design tools to the problem of home computer security. Metromode (online magazine) recently published an article featuring our project.

One of the major threats to home computers is viruses that install bots, creating botnets. These bots are programs that use the computer’s resources to perform tasks on behalf of the bot owner. Most commonly, the bots become spam-sending engines, so that spammers can send mail from thousands of home computers, making it harder to block the spam by originating IP (and also saving them the cost of buying and maintaining a server farm). Bots, of course, may also log keystrokes and try to capture bank passwords and credit card numbers.
The problem is crawling with incentive issues. Unlike first-generation viruses, bots tend to be smarter about avoiding detection. In particular, they watch the process table and limit themselves to using CPU cycles when other programs are not using many. That way, a normal home user may see no evidence of a virus: the computer does not noticeably slow down (though while the user is away from the machine, the bot may be running full tilt sending out spam).

So, the bot doesn’t harm its host much, but it harms others: it spreads spam and the bot virus itself, and may engage in other harmful activity such as denial-of-service attacks on other hosts. This is a classic negative externality: the computer owner has little incentive (and often little appropriate knowledge) to stop the bot, but others suffer. How can we get the home computer user to protect his or her machine better?
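To make the detection-avoidance concrete, here is a minimal Python sketch of the load-aware throttling described above: a process that checks the system load average and does its work only when the machine looks idle. The threshold and function names are hypothetical illustrations, not taken from any actual bot.

```python
import os
import time

# Assumed cutoff (illustrative): below this one-minute load average,
# the machine "looks idle" to the throttling logic.
IDLE_THRESHOLD = 0.2

def machine_looks_idle():
    """Return True if the one-minute load average is low (Unix only)."""
    load_1min, _, _ = os.getloadavg()
    return load_1min < IDLE_THRESHOLD

def run_when_idle(task, checks=3, interval=0.01):
    """Poll the load average `checks` times, running `task` only
    when the machine appears idle -- so a user at the keyboard
    never notices a slowdown."""
    for _ in range(checks):
        if machine_looks_idle():
            task()
        time.sleep(interval)
```

The point of the sketch is the asymmetry it creates: the owner bears almost no cost while present, so the owner's private incentive to clean the machine stays low.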
We are developing a social firewall that integrates with standard personal firewall services to provide the user additional benefits (motivating them to use the service), while simultaneously providing improved security information to the firewalls employed by other users.
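Since we have not yet released papers on the system, the following is purely a hypothetical Python sketch (all class and method names invented) of the reciprocal-sharing idea: each user's firewall contributes locally observed bad sources to a shared pool, and in return benefits from the observations of other users, which is the private benefit that motivates participation.

```python
class SharedPool:
    """Stand-in for a community-maintained reputation service (hypothetical)."""
    def __init__(self):
        self.reported = {}  # source address -> number of independent reports

    def report(self, source):
        self.reported[source] = self.reported.get(source, 0) + 1

    def is_suspect(self, source, min_reports=2):
        # Require multiple independent reports before condemning a source.
        return self.reported.get(source, 0) >= min_reports

class SocialFirewall:
    def __init__(self, pool):
        self.pool = pool
        self.local_blocklist = set()

    def block(self, source):
        """A locally observed attack: block it, and share the observation
        with the pool (the positive externality for other users)."""
        self.local_blocklist.add(source)
        self.pool.report(source)

    def allows(self, source):
        # Deny if seen locally, or if enough other users have reported it.
        return (source not in self.local_blocklist
                and not self.pool.is_suspect(source))
```

For example, once two users' firewalls have independently blocked the same source, a third user's firewall denies it without that user ever having been attacked, which is the extra benefit that gives each user an incentive to run the service.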
We don’t have any papers released on this new system yet, but for some of the foundational ideas, see “Incentive-Centered Design for Information Security”, ICEC-07.

Spam as security problem

Here is the blurb Rick Wash and I wrote for the USENIX paper (slightly edited for later re-use) about spam as a security problem ripe for ICD treatment. I’ve written a lot about spam elsewhere in this blog!

Spam (and its siblings spim, splog, spit, etc.) exhibits a classic hidden information problem. Before a message is read, the sender knows much more about its likely value to the recipient than does the recipient herself. The incentives of spammers encourage them to hide the relevant information from the recipient to get through the technological and human filters.

While commercial spam is not a traditional security problem, it is closely related due to the adversarial relationship between spammers and email users. Further, much spam carries security-threatening payloads: phishing and viruses are two examples. In the latter case, the email channel is just one more back door access to system resources, so spam can have more than a passing resemblance to hacking problems.
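As a concrete example of the “technological filters” mentioned above, here is a toy naive Bayes spam filter in Python (the standard statistical-filtering idea, not any particular product): it scores a message by how its words have appeared in previously labeled spam and non-spam mail. All training data in the usage below is invented.

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy naive Bayes spam filter with add-one smoothing."""
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, label, words):
        """Record one labeled message ('spam' or 'ham')."""
        self.counts[label].update(words)
        self.totals[label] += 1

    def spam_score(self, words):
        """Log-odds that a message is spam; > 0 suggests spam."""
        score = math.log((self.totals["spam"] + 1) / (self.totals["ham"] + 1))
        spam_total = sum(self.counts["spam"].values())
        ham_total = sum(self.counts["ham"].values())
        for w in words:
            p_spam = (self.counts["spam"][w] + 1) / (spam_total + 2)
            p_ham = (self.counts["ham"][w] + 1) / (ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score
```

The hidden-information framing explains why such filters are an arms race: the sender, knowing the filter, is motivated to choose words that misrepresent the message’s value to the recipient.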


Here’s a paragraph Rick Wash and I wrote for the USENIX paper, somewhat revised for later use, concerning spyware:

An installer program acts on behalf of the computer owner to install desired software. However, the installer program is also acting on behalf of its author, who may have incentives different from the computer owner’s. The author may surreptitiously include installation of undesired software such as spyware, zombies, or keystroke loggers. Rogue installation is a hidden action problem: the actions of one party (the installer) are not easy to observe. One typical design response is to require a bond that can be seized if unwanted behavior is discovered (an escrowed warranty, in essence), or a mechanism that screens unwanted behavior by providing incentives that induce legitimate installers to take actions distinguishable from those of illegitimate ones.
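The bonding idea above can be illustrated with a few lines of arithmetic (all numbers invented for illustration): suppose an installer posts a bond B that is seized if rogue behavior is detected with probability p, and misbehaving yields the author a gain G. A legitimate installer never loses the bond, while a rogue one finds misbehavior unprofitable whenever the expected loss p·B exceeds G.

```python
def rogue_expected_payoff(gain, bond, detection_prob):
    """Expected payoff to a rogue installer from misbehaving:
    the gain G minus the expected seized bond p*B."""
    return gain - detection_prob * bond

def minimum_deterring_bond(gain, detection_prob):
    """Smallest bond making misbehavior unprofitable: p*B >= G."""
    return gain / detection_prob
```

For instance, if the hidden action is worth G = 100 to the author and is detected a quarter of the time (p = 0.25), the bond must be at least 400 to deter it; with a bond of only 200, misbehaving still has positive expected payoff.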

ICD for information security

Rick Wash and I published a paper at the USENIX Hot Topics in Security Workshop in August 2006 titled “Incentive-Centered Design for Information Security”. From the abstract:

Humans are “smart components” in a system, but cannot be directly programmed to perform; rather, their autonomy must be respected as a design constraint and incentives provided to induce desired behavior. Sometimes these incentives are properly aligned, and the humans don’t represent a vulnerability. But often, a misalignment of incentives causes a weakness in the system that can be exploited by clever attackers. Incentive-centered design tools help us understand these problems, and provide design principles to alleviate them. We describe incentive-centered design and some tools it provides. We provide a number of examples of security problems for which incentive-centered design might be helpful. We elaborate with a general screening model that offers strong design principles for a class of security problems.

I will start posting some short examples from this position paper concerning thoughts about the relevance of ICD to a variety of information security problems.