Marshall van Alstyne’s joke of a solution to spam

Nonsense! InformationWeek is running a story on how micropayments will stop spam. The idea, put forward by Marshall van Alstyne, is to make the sender pay if the message annoys the reader: a spammer sends you trash and wastes your time, so he pays to compensate you for that time.

Sounds right? Think again! When was the last time you received a spam whose ‘from’ field was that of the real sender?

Comments on Marshall van Alstyne’s joke of a solution to spam

  1. You miss the point! It’s a challenge-response system: strangers can’t get through at all unless they’re willing to promise not to send spam.

    Go read the article from a year ago on ssrn.com if you want to say silly stuff…

  2. @Tiger Lily: I can’t wait, then. Random people will start getting email like… “Hey dude! Pay me 10 bucks or I won’t read your email!” And upon receiving them, the people whose addresses were spoofed will start sending hate mail back to even more random people. I’m thrilled at the very idea of it. :)

    Anyway, if you can’t authenticate the sender, this “pay me to read” spam prevention idea is a joke that will just generate more trash traffic on networks. And if you assume that you’ve properly authenticated the sender, the spam problem is essentially solved already. There’s no need to introduce money into the decision process.

  3. A friend alerted me to this post so I’d like to respond. We all seem to agree spam is a problem, but we also disagree on the merits of different approaches.

    Sadly, this critique based on authentication misses the mark. If authentication were sufficient, we’d happily be done!

    Authentication can reduce “spoofing,” the practice of forging sender identity, but it can’t stop spam. The problem is that spammers simply resort to sending from authenticated but unrecognized accounts. They just generate as many new accounts as they wish, then use each one until it’s blacklisted (or its “reputation capital” spent), at which point they start over.

    Generating accounts on free services is surprisingly easy. To pass new-account tests, spammers just use the same AI technology used to recognize spam, or they outsource to truly low-cost countries, or, as pointed out on Slashdot, they give away free porn to people who solve CAPTCHAs for them!

    The Register ran a good article a while back stating that spammers adopted authentication technology much faster than legitimate emailers did. For reference, the reason authentication alone can’t work was anticipated in a nice peer-reviewed paper, “The Social Cost of Cheap Pseudonyms,” back in 2001.

    Authentication will help but it’s no panacea. Part of the solution is to make spam differentially costly relative to non-spam, which is what we designed our proposal to do.

    Cheers,
    MVA

  4. If I may, though, you still haven’t convinced me of the validity of your approach. ;-)

    Nowadays, spammers relay directly through insecure network proxies and insecure SMTP servers to send their junk. They fill the From: field randomly, either by using a dictionary (valid domain + realistic name) or by selecting addresses from the spammed list itself (which got one spammer sued by a US lawyer a few years back).

    Some kind of authentication system would arguably prompt them to change methods and use either Hotmail accounts or infected PCs to send “valid” emails. Your point about the shortcomings of today’s authentication methods is therefore well taken.

    Nonetheless, I still stand by the argument that if proper and efficient authentication were in place (a big if, since this assumes it is even possible), the spam problem would be pretty much solved. By authentication here, I mean being 99% certain that the person who sent the email is indeed a real person (or a robot authorized to send it on his behalf, as would be the case for a receipt sent by email after an order on amazon.com).

    On a more general note, the idea of introducing a cost isn’t a good one imho.

    To start with, if money is to be involved at all, one could argue that it belongs on the sender’s end, before the message is even sent, much like in a real-life postal system. This would spare telco networks needless acknowledgment traffic, and they’d be more than happy if someone actually paid for what typically amounts to half of their total network traffic.

    That being said, the presence of any amount of money in the system will likely make the issue even worse. Be it your way or the postal service’s way, picture an infected PC (there are many) that not only says “Yes, I sent this!” but also uses stolen credit card numbers (or the end user’s internet or cell phone bill) to make the micropayments.

    One could argue that an immediate benefit would be to prompt computer illiterates and system admins alike to keep their machines free of security holes. Nonetheless, it’s a very strange way of achieving this, and the cost incurred by the community as a whole (perpetually disputing charges with telcos or banks) would be tremendous.

    Along the same lines, if a cell phone or critical medical equipment in a hospital stops working because of the burden on its CPU, we’d get economic chaos and potentially life-threatening situations. Using CPU cycles to solve computationally hard puzzles as a way to introduce a virtual cost (see the proof-of-work sketch at the end of this comment) consequently suffers from the same problem as real cash.

    In the end, I’ll happily admit I haven’t much of an idea of how to solve the spam problem in practice. As it stands, however, authentication methods that are transparent to end users strike me as by far the healthier option.

    IBM’s approach in particular, which analyzes an email’s path through the network, is my personal favorite. One could arguably forge that data by hacking into core network equipment, but as the only way to do so would be to break into the maintenance network, the odds of achieving any noticeable success are slim.
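    (For concreteness, here is a minimal sketch of the proof-of-work idea mentioned above, in the spirit of Hashcash: the sender burns CPU time minting a stamp, and the receiving server verifies it almost for free. The stamp format, function names, and 20-bit difficulty are illustrative assumptions only, not anyone’s actual proposal.)

    ```python
    # Hashcash-style "CPU cost" stamp, sketched in Python (illustrative only).
    # Minting is deliberately slow for the sender; verification is instant
    # for the receiving mail server.
    import hashlib
    from itertools import count

    def mint_stamp(recipient: str, bits: int = 20) -> str:
        """Search for a counter whose stamp hashes to `bits` leading zero bits."""
        for counter in count():
            stamp = f"1:{bits}:{recipient}:{counter}"
            value = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
            if value >> (256 - bits) == 0:
                return stamp

    def verify_stamp(stamp: str, recipient: str, bits: int = 20) -> bool:
        """Cheap check: addressed to us, and enough leading zero bits."""
        if f":{recipient}:" not in stamp:
            return False
        value = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        return value >> (256 - bits) == 0

    stamp = mint_stamp("denis@example.com")          # costs the sender ~2**20 hashes
    print(verify_stamp(stamp, "denis@example.com"))  # True, checked in microseconds
    ```

    A legitimate sender pays this price once per message; a bulk mailer would have to pay it millions of times over, which is exactly the asymmetry being discussed, and also why underpowered devices would feel the burden first.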

  5. Hi Denis, to playfully use your own language, you haven’t yet convinced me of the validity of your objection ;-)

    In particular, I’ll challenge your assumption that introducing a cost makes the problem worse because spammers will just steal the micropayments.

    In fact, I’ll give you three reasons why requiring strangers to bond their messages should not only clean up recipients’ inboxes but also stop spam at the sending source. The first two have long been recognized by security experts; the third is unique to our proposal.

    First, almost everyone recognizes spam as an economic problem. Since digital messages cost almost nothing to send, even minuscule response rates make spamming profitable. If we can make spam cost spammers more than it costs legitimate mailers, we can make it stop (a back-of-the-envelope illustration follows at the end of this comment). Assuming, for the moment, that we can authenticate where bonds come from, an economic problem requires an economic answer, so that is where we must start.

    Second, you and others quite correctly observe that if virally infected PCs send bonded spam, then these third-party hosts would be motivated to fix their machines, since their money would be at risk. Viewed slightly differently, the bonding mechanism creates a rather extraordinary information benefit: it surfaces the infection and creates an audit trail. Previously, owners of infected machines often didn’t even realize they had a disease. Stealing CPU cycles can easily stay hidden, while stealing someone’s money quickly gets noticed, and there is a clear record of what happened. So infections, if they happen at all, cannot last long.

    Third, and this is the insight unique to our proposal, most people whose machines are used fraudulently will never have to pay a dime. Consider your own example: the stolen credit card. In the US, when a thief misuses your account, the bank indemnifies you against fraud provided that you report it within 24 hours. The expected value of your transactions is so much greater than the expected value of your losses that the bank insures you so that you’ll keep using its card. The same thing will happen here, only better. Now the ISP will insure you against fraud, provided that it holds your accounts and gets to maintain the antivirus software. Bingo, problem solved! Not only is the individual user no longer at risk from fraud, but using the audit trail I highlighted above, an ISP can trace infections and push a cure or patch to the machines that have a problem. This makes it harder to infect PCs in the first place.

    So not only does our proposal clean up recipients’ inboxes, it also helps prevent the infections that are another source of the problem. Classification systems, no matter their mechanism, don’t close this information feedback loop.

    Just for fun, you should know that one of the world’s leading security experts made the exact same mistaken objection on his blog, namely, he assumed that the prospect of fraud made bonding infeasible. I bet him that this was not the case and he wisely chose not to accept my bet. :-)

    If you want more details on our bonding proposal, you can either check out our academic proof or just watch the technical talk I gave at Google last year on Google Video.

    Best,
    MVA
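
    (To make the break-even arithmetic above concrete, here is a rough back-of-the-envelope sketch; every number in it is an illustrative assumption, not a figure from the actual proposal.)

    ```python
    # Back-of-the-envelope spam economics (all numbers are made-up assumptions).
    messages         = 1_000_000   # size of one spam run
    response_rate    = 0.0001      # 1 in 10,000 recipients responds
    revenue_per_sale = 20.00       # dollars earned per response
    cost_per_message = 0.00001     # near-zero sending cost today
    bond_per_message = 0.01        # hypothetical bond seized when a reader flags spam
    flag_rate        = 0.50        # fraction of recipients who bother to flag

    revenue     = messages * response_rate * revenue_per_sale
    cost_today  = messages * cost_per_message
    cost_bonded = cost_today + messages * flag_rate * bond_per_message

    print(f"profit today:      ${revenue - cost_today:>10,.2f}")   # ~  $1,990
    print(f"profit with bonds: ${revenue - cost_bonded:>10,.2f}")  # ~ -$3,010
    ```

    A legitimate mailer, whose recipients rarely flag anything, keeps costs essentially unchanged; only senders whose mail gets flagged in bulk feel the bond, which is the “differentially costly” property claimed above.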

  6. Much like the security expert, I’ll hope you’re right and I’m wrong about whether introducing money into the mix will ultimately lead to fewer insecure machines. ;-)

    Anyway, thanks for stopping by!

    D.