I've accepted an answer, but sadly, I believe we're stuck with our original worst case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible to track hits, and any workaround (sending a non-cached web-beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey hardware from Cisco or the like that can help at a high level, but it's hard to justify the cost if CAPTCHA-ing everyone is an alternative. I'll attempt a more full explanation later, as well as cleaning this up for future searchers (though others are welcome to try, as it's community wiki).

The situation

This is about the Bag of Crap sales at woot.com. I'm the president of Woot Workshop, the subsidiary of Woot that does the product design, writes the product descriptions, podcasts, and blog posts, and moderates the forums. I work with CSS/HTML and am barely familiar with the other technologies involved. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had) with them.

Usability is a huge part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below come from. CAPTCHA harms usability, and bots steal the fun and excitement out of our crap sales.

Bots are slamming our front page tens of times a second, screen-scraping (and/or scanning our RSS) looking for the random crap sale. The moment they see it, a second stage of the program kicks in: it logs in, clicks "I want One", fills out the form, and buys the crap.

Evaluating the answers

lc: On stackoverflow and the other sites that use this method, they are almost always dealing with authenticated (logged-in) users, because the task being attempted requires it.

On Woot, anonymous (non-logged-in) users can view our home page. In other words, the slamming bots can be unauthenticated (and essentially untrackable except by IP address).

So we're back to scanning for IPs, which a) is fairly useless in this age of cloud networking and spambot zombies, and b) catches too many innocents, given how much traffic comes from a single IP address (not to mention the issues with non-static-IP ISPs and the potential performance hit of trying to track it all).

Oh, and having people call us would be the worst possible scenario. Can we have them call you instead?

Bradc: Ned Batchelder's approaches look pretty cool, but they're designed specifically to defeat bots built for a network of sites. Our problem is that the bots are built specifically to defeat our site. Some of those approaches would probably work only for a short time, until the scripters evolved their bots to ignore the honeypot, screen-scrape for nearby label names instead of form ids, and use a javascript-capable browser control.

 

lc again: "Unless, of course, the hype is part of your marketing scheme." Yes, it absolutely is. The surprise when an item appears, and the excitement if you manage to get one, are probably as important as or more important than the crap you actually end up with. Anything that removes first-come/first-serve is detrimental to the thrill of "winning" the crap.

 

novatrust: And I, for one, welcome our new bot overlords. We do actually offer RSS feeds to allow third-party apps to scan our site for product information, but not ahead of the main site's HTML. If I'm interpreting it correctly, your solution helps goal 2 (the performance issue) by completely sacrificing goal 1 and resigning ourselves to the fact that bots will buy most of the crap. I up-voted your response, because the pessimism of your last paragraph feels accurate to me. There seems to be no silver bullet here.

The rest of the responses generally rely on IP tracking, which again seems to be both useless (with botnets/zombies/cloud networking) and harmful (it catches many innocent people who come from the same IP destination).

Any other approaches or ideas? My developers keep saying "let's just do CAPTCHA", but I'm hoping for a less intrusive method for all the real humans who actually want some of our crap.

The original question

Say you're selling something relatively cheap but with a very high perceived value, and you have a very limited quantity. No one knows exactly when you'll sell this item, and over a million people regularly come by to see what you're selling.

You end up with scripts and bots attempting to programmatically [a] figure out when you're selling said item and [b] make sure they're among the first to buy it. This sucks for two reasons:

1. Your site gets slammed by non-humans, slowing things down for everyone.
2. The scripters end up "winning" the product, making the regulars feel cheated.

One seemingly obvious solution is to set up some hoops for users to jump through before placing their order, but there are at least three problems with that:

1. The user experience sucks for humans, as they have to decipher a CAPTCHA, pick out the cat, or solve a math problem.
2. If the perceived benefit is high enough, and the crowd large enough, some group will find their way around any tweak, leading to an arms race. (This is especially true the simpler the tweak is; a hidden "comments" form, re-arranging the form elements, mis-labeling them, hidden "gotcha" text will all work once and then need to be changed to fight targeting of this specific form.)
3. Even if the scripters can't "solve" your tweak, it doesn't prevent them from slamming your front page and then sounding an alarm for the scripter to fill out the order manually. Given they get the advantage from solving [a], they will likely still win [b], since they'll be the first humans to reach the order page. Additionally, 1. still happens, causing server errors and decreased performance for everyone.

Another solution is to watch for IPs hitting too often, block them at the firewall, or otherwise prevent them from ordering. This could solve 2. and block [b], but the performance hit of scanning IPs is massive and would probably cause more problems like 1. than the scripters were causing themselves. On top of that, cloud networking and spambot zombies make IP checking fairly useless.

A third idea, forcing the order form to stay loaded for some amount of time (say, half a second), might slow down the quick orders, but again, the scripters would still be the first ones in at any speed that isn't detrimental to actual users.

Goals

1. Sell the item to non-scripting humans.
2. Keep the site running at a speed not slowed by bots.
3. Don't make "normal" users complete any tasks to prove they're human.


Current answer

Build a better bot

The market is telling you something: they want that bag of crap. So instead of fighting the scripts (RIAA vs. file sharing, anyone?), build a better bot.

Give everyone an app they can install that's as good as or better than anything the script kiddies can cobble together. Users install your branded app, and every time a sale comes up, the app automatically tries to buy it. If it misses the current b-o-c, the app gets a "ticket" that gives it a better chance at the next b-o-c sale. So users who roll their own scripts don't get a "ticket" for the next b-o-c sale, while users of the official app do. (A rough sketch of the ticket logic follows below.)
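Just to make the mechanics of that "ticket" idea concrete, here is a minimal TypeScript sketch, assuming a hypothetical in-memory store; the names (BocQueue, grantTicket, prioritize) are made up and are not anything Woot actually exposes:

```typescript
// Sketch: users of the official app who miss a sale earn a ticket that raises
// their priority next time. Everything here is illustrative.
interface Ticket {
  userId: string;
  earnedAt: Date; // when the user missed a previous b-o-c
}

class BocQueue {
  private tickets = new Map<string, Ticket>();

  // Called when a sale sells out before this user's attempt succeeded.
  grantTicket(userId: string): void {
    this.tickets.set(userId, { userId, earnedAt: new Date() });
  }

  // Order incoming purchase attempts: ticket holders first, oldest ticket
  // wins ties, everyone else keeps arrival order.
  prioritize(attempts: { userId: string; arrivedAt: Date }[]) {
    return [...attempts].sort((a, b) => {
      const ta = this.tickets.get(a.userId);
      const tb = this.tickets.get(b.userId);
      if (ta && !tb) return -1;
      if (!ta && tb) return 1;
      if (ta && tb) return ta.earnedAt.getTime() - tb.earnedAt.getTime();
      return a.arrivedAt.getTime() - b.arrivedAt.getTime();
    });
  }

  // A ticket is consumed once its holder actually gets a bag.
  consumeTicket(userId: string): void {
    this.tickets.delete(userId);
  }
}
```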

Between b-o-c sales, the app could display the items currently on sale. Heck, let users tell the woot app to watch for "memory sticks".

Who's going to build their own script when the official woot b-o-c app is as good as, or better than, anything they could script themselves?

Plus, woot gets another channel for connecting with its customers.

Your customers are telling you what they want.

Other answers

Most of the solutions I've read here eventually degenerate into a counter-measures arms race and do nothing to discourage people from playing outside the "fair rules of the game".

Have you considered deliberate performance throttling / shadow banning? If you detect a flood of hits from a single IP, could your CGI script deliberately delay its responses, or quietly disqualify that client from winning the item?

So if you, say, set a cookie on the system and see it hitting you more than X times per interval of time, you start delaying the responses more and more. If you see that cookie continue this behavior for some interval of time, you set the dreaded "can't win today, come back tomorrow" flag and don't tell them; that way, even if they win, they still lose. If you have several parameters like this that are easily/randomly tweakable, you could change the rules all the time in a way that keeps out the bots but that humans wouldn't even notice. Maybe a login could have a delay of X seconds, where the delay depends on that IP address's history of hits/logins :)
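To make that concrete, here is a rough TypeScript sketch of the escalating delay plus the hidden "can't win today" flag; the window size, thresholds, and field names are invented for illustration and aren't specific to any real stack:

```typescript
// Sketch: per-cookie hit counting with an escalating delay and a silent
// shadow-ban flag. All thresholds are made-up examples.
interface ClientRecord {
  hits: number;
  windowStart: number;   // start of the current measurement window (ms)
  shadowBanned: boolean; // the "can't win today, come back tomorrow" flag
}

const WINDOW_MS = 60_000; // measurement interval
const SOFT_LIMIT = 30;    // start slowing responses above this many hits per window
const HARD_LIMIT = 300;   // sustained abuse: quietly disqualify
const clients = new Map<string, ClientRecord>();

// Record a hit for this cookie and return how long to stall the response.
function throttle(cookieId: string, now = Date.now()): number {
  let rec = clients.get(cookieId);
  if (!rec || now - rec.windowStart > WINDOW_MS) {
    rec = { hits: 0, windowStart: now, shadowBanned: rec?.shadowBanned ?? false };
    clients.set(cookieId, rec);
  }
  rec.hits++;
  if (rec.hits > HARD_LIMIT) rec.shadowBanned = true;

  // The delay grows with how far past the soft limit the client is.
  const excess = Math.max(0, rec.hits - SOFT_LIMIT);
  return Math.min(5_000, excess * 100); // cap the stall at 5 seconds
}

// At order time: a shadow-banned client sees a normal-looking page but is
// silently excluded from winning.
function canWin(cookieId: string): boolean {
  return !(clients.get(cookieId)?.shadowBanned ?? false);
}
```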

Just a thought or two.

Create a simple IP firewall rule that blacklists an IP address if it exceeds a maximum number of requests per second.
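A crude application-level version of that rule might look like the sketch below; the per-second limit and in-memory blacklist are invented for illustration, and in practice the blacklist would feed an actual firewall rule rather than live in the web process:

```typescript
// Sketch: blacklist an IP once it exceeds a maximum number of requests per second.
const MAX_REQ_PER_SEC = 20; // example threshold
const counts = new Map<string, { second: number; count: number }>();
const blacklist = new Set<string>();

function allowRequest(ip: string, now = Date.now()): boolean {
  if (blacklist.has(ip)) return false;

  const second = Math.floor(now / 1000);
  const entry = counts.get(ip);
  if (!entry || entry.second !== second) {
    counts.set(ip, { second, count: 1 });
    return true;
  }
  if (++entry.count > MAX_REQ_PER_SEC) {
    blacklist.add(ip); // in a real deployment, push this to the firewall
    return false;
  }
  return true;
}
```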

A few thoughts:

- Simple: don't name it "Random Crap." Change the name of the item every time so that the bots have a harder time identifying it. They may still look for the $1.00 items, in which case I suggest occasionally selling $1 sticks of gum for a few minutes. The $5 shipping should make it worth your while.
- Harder: don't make the users do anything extra; make the users' computers do something extra. Write a JavaScript function that performs an intensive calculation taking a good amount of processing power, say, finding the ten-millionth prime number, and have the user's computer calculate that value and pass it back before you accept the order (perhaps even to create the "place order" URL). Change the function for every BoC so that bots can't pre-calculate and cache results (but so that you can). The calculation overhead might just slow the bots down enough to keep them off your back; if nothing else, it would slow the hits on your servers enough to let them breathe. You could also vary the depth of the calculation (ten-millionth prime versus hundred-millionth) at random, so that the ordering process is no longer strictly first-come, first-served, and to avoid penalizing customers with slower computers. (A rough sketch of this idea follows below.)
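As a loose illustration of the "make the computer do the work" idea, here is a TypeScript sketch, assuming a naive trial-division prime finder (only practical for far smaller n than the ten-millionth prime mentioned above) and a hypothetical /order endpoint with a pow parameter:

```typescript
// Sketch only: naive proof-of-work for the order page. The /order endpoint
// and the "pow" query parameter are hypothetical, not any real API.

// Find the n-th prime by trial division against the primes found so far.
// Deliberately unoptimized: the cost of running it is the whole point.
function nthPrime(n: number): number {
  const primes: number[] = [];
  for (let candidate = 2; primes.length < n; candidate++) {
    const isPrime = primes.every(p => p * p > candidate || candidate % p !== 0);
    if (isPrime) primes.push(candidate);
  }
  return primes[n - 1];
}

// The server embeds a different n for every sale; the browser computes the
// answer and includes it when building the "place order" URL.
function buildOrderUrl(saleId: string, n: number): string {
  const answer = nthPrime(n);
  return `/order?sale=${encodeURIComponent(saleId)}&pow=${answer}`;
}
```

The server only has to compute the answer once per sale and compare, so the cost falls almost entirely on the client.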

There's probably no magic silver bullet to take care of the bots, but a combination of these suggestions might help deter them and cut them down to a more manageable number. Let me know if you need any clarification on any of these suggestions:

- Any images that depict the item should either always have the same name (such as "current_item.jpg") or have a random name that changes with each request. The server knows what the current item is and delivers the appropriate image. The image should also carry a random amount of padding to keep bots from comparing image sizes. (Possibly change a watermark of some sort to deter more sophisticated bots.) See the sketch after this list.
- Remove the ALT text from these images. This text is usually redundant information that can be found elsewhere on the page, or make it generic alt text (such as "Current item image would be here").
- The description could change each time a Bag of Crap comes up. It could rotate (randomly) between a number of different names: "Random Crap", "BoC", "Crappy Crap", etc. Woot could also offer more items at the "Random Crap" price, or make the price a random amount between $0.95 and $1.05 (only change the price once each time the Crap comes up, not per user, for fairness).
- The price, description, and other areas that differentiate a BoC from other Woots could be images instead of text. These fields could also be Java (not JavaScript) or Flash. While that depends on a third-party plug-in, it would make it more difficult for the bots to scrape your site in a useful manner.
- Using a combination of images, Java, Flash, and maybe other technologies would be another way to make things harder for the bots. This would be a little more difficult to manage, since administrators would have to know many different platforms.
- There are other ways to obfuscate this information. Using a combination of client-side scripting (JavaScript, etc.) and server-side obfuscation (random image names) would be the most likely way to do it without affecting the user experience. Adding some obfuscating Java and/or Flash, or similar, would make it harder still, while possibly having a minimal impact on some users.
- Combine some of these tactics with some mentioned above: if a page is reloaded more than x times per minute, change the image name (if you went with the static image name suggested above), or serve them a two-minute-old cached page.
- There are some very sophisticated things you could do on the back end with user-behavior tracking that might not take too much processing. You could off-load that work to a dedicated server to minimize the performance impact: take some data from the request and send it to a dedicated server that processes it. If that server suspects a bot, based on its behavior, it can send a hook to another server (a front-end routing firewall, server, router, etc., or a back-end web or content server) to add some additional security for these users: maybe add Java applets for them, or require additional information from the user (don't pre-fill all fields on the order page, leaving a different field empty at random each time, etc.).
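To sketch just the random-image-name and size-padding ideas from that list, something like the following TypeScript (Node-flavored) could work; the function names, the in-memory map, and the 512-byte padding cap are all assumptions for illustration:

```typescript
// Sketch: serve the current item's image under a per-request random name,
// padded with a random number of trailing bytes so bots can't match on size.
import { randomBytes } from "crypto";

const issuedNames = new Map<string, string>(); // random image name -> real item id

// Called while rendering the page: mint a throwaway image name for this request.
function imageNameForCurrentItem(currentItemId: string): string {
  const name = randomBytes(8).toString("hex") + ".jpg";
  issuedNames.set(name, currentItemId);
  return name;
}

// Called when the image is requested: look up the real item and pad the bytes.
// loadImage is a placeholder for however the real image bytes are fetched.
function serveImage(requestedName: string, loadImage: (id: string) => Buffer): Buffer | null {
  const itemId = issuedNames.get(requestedName);
  if (!itemId) return null; // unknown or expired name: treat as a 404
  const padding = randomBytes(Math.floor(Math.random() * 512));
  return Buffer.concat([loadImage(itemId), padding]);
}
```

Most JPEG decoders ignore trailing bytes after the end-of-image marker, so the padding changes the file size without visibly changing the picture.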

I have a solution that (probably) hasn't been listed yet (because I haven't read through everything here...).

You can track unique users via the browser's User Agent string. Essentially, by checking which bits of information are "unique", you can gather enough information to tell different people apart (even on the same IP address).

Take a look at this write-up by the EFF, as well as this site (also by the EFF), which will "test" how unique you are based just on your browser's user agent.

For even better uniqueness, you can cross-reference those bits of uniqueness with the IP address to really nail down the likelihood that a given client is the offender/bot.
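As a loose illustration of cross-referencing those bits of uniqueness with the IP address, here is a TypeScript sketch that folds a few request headers into a fingerprint; the choice of headers and the hashing are my assumptions, not the EFF's actual methodology:

```typescript
// Sketch: combine User-Agent and a couple of other headers into a short
// fingerprint, then pair it with the IP address to tell clients apart.
import { createHash } from "crypto";

interface RequestInfo {
  ip: string;
  userAgent: string;
  acceptLanguage: string;
  acceptEncoding: string;
}

function fingerprint(req: RequestInfo): string {
  const raw = [req.userAgent, req.acceptLanguage, req.acceptEncoding].join("|");
  return createHash("sha256").update(raw).digest("hex").slice(0, 16);
}

// Pairing the fingerprint with the IP distinguishes several people behind one
// NAT while still grouping repeat hits from the same browser.
function clientKey(req: RequestInfo): string {
  return `${req.ip}:${fingerprint(req)}`;
}
```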


Also check out this PDF from the EFF.