I've accepted an answer, but sadly, I believe we're stuck with our original worst case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible to track hits, and any workaround (sending a non-cached web-beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey hardware from Cisco or the like that can help at a high level, but it's hard to justify the cost if CAPTCHA-ing everyone is an alternative. I'll attempt a more full explanation later, as well as cleaning this up for future searchers (though others are welcome to try, as it's community wiki).

The situation

This is about the Bag of Crap sales on woot.com. I'm the President of Woot Workshop, the subsidiary of Woot that does the product design, writes the product descriptions, podcasts, and blog posts, and moderates the forums. I work with CSS/HTML and am barely familiar with the rest of the technology. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had).

Usability is a huge part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below come from. CAPTCHAs harm usability, and bots steal the fun and excitement out of our crap sales.

Bots are slamming our front page dozens of times a second, screen-scraping (and/or scanning our RSS) looking for the Random Crap sale. The moment they see it, a second stage of the program triggers that logs in, clicks 'I want one', fills out the form, and buys the crap.

Evaluation of the responses

lc: On stackoverflow and the other sites that use this method, they're almost always dealing with authenticated (logged-in) users, since the task being attempted requires that.

On Woot, anonymous (non-logged-in) users can view our home page. In other words, the slamming bots can be non-authenticated (and essentially untrackable except by IP address).

So we're back to scanning IPs, which a) is fairly useless in this age of cloud networking and spambot zombies, and b) catches too many innocents, given the number of businesses that come through a single IP address (not to mention the issues with non-static-IP ISPs and the potential performance hit of trying to track all this).

Oh, and having people call us would be the worst possible scenario. Can we have them call you?

BradC: Ned Batchelder's methods look pretty cool, but they're firmly designed to defeat bots built for a network of sites. Our problem is that bots are built specifically to defeat our site. Some of those methods could likely work for a short time, but only until the scripters evolved their bots to ignore the honeypot, screen-scrape for nearby label names instead of form ids, and use a javascript-capable browser control.


lc again: "Unless, of course, the hype is part of your marketing scheme." Yes, it absolutely is. The surprise when an item appears, and the excitement if you manage to get one, is probably as important as, or more important than, the crap you actually end up getting. Anything that eliminates first-come/first-served is detrimental to the thrill of 'winning' the crap.


novatrust: And I, for one, welcome our new bot overlords. We do in fact offer RSS feeds to allow third-party apps to scan our site for product info, but not ahead of the main site HTML. If I'm interpreting correctly, your solution helps goal 2 (the performance issues) by sacrificing goal 1 entirely, and resigning yourself to the fact that bots will buy most of the crap. I up-voted your answer, because your last-paragraph pessimism feels accurate to me. There seems to be no silver bullet here.

The rest of the responses generally rely on IP tracking, which again seems to be both useless (against botnets/zombies/cloud networking) and detrimental (catching the many innocents who come from the same IP destinations).

Any other approaches / ideas? My developers keep saying "let's just do CAPTCHA", but I'm hoping there's a less intrusive method open to all the people who actually want some of our crap.

Original question

Say you're selling something cheap that has a very high perceived value, and you have a very limited quantity. No one knows exactly when you will sell this item, and over a million people regularly come by to see what you're selling.

You end up with scripters and bots programmatically trying to [a] figure out when you're selling said item, and [b] make sure they're among the first to buy it. This sucks for two reasons:

1. Your site gets slammed by non-humans, slowing everything down for everyone.
2. The scripters end up 'winning' the product, causing the regulars to feel cheated.

A seemingly obvious solution is to create some hurdles for users to jump through before placing their order, but there are at least three problems with this:

1. The user experience sucks for humans, as they have to decipher a CAPTCHA, pick out the cat, or solve a math problem.

2. If the perceived benefit is high enough, and the crowd large enough, some group will find their way around any tweak, leading to an arms race. (This is especially true the simpler the tweak is; a hidden 'comments' form, re-arranging the form elements, mis-labeling them, and hidden 'gotcha' text will all work once, and then need to be changed to fight bots targeting that specific form.)

3. Even if the scripters can't 'solve' your tweak, it doesn't prevent them from slamming your front page and then sounding an alarm for the scripter to fill out the order manually. Given they get the advantage from solving [a], they will likely still win [b], since they'll be the first humans to reach the order page. Additionally, 1. still happens, causing server errors and decreased performance for everyone.

Another solution is to watch for IPs hitting too often, block them at the firewall, or otherwise prevent them from ordering. This could solve 2. and thwart [b], but the performance hit from scanning IPs is massive and would likely cause more problems like 1. than the scripters were causing on their own. On top of that, the possibility of cloud networks and spambot zombies makes IP checking fairly useless.

A third idea, forcing the order form to be loaded for some time (say, half a second), would likely slow down the speed-ordering, but again, the scripters would still be the first ones in, at any speed that isn't damaging to actual users.

Goals

1. Sell the item to non-scripting humans.
2. Keep the site running at a speed not slowed by bots.
3. Don't hassle the 'normal' users with any tasks to complete to prove they're human.


Current answer

There are a few solutions you could implement, depending on how complex you're willing to get.

These are all based on IP tracking, which falls apart somewhat under botnets and cloud computing, but should thwart the vast majority of bots. The likelihood that Joe Random has a botnet at his disposal is far lower than the likelihood that he's just running some downloaded Woot bot to grab his crap.

Plain throttling

At a very basic, crude level, you could throttle requests per IP per time period. Do some analysis and determine that a legitimate user will access the site no more than X times per hour. Cap requests per IP per hour at that number, and bots will have to drastically reduce their polling frequency, or they'll lock themselves out for the next 58 minutes and be completely blind. That doesn't address the bot problem by itself, but it does reduce load, and increases the chance that legitimate users will have a shot at the item.
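A minimal sketch of that fixed-window cap in Python; the hourly limit and the in-process dictionary are placeholder assumptions (a real deployment would use a shared store across the web farm):

    import time

    MAX_REQUESTS_PER_HOUR = 100  # placeholder for the "X times per hour" a real user needs
    WINDOW_SECONDS = 3600

    counters = {}  # ip -> (window_start, count); in-process only, for illustration

    def allow_request(ip):
        """Fixed-window throttle: over the cap, the IP is blind until the hour rolls over."""
        now = time.time()
        window_start, count = counters.get(ip, (now, 0))
        if now - window_start >= WINDOW_SECONDS:
            window_start, count = now, 0  # new hour, reset the counter
        count += 1
        counters[ip] = (window_start, count)
        return count <= MAX_REQUESTS_PER_HOUR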

Adaptive throttling

A variant on that solution might be to implement a load balancing queue, where the number of requests that one has made recently counts against your position in the queue. That is, if you keep slamming the site, your requests become lower priority. In a high-traffic situation like the bag of crap sales, this would give legitimate users an advantage over the bots in that they would have a higher connection priority, and would be getting pages back more quickly, while the bots continue to wait and wait until traffic dies down enough that their number comes up.
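A rough sketch of such a queue, assuming a single worker draining a min-heap where your recent request count is your priority number (heavy hitters sink to the back):

    import heapq
    import itertools
    from collections import Counter

    recent_requests = Counter()   # ip -> requests in the current window
    queue = []                    # min-heap of (priority, tiebreak, ip, request)
    tiebreak = itertools.count()  # preserves FIFO order among equal priorities

    def enqueue(ip, request):
        """The more you've hit the site recently, the later your request is served."""
        recent_requests[ip] += 1
        heapq.heappush(queue, (recent_requests[ip], next(tiebreak), ip, request))

    def serve_next():
        """Pop the highest-priority (lowest-count) request for the normal handler."""
        if queue:
            _, _, ip, request = heapq.heappop(queue)
            return ip, request
        return None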

A CAPTCHA at checkout

Third, while you don't want to bother with captchas, a captcha at the very end of the process, right before the transaction is completed, may not be a bad idea. At that point, people have committed to the sale, and are likely to go through with it even with the mild added annoyance. It prevents bots from completing the sale, which means that at a minimum all they can do is hammer your site to try to alert a human about the sale as quickly as possible. That doesn't solve the problem, but it does mean that the humans have a far, far better chance of obtaining sales than the bots do currently. It's not a solution, but it's an improvement.

A combination of the above

Implement basic, generous throttling that blocks the most abusive bots, while allowing for the possibility of multiple legitimate users behind a single corporate IP. The cutoff number would be very high -- you cited bots slamming your site ten times a second, which works out to 36,000 requests an hour, obviously far above any legitimate usage, even for the largest corporate networks or shared IPs.

Implement a load-balancing queue so that anyone taking up more than their share of server connections and bandwidth gets penalized. This penalizes people in shared corporate pools, but it doesn't prevent them from using the site, and their offense should be far less egregious than your botters', so their penalty should be far less severe.

Finally, if an IP exceeds some threshold of requests per hour (one that can be far, far lower than the 'automatic disconnect' cutoff), require that user to validate with a CAPTCHA.

That way, users who are legitimately using the site, even at 84 requests an hour, will never notice a slowdown no matter how excited they are. Joe Botter, however, finds himself in a dilemma. He can:

1. Blow out his request quota with his current behavior and not be able to access the site at all, or
2. Request just enough not to blow the quota, which gives him realtime information at lower traffic levels, but causes massive delays between requests during high-traffic times, severely compromising his ability to complete a sale before inventory is exhausted, or
3. Request more than the average user and end up stuck behind a CAPTCHA, or
4. Request no more than the average user, and thus have no advantage over the average user.

Only abusive users ever see degraded service or increased complexity. Legitimate users won't notice any change whatsoever, except that it's easier for them to buy their bag of crap.
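Put together, the tiers might look like the following sketch; all three cutoffs are assumptions, ordered so the CAPTCHA threshold sits far below the hard block:

    # Assumed placeholder thresholds, in requests per hour, lowest to highest.
    CAPTCHA_THRESHOLD = 500        # beyond this, require a CAPTCHA to continue
    DEPRIORITIZE_THRESHOLD = 5000  # beyond this, sink to the back of the queue
    HARD_BLOCK_THRESHOLD = 36000   # ~10 req/sec; no legitimate use looks like this

    def policy_for(requests_this_hour):
        if requests_this_hour > HARD_BLOCK_THRESHOLD:
            return "block"         # locked out for the rest of the hour
        if requests_this_hour > DEPRIORITIZE_THRESHOLD:
            return "deprioritize"  # still served, but behind everyone else
        if requests_this_hour > CAPTCHA_THRESHOLD:
            return "captcha"       # must prove humanity to continue
        return "serve"             # the 84-requests-an-hour user never sees any of this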

Addendum

Throttle requests from unregistered users at a far lower rate than those from registered users. That way the bot owners would have to run their bots through an authenticated account to get past what ought to be a fairly strict throttling rate.

The creative botters will then register multiple user IDs and use those to achieve their desired query rate; you can counter that by treating any IDs arriving from the same IP within a given time period as the same ID, subject to shared throttling.

That leaves the botters no option but to run a botnet, with one bot per IP and a registered Woot account per bot. This, unfortunately, is effectively indistinguishable from a large number of unrelated legitimate users.

You could combine this strategy with one or more of the ones above, the goal being to give the best service to registered users without abusive usage patterns, while progressively penalizing other users, registered or not, according to their status (anonymous or registered) and their degree of abuse as determined by traffic metrics.

Other answers

Turn certain parts of the page into images so the bots can't understand them.

For example, create small images of the digits 0-9, a dollar sign, and a decimal point. Cache the images on the client's computer when the page loads... then display the price using images chosen by server-side code. Most human users won't notice the difference, and the bots won't know the price of any item.
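A sketch of that glyph-image trick, assuming Pillow on the server; the file names deliberately don't reveal which glyph they contain:

    import os
    from PIL import Image, ImageDraw, ImageFont  # Pillow is an assumed dependency

    GLYPHS = "0123456789$."  # every character a price can contain

    def render_glyphs(outdir="glyphs", size=24):
        """Pre-render one small image per glyph, to be cached by the client."""
        os.makedirs(outdir, exist_ok=True)
        font = ImageFont.load_default()
        for i, ch in enumerate(GLYPHS):
            img = Image.new("RGB", (size, size), "white")
            ImageDraw.Draw(img).text((4, 4), ch, fill="black", font=font)
            img.save(os.path.join(outdir, f"g{i}.png"))

    def price_html(price):
        """Server-side: turn '$4.99' into <img> tags a scraper can't read as text."""
        return "".join(f'<img src="/glyphs/g{GLYPHS.index(ch)}.png" alt="">' for ch in price)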

First, let me recap what we need to do here. I realize I'm just paraphrasing the original question, but it's important that we get this 100% straight, because there are a lot of great suggestions that get 2 or 3 out of 4 right, but as I will demonstrate, you will need a multifaceted approach to cover all of the requirements.

Requirement 1: Get rid of the 'bot slamming':

The rapid-fire 'slamming' of your front page is hurting your site's performance and is at the core of the problem. The 'slamming' comes both from single-IP bots and, supposedly, from botnets as well. We want to get rid of both.

Requirement 2: Don't mess with the user experience:

We could fix the bot situation pretty effectively by implementing a nasty verification procedure like phoning an operator, solving a string of CAPTCHAs, or similar, but that would be like forcing every innocent plane passenger to jump through crazy security hoops just for the slim chance of catching the very stupidest of terrorists. Oh wait -- we actually do that. But let's see if we can avoid doing that on woot.com.

Requirement 3: Avoid the 'arms race':

As you mention, you don't want to get caught up in the spambot arms race. So you can't use simple tweaks like hidden or jumbled form fields, math questions, etc., since they are essentially obscurity measures that can trivially be auto-detected and circumvented.

Requirement 4: Thwart the 'alarm' bots:

This may be the most difficult of your requirements. Even if we can make an effective human-verification challenge, bots could still poll your front page and alert the scripter when there is a new offer. We want to render those bots infeasible as well. This is a stronger version of the first requirement, since not only can't the bots make performance-damaging rapid-fire requests -- they can't even make enough repeated requests to send a timely 'alarm' to the scripter and win the offer.


Okay, so let's see if we can meet all four requirements. First, as I mentioned, no single measure is going to do the trick. You will have to combine a couple of tricks to achieve it, and you will have to swallow two annoyances:

1. A small number of users will be required to jump through hoops
2. A small number of users will be unable to get at the special offers

I realize these are annoying, but if we can make the 'small' number small enough, I hope you will agree the positives outweigh the negatives.

First measure: User-based throttling:

This one is a no-brainer, and I'm sure you do it already. If a user is logged in and keeps refreshing 600 times a second (or whatever), you stop responding and tell him to cool it. In fact, you would probably throttle his requests significantly sooner than that, but you get the idea. This way, a logged-in bot will get banned/throttled as soon as it starts polling your site. This is the easy part. The unauthenticated bots are our real problem, so on to them:

Second measure: Some form of IP-based throttling, as nearly everyone has suggested:

No matter what, you will have to do some IP based throttling to thwart the 'bot slamming'. Since it seems important to you to allow unauthenticated (non-logged-in) visitors to get the special offers, you only have IPs to go by initially, and although they're not perfect, they do work against single-IP bots. Botnets are a different beast, but I'll come back to those. For now, we will do some simple throttling to beat rapid-fire single-IP bots. The performance hit is negligible if you run the IP check before all other processing, use a proxy server for the throttling logic, and store the IPs in a memcached lookup-optimized tree structure.
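A sketch of that front-door check, assuming the pymemcache client library and placeholder limits; memcached's atomic add/incr is what keeps this cheap enough to run before everything else:

    from pymemcache.client.base import Client  # assumed: pip install pymemcache

    mc = Client(("127.0.0.1", 11211), default_noreply=False)

    WINDOW_SECONDS = 60        # placeholder window
    MAX_HITS_PER_WINDOW = 30   # placeholder cutoff

    def ip_allowed(ip):
        """Run before all other processing: one atomic add/incr per request."""
        key = "hits:" + ip
        if mc.add(key, "1", expire=WINDOW_SECONDS):  # first hit in this window
            return True
        count = mc.incr(key, 1)                      # atomic server-side increment
        return count is not None and count <= MAX_HITS_PER_WINDOW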

Third measure: Cloaking the throttle with cached responses:

With rapid-fire single-IP bots throttled, we still have to address slow single-IP bots, ie. bots that are specifically tweaked to 'fly under the radar' by spacing requests slightly further apart than the throttling allows. To instantly render slow single-IP bots useless, simply use the strategy suggested by abelenky: serve 10-minute-old cached pages to all IPs that have been spotted in the last 24 hours (or so). That way, every IP gets one 'chance' per day/hour/week (depending on the period you choose), and there will be no visible annoyance to real users who are just hitting 'reload', except that they don't win the offer. The beauty of this measure is that it also thwarts 'alarm bots', as long as they don't originate from a botnet. (I know you would probably prefer it if real users were allowed to refresh over and over, but there is no way to tell a refresh-spamming human apart from a request-spamming bot without a CAPTCHA or similar.)
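A sketch of that cache-or-fresh decision as described, with a hypothetical render_fresh_page standing in for the real renderer and an in-memory store standing in for shared state:

    import time

    SEEN_WINDOW = 24 * 3600  # IPs seen within the last 24 hours get the stale copy
    CACHE_MAX_AGE = 600      # the cached page may be up to 10 minutes old

    last_seen = {}                               # ip -> last request time
    cached = {"body": None, "rendered_at": 0.0}  # one shared cached copy

    def render_fresh_page():
        """Hypothetical stand-in for the real page renderer (includes any live sale)."""
        return "<html>...live front page...</html>"

    def serve_front_page(ip):
        now = time.time()
        returning = now - last_seen.get(ip, 0.0) < SEEN_WINDOW
        last_seen[ip] = now
        if returning:
            if cached["body"] is None or now - cached["rendered_at"] > CACHE_MAX_AGE:
                cached["body"], cached["rendered_at"] = render_fresh_page(), now
            return cached["body"]  # repeat visitors: up-to-10-minute-old page
        return render_fresh_page() # first visit of the day: one live 'chance'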

Fourth measure:

You are right that CAPTCHAs hurt the user experience and should be avoided. However, in _one_ situation they can be your best friend: If you've designed a very restrictive system to thwart bots, that - because of its restrictiveness - also catches a number of false positives; then a CAPTCHA served as a last resort will allow those real users who get caught to slip by your throttling (thus avoiding annoying DoS situations). The sweet spot, of course, is when ALL the bots get caught in your net, while extremely few real users get bothered by the CAPTCHA. If you, when serving up the 10-minute-old cached pages, also offer an alternative, optional, CAPTCHA-verified 'front page refresher', then humans who really want to keep refreshing, can still do so without getting the old cached page, but at the cost of having to solve a CAPTCHA for each refresh. That is an annoyance, but an optional one just for the die-hard users, who tend to be more forgiving because they know they're gaming the system to improve their chances, and that improved chances don't come free.

Fifth measure: Decoy crap:

Christopher Mahan had an idea that I rather liked, but I would put a different spin on it. Every time you are preparing a new offer, prepare two other 'offers' as well, that no human would pick, like a 12mm wingnut for $20. When the offer appears on the front page, put all three 'offers' in the same picture, with numbers corresponding to each offer. When the user/bot actually goes on to order the item, they will have to pick (a radio button) which offer they want, and since most bots would merely be guessing, in two out of three cases, the bots would be buying worthless junk. Naturally, this doesn't address 'alarm bots', and there is a (slim) chance that someone could build a bot that was able to pick the correct item. However, the risk of accidentally buying junk should make scripters turn entirely from the fully automated bots.
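A sketch of how the order form could stay opaque, with hypothetical names; the three offers are assumed to be drawn into a single image elsewhere on the page, so the form itself exposes nothing a bot can parse:

    import random
    import secrets

    sessions = {}  # session_id -> {opaque radio value: offer id}; server-side only

    def build_offer_form(real_offer_id, decoy_ids):
        """Return (session_id, radio HTML). The values are opaque and shuffled,
        so a guessing bot picks the $20 wingnut two times out of three."""
        session_id = secrets.token_hex(16)
        mapping = {secrets.token_hex(8): oid for oid in [real_offer_id] + decoy_ids}
        sessions[session_id] = mapping
        values = list(mapping)
        random.shuffle(values)
        radios = "".join(
            f'<input type="radio" name="offer" value="{v}"> Offer {i + 1}<br>'
            for i, v in enumerate(values)
        )
        return session_id, radios

    def offer_bought(session_id, chosen_value):
        """Whatever was picked is what gets bought; no hint of which offer is real."""
        return sessions[session_id].get(chosen_value)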

Sixth measure: Botnet throttling:

(removed)

Okay............ I've now spent most of my evening thinking about this, trying different approaches.... global delays.... cookie-based tokens.. queued serving... 'stranger throttling'.... And it just doesn't work. It doesn't. I realized the main reason why you hadn't accepted any answer yet was that no one had proposed a way to thwart a distributed/zombie net/botnet attack.... so I really wanted to crack it. I believe I cracked the botnet problem for authentication in a different thread, so I had high hopes for your problem as well. But my approach doesn't translate to this. You only have IPs to go by, and a large enough botnet doesn't reveal itself in any analysis based on IP addresses.

So there you have it: the sixth measure is nil. Nothing. Zip. Unless the botnet is small or fast enough to get caught by the usual IP throttling, I don't see any effective measure against botnets that doesn't involve explicit human verification such as CAPTCHAs. I'm sorry, but I think combining the above five measures is your best bet. And you could probably do just fine with abelenky's 10-minute-caching trick alone.

I think that sandboxing certain IPs is worth looking into. Once an IP has gone over a threshold, when they hit your site, redirect them to a webserver that has a multi-second delay before serving out a file. I've written Linux servers that can handle 50K open connections with hardly any CPU, so it wouldn't be too hard to slow down a very large number of bots. All the server would need to do is hold the connection open for N seconds before acting as a proxy to your regular site. This would still let regular users use the site even if they were really aggressive, just at a slightly degraded experience.
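A sketch of such a sandbox using Python's asyncio rather than a raw Linux server: hold each connection idle for N seconds, then proxy it through to the real site (the addresses and delay are placeholder assumptions):

    import asyncio

    DELAY_SECONDS = 5               # sandbox delay before any service
    UPSTREAM = ("127.0.0.1", 8080)  # the real web site (assumption)

    async def pipe(src, dst):
        try:
            while data := await src.read(65536):
                dst.write(data)
                await dst.drain()
        finally:
            dst.close()

    async def handle(reader, writer):
        # Hold the connection open, idle, before proxying to the real site.
        await asyncio.sleep(DELAY_SECONDS)
        up_reader, up_writer = await asyncio.open_connection(*UPSTREAM)
        await asyncio.gather(pipe(reader, up_writer), pipe(up_reader, writer))

    async def main():
        # A load balancer would route only over-threshold IPs here (assumption).
        server = await asyncio.start_server(handle, "0.0.0.0", 8081)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())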

You could use memcached, as described here, to track hit counts per IP at low cost.

Preventing DoS would thwart the second of @davebug's goals outlined above, "Keep the site running at a speed not slowed by bots", but wouldn't necessarily solve the first, "Sell the item to non-scripting humans".

I'm sure a scripter could write something that skates just under the excessive-request limit and is still faster than a human could get through the order forms.

I'm not a web developer, so take this with a grain of salt, but here's my suggestion --

Have each user get a cookie (containing a string of random data) that determines whether they can see the current crap sale.

(If you have no cookie, you don't see them. So users who don't enable cookies never see the crap sales; and a new user will never see them on the first page view, but will after that.)

Every time the user refreshes the site, he passes his current cookie to the server, and the server uses it to decide whether to give him a new cookie or keep the current one unchanged; and based on that, decides whether to show the crap sale on the page.

To keep things simple on the server side, you could say that at any given time there's only one cookie that will let you see the crap sale; and there are a few other cookies marked "generated within the last 2 seconds", which always stay unchanged. So if you refresh the page faster than that, you can't get a new one.
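A sketch of that cookie dance, with assumed names; how the 'winning' token gets chosen is left as whatever server-side lottery you run:

    import time
    import secrets

    MIN_COOKIE_AGE = 2.0  # refreshing faster than this never yields a new cookie
    issued = {}           # token -> issue time; a shared store in production
    winning_token = secrets.token_hex(16)  # the one token that currently sees the sale

    def handle_refresh(current_token):
        """Return (token_to_set, show_sale). Each cookie rotation is one 'draw'."""
        now = time.time()
        if current_token in issued and now - issued[current_token] < MIN_COOKIE_AGE:
            token = current_token          # too recent: same cookie, same draw
        else:
            token = secrets.token_hex(16)  # fresh draw, at most every 2 seconds
            issued[token] = now
            issued.pop(current_token, None)
        return token, token == winning_token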

(...Ah well, I guess that doesn't stop a bot from restoring an old cookie and passing it back to you. Still, maybe there's a solution along those lines somewhere.)