I've accepted an answer, but sadly, I believe we're stuck with our original worst-case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible to track hits, and any workaround (sending a non-cached web beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey hardware from Cisco or the like that could help at a high level, but it's hard to justify the cost if CAPTCHA-ing everyone is an alternative. I'll attempt a fuller explanation later, as well as clean this up for future searchers (though others are welcome to try, as it's community wiki).

The situation

This is about the random crap sales on woot.com. I'm the president of Woot Workshop, the subsidiary of Woot that does the product design, writes the product descriptions, podcasts, and blog posts, and moderates the forums. I work with CSS/HTML and am barely familiar with the other technologies involved. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had) with them.

Usability is a huge part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below come from. CAPTCHA harms usability, and bots steal the fun and excitement out of our crap sales.

Bots are slamming our front page dozens of times a second, screen-scraping (and/or scanning our RSS) looking for the random crap sale. The moment they see it, it triggers a second stage of the program that logs in, clicks "I want One", fills out the form, and buys the crap.

Assessing the answers

lc: On stackoverflow and the other sites that use this method, they're almost always dealing with authenticated (logged-in) users, because the task being attempted requires it.

On Woot, anonymous (non-logged-in) users can view our home page. In other words, the slamming bots can be non-authenticated (and essentially non-trackable except by IP address).

So we're back to scanning for IPs, which a) is fairly useless in this age of cloud networking and spambot zombies, and b) catches too many innocents, given the number of businesses that come from a single IP address (not to mention the issues with non-static-IP ISPs and the potential performance hit of trying to track it all).

Also, having people call us is the worst-case scenario. Can we have them call you?

BradC: Ned Batchelder's methods look pretty cool, but they're specifically designed to defeat bots built for a network of sites. Our problem is that the bots are built specifically to defeat our site. Some of these methods would likely work only for a short time, until the scripters evolved their bots to ignore the honeypot, screen-scrape for nearby label names instead of form ids, and use a javascript-capable browser control.


lc again: "Unless, of course, the hype is part of your marketing scheme." Yes, it absolutely is. The surprise when an item appears, and the excitement if you manage to get one, is probably as important as, or more important than, the crap you actually end up getting. Anything that eliminates first-come/first-served is detrimental to the thrill of "winning" the crap.


novatrust: And I, for one, welcome our new bot overlords. We actually do offer RSS feeds to allow third-party apps to scan our site for product info, but not ahead of the main site's HTML. If I'm reading it right, your solution helps goal 2 (performance issues) by completely sacrificing goal 1 and resigning yourself to the fact that bots will buy most of the crap. I up-voted your answer, because your last-paragraph pessimism feels accurate to me. There seems to be no silver bullet here.

The rest of the responses generally rely on IP tracking, which seems both useless (with botnets/zombies/cloud networking) and detrimental (catching many innocents who come from the same IP destinations).

Any other approaches / ideas? My developers keep saying "let's just do CAPTCHA", but I'm hoping for less intrusive methods that stay open to all the actual humans who want some of our crap.

The original question

Say you're selling something cheap that has a very high perceived value, and your quantity is extremely limited. No one knows exactly when you're going to sell the item, and over a million people regularly come by to see what you're selling.

You end up with scripters and bots attempting to programmatically [a] figure out when you're selling said item, and [b] make sure they're among the first to buy it. This sucks for two reasons:

1. Your site gets slammed by non-humans, slowing everything down for everyone.
2. The scripters end up "winning" the product, leaving the regulars feeling cheated.

One seemingly obvious solution is to set up some hoops for your users to jump through before placing an order, but there are at least three problems with this:

1. The user experience sucks for humans, as they have to decipher a CAPTCHA, pick out the cat, or solve a math problem.
2. If the perceived benefit is high enough, and the crowd large enough, some group will find their way around any tweak, leading to an arms race. (This is especially true the simpler the tweak is; a hidden 'comments' field, re-arranging the form elements, mis-labeling them, and hidden 'gotcha' text will each work once, and then need to be changed to fight the bots targeting that specific form.)
3. Even if the scripters can't 'solve' your tweak, it doesn't prevent them from slamming your front page and then sounding an alarm for the scripter to fill out the order manually. Given they get the advantage from solving [a], they will likely still win [b], since they'll be the first humans to reach the order page. Additionally, 1. still happens, causing server errors and decreased performance for everyone.

Another solution is to watch for IPs hitting too often, block them at the firewall, or otherwise prevent them from ordering. This could solve 2. and prevent [b], but the performance hit from scanning for IPs is massive and would likely cause more problems like 1. than the scripters were causing on their own. Additionally, the possibility of cloud networking and spambot zombies makes IP checking fairly useless.

A third idea, forcing the order form to be loaded for some amount of time (say, half a second), would potentially slow down the speed-ordering, but again, the scripters would still be the first ones in, at any speed that isn't detrimental to actual users.

Goals

1. Sell the item to non-scripting humans.
2. Keep the site running at a speed not slowed by bots.
3. Don't hassle the "normal" users with any tasks to complete to prove they're human.


Current answer

I can't be 100% sure this is feasible, at least not without trying it.

But it seems as if it should be possible, although technically challenging, to write a server-side HTML/CSS scrambler that takes as its input a normal html page + associated files, and outputs a more or less blank html page, along with an obfuscated javascript file that is capable of reconstructing the page. The javascript couldn't just print out straightforward DOM nodes, of course... but it could spit out a complex set of overlapping, absolute-positioned divs and paragraphs, each containing one letter, so it comes out perfectly readable.
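To make the idea concrete, here is a minimal TypeScript sketch of just the output stage: it takes a piece of text and emits one absolutely-positioned div per character, shuffled in source order so a naive scraper can't simply read the markup. (All names and coordinates here are invented; a real scrambler would also have to handle layout, fonts, and the rest of the page.)

```typescript
// Sketch: emit each character of a text node as its own absolutely-
// positioned <div>, shuffled in source order, so the rendered page is
// readable by humans while the raw markup is useless to a naive scraper.
function scramble(text: string, xStart: number, y: number, charWidth: number): string {
  const pieces = text.split("").map((ch, i) => ({
    ch,
    left: xStart + i * charWidth, // visual position preserves reading order
  }));

  // Fisher-Yates shuffle of the DOM order, so source order reveals nothing.
  for (let i = pieces.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pieces[i], pieces[j]] = [pieces[j], pieces[i]];
  }

  return pieces
    .map(
      (p) =>
        `<div style="position:absolute;left:${p.left}px;top:${y}px">` +
        (p.ch === " " ? "&nbsp;" : p.ch) +
        `</div>`
    )
    .join("");
}

// A human sees "Random Crap" in its usual spot; a scraper sees shuffled divs.
console.log(scramble("Random Crap", 100, 40, 12));
```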

Bots would be unable to read it unless they employed a full rendering engine and enough AI to reconstruct what a human would see.

Then, since this is an automated process, you could re-scramble the site as often as your computing power allows: every minute, every ten minutes, every hour, or even on every page load.

Of course, writing such a scrambler would be difficult, and it might not be worth it. But it's a thought.

Other answers

Here's my take on it: attack the bot owners' ROI, so that they'll do the legitimate things you want them to do instead of cheating. Let's look at it from their perspective. What are their assets? Apparently countless throwaway machines and IP addresses, and maybe even a pool of unskilled people willing to do boring work. What do they want? To always get the special deal you offer before other, legitimate people get it.

The good news is that they only have a limited window of time in which to win the race. And what I don't think they have is an unlimited number of smart people who are on call to reverse engineer your site at the moment you unleash a deal. So if you can make them jump through a specific hoop that is hard for them to figure out, but automatic for your legitimate customers (they won't even know it's there), you can delay their efforts just enough that they get beat by the massive number of real people who are just dying to get your hot deal.

The first step is to make your notion of authentication non-binary, by which I mean that, for any given user, you have a probability assigned to them that they are a real person or a bot. You can use a number of hints to build up this probability, many of which have been discussed already in this thread: suspicious rate activity, IP addresses, foreign-country geolocation, cookies, etc. My favorite is to just pay attention to the exact version of Windows they are using. More importantly, you can give your long-term customers a clear way to authenticate with strong hints: by engaging with the site, making purchases, contributing to forums, etc. It's not required that you do those things, but if you do then you'll have a slight advantage when it comes time to see special deals.
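As a sketch of what that non-binary score might look like in code (all signal names and weights below are invented for illustration; the real hints would be whatever your logs actually give you):

```typescript
// Hypothetical scoring sketch: fold weak signals into one "probably a
// bot" number instead of making a yes/no authentication decision.
interface SessionHints {
  requestsPerMinute: number;  // suspicious rate activity
  hasOldCookies: boolean;     // long-lived cookies suggest a real browser
  pastPurchases: number;      // engagement history
  forumPosts: number;
  geoMatchesBilling: boolean; // geolocation consistency
}

function botProbability(h: SessionHints): number {
  let score = 0.5; // start neutral for an unknown visitor
  if (h.requestsPerMinute > 60) score += 0.3;
  if (h.hasOldCookies) score -= 0.15;
  score -= Math.min(0.2, h.pastPurchases * 0.05);
  score -= Math.min(0.1, h.forumPosts * 0.01);
  if (!h.geoMatchesBilling) score += 0.1;
  return Math.max(0, Math.min(1, score)); // clamp to [0, 1]
}
```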

Whenever you are called upon to make an authentication decision, use this probability to make the computer you're talking to do more-or-less work before you will give them what they want. For example, perhaps some javascript on your site requires the client to perform a computationally expensive task in the background, and only when that task completes will you let them know about the special deal. For a regular customer, this can be pretty quick and painless, but for a scammer it means they need a lot more computers to maintain constant coverage (since each computer has to do more work). Then you can use your probability score from above to increase the amount of work they have to do.
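One concrete way to realize the "computationally expensive task" is a hashcash-style proof-of-work puzzle (my suggestion, not something the answer specifies): the server hands out a challenge string and a difficulty derived from the bot-probability score, and the client must brute-force a nonce before the deal is revealed. A minimal sketch:

```typescript
import { createHash } from "crypto";

// Hashcash-style puzzle: find a nonce such that sha256(challenge + nonce)
// starts with `difficulty` zero hex digits. Each extra digit multiplies
// the expected work by 16, so the server can scale cost with suspicion.
function solveChallenge(challenge: string, difficulty: number): number {
  const target = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256")
      .update(challenge + nonce)
      .digest("hex");
    if (digest.startsWith(target)) return nonce;
  }
}

// A trusted regular might get difficulty 3 (fast, imperceptible);
// a suspicious client gets 6 and burns real CPU on every request.
console.log(solveChallenge("deal-4711:", 4));
```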

To make sure this delay doesn't cause any fairness problems, I'd recommend making it be some kind of encryption task that includes the current time of day from the person's computer. Since the scammer doesn't know what time the deal will start, he can't just make something up, he has to use something close to the real time of day (you can ignore any requests that claim to come in before the deal started). Then you can use these times to adjust the first-come-first-served rule, without the real people ever having to know anything about it.
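Verifying such a solution on the server, with the client's claimed clock reading baked into the challenge as the paragraph above suggests, could look roughly like this (names and thresholds are invented):

```typescript
import { createHash } from "crypto";

// Server-side check of a solved puzzle whose challenge embeds the
// client's claimed clock reading (all names/thresholds invented).
function verifySolution(
  claimedClientTime: number, // ms epoch the client baked into its challenge
  nonce: number,
  difficulty: number,
  dealStart: number,
  now: number
): boolean {
  // Reject times from before the deal started or from the future.
  if (claimedClientTime < dealStart) return false;
  if (claimedClientTime > now + 5_000) return false;

  const digest = createHash("sha256")
    .update(`deal-4711:${claimedClientTime}:${nonce}`)
    .digest("hex");
  return digest.startsWith("0".repeat(difficulty));
}
```

Accepted claimedClientTime values can then be used to order the first-come-first-served queue, exactly as described above.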

The last idea is to change the algorithm required to generate the work whenever you post a new deal (and at random other times). Every time you do that, normal humans will be unaffected, but bots will stop working. They'll have to get a human to get to work on the reverse-engineering, which hopefully will take longer than your deal window. Even better is if you never tell them if they submitted the right result, so that they don't get any kind of alert that they are doing things wrong. To defeat this solution, they will have to actually automate a real browser (or at least a real javascript interpreter) and then you are really jacking up the cost of scamming. Plus, with a real browser, you can do tricks like those suggested elsewhere in this thread like timing the keystrokes of each entry and looking for other suspicious behaviors.

So for anyone who you know you've seen before (a common IP, session, cookie, etc) you have a way to make each request a little more expensive. That means the scammers will want to always present you with your hardest case - a brand-new computer/browser/IP combo that you've never seen before. But by putting some extra work into being able to even know if they have the bot working right, you force them to waste a lot of these precious resources. Although they may really have an infinite number, generating them is not without cost, and again you are driving up the cost part of their ROI equation. Eventually, it'll be more profitable for them to just do what you want :)

Hope that helps,

Eric

Let's look at the problem another way: you have bots buying stuff that you want real people to buy, so how about giving the bots a chance to buy stuff that you don't want real people to buy?

Have a random chance of some non-displayed HTML that the scraping bots will think is the real thing, but that real people won't see (and don't forget that real people include the blind, so think about screen readers and the like), and that goes through to purchase something hugely expensive (or doesn't make an actual purchase, but captures payment details for you to put on a list).
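A sketch of what that trap might look like, with the markup and trap URL entirely made up (note that display:none also hides the element from screen readers, which covers the caveat about blind users):

```typescript
// Sketch of the trap: markup hidden from humans (display:none also hides
// it from screen readers) that looks like the real "buy" link to a
// naive scraper. The trap URL and item are made up for illustration.
function renderBuyLinks(realBuyUrl: string): string {
  return `
    <a href="${realBuyUrl}">I want one!</a>
    <div style="display:none">
      <a href="/buy/solid-gold-monkey">I want one!</a>
    </div>`;
}

const flagged = new Set<string>(); // accounts/payment details to review

// Route handler behind the trap URL: don't charge, just make the list.
function onTrapHit(accountId: string): void {
  flagged.add(accountId);
}
```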

Even if the bots switch to "alert the user" rather than "buy", if you can generate enough false alarms, you may be able to make it sufficiently worthless for people not to bother with it (maybe not everyone, but some reduction in the scamming is better than none).

Take a look at this article by Ned Batchelder. His article is about stopping spambots, but the same techniques could easily apply to your site.

Rather than stopping bots by having people identify themselves, we can stop the bots by making it difficult for them to make a successful post, or by having them inadvertently identify themselves as bots. This removes the burden from people, and leaves the comment form free of visible anti-spam measures. This technique is how I prevent spambots on this site. It works. The method described here doesn't look at the content at all.

Some other ideas:

- Create an official auto-notification mechanism (an RSS feed? Twitter?) that people can subscribe to for when your product goes on sale. This reduces the need for people to write scripts.
- Change up your obfuscation technique right before a new item goes on sale, so even if the scripters can escalate the arms race, they're always a day behind.


EDIT: To be completely clear: Ned's article above describes methods to prevent the automated purchase of items by preventing a bot from going through the forms to submit an order. His techniques wouldn't be useful in preventing bots from screen-scraping the home page to determine when a Bandoleer of Carrots comes up for sale. I'm not sure whether preventing that is really possible.

Regarding your comments about the effectiveness of Ned's strategies: yes, he discusses honeypots, but I don't think that's his strongest strategy. His discussion of the SPINNER is the original reason I mentioned his article. Sorry I didn't make that clearer in my original post:

The spinner is a hidden field used for a few things: it hashes together a number of values that prevent tampering and replays, and is used to obscure field names. The spinner is an MD5 hash of: the timestamp, the client's IP address, the entry id of the blog entry being commented on, and a secret.

Here's how you could implement that at WOOT.com:

Change the "secret" value that is used as part of the hash each time a new item goes on sale. That means that even if someone designs a BOT to auto-purchase items, it will only work until the next item comes up for sale!!
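A sketch of that spinner, following the recipe quoted above (an MD5 hash over the timestamp, client IP, item id, and a per-sale secret); the function and field names here are mine:

```typescript
import { createHash } from "crypto";

// Spinner per the quote above: an MD5 hash of a timestamp, the client's
// IP address, the id of the item being acted on, and a secret. Rotating
// `secret` each time a new item goes on sale invalidates any bot built
// against the previous form.
function spinner(timestamp: number, clientIp: string, itemId: string, secret: string): string {
  return createHash("md5")
    .update(`${timestamp}:${clientIp}:${itemId}:${secret}`)
    .digest("hex");
}

// Embed the value in a hidden field (and derive the obscured field names
// from it); recompute and compare on submission to reject replays.
console.log(spinner(Date.now(), "203.0.113.7", "boc-123", "per-sale-secret"));
```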

Even if someone is able to quickly re-build their bot, all the other actual users will have already purchased a BOC, and your problem is solved!

The other strategy he discusses is to change the honeypot technique from time to time (again, changing it when a new item goes on sale):

- Use CSS classes (randomized of course) to set the fields or a containing element to display:none.
- Color the fields the same (or very similar to) the background of the page.
- Use positioning to move a field off of the visible area of the page.
- Make an element too small to show the contained honeypot field.
- Leave the fields visible, but use positioning to cover them with an obscuring element.
- Use Javascript to effect any of these changes, requiring a bot to have a full Javascript engine.
- Leave the honeypots displayed like the other fields, but tell people not to enter anything into them.
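As a concrete illustration of the list above, several of those variations could be generated with a randomized class name per sale, something like this sketch (class names and the choice of technique are randomized so bots can't key on a fixed pattern):

```typescript
// Sketch: pick one hiding technique at random per sale and emit it under
// a freshly randomized class name, so bots can't key on a fixed pattern.
function randomClass(): string {
  return "f" + Math.random().toString(36).slice(2, 10);
}

function honeypotCss(cls: string): string {
  const techniques = [
    `.${cls} { display: none; }`,
    `.${cls} { color: #fff; background: #fff; }`,     // match the page background
    `.${cls} { position: absolute; left: -5000px; }`, // off the visible page
    `.${cls} { height: 0; overflow: hidden; }`,       // too small to show
  ];
  return techniques[Math.floor(Math.random() * techniques.length)];
}

const cls = randomClass();
console.log(`<style>${honeypotCss(cls)}</style>`);
console.log(`<input class="${cls}" name="${cls}-email">`); // the honeypot field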

I guess my overall idea is to change the form design each time a new item goes on sale, or at least each time a new BOC goes on sale.

Which is, what, a few times a month?

Write a reverse proxy on an apache server in front of your application that implements a tarpit (Wikipedia article) to punish bots. It would simply manage a list of IP addresses that have connected in the last few seconds. You detect a burst of requests from a single IP address and then exponentially delay those requests before responding.

Of course, multiple humans can come from the same IP address if they're on a NAT'd network connection, but a human is unlikely to mind if your response time goes from 2ms to 4ms (or even 400ms), whereas a bot will be hampered by the increasing delay fairly quickly.
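The answer proposes doing this inside an Apache reverse proxy; purely as an illustration of the same logic, here is a standalone sketch in TypeScript/Node (the thresholds are invented), where the delay doubles with each request past a free allowance:

```typescript
import { createServer } from "http";

// Tarpit sketch: count hits per IP in a sliding window and delay the
// response exponentially once an IP exceeds a free allowance.
const hits = new Map<string, { count: number; windowStart: number }>();
const WINDOW_MS = 10_000;
const FREE_HITS = 20; // hits per window before the tarpit kicks in

function tarpitDelay(ip: string, now: number): number {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return 0;
  }
  entry.count++;
  const excess = entry.count - FREE_HITS;
  // 2ms, 4ms, 8ms, ... doubling per excess hit, capped at 30 seconds.
  return excess > 0 ? Math.min(30_000, 2 ** excess) : 0;
}

createServer((req, res) => {
  const ip = req.socket.remoteAddress ?? "unknown";
  const delay = tarpitDelay(ip, Date.now());
  setTimeout(() => {
    // ...forward the request to the real application here...
    res.end("ok");
  }, delay);
}).listen(8080);
```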
