I have a fairly large music website with a large artist database. I've been noticing other music sites scraping our site's data (I enter dummy artist names here and there and then run Google searches for them).
How can I prevent screen scraping? Is it even possible?
Current answer
I've done a lot of web scraping myself, and on my blog I've summarized some techniques to stop web scrapers, based on the things I find annoying as a scraper.
It's a tradeoff between your users and the scrapers. If you limit IPs, use CAPTCHAs, require login, and so on, you make life difficult for the scrapers. But you may also drive away your genuine users.
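To make the IP-limiting side of that tradeoff concrete, here is a minimal per-IP rate-limiting sketch in PHP. It assumes the APCu extension is available, and the 60-requests-per-minute budget is an arbitrary example value; set it too low and fast-browsing real users will trip it too.

<?php
// Per-IP rate limiter (sketch). Assumes the APCu extension; the
// 60-requests-per-minute budget is an arbitrary example value.
$ip    = $_SERVER['REMOTE_ADDR'];
$key   = 'rate:' . $ip;
$limit = 60;

apcu_add($key, 0, 60);    // create the counter with a 60-second TTL if absent
$count = apcu_inc($key);  // increment it for this request

if ($count > $limit) {
    http_response_code(429); // Too Many Requests
    header('Retry-After: 60');
    exit('Slow down.');
}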
Other answers
A quick way to tackle this is to set up a trap.
Make a page that, if it's opened a certain number of times (or even opened at all), will collect certain information such as the IP address and so on (you can also look for irregularities or patterns, but this page shouldn't need to be opened at all). Make a link to it in your page that is hidden with CSS, e.g. display: none; or position: absolute; left: -9999px;. Try to place it where it is less likely to be ignored, such as within your main content rather than your footer, since bots sometimes choose to skip certain parts of a page.

In your robots.txt file, set a whole bunch of disallow rules for pages you don't want friendly bots (LOL, like they have happy faces!) to gather information on, and list this page as one of them. Now, if a friendly bot comes through, it should ignore that page. Right, but that still isn't good enough. Make a couple more of these pages, or somehow re-route a page to accept different names, and then add more disallow rules for these trap pages to your robots.txt file, alongside the pages you want ignored.

Collect the IPs of these bots, or of anyone that enters these pages. Don't ban them; instead, make a function that serves noodled text in your content: random numbers, copyright notices, specific text strings, scary pictures, basically anything to degrade your good content for them. You can also set links that point to a page which takes forever to load, e.g. in PHP you can use the sleep() function. This fights back against crawlers that have some sort of detection to bypass pages that take too long to load, as some well-written bots are set to process X links at a time. A sketch of such a trap page follows below.

If you have planted specific text strings or sentences, why not search for them in your favorite search engine? It might show you where your content is ending up.
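A rough sketch of such a trap page in PHP; the file name trap.php, the log file, and the 10-second stall are illustrative choices, not part of the answer:

<?php
// trap.php -- honeypot (sketch). A CSS-hidden link points here and
// robots.txt disallows it, so whatever requests it is likely a scraper.
$line = sprintf(
    "%s\t%s\t%s\n",
    date('c'),
    $_SERVER['REMOTE_ADDR'],
    $_SERVER['HTTP_USER_AGENT'] ?? '-'
);
file_put_contents(__DIR__ . '/trap.log', $line, FILE_APPEND | LOCK_EX);

sleep(10); // waste the crawler's time, as suggested above

// Serve "noodled" junk instead of real content.
echo '<p>Copyright notice: planted-marker-string-00421</p>';

The hidden link in your pages would look something like <a href="/trap.php" style="display:none">stats</a>, with a matching Disallow: /trap.php rule under User-agent: * in robots.txt.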
In any case, if you think about this tactically and creatively, it could be a good starting point. The best thing you can do is learn how bots work.
I would also consider scrambling how IDs or attributes are displayed on page elements:
<a class="someclass" href="../xyz/abc" rel="nofollow" title="sometitle">
so that it changes form every time, since some bots may be looking for specific patterns in your pages or target elements:
<a title="sometitle" href="../xyz/abc" rel="nofollow" class="someclass">
id="p-12802" > id="p-00392"
Putting your content behind a CAPTCHA would mean that bots would find it difficult to access your content. However, humans would be inconvenienced, so that may be undesirable.
There are a few things you can do to prevent screen scraping. Some are not very effective, while others (a CAPTCHA) are, but hinder usability. You have to keep in mind that it may also hinder legitimate site scrapers, such as search engine indexers.
However, I assume that if you don't want it scraped, you don't want search engines to index it either.
Here are some things you could try:
- Show the text in an image. This is quite reliable, and it's less of a pain for the user than a CAPTCHA, but it means they won't be able to cut and paste, it won't scale prettily, and it isn't accessible.
- Use a CAPTCHA and require it to be completed before returning the page. This is a reliable method, but also the biggest pain to impose on a user.
- Require the user to sign up for an account before viewing the pages, and confirm their email address. This will be pretty effective, but not totally: a screen scraper might set up an account and might cleverly program their script to log in for them.
- If the client's user-agent string is empty, block access (see the sketch after this list). A site-scraping script will often be lazily programmed and won't set a user-agent string, whereas all web browsers will.
- You can set up a blacklist of known screen-scraper user-agent strings as you discover them. Again, this will only help against the lazily coded ones; a programmer who knows what he's doing can set a user-agent string to impersonate a web browser.
- Change the URL path often. When you change it, make sure the old one keeps working, but only for as long as one user is likely to have their browser open. Make it hard to predict what the new URL path will be, so that scripts with a hard-coded URL can't grab it. It'd be best to do this with some kind of script.
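A minimal sketch of the empty-user-agent check in PHP; the 403 response and the blacklist entries are my own illustrative choices:

<?php
// Block requests with no User-Agent header, plus known bad ones (sketch).
// Real browsers always send a user-agent; lazy scripts often do not.
$ua = trim($_SERVER['HTTP_USER_AGENT'] ?? '');

$blacklist = ['EvilScraper/1.0', 'DataHarvester']; // hypothetical entries

if ($ua === '' || in_array($ua, $blacklist, true)) {
    http_response_code(403);
    exit('Forbidden');
}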
If I had to do this, I'd probably use a combination of the last three, since they minimize the inconvenience to legitimate users. However, you'd have to accept that you won't be able to block everyone this way, and once someone figures out how to get around it, they'll be able to scrape it forever. You could then just block their IP addresses as you discover them, I guess.
One approach is to serve the content as XML attributes, URL-encoded strings, pre-formatted text with HTML-encoded JSON, or data URIs, and then transform it to HTML on the client. Here are a few sites which do this:
Skechers: XML

<document filename="" height="" width="" title="SKECHERS" linkType="" linkUrl="" imageMap="" href="http://www.bobsfromskechers.com" alt="BOBS from Skechers" title="BOBS from Skechers" />

Chrome Web Store: JSON

<script type="text/javascript" src="https://apis.google.com/js/plusone.js">
  {"lang": "en", "parsetags": "explicit"}
</script>

Bing News: data URL

<script type="text/javascript">
//<![CDATA[
(function() {
  var x; x = _ge('emb7');
  if (x) {
    x.src = 'data:image/jpeg;base64,/*...*/';
  }
}())

Protopage: URL-encoded strings

unescape('Rolling%20Stone%20%3a%20Rock%20and%20Roll%20Daily')

TiddlyWiki: HTML entities + preformatted JSON

<pre>
{"tiddlers":
 {
  "GettingStarted":
  {
   "title": "GettingStarted",
   "text": "Welcome to TiddlyWiki,
  }
 }
}
</pre>

Amazon: lazy loading

amzn.copilot.jQuery=i;amzn.copilot.jQuery(document).ready(function(){d(b);f(c,function() {amzn.copilot.setup({serviceEndPoint:h.vipUrl,isContinuedSession:true})})})},f=function(i,h){var j=document.createElement("script");j.type="text/javascript";j.src=i;j.async=true;j.onload=h;a.appendChild(j)},d=function(h){var i=document.createElement("link");i.type="text/css";i.rel="stylesheet";i.href=h;a.appendChild(i)}})();
amzn.copilot.checkCoPilotSession({jsUrl : 'http://z-ecx.images-amazon.com/images/G/01/browser-scripts/cs-copilot-customer-js/cs-copilot-customer-js-min-1875890922._V1_.js', cssUrl : 'http://z-ecx.images-amazon.com/images/G/01/browser-scripts/cs-copilot-customer-css/cs-copilot-customer-css-min-2367001420._V1_.css', vipUrl : 'https://copilot.amazon.com'

XMLCalabash: namespaced XML + custom MIME type + custom file extension

<p:declare-step type="pxp:zip">
  <p:input port="source" sequence="true" primary="true"/>
  <p:input port="manifest"/>
  <p:output port="result"/>
  <p:option name="href" required="true" cx:type="xsd:anyURI"/>
  <p:option name="compression-method" cx:type="stored|deflated"/>
  <p:option name="compression-level" cx:type="smallest|fastest|default|huffman|none"/>
  <p:option name="command" select="'update'" cx:type="update|freshen|create|delete"/>
</p:declare-step>
If you view the source of any of the above, you'll see that scraping will simply return metadata and navigation.
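A rough sketch of the same trick in PHP: the server emits the text base64-encoded in a data attribute and a small inline script decodes it in the browser, so a scraper reading only the raw HTML sees an opaque blob. The $bio placeholder is hypothetical, and atob() alone assumes ASCII content (multi-byte UTF-8 needs extra decoding):

<?php
// Serve content encoded; decode it client-side (sketch).
$bio = 'Example artist biography text.'; // hypothetical placeholder
$encoded = base64_encode($bio);
?>
<div id="bio" data-payload="<?php echo htmlspecialchars($encoded); ?>"></div>
<script>
  // Runs in the browser: decode the payload into visible text.
  var el = document.getElementById('bio');
  el.textContent = atob(el.getAttribute('data-payload'));
</script>

Of course, a scraper driving a headless browser still sees the decoded text; this only raises the bar for simple HTML-parsing scripts.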
Sure, it's possible. For 100% success, take your site offline.
In reality, you can do some things that make scraping a little more difficult. Google does browser checks to make sure you're not a robot scraping its search results (although this, like most everything else, can be spoofed).
You could do things like require several seconds between the first connection to your site and subsequent clicks. I'm not sure what the ideal time would be or exactly how to do it, but that's another idea.
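One way to prototype that delay in PHP with a session timestamp; the 2-second threshold is an arbitrary guess, since the answer itself says the ideal time is unclear:

<?php
// Require a minimum delay between consecutive page loads (sketch).
session_start();

$minDelay = 2; // seconds; an arbitrary example value
$now  = time();
$last = $_SESSION['last_hit'] ?? 0;

if ($now - $last < $minDelay) {
    http_response_code(429);
    exit('Please wait a moment before clicking again.');
}
$_SESSION['last_hit'] = $now;

Note that a scraper which discards cookies gets a fresh session on every request, so this only slows down cookie-respecting clients.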
I'm sure there are other people here with far more experience, but I hope these ideas are at least somewhat helpful.