I have a fairly large music website with a large artist database. I've been noticing other music sites scraping our site's data (I enter dummy artist names here and there and then do Google searches for them).
How can I prevent screen scraping? Is that even possible?
Current answer
Unfortunately, your best option is fairly manual: look for traffic patterns that you believe are indicative of scraping, and ban their IP addresses.
Since you're talking about a public site, making the site search-engine friendly will also make it scraping-friendly. If a search engine can crawl and scrape your site, a malicious scraper can too. It's a fine line to walk.
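A first pass at that manual hunt can be automated. Below is a minimal sketch, assuming a standard access log format where the client IP is the first whitespace-separated field; the log path and threshold are placeholders you would tune to your own traffic:

```python
# Sketch: flag IPs with suspiciously many requests in an access log.
# Assumes the common/combined log format (client IP is the first field);
# "access.log" and the 1000-request threshold are placeholders.
from collections import Counter

THRESHOLD = 1000  # requests per log file; tune to your traffic

def suspicious_ips(log_path: str) -> list[tuple[str, int]]:
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            hits[line.split(" ", 1)[0]] += 1
    return [(ip, n) for ip, n in hits.most_common() if n >= THRESHOLD]

if __name__ == "__main__":
    for ip, count in suspicious_ips("access.log"):
        print(f"{ip}\t{count} requests")  # candidates for a manual ban
```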
Other answers
A quick way to deal with this would be to set up a booby trap.
1. Make a page that, if it's opened a certain number of times or even opened at all, collects information such as the visitor's IP address and so on (you can also watch for irregularities or patterns, but this page shouldn't have to be opened at all). A rough sketch of such a trap follows after this list.

2. Link to this page with a link that is hidden via CSS (display:none; or position:absolute; left:-9999px;). Try to place it where it's less likely to be ignored, such as inside your content rather than in your footer, since bots can sometimes choose to skip certain parts of a page.

3. In your robots.txt file, set a whole bunch of Disallow rules for pages you don't want friendly bots (LOL, like they have happy faces!) to gather information from, and list this page as one of them. If a friendly bot comes through, it should then ignore the page.

4. That still isn't good enough, though. Make a couple more of these pages, or somehow reroute a page to accept different names, and then add more Disallow rules for these trap pages to your robots.txt alongside the pages you want ignored.

5. Collect the IPs of these bots, or of anyone who enters these pages. Don't ban them; instead, make a function that garbles the content you serve them: random numbers, copyright notices, specific text strings, scary pictures, basically anything that spoils your good content. You can also set links pointing to a page that takes forever to load (in PHP, for instance, you can use the sleep() function). This fights back against crawlers that have some sort of detection for skipping pages that take too long to load, since some well-written bots are set to process a fixed number of links at a time.

6. If you have planted specific text strings or sentences, search for them in your favorite search engine; it might show you where your content is ending up.
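As a rough illustration of the trap above, here is a minimal sketch using Flask (the framework, the /secret-archive path, and the junk response are all assumptions for illustration, not part of the original answer):

```python
# Hypothetical honeypot: a robots.txt-disallowed page that only
# misbehaving bots should ever reach, plus garbled output for them.
import random
import time
from flask import Flask, request

app = Flask(__name__)
trapped_ips = set()  # in practice, persist this somewhere durable

@app.route("/robots.txt")
def robots():
    # Friendly bots are told to stay away; a scraper that ignores
    # robots.txt and follows the hidden link identifies itself.
    rules = "User-agent: *\nDisallow: /secret-archive\n"
    return rules, 200, {"Content-Type": "text/plain"}

@app.route("/secret-archive")  # linked only via a CSS-hidden anchor
def trap():
    trapped_ips.add(request.remote_addr)
    time.sleep(30)  # make every hit on the trap expensive
    return "nothing to see here"

@app.route("/artist/<name>")
def artist(name):
    if request.remote_addr in trapped_ips:
        # Noodled text instead of real content for trapped visitors.
        junk = " ".join(str(random.randint(0, 9999)) for _ in range(200))
        return f"<p>Copyright notice. {junk}</p>"
    return f"<p>Real page for {name}.</p>"
```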
Anyway, if you think tactically and creatively, this could be a good starting point. The best thing you could do is learn how bots work.
I would also consider scrambling the way IDs or attributes on page elements are displayed:
<a class="someclass" href="../xyz/abc" rel="nofollow" title="sometitle">
which would change its form on every request, since some bots may be set up to look for specific patterns in your pages or in targeted elements:
<a title="sometitle" href="../xyz/abc" rel="nofollow" class="someclass">
id="p-12802" > id="p-00392"
Sure, it's possible. For 100% success, take your site offline.
In reality, there are some things you can do to make scraping a bit more difficult. Google does browser checks to make sure you're not a robot scraping its search results (although this, like most everything else, can be spoofed).
You can do things like require several seconds between the first connection to your site and subsequent clicks. I'm not sure what the ideal time would be or exactly how to do it, but that's another idea.
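One plausible way to implement that pacing idea is a per-IP minimum gap between requests. This is only a sketch under stated assumptions: Flask, an in-memory table, and a guessed two-second gap (the answer itself admits the ideal time is unknown):

```python
# Sketch: reject follow-up requests that arrive sooner than a human
# plausibly could. MIN_GAP_SECONDS is a guess; tune it, or you'll
# annoy fast legitimate users.
import time
from flask import Flask, abort, request

app = Flask(__name__)
last_seen: dict[str, float] = {}  # IP -> timestamp of previous request

MIN_GAP_SECONDS = 2.0

@app.before_request
def enforce_pacing():
    ip = request.remote_addr
    now = time.monotonic()
    previous = last_seen.get(ip)
    last_seen[ip] = now
    if previous is not None and now - previous < MIN_GAP_SECONDS:
        abort(429)  # Too Many Requests
```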
I'm sure there are others here with much more experience, but I hope these ideas are at least somewhat helpful.
Sue 'em.
Seriously, though: if you have some money, talk to a young lawyer who knows their way around the Internet. You could really be able to do something here. Depending on where those sites are based, you could have a lawyer write up a cease-and-desist or its equivalent in your country. You may at least be able to scare the bastards.
Document the insertion of your dummy values. Insert dummy values that clearly (but obscurely) point back to you. I believe this is common practice among phone book companies, and in Germany, I think, there have been several cases where copycats were busted through fake entries they had copied 1:1.
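A hypothetical sketch of how such traceable dummy entries might be planted; the naming scheme, evidence file, and SQL are all invented for illustration:

```python
# Sketch: generate fake artist names carrying a unique random token,
# and log what was planted so a 1:1 copy elsewhere is provable.
import csv
import secrets

def make_fake_artist() -> str:
    # The embedded token makes the entry traceable to your dataset.
    return f"The {secrets.token_hex(3).upper()} Quartet"

with open("planted_entries.csv", "a", newline="") as evidence_log:
    writer = csv.writer(evidence_log)
    for _ in range(5):
        name = make_fake_artist()
        writer.writerow([name])  # keep as evidence, ideally with a date
        print(f"INSERT INTO artists (name) VALUES ('{name}');")  # illustrative SQL
```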
It would be a shame if this forced you to mess up your HTML code, dragging down SEO, validity, and other things (even though a templating system that uses a slightly different HTML structure on each request for the same pages might already help a lot against scrapers that always rely on HTML structures and class/ID names to extract the content).
Cases like this are exactly what copyright law is good for. Ripping off other people's honest work to make money is something you should be able to fight.
One way would be to serve the content as XML attributes, URL-encoded strings, preformatted text with HTML-encoded JSON, or data URIs, and then transform it to HTML on the client. Here are a few sites that do this:
Skechers: XML

    <document filename="" height="" width="" title="SKECHERS" linkType="" linkUrl="" imageMap="" href="http://www.bobsfromskechers.com" alt="BOBS from Skechers" title="BOBS from Skechers" />

Chrome Web Store: JSON

    <script type="text/javascript" src="https://apis.google.com/js/plusone.js">
    {"lang": "en", "parsetags": "explicit"}
    </script>

Bing News: data URL

    <script type="text/javascript">
    //<![CDATA[
    (function() {
        var x;x=_ge('emb7');
        if(x) {
            x.src='data:image/jpeg;base64,/*...*/';
        }
    }() )

Protopage: URL Encoded Strings

    unescape('Rolling%20Stone%20%3a%20Rock%20and%20Roll%20Daily')

TiddlyWiki: HTML Entities + preformatted JSON

    <pre>
    {"tiddlers": {
        "GettingStarted": {
            "title": "GettingStarted",
            "text": "Welcome to TiddlyWiki,
        }
    }
    }
    </pre>

Amazon: Lazy Loading

    amzn.copilot.jQuery=i;amzn.copilot.jQuery(document).ready(function(){d(b);f(c,function() {amzn.copilot.setup({serviceEndPoint:h.vipUrl,isContinuedSession:true})})})},f=function(i,h){var j=document.createElement("script");j.type="text/javascript";j.src=i;j.async=true;j.onload=h;a.appendChild(j)},d=function(h){var i=document.createElement("link");i.type="text/css";i.rel="stylesheet";i.href=h;a.appendChild(i)}})();
    amzn.copilot.checkCoPilotSession({jsUrl : 'http://z-ecx.images-amazon.com/images/G/01/browser-scripts/cs-copilot-customer-js/cs-copilot-customer-js-min-1875890922._V1_.js', cssUrl : 'http://z-ecx.images-amazon.com/images/G/01/browser-scripts/cs-copilot-customer-css/cs-copilot-customer-css-min-2367001420._V1_.css', vipUrl : 'https://copilot.amazon.com'

XMLCalabash: Namespaced XML + Custom MIME type + Custom File extension

    <p:declare-step type="pxp:zip">
        <p:input port="source" sequence="true" primary="true"/>
        <p:input port="manifest"/>
        <p:output port="result"/>
        <p:option name="href" required="true" cx:type="xsd:anyURI"/>
        <p:option name="compression-method" cx:type="stored|deflated"/>
        <p:option name="compression-level" cx:type="smallest|fastest|default|huffman|none"/>
        <p:option name="command" select="'update'" cx:type="update|freshen|create|delete"/>
    </p:declare-step>
If you view the source on any of the above, you'll see that scraping simply returns metadata and navigation.
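As a rough sketch of the TiddlyWiki-style variant above (HTML-entity-escaped JSON inside a <pre>, rebuilt on the client), here is a hypothetical Python script that generates such a page; the content, file name, and element ids are invented, and a scraper that doesn't execute JavaScript sees only the escaped blob:

```python
# Sketch: ship content as HTML-escaped JSON and rebuild it client-side.
import html
import json

content = {"artist": "Example Band", "bio": "Formed in 1999."}

page = f"""<!doctype html>
<div id="out"></div>
<pre id="payload" style="display:none">{html.escape(json.dumps(content))}</pre>
<script>
  // The browser decodes the entities; textContent yields raw JSON.
  var data = JSON.parse(document.getElementById('payload').textContent);
  document.getElementById('out').innerHTML =
      '<h1>' + data.artist + '</h1><p>' + data.bio + '</p>';
</script>"""

with open("artist.html", "w") as f:
    f.write(page)
```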
There are things you can do to prevent screen scraping. Some aren't very effective, while others (like a CAPTCHA) are, but they hinder usability. You also have to keep in mind that they may hinder legitimate scrapers, such as search engine indexers.
However, I assume that if you don't want it scraped, that means you don't want search engines to index it either.
Here are some things you can try:
- Show the text in an image. This is quite reliable, and is less of a pain for the user than a CAPTCHA, but it means they won't be able to cut and paste, and it won't scale prettily or be accessible.
- Use a CAPTCHA and require it to be completed before returning the page. This is a reliable method, but also the biggest pain to impose on a user.
- Require the user to sign up for an account before viewing the pages, and confirm their email address. This will be pretty effective, but not totally: a screen scraper might set up an account and might cleverly program their script to log in for them.
- If the client's user-agent string is empty, block access. A site-scraping script will often be lazily programmed and won't set a user-agent string, whereas all web browsers will.
- You can set up a blacklist of known screen-scraper user-agent strings as you discover them. Again, this will only help against the lazily coded ones; a programmer who knows what he's doing can set a user-agent string to impersonate a web browser. (A sketch of these two user-agent checks follows after this list.)
- Change the URL path often. When you change it, make sure the old one keeps working, but only for as long as one user is likely to have their browser open. Make it hard to predict what the new URL path will be. This will make it difficult for scripts to grab it if the URL is hard-coded. It'd be best to do this with some kind of script.
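A minimal sketch of the two user-agent checks from that list, assuming Flask; the blacklist entries are examples I've chosen, not an authoritative list:

```python
# Sketch: block empty user-agent strings and known scraper signatures.
# Trivially spoofed, as noted above, so treat it as a speed bump only.
from flask import Flask, abort, request

app = Flask(__name__)

UA_BLACKLIST = ("scrapy", "python-requests", "curl")  # grow as you discover more

@app.before_request
def check_user_agent():
    ua = (request.headers.get("User-Agent") or "").lower()
    if not ua:
        abort(403)  # real browsers always send a user-agent string
    if any(bad in ua for bad in UA_BLACKLIST):
        abort(403)
```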
If I had to do this, I'd probably use a combination of the last three, since they minimize the inconvenience to legitimate users. However, you'd have to accept that you won't be able to block everyone this way, and once someone figures out how to get around it, they'll be able to scrape it forever. I guess you could just block their IP addresses as you discover them.