Translation Material: Efficient URL Caching for Web Crawling
Original Foreign-Language Text
Efficient URL Caching for World Wide Web Crawling
Marc Najork
Crawling the web is deceptively simple: the basic algorithm is (a)Fetch a page (b) Parse it to extract all linked URLs (c) For all the URLs not seen before, repeat (a)–(c). However, the size of the web (estimated at over 4 billion pages) and its rate of change (estimated at 7% per week) move this plan from a trivial programming exercise to a serious algorithmic and system design challenge. Indeed, these two factors alone imply that for a reasonably fresh and complete crawl of the web, step (a) must be executed about a thousand times per second, and thus the membership test (c) must be done well over ten thousand times per second against a set too large to store in main memory. This requires a distributed architecture, which further complicates the membership test.
A crucial way to speed up the test is to cache, that is, to store in main memory a (dynamic) subset of the “seen” URLs. The main goal of this paper is to carefully investigate several URL caching techniques for web crawling. We consider both practical algorithms: random replacement, static cache, LRU, and CLOCK, and theoretical limits: clairvoyant caching and infinite cache. We performed about 1,800 simulations using these algorithms with various cache sizes, using actual log data extracted from a massive 33 day web crawl that issued over one billion HTTP requests. Our main conclusion is that caching is very effective – in our setup, a cache of roughly 50,000 entries can achieve a hit rate of almost 80%. Interestingly, this cache size falls at a critical point: a substantially smaller cache is much less effective while a substantially larger cache brings little additional benefit. We conjecture that such critical points are inherent to our problem and venture an explanation for this phenomenon.
1. INTRODUCTION
A recent Pew Foundation study [31] states that “Search engines have become an indispensable utility for Internet users” and estimates that as of mid-2002, slightly over 50% of all Americans have used web search to find information. Hence, the technology that powers web search is of enormous practical interest. In this paper, we concentrate on one aspect of the search technology, namely the process of collecting web pages that eventually constitute the search engine corpus.
Search engines collect pages in many ways, among them direct URL submission, paid inclusion, and URL extraction from nonweb sources, but the bulk of the corpus is obtained by recursively exploring the web, a process known as crawling or SPIDERing. The basic algorithm is
(a) Fetch a page
(b) Parse it to extract all linked URLs
(c) For all the URLs not seen before, repeat (a)–(c)
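To make these three steps concrete, here is a minimal sketch, assuming a breadth-first frontier, Python's standard library for fetching, and a crude regex as a stand-in for a real HTML parser; fetch_page, extract_urls, and crawl are illustrative names, not part of any crawler discussed in this paper.

from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

def fetch_page(url):
    # Step (a): fetch a page. A real crawler adds politeness delays,
    # robots.txt checks, and content-type filtering.
    with urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")

def extract_urls(base_url, html):
    # Step (b): parse the page and extract all linked URLs.
    return [urljoin(base_url, href) for href in re.findall(r'href="([^"]+)"', html)]

def crawl(seed_urls, max_pages=1000):
    seen = set(seed_urls)        # the "seen" set behind the membership test in step (c)
    frontier = deque(seed_urls)  # BFS order: discovered but not yet fetched
    while frontier and max_pages > 0:
        url = frontier.popleft()
        max_pages -= 1
        try:
            html = fetch_page(url)
        except OSError:
            continue             # skip pages that fail to download
        for link in extract_urls(url, html):
            if link not in seen: # step (c): only URLs not seen before are enqueued
                seen.add(link)
                frontier.append(link)
    return seen

The difficulty discussed in the rest of the paper is that, at web scale, the seen set in such a sketch no longer fits in main memory and must be partitioned across machines and cached.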
Crawling typically starts from a set of seed URLs, made up of URLs obtained by other means as described above and/or made up of URLs collected during previous crawls. Sometimes crawls are started from a single well connected page, or a directory such as yahoo.com, but in this case a relatively large portion of the web (estimated at over 20%) is never reached. See [9] for a discussion of the graph structure of the web that leads to this phenomenon.
If we view web pages as nodes in a graph, and hyperlinks as directed edges among these nodes, then crawling becomes a process known in mathematical circles as graph traversal. Various strategies for graph traversal differ in their choice of which node among the nodes not yet explored to explore next. Two standard strategies for graph traversal are Depth First Search (DFS) and Breadth First Search (BFS) – they are easy to implement and taught in many introductory algorithms classes. (See for instance [34]).
However, crawling the web is not a trivial programming exercise but a serious algorithmic and system design challenge because of the following two factors.
1. The web is very large. Currently, Google [20] claims to have indexed over 3 billion pages. Various studies [3, 27, 28] have indicated that, historically, the web has doubled every 9-12 months.
2. Web pages are changing rapidly. If “change” means “any change”, then about 40% of all web pages change weekly [12]. Even if we consider only pages that change by a third or more, about 7% of all web pages change weekly [17].
These two factors imply that to obtain a reasonably fresh and complete snapshot of the web, a search engine must crawl at least 100 million pages per day. Therefore, step (a) must be executed about 1,000 times per second, and the membership test in step (c) must be done well over ten thousand times per second, against a set of URLs that is too large to store in main memory. In addition, crawlers typically use a distributed architecture to crawl more pages in parallel, which further complicates the membership test: it is possible that the membership question can only be answered by a peer node, not locally.
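A back-of-the-envelope check of these rates, assuming roughly 3 billion pages, a monthly refresh cycle, and on the order of ten links per page (the refresh interval and per-page link count are assumptions used only for illustration):

pages = 3_000_000_000                   # pages Google claimed to have indexed
pages_per_day = pages / 30              # refresh the corpus about once a month: 100 million pages/day
fetches_per_second = pages_per_day / 86_400          # roughly 1,150 executions of step (a) per second
links_per_page = 10                                  # assumed average, for illustration only
membership_tests_per_second = fetches_per_second * links_per_page   # well over 10,000 per second
print(int(fetches_per_second), int(membership_tests_per_second))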
A crucial way to speed up the membership test is to cache a (dynamic) subset of the “seen” URLs in main memory. The main goal of this paper is to investigate in depth several URL caching techniques for web crawling. We examined four practical techniques: random replacement, static cache, LRU, and CLOCK, and compared them against two theoretical limits: clairvoyant caching and infinite cache when run against a trace of a web crawl that issued over one billion HTTP requests. We found that simple caching techniques are extremely effective even at relatively small cache sizes such as 50,000 entries and show how these caches can be implemented very efficiently.
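As an illustration of the caching idea, here is a minimal sketch of an LRU cache (one of the four practical policies examined here) front-ending a larger "seen" store, using Python's OrderedDict; the backing store is modeled by a plain set, and the class and method names are invented for this example.

from collections import OrderedDict

class LRUUrlCache:
    """Minimal LRU cache for the 'seen URL' membership test.

    A hit means the URL is already known; a miss is resolved against the
    full (e.g., disk-resident) seen set, modeled here by a plain Python set.
    """

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.cache = OrderedDict()       # URL -> None, ordered by recency of use
        self.backing_store = backing_store
        self.hits = 0
        self.misses = 0

    def seen_before(self, url):
        if url in self.cache:
            self.cache.move_to_end(url)  # mark as most recently used
            self.hits += 1
            return True
        self.misses += 1
        was_seen = url in self.backing_store   # slow path: consult the full seen set
        self.backing_store.add(url)
        self.cache[url] = None
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used URL
        return was_seen

# Usage: cache sizes around 50,000 entries are the regime reported as effective here.
cache = LRUUrlCache(capacity=50_000, backing_store=set())
for u in ["http://a.example/", "http://b.example/", "http://a.example/"]:
    print(u, cache.seen_before(u))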
The paper is organized as follows: Section 2 discusses the various crawling solutions proposed in the literature and how caching fits into their model.
Translated Text: Efficient URL Caching for Web Crawling
Marc Najork
Abstract:
Crawling the web is deceptively simple: the basic algorithm is (a) fetch a page, (b) parse it to extract all linked URLs, and (c) for all the URLs not seen before, repeat (a)-(c). However, the size of the web (estimated at over 4 billion pages) and its rate of change (estimated at 7% per week) turn this plan from a trivial programming exercise into a serious algorithmic and system design challenge. Indeed, these two factors alone imply that for a reasonably fresh and complete crawl of the web, step (a) must be executed about a thousand times per second, and thus the membership test in step (c) must be done well over ten thousand times per second, against a set too large to store in main memory. This requires a distributed architecture, which further complicates the membership test.

A crucial way to speed up the test is to cache, that is, to store in main memory a (dynamic) subset of the "seen" URLs. The main contribution of this paper is a careful investigation of several URL caching techniques for web crawling. We consider both practical algorithms (random replacement, static cache, LRU, and CLOCK) and theoretical limits (clairvoyant caching and infinite cache). We performed about 1,800 simulations, running these algorithms with various cache sizes on actual log data extracted from a massive 33-day web crawl that issued over one billion HTTP requests.

Our main conclusion is that caching is very effective: in our setup, a cache of roughly 50,000 entries can achieve a hit rate of almost 80%. Interestingly, this cache size falls at a critical point: a substantially smaller cache is much less effective, while a substantially larger cache brings little additional benefit. We conjecture that such critical points are inherent to our problem and venture an explanation for this phenomenon.
1. INTRODUCTION
A recent Pew Foundation study [31] states that "search engines have become an indispensable utility for Internet users" and estimates that as of mid-2002, slightly over 50% of all Americans had used web search to find information. Hence, the technology that powers web search is of enormous practical interest. In this paper, we concentrate on one aspect of that technology, namely the process of collecting the web pages that eventually constitute the search engine corpus.

Search engines collect pages in many ways, among them direct URL submission, paid inclusion, and URL extraction from non-web sources, but the bulk of the corpus is obtained by recursively exploring the web, a process known as crawling or SPIDERing. The basic algorithm is:

(a) Fetch a page

(b) Parse it to extract all linked URLs

(c) For all the URLs not seen before, repeat (a)-(c)

Crawling typically starts from a set of seed URLs, made up of URLs obtained by other means as described above and/or URLs collected during previous crawls. Sometimes a crawl is started from a single well-connected page, or from a directory such as yahoo.com, but in that case a relatively large portion of the web (estimated at over 20%) is never reached. See [9] for a discussion of the graph structure of the web that leads to this phenomenon.

If we view web pages as nodes in a graph, and hyperlinks as directed edges among these nodes, then crawling becomes a process known in mathematical circles as graph traversal. Various strategies for graph traversal differ in their choice of which node, among the nodes not yet explored, to explore next. Two standard strategies are Depth First Search (DFS) and Breadth First Search (BFS); they are easy to implement and are taught in many introductory algorithms classes.

However, crawling the web is not a trivial programming exercise but a serious algorithmic and system design challenge, because of the following two factors:

1. The web is very large. Currently, Google [20] claims to have indexed over 3 billion pages. Various studies [3, 27, 28] have indicated that, historically, the web has doubled every 9 to 12 months.

2. Web pages change rapidly. If "change" means "any change", then about 40% of all web pages change weekly [12]. Even if we consider only pages that change by a third or more, about 7% of all web pages change weekly [17].

These two factors imply that to obtain a reasonably fresh and complete snapshot of the web, a search engine must crawl at least 100 million pages per day. Therefore, step (a) must be executed about 1,000 times per second, and the membership test in step (c) must be done well over 10,000 times per second, against a set of URLs that is too large to store in main memory. In addition, crawlers typically use a distributed architecture to crawl more pages in parallel, which further complicates the membership test: it is possible that the membership question can only be answered by a peer node, not locally.
A crucial way to speed up the membership test is to cache a (dynamic) subset of the "seen" URLs in main memory. The main goal of this paper is to investigate in depth several URL caching techniques for web crawling. We examined four practical techniques (random replacement, static cache, LRU, and CLOCK) and compared them against two theoretical limits (clairvoyant caching and infinite cache), running them against a trace of a web crawl that issued over one billion HTTP requests. We found that simple caching techniques are extremely effective even at relatively small cache sizes such as 50,000 entries, and we show how such caches can be implemented very efficiently.

The paper is organized as follows. Section 2 discusses the various crawling solutions proposed in the literature and how caching fits into their model. Section 3 presents an introduction to caching and describes several theoretical and practical caching algorithms. Section 4 describes the implementation of these algorithms in our experimental setup. Section 5 describes and discusses the simulation results. Section 6 presents our recommended practical algorithms and data structures for the URL cache. Section 7 contains our conclusions and directions for further research.
2. CRAWLING

Web crawlers are almost as old as the web itself, and there is a considerable body of literature describing them. In this section, we present a brief summary of these crawlers and discuss why most of them would benefit from a URL cache.

The web crawler used by the Internet Archive employs multiple crawling processes, each of which performs an exhaustive crawl of 64 hosts at a time. The crawling processes save non-local URLs to disk; at the end of a crawl, a batch job adds these URLs into the per-host seed sets of the next crawl.

The original Google crawler implemented the different crawler components as separate processes. A single URL server process maintained the set of URLs to download; crawling processes fetched pages; indexing processes extracted words and hyperlinks; and URL resolver processes converted relative URLs into absolute URLs. These processes communicated via the file system.

For the experiments described in this paper, we used the Mercator web crawler. Mercator uses a set of independent, communicating crawler processes. Each crawler process is responsible for a subset of all web servers; the assignment of URLs to crawler processes is based on the host component of the URL. A crawler process that discovers a URL it is not responsible for sends that URL via TCP to the responsible process, batching URLs together to reduce TCP overhead. We describe Mercator in more detail in Section 4.
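The host-based assignment just described can be sketched as follows; the hash function, the number of crawler processes, and the batch size are illustrative assumptions, not Mercator's actual parameters.

import hashlib
from collections import defaultdict
from urllib.parse import urlsplit

NUM_CRAWLERS = 4          # illustrative; not Mercator's configuration
BATCH_SIZE = 100          # URLs buffered per peer before a (simulated) TCP send

def responsible_crawler(url, num_crawlers=NUM_CRAWLERS):
    # Assignment is based on the host component of the URL, so every URL of a
    # given host lands on the same crawler process.
    host = urlsplit(url).hostname or ""
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_crawlers

class UrlForwarder:
    """Buffers non-local URLs per peer and flushes them in batches,
    mimicking the idea of batching URLs to reduce TCP overhead."""

    def __init__(self, my_id, send):
        self.my_id = my_id
        self.send = send                       # send(peer_id, urls) stands in for a TCP transfer
        self.buffers = defaultdict(list)

    def route(self, url):
        peer = responsible_crawler(url)
        if peer == self.my_id:
            return url                         # this process handles the URL locally
        self.buffers[peer].append(url)
        if len(self.buffers[peer]) >= BATCH_SIZE:
            self.send(peer, self.buffers.pop(peer))
        return None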
Any web crawler must maintain a collection of URLs that are to be downloaded. Moreover, since it would be unacceptable to download the same URL repeatedly, it must have a way to avoid adding a URL to this collection more than once. Typically, avoidance is achieved by maintaining a set of discovered URLs. If this set is too large to fit in memory, it can be stored on disk, with the frequently accessed URLs cached in memory.

3. CACHING

In most computer systems, memory is hierarchical: there are two or more levels of memory, differing in size and speed. For example, a typical workstation has a very small but very fast cache memory, a larger but slower RAM, and a very large but much slower disk. In a networked environment, the hierarchy continues further. Caching is the idea of storing frequently used items from a slower memory in a faster memory.

The terminology of caching is as follows. The cache is a memory used to store equal-sized items; a cache of size k can store k items. At each unit of time, the cache receives a request for an item. If the requested item is in the cache, the situation is called a hit and no further action is needed. Otherwise, the situation is called a miss (or a fault). If the cache holds fewer than k items, the missed item is added to the cache. Otherwise, the algorithm must either evict an item from the cache to make room for the missed item, or decide not to add the missed item at all. The goal of a caching algorithm is to minimize the number of misses.

Clearly, the larger the cache, the easier it is to avoid misses. Therefore, the performance of a caching algorithm is characterized by its miss ratio for a given cache size.
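As a concrete example of the hit/miss/eviction cycle just described, here is a minimal sketch of one of the practical policies studied in this paper, CLOCK, which approximates LRU by keeping a single "use bit" per cache slot; the class is illustrative and written for clarity rather than speed.

class ClockCache:
    """Minimal CLOCK cache of size k: each slot holds an item and a use bit.

    On a hit the use bit is set; on a miss the clock hand sweeps, clearing use
    bits, and evicts the first item whose bit is already 0.
    """

    def __init__(self, k):
        self.k = k
        self.slots = []          # list of [item, use_bit]
        self.index = {}          # item -> slot position
        self.hand = 0
        self.hits = 0
        self.misses = 0

    def request(self, item):
        pos = self.index.get(item)
        if pos is not None:              # hit: mark the slot as recently used
            self.slots[pos][1] = 1
            self.hits += 1
            return True
        self.misses += 1
        if len(self.slots) < self.k:     # cache not yet full: just add the item
            self.index[item] = len(self.slots)
            self.slots.append([item, 1])
            return False
        while True:                      # full: advance the hand until a use bit is 0
            victim, use_bit = self.slots[self.hand]
            if use_bit == 0:
                del self.index[victim]
                self.slots[self.hand] = [item, 1]
                self.index[item] = self.hand
                self.hand = (self.hand + 1) % self.k
                return False
            self.slots[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.k

CLOCK is attractive in this setting because a hit costs only a bit update, whereas LRU must move the entry to the front of its recency order on every hit.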
In general, caching is successful for two reasons:

Non-uniformity of requests: some items are requested far more often than others.

Temporal correlation, or locality of reference: recently requested items are likely to be requested again soon.