A description of Google, created by Larry Page and Sergey Brin in 1998

Larry Page was considering Stanford for grad school and Sergey Brin, a student there, was assigned to show him around. By some accounts, they disagreed about nearly everything during that first meeting, but by the following year they had struck a partnership. Working from their dorm rooms, they built a search engine that used links to determine the importance of individual pages on the World Wide Web. They called this search engine BackRub.

Google is designed to crawl and index the Web efficiently and to produce much more satisfying search results than existing systems. The prototype, with a full-text and hyperlink database of at least 24 million pages, is available at http://google.stanford.edu/. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms.

They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and the proliferation of the web, creating a web search engine today is very different from creating one three years ago.

This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results.

This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext.

We also look at the problem of how to deal effectively with uncontrolled hypertext collections, where anyone can publish anything they want. There are two versions of this paper -- a longer full version and a shorter printed version.

The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as well as the number of new users inexperienced in the art of web research.

People are likely to surf the web using its link graph, often starting with high-quality, human-maintained indices such as Yahoo!. Human-maintained lists cover popular topics effectively, but they are subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics.

Automated search engines that rely on keyword matching usually return too many low quality matches.

The Anatomy of a Search Engine

We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results.

We chose our system name, Google, because it is a common spelling of googol, or 10^100, and it fits well with our goal of building very large-scale search engines.

As of November 1997, the top search engines claim to index from 2 million (WebCrawler) to 100 million web documents (from Search Engine Watch). It is foreseeable that by the year 2000 a comprehensive index of the Web will contain over a billion documents. At the same time, the number of queries search engines handle has grown incredibly too.

In November 1997, AltaVista claimed it handled roughly 20 million queries per day. With the increasing number of users on the web, and automated systems that query search engines, it is likely that top search engines will handle hundreds of millions of queries per day by the year 2000. The goal of our system is to address many of the problems, both in quality and scalability, introduced by scaling search engine technology to such extraordinary numbers.

Fast crawling technology is needed to gather the web documents and keep them up to date.

Storage space must be used efficiently to store indices and, optionally, the documents themselves. The indexing system must process hundreds of gigabytes of data efficiently. Queries must be handled quickly, at a rate of hundreds to thousands per second.

These tasks are becoming increasingly difficult as the Web grows. However, hardware performance and cost have improved dramatically to partially offset the difficulty.

There are, however, several notable exceptions to this progress such as disk seek time and operating system robustness.

In designing Google, we have considered both the rate of growth of the Web and technological changes. Google is designed to scale well to extremely large data sets. It makes efficient use of storage space to store the index. The Google search engine has two important features that help it produce high-precision results.

First, it makes use of the link structure of the Web to calculate a quality ranking for each web page. This ranking is called PageRank and is described in detail in [Page 98]; a small sketch of the idea appears below. Google was founded by Larry Page and Sergey Brin, who were often known as the "Google Guys".
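As a rough illustration of the PageRank idea from [Page 98], the sketch below runs a simple power iteration over a made-up four-page link graph. The graph, the 0.85 damping factor, and the fixed iteration count are illustrative assumptions, not details taken from the paper or this article.

```python
# Minimal PageRank sketch (hypothetical toy graph, not Google's actual data).
# links[page] lists the pages that `page` links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A dangling page spreads its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # Each outgoing link passes on an equal share of the page's rank.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank(links))
```

In this toy graph, page C should come out with the highest score, since every other page links to it: a page's importance is derived from the link structure rather than from its own text.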

Google is designed to be a scalable search engine. The primary goal is to provide high quality search results over a rapidly growing World Wide Web.

Google employs a number of techniques to improve search quality, including PageRank, anchor text, and proximity information; the anchor-text idea is sketched after this paragraph. Google was the brainchild of Larry Page and Sergey Brin, who created it while they were both studying at Stanford University, as part of a research project in the mid-to-late 1990s.
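As a rough sketch of the anchor-text idea, the code below credits each word of a link's anchor text to the page the link points to, so a page can be retrieved by terms other authors use to describe it, even if the page itself never contains them. The link records and URLs are invented examples, not data from this article.

```python
from collections import defaultdict

# Hypothetical link records: (source_url, target_url, anchor_text).
links = [
    ("a.com/1", "example.com", "best example site"),
    ("b.com/2", "example.com", "example homepage"),
    ("c.com/3", "other.com", "an unrelated page"),
]

# Map each anchor-text term to the pages it points AT, not the pages it appears ON.
anchor_index = defaultdict(set)
for source, target, anchor in links:
    for term in anchor.lower().split():
        anchor_index[term].add(target)

print(sorted(anchor_index["example"]))  # ['example.com']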

Google was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Together they own about 14 percent of its shares and control 56 percent of the stockholder voting power through supervoting stock.

Internet entrepreneur and computer scientist Larry Page teamed up with grad school buddy Sergey Brin to launch the search engine Google in 1998.
