Robots, Spiders and Crawlers

A robot, spider, or crawler is a piece of software that is programmed to “crawl” from one web page to another based on the links on those pages. As the
crawler makes its way around the Internet, it collects content (such as text and links) from web sites and saves that content in a database, which is then indexed and ranked according to the search engine’s algorithm.
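To make that indexing step concrete, here is a minimal Python sketch of an inverted index, the basic structure a search engine builds from crawled text. The URLs and page text below are made up for illustration, and a real index also stores word positions, link data, and many ranking signals.

    # A minimal sketch of the indexing step, assuming crawled pages are plain
    # text keyed by URL; real search engine indexes are far richer than this.
    from collections import defaultdict

    def build_index(pages):
        """Map each word to the set of URLs whose text contains it (an inverted index)."""
        index = defaultdict(set)
        for url, text in pages.items():
            for word in text.lower().split():
                index[word].add(url)
        return index

    # Hypothetical crawled content.
    pages = {
        "https://example.com/a": "search engines crawl the web",
        "https://example.com/b": "crawlers follow links between pages",
    }

    index = build_index(pages)
    print(index["crawl"])  # {'https://example.com/a'}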

When a crawler is first released on the Web, it’s usually seeded with a few web sites, and it begins on one of those sites. The first thing it does on that site is take note of the links on the page. Then it “reads” the text and begins to follow the links it collected. This network of links is
called the crawl frontier; it’s the territory the crawler explores in a very systematic way.
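Here is a rough Python sketch of that crawl loop, with the frontier kept as a simple queue. The seed URL is just a placeholder, and a real crawler would also respect robots.txt, throttle its requests, and deduplicate URLs far more carefully.

    # A minimal sketch of a crawler with a crawl frontier, assuming the
    # hypothetical seed URL below.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen


    class LinkCollector(HTMLParser):
        """Collects href attributes from <a> tags in a page."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)


    def crawl(seed_url, max_pages=10):
        frontier = deque([seed_url])  # the crawl frontier: links waiting to be visited
        visited = set()

        while frontier and len(visited) < max_pages:
            url = frontier.popleft()
            if url in visited:
                continue
            visited.add(url)

            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
            except Exception:
                continue  # skip pages that fail to download

            parser = LinkCollector()
            parser.feed(html)

            # Add newly discovered links to the frontier.
            for link in parser.links:
                absolute = urljoin(url, link)
                if absolute not in visited:
                    frontier.append(absolute)

            # In a real search engine, the page text would be indexed here.
            print(f"Crawled {url}, found {len(parser.links)} links")


    if __name__ == "__main__":
        crawl("https://example.com")  # hypothetical seed site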

The crawler sends a request to the web server where the web site resides, asking for pages to be delivered to it in the same manner that your web browser
requests the pages you view. The difference between what your browser sees and what the crawler sees is that the crawler views the pages in a completely text-based interface. No graphics or other media files are displayed; it’s all text, encoded in HTML, so to you it might look like gibberish.
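As a small illustration, the Python snippet below requests a page the way a crawler would, identifying itself with a made-up “ExampleBot” User-Agent and receiving nothing but the raw HTML text:

    # A small sketch of how a crawler requests a page, assuming the
    # example.com URL below; "ExampleBot" is an invented crawler name.
    from urllib.request import Request, urlopen

    request = Request(
        "https://example.com",                     # hypothetical page to fetch
        headers={"User-Agent": "ExampleBot/1.0"},  # how the crawler identifies itself
    )
    html = urlopen(request, timeout=10).read().decode("utf-8", errors="replace")

    # The crawler works with this markup directly -- tags, text, and links.
    print(html[:500])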

Here’s a quick list of some of the crawler names you’re likely to see in a web server log (a sketch of spotting them follows the list):

  • Google: Googlebot
  • MSN: MSNbot
  • Yahoo! Web Search: Yahoo SLURP or just SLURP
  • Ask: Teoma
  • AltaVista: Scooter
  • LookSmart: MantraAgent
  • WebCrawler: WebCrawler
  • SearchHippo: Fluffy the Spider
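For example, a quick Python sketch for spotting those names in a log line might look like this; the log line below is an invented example in the common combined format.

    # A quick sketch of spotting crawler visits in a web server log; the bot
    # names come from the list above, and the log line is made up.
    import re

    BOT_NAMES = ["Googlebot", "MSNbot", "Slurp", "Teoma", "Scooter",
                 "MantraAgent", "WebCrawler", "Fluffy"]
    BOT_PATTERN = re.compile("|".join(BOT_NAMES), re.IGNORECASE)

    # Hypothetical log line with a crawler's user-agent string at the end.
    line = ('66.249.66.1 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 2326 '
            '"-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"')

    match = BOT_PATTERN.search(line)
    if match:
        print(f"Crawler visit: {match.group(0)}")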