Friday, February 18, 2011

Search Engine Crawlers in SEO

Crawling is the process by which Googlebot (Google's crawling software) discovers new and updated pages to be added to the Google index.

The crawler comes to a website and reads all of the pages linked within it; it also follows links that point to other websites. The crawler visits on a schedule set by Google. However, you can indicate how significant a web page is by telling the crawler, which you do by specifying a priority for that page in your sitemap, as in the sketch below.
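For illustration, here is a minimal sitemap.xml sketch (example.com is a placeholder domain). The optional <priority> tag is the significance hint mentioned above, on a scale from 0.0 to 1.0:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/products.html</loc>
        <lastmod>2011-02-10</lastmod>       <!-- optional: date of last change -->
        <changefreq>weekly</changefreq>     <!-- optional: how often the page changes -->
        <priority>0.8</priority>            <!-- optional: importance relative to your other pages -->
      </url>
    </urlset>

Note that <priority> only ranks your pages against each other; it does not make your site more important than anyone else's.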

Google uses a huge set of computers to fetch (or "crawl") billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider).

Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
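To make that idea concrete, here is a toy Python sketch of such scheduling logic. All names, metrics, and numbers here are illustrative assumptions; Google's actual algorithms are not public:

    from dataclasses import dataclass

    @dataclass
    class SiteProfile:
        url: str
        update_rate: float   # assumed metric: observed page changes per day
        importance: float    # assumed metric: link-based score between 0 and 1

    def crawl_plan(site):
        """Decide how often to visit a site and how many pages to fetch each visit."""
        # Sites that change often and matter more get visited more frequently...
        visits_per_day = max(0.1, site.update_rate * site.importance)
        # ...and receive a larger page budget per visit, capped to stay polite.
        pages_per_visit = min(500, 50 + int(1000 * site.importance))
        return visits_per_day, pages_per_visit

    news = SiteProfile("http://news.example.com", update_rate=20.0, importance=0.9)
    blog = SiteProfile("http://blog.example.com", update_rate=0.5, importance=0.3)
    for site in (news, blog):
        freq, budget = crawl_plan(site)
        print(f"{site.url}: {freq:.1f} visits/day, up to {budget} pages per visit")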

Google's crawl process begins with a list of web page URLs, generated from previous crawl processes and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites, it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
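That loop (seed URLs, fetch, extract links, grow the list) is a classic crawl frontier. Below is a minimal Python sketch of it using only the standard library; the seed URL is a placeholder, and a real crawler would also add politeness delays, robots.txt checks, and deduplication at far larger scale:

    import urllib.request
    import urllib.error
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkExtractor(HTMLParser):
        """Collect href targets from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed_urls, max_pages=10):
        frontier = list(seed_urls)   # pages still to visit
        seen = set(frontier)         # avoid revisiting the same URL
        dead_links = []              # noted and used to update the index
        fetched = 0
        while frontier and fetched < max_pages:
            url = frontier.pop(0)
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except (urllib.error.URLError, ValueError):
                dead_links.append(url)
                continue
            fetched += 1
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                absolute = urljoin(url, link)          # resolve relative links
                if absolute.startswith("http") and absolute not in seen:
                    seen.add(absolute)
                    frontier.append(absolute)          # newly discovered page
        return fetched, dead_links

    pages, dead = crawl(["http://www.example.com/"])   # placeholder seed
    print(f"fetched {pages} pages, found {len(dead)} dead links")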

Google doesn't accept payment to crawl a site more frequently, and it keeps the search side of its business separate from its revenue-generating AdWords service.



Also See:

What is a Search Engine Friendly Website?
