Saturday, February 19, 2011

PageRank in 2011

The Next Generation of Ranking Signals
Every 3-4 years, there's a big shift in, or addition to, the key metrics Google (and, to a lesser extent, MSN/Bing and Yahoo!) uses to order competitive search results.

1996-1999: On-page keyword usage + meta data

1999-2002: PageRank + On-page

2002-2005: Anchor text + Domain name + PageRank + On-page

2005-2009: Domain authority + Diversity of linking domains + Topic modeling + Anchor text + Domain name + PageRank + On-page

In 2010 and 2011, we've already seen the entry of social signals from Facebook and Twitter. The recent clickstream stories revealed that both Google and Bing employ clickstream data (Bing has done so publicly for the last 3 years, Google more quietly and probably for longer), though this is likely a relatively small signal for both.

It’s my belief that the next generation of ranking signals will rely on social networking sites.


Friday, February 18, 2011

PageRank by search engines

PageRank is a measure of the importance of a page based on the incoming links it receives from other pages. Every page has its own PageRank; on most sites, the home page or a key product page carries the highest PageRank. Any page can be optimized to rank better in search engines.

PageRank values range from 0 to 10. The Google Toolbar, which is free to download, displays the PageRank of the page you are viewing.
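For readers who want to see the mechanics, here is a minimal sketch of the classic PageRank iteration in Python on a made-up three-page link graph; the pages, the link structure, and the damping factor of 0.85 are illustrative assumptions, not Google's actual data.

```python
# Minimal PageRank sketch: each page shares its rank equally among
# the pages it links to; a damping factor models the chance that a
# surfer follows a link rather than jumping to a random page.

# Toy link graph (assumption): page -> pages it links to
links = {
    "home.html":    ["product.html", "about.html"],
    "product.html": ["home.html"],
    "about.html":   ["home.html", "product.html"],
}

damping = 0.85                       # commonly cited damping factor
pages = list(links)
rank = {page: 1.0 / len(pages) for page in pages}  # uniform start

for _ in range(50):                  # iterate until ranks settle
    new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

for page, value in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {value:.3f}")
```

The raw scores here are probabilities that sum to 1; the 0-to-10 toolbar value is widely understood to be a roughly logarithmic bucketing of scores like these.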

In simple terms, each link to a page on your site from another site adds to your site's PageRank. Not all links are equal: search engines work hard to improve the user experience by identifying spam links and other practices that negatively impact search results. If a link to your site is placed on an important site, it adds rank to your website. Important sites are usually judged by their top-level domain, such as .gov, .edu, .pk, .cn, or .uk; search engines treat these URLs as authentic and consider such sites important. Placing your link on such a website can boost the ranking of your page quickly. Sites that attract a huge number of hits are also regarded as important.

The best types of links are those that are given based on the quality of your content. For your site to rank well in search results pages, it is important to make sure that search engines can crawl and index your site correctly; your site should be search-engine friendly.
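One crawlability check you can run yourself: Python's standard urllib.robotparser module reads a site's robots.txt and reports whether a given crawler may fetch a page. The site and URLs below are placeholders.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical site used for illustration.
robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()  # fetches and parses the robots.txt file

# Ask whether a crawler identifying as "Googlebot" may fetch a page.
allowed = robots.can_fetch("Googlebot", "https://example.com/products/page1.html")
print("Crawlable" if allowed else "Blocked by robots.txt")
```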

It is important to note that PageRank and a page's ranking in the search engine results page (SERP) are not the same thing; one does not determine the other.

Indexing in SEO training

Indexing is the process of compiling the information fetched from the internet. It is just like the index of a book.

The crawler processes each page it crawls in order to compile a massive index of all the words it sees and their location on each page. The crawler can process many, but not all, content types.
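As a rough model of that word index, the sketch below builds an "inverted index" in Python, mapping each word to the pages and positions where it appears; the two sample pages are invented for the example.

```python
from collections import defaultdict

# Sample crawled pages (assumption: already fetched and stripped to text).
pages = {
    "page1.html": "seo training explains how search engines rank pages",
    "page2.html": "pagerank measures the importance of pages",
}

# Inverted index: word -> list of (page, position-in-page) entries,
# mirroring "all the words it sees and their location on each page".
index = defaultdict(list)
for url, text in pages.items():
    for position, word in enumerate(text.split()):
        index[word].append((url, position))

# A query is then a simple dictionary lookup.
print(index["pages"])   # [('page1.html', 7), ('page2.html', 5)]
```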

Crawl processing covers how often a page should be crawled, the nature of the page, its relevant and irrelevant links, meta descriptions, content, geographic location, and of course ALT attributes, titles, and headings. This information is used to display results: results are determined from the data the search engine has fetched and indexed. The search engine uses a crawler to do this job.
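To illustrate a few of those on-page signals, this small Python sketch uses the standard html.parser module to pull the title, meta description, and image ALT text out of an invented HTML fragment.

```python
from html.parser import HTMLParser

class OnPageSignals(HTMLParser):
    """Records a few of the on-page signals mentioned above."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta_description = ""
        self.alt_texts = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")
        elif tag == "img" and attrs.get("alt"):
            self.alt_texts.append(attrs["alt"])

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Invented page fragment for illustration.
html = """<html><head><title>SEO Training</title>
<meta name="description" content="Free SEO training lessons."></head>
<body><img src="logo.png" alt="SEO training logo"></body></html>"""

parser = OnPageSignals()
parser.feed(html)
print(parser.title, "|", parser.meta_description, "|", parser.alt_texts)
```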

A crawler is algorithm-driven software designed to fetch information about pages across the internet so that the search engine can produce the most relevant results.


Also See:

How to build Search Engine Friendly web page
Understanding HTML for SEO

Search engine crawlers in SEO

Crawling is the process by which Googlebot (software by Google) discovers new and updated pages to be added to the Google index.

The crawler comes to a website and reads all the pages linked within it; it also follows links that point to other websites. The crawler has a scheduled visit plan set by Google. However, you can indicate how significant a web page is by declaring that significance in your sitemap.
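As an example of that sitemap hint, a &lt;priority&gt; value between 0.0 and 1.0 can be attached to each URL entry. The sketch below assembles one such entry with Python's standard xml.etree.ElementTree; the URL, change frequency, and priority values are made up.

```python
import xml.etree.ElementTree as ET

# Build a minimal sitemap with one URL entry (values are illustrative).
ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=ns)

url = ET.SubElement(urlset, "url")
ET.SubElement(url, "loc").text = "https://example.com/important-page.html"
ET.SubElement(url, "changefreq").text = "weekly"   # suggested crawl frequency
ET.SubElement(url, "priority").text = "0.9"        # significance hint (0.0-1.0)

# Serialize to the XML a crawler would read.
print(ET.tostring(urlset, encoding="unicode"))
```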

Google uses a huge set of computers to fetch (or "crawl") billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider).

The Googlebot crawler uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Google's crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
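In outline, that process is a breadth-first traversal of a URL frontier. Below is a heavily simplified, single-threaded sketch using only Python's standard library; a real crawler adds politeness delays, robots.txt checks, and scheduling. The seed URL is a placeholder.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=10):
    frontier = deque([seed])   # list of URLs to crawl, seeded like the index
    seen = {seed}
    while frontier and len(seen) <= limit:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue                        # dead link: note it and move on
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)   # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)          # newly detected page to crawl
                frontier.append(absolute)
        print("crawled:", url)

# Placeholder seed URL for illustration.
crawl("https://example.com/")
```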

Google doesn't accept payment to crawl a site more frequently, and it keeps the search side of its business separate from its revenue-generating AdWords service.



Also See:

What is Search Engine Friendly website