Getting new pages indexed and ranked

New content: Search engines must index new webpages in order to keep their indexes up to date with current content. Some search engines, such as Google and Yahoo, provide a facility whereby webmasters can submit an XML sitemap that is updated with new webpages as they appear. By providing an XML feed of your site through tools such as Google Sitemaps or Yahoo Site Explorer, you can help crawlers keep up with the growth of your website. Webclinic packages provide an XML sitemap that updates automatically whenever a new product page or static page is created, which means that changes to your site are picked up more quickly than would otherwise be possible.
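To make the format concrete, here is a minimal sketch of building a sitemap in the standard sitemaps.org XML format using Python's standard library. The URLs and dates are hypothetical placeholders, and real sitemap generators would typically pull this list from a site's database of pages.

```python
# Minimal sketch of generating a sitemaps.org-style XML sitemap.
# The page URLs and lastmod dates below are illustrative placeholders.
import xml.etree.ElementTree as ET

def build_sitemap(pages):
    """pages: list of (url, lastmod) tuples; returns the sitemap as a string."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for url, lastmod in pages:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
        ET.SubElement(entry, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

sitemap_xml = build_sitemap([
    ("http://www.example.com/", "2008-01-01"),
    ("http://www.example.com/products/new-page.htm", "2008-01-15"),
])
print(sitemap_xml)
```

A site that regenerates this file whenever a page is added or changed gives crawlers a single, always-current list of URLs to fetch.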

Directory depth: Search engines do not always crawl websites in depth (i.e. many directories deep). Popular sites with many external links to pages embedded deep in directories are more likely to be crawled in depth than websites with only one or two links pointing to their homepage. For this reason, it is important that your internal pages are popular enough to be linked to, so that search engines are more likely to index them.

Manual Submission of Pages: Many search engines allow pages to be submitted manually. This is not an ideal way to get a page indexed and is often abused by spammers. It is far more effective to include pages in your XML sitemap, or to create an HTML sitemap on your website, so that search engine crawlers can find them naturally on their next crawl.

Paid Inclusion: Search engines do not usually charge a fee to include your website in their index. However, some search engines, such as Yahoo, offer a paid inclusion service for their directories. In these instances, paid inclusion gives your website higher priority than free inclusion and guarantees that your site is listed in the search engine’s paid directory. The Yahoo Directory charges an annual fee for inclusion, but payment is not required for inclusion in Yahoo’s search engine index.

After a search engine’s crawling procedure is complete, it knows the document’s title, modification date and size. Google used to display such webpages in its index as “Supplemental Results” (discontinued as of 2008), using keywords in the webpage’s title and incoming links to associate it with keywords. A copy of the document is stored in the search engine’s database, and the indexer then processes the page into a mathematical representation – a process that varies between search engines and is kept secret.

Although the details are kept secret, it is reasonable to assume that a search engine records the following attributes for each webpage:

each word; the URL where the word appears; the position of the word in the document; and the element in which the word appeared (heading, title, hyperlink, bold, italics, etc.). This information would be stored in the search engine’s database in a format similar to the following table:
Word         URL          Position    Type
Business     index.htm    1           Title
First        index.htm    2           Title
Ecommerce    index.htm    3           Title

Punctuation is ignored, as search engines do not index it. From the basic example above, you can see how search engines match keywords to their position and format on the page. Title tags tend to be given the highest weight.
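The table above is essentially an inverted index. The following toy sketch builds one in Python: for each word it records the URL, position and element type. The element names and example page are illustrative only, not any real engine’s schema.

```python
# Toy inverted index along the lines of the table above: for each word,
# record the URL, its position, and the element it appeared in.
def index_page(index, url, element, words, start=1):
    """Add words from one element of a page to the index.

    Returns the next free position, so successive elements of the same
    page can continue the position count.
    """
    for offset, word in enumerate(words):
        # Punctuation would be stripped before this step; words are
        # lowercased so queries match regardless of case.
        index.setdefault(word.lower(), []).append((url, start + offset, element))
    return start + len(words)

index = {}
pos = index_page(index, "index.htm", "Title", ["Business", "First", "Ecommerce"])
index_page(index, "index.htm", "Heading", ["Welcome"], start=pos)
print(index["business"])  # [('index.htm', 1, 'Title')]
```

A query for “business” can then look the word up directly, see that it occurred at position 1 of a title, and weight the match accordingly.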

Hyperlinks found within documents are crawled by the search engines. When a URL is found in a document, the words in the hyperlink are associated with the document that the hyperlink points to. This means a new webpage may appear in Google’s search results, associated with the anchor text of the link(s) pointing to it, without that document ever having been processed by the search engines. In Google, you can see which pages in the index have specific keywords in anchor text pointing to them by typing allinanchor:keyword (e.g. “allinanchor:search engine optimisation”).
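The anchor-text mechanism can be sketched in the same toy style: when a link is found, its words are filed against the *target* URL rather than the page containing the link. The function and data layout here are assumptions for illustration, not Google’s actual implementation.

```python
# Sketch of associating anchor text with the target of a link, so a page
# can be found for those words before it has been crawled itself.
def record_link(anchor_index, target_url, anchor_text):
    """File each word of the anchor text against the link's target URL."""
    for word in anchor_text.lower().split():
        anchor_index.setdefault(word, set()).add(target_url)

anchor_index = {}
record_link(anchor_index, "newpage.htm", "search engine optimisation")
# "newpage.htm" is now findable for "optimisation" even if never crawled:
print(anchor_index["optimisation"])  # {'newpage.htm'}
```

This is why a brand-new page can rank for the words in links pointing at it before the crawler has ever fetched the page itself.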

After a new webpage has been crawled, indexed and processed by the search engines, it should be returned in search results. In Google, you can check which pages of a site have been indexed by typing site:domain.
