Posts Tagged ‘Search Engines’

Web Directories & Specialised Search Engines

Thursday, December 31st, 2009

When you are trying to market your product online, your first choice is usually to get it onto the front page of Google. This is hardly surprising, given Google's dominance of online search. However, it's worth knowing what alternatives you have if you suspect you can't reach Google's front page in the time available. And, of course, it's no news that doing so is a difficult task.

So what are the other ways in which you can make your product or service popular? One is to submit your site to other search engines such as Yahoo!, MSN, AltaVista, etc. Still, that won't compensate for missing the front page of Google. What's the next step? One option is web directories; another is specialised search engines. Wondering how they can help, or even what they are? Alright, let's find out.

Web directories

Web directories are collections of links arranged into categories. When users search the directory for a particular topic, they find links to sites on that topic. Though the traffic your site receives through a web directory will be lower than that from a major search engine, its quality will be higher, because visitors arrive after searching for exactly what your site is about.

Specialised search engines

Specialised search engines work the same way normal search engines do, but their index contains only websites dealing with a particular topic. Thus there are search engines devoted to music, books, sports, etc. This again is beneficial, as the users are searching for exactly that topic or product, so the probability of conversion is high.

Do the two sound similar? They do have one point in common: both provide links to websites belonging to specific categories. However, web directories and specialised search engines are more different than they are similar. Here is a list of the differences between them.

Web directories

  • Contain a review of the site content along with the link to the site.
  • Links are categorised and edited by human editors.
  • Categorisation is done after checking the whole page.
  • Sometimes links are added only after payment of a fee.
  • Less chance of a link being placed in the wrong category, as categorisation is done by editors.
  • Require manual effort and maintenance.

Specialised search engines

  • Show only a short description along with the link, as in ordinary search engines.
  • Links are crawled by robots.
  • Categorisation is done by checking only for the presence of keywords.
  • No payment is necessary, as the crawlers reach the website automatically.
  • More chance of wrong categorisation, as the process is automated.
  • No human intervention required; the crawlers find and rank the site based on its keywords.

There! That should clear up any confusion regarding web directories and specialised search engines. The tip here is that, even while you carry out normal search engine optimisation, you can add your link to various directories and specialised search engines. After all, any publicity, big or small, is good publicity.


Search Engine Marketing Explained

Tuesday, July 7th, 2009

Which Search Engine is Important?

All major search engines let internet users find websites about a certain topic quickly and easily, simply by typing in a few keywords. Simply put, a search engine can send potential customers to your ecommerce website. This is why search engine optimisation should form the core of any online marketing venture. If you know effective methods of internet marketing, you can improve your online shop's search engine positioning and web presence, which can send you significantly more prospective clients every day. Google is currently the most widely used search engine. Yahoo!, an established search engine company that predates Google, is now less popular, and has been in merger talks with large companies such as Microsoft (February 2008) in an attempt to regain market share.
How Google Indexes Websites

Many people think that once a site is created, it will be indexed fairly quickly. This is not true. Google indexes a new website by following links from the site's homepage to the rest of the site using a web bot. For this to happen, the site's homepage must either be linked from another website that is already in Google's index, or be submitted manually to Google Sitemaps using an XML feed. New pages with original content may rank well initially, because Google traditionally favours fresh content, but pages that fail to justify their high ranking will be dropped at the next crawl.
How to Check Whether a Site Is Indexed

To check whether your website is indexed by Google, simply enter its URL into the Google search box and hit enter. If the search engine results page (SERP) contains a listing for the page that the URL points to, then it is indexed. The snippet shown in the SERP is usually taken from your page's meta description tag or from its DMOZ directory description. The title shown is normally the title of the page as it appears between the <title> tags.


Getting new pages indexed and ranked

Wednesday, March 25th, 2009

New content: search engines must be able to index new webpages in order to keep their indexes up to date with current content. Some search engines, such as Google and Yahoo!, provide a facility whereby webmasters can upload an XML sitemap that is updated with new webpages as they appear. By adding an XML data feed to your site via tools such as Google Sitemaps or Yahoo! Site Explorer, a webmaster can help crawlers keep up with the growth of the website. Webclinic packages provide an XML site feed that updates automatically whenever a new products page or static page is created, which means changes to your site are picked up more quickly than would otherwise be possible.
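To make the XML feed idea concrete, here is a minimal sketch of how such a sitemap could be generated. It produces the standard sitemap format that tools like Google Sitemaps accept; the URLs and dates are made-up placeholders, and a real site would generate the entries from its page database.

```python
# Sketch: generating a minimal XML sitemap of the kind described above.
# The URLs and dates below are placeholder examples only.
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """urls: list of (location, last-modified-date) pairs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        # lastmod tells crawlers when the page last changed
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    ("http://www.example.com/", "2009-03-25"),
    ("http://www.example.com/products/new-page.htm", "2009-03-25"),
])
print(sitemap)
```

Regenerating and re-uploading this file whenever a page is added is exactly the "automatic site feed" behaviour described above.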

Directory depth: search engines do not always crawl websites in depth (i.e. many directories deep). Popular sites with many external links to pages embedded deep in directories are more likely to be crawled in depth than websites with only one or two links pointing to their homepage. For this reason, it is important that your internal pages are popular enough to be linked to, so that search engines are more likely to index those pages.

Manual submission of pages: many search engines allow the manual submission of pages. This is not an ideal way to get a page indexed, and it is often abused by spammers. It is much more effective to ensure that pages are indexed by including them in your XML sitemap, or by creating an HTML sitemap on your website so that search engine crawlers can find them naturally on their next crawl.

Paid inclusion: search engines do not usually charge a fee to include your website in their index. However, some, such as Yahoo!, offer a paid inclusion service for their directories. In these cases, paid inclusion gives your website higher priority than free inclusion and guarantees that your site is listed in the search engine's paid directory. The Yahoo! Directory charges an annual fee for inclusion in the directory, but does not require payment for inclusion in the search engine index.

After a search engine’s crawling procedure is complete, it knows the document’s title, modification date and size. Google used to display such pages in its index as “Supplemental Results” (discontinued as of 2008), using keywords in the webpage’s title and incoming links to associate it with keywords. A copy of the document is stored in the search engine’s database, and the search engine indexer processes the page into a mathematical representation, a process that varies between search engines and is kept secret.

Although the details are secret, it is reasonable to assume that a search engine records the following attributes for each webpage:

Each word; the URL where the word appears; the position of the word in the document; and the element in which the word appeared (heading, title, hyperlink, bold, italics, etc.). This information would be stored in the search engine database in a format similar to the table below, here indexing http://www.thewebclinic.co.uk:

  Word         URL          Position    Type
  Business     index.htm    1           Title
  First        index.htm    2           Title
  Ecommerce    index.htm    3           Title
Punctuation is ignored, as search engines do not index punctuation marks. From the basic example above, you can see how search engines match keywords to their position and formatting on the page. Words in title tags tend to be given the highest weight.
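The index structure described above can be sketched in a few lines of code. This is an illustrative toy, not how any real search engine is implemented: for each word it records the URL, the word's position in the document, and the element the word appeared in, with punctuation stripped. The page content is a made-up example.

```python
# Toy sketch of the Word / URL / Position / Type records in the table above.
def index_page(url, elements):
    """elements: list of (element_type, text) pairs, e.g. ("Title", "Business First Ecommerce")."""
    records = []
    position = 0
    for element_type, text in elements:
        for word in text.split():
            position += 1
            # strip punctuation and lowercase, as search engines ignore both
            records.append((word.strip(".,!?:;").lower(), url, position, element_type))
    return records

rows = index_page("index.htm", [
    ("Title", "Business First Ecommerce"),
    ("Heading", "Welcome to our shop"),
])
for word, url, pos, etype in rows:
    print(word, url, pos, etype)
```

A ranking function could then weight each record by its Type, giving Title matches the highest weight, as the paragraph above suggests.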

Hyperlinks found within documents are crawled by the search engines. When a URL is found in a document, the words in the hyperlink's anchor text are associated with the document the hyperlink points to. This means a new webpage may appear in Google's search results, associated with the anchor text of the link(s) pointing to it, without that document ever having been processed by the search engines. In Google, you can see which pages in the index have specific keywords in anchor text pointing to them by typing allinanchor:keyword (e.g. “allinanchor:search engine optimisation”).
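The first step of that process, pulling each link's target URL and anchor text out of a page, can be sketched with the standard library's HTML parser. The HTML snippet is a made-up example; a crawler would then file the anchor words against the target URL.

```python
# Sketch: extracting (target URL, anchor text) pairs so the anchor words
# can be associated with the linked document, as described above.
from html.parser import HTMLParser

class AnchorExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.anchors = []   # (target URL, anchor text) pairs
        self._href = None   # href of the <a> tag currently open, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.anchors.append((self._href, "".join(self._text).strip()))
            self._href = None

parser = AnchorExtractor()
parser.feed('<p>Read about <a href="http://www.example.com/seo.htm">'
            'search engine optimisation</a> here.</p>')
print(parser.anchors)
```

Each extracted pair is what lets the target page rank for its anchor words before it has even been crawled itself.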

After a new webpage has been crawled, indexed and processed by the search engines, it should be returned in search results. In Google, you can check which pages have been indexed by typing site:domain (e.g. “site:www.thewebclinic.co.uk”).
