Let's recap the basic methods of steering and supporting search engine crawling and ranking:
# Provide unique content. A lot of unique content. Add fresh content frequently.
# Acquire valuable inbound links from related pages on external sites, regardless of their own search engine rankings. Actively acquire deep inbound links to content pages, but accept home page links too. Do not run massive link campaigns while your site is still new; let the number of relevant inbound links grow smoothly and steadily to avoid red flags.
# Place carefully selected outbound links to on-topic authority pages on each content page. Ask for reciprocal links, but do not remove your links if the other site does not link back.
# Implement surfer-friendly, themed navigation. Use text links to support deep crawling. Give each page at least one internal link from a static page, for example from a site map page.
# Encourage other sites to make use of your RSS feeds and the like. To protect the uniqueness of your site's content, do not put text snippets from your site into feeds or submitted articles; write short summaries in different wording instead.
# Use search-engine-friendly, short but keyword-rich URLs. Hide user tracking from search engine crawlers.
# Log each crawler visit and keep this data indefinitely. Develop smart reports that query your logs and study them frequently. Use these logs to improve your internal linking.
# Make use of the robots exclusion protocol to keep spiders away from internal areas. Do not try to hide your CSS files from robots.
# Make use of the robots META tag to ensure that only one version of each page on your server gets indexed. When it comes to pages carrying partial content of other pages, make your decision based on common sense, not on any SEO bible.
# Use rel="nofollow" in your links when you cannot vouch for the linked page (user-submitted content in guestbooks, blogs ...). Do not hoard PageRank™.
# Make use of Google Sitemaps as a 'robots inclusion protocol'.
# Do not cheat the search engines.
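The robots exclusion tip above boils down to a few lines in robots.txt. A minimal sketch — the blocked paths are hypothetical examples of "internal areas":

```
# Keep spiders out of internal areas.
User-agent: *
Disallow: /admin/
Disallow: /stats/
# Note: no Disallow rule for stylesheets — CSS stays crawlable.
```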
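The robots META tag and rel="nofollow" tips can be illustrated with a short HTML fragment. The page and link here are hypothetical:

```html
<!-- On a duplicate version of a page (e.g. a printer-friendly copy):
     keep it out of the index, but let the spider follow its links. -->
<meta name="robots" content="noindex,follow">

<!-- On a link you cannot vouch for, e.g. in a guestbook entry: -->
<a href="http://example.com/" rel="nofollow">visitor's link</a>
```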
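For the Google Sitemaps tip, a minimal sitemap file looks like the sketch below; the domain and URL are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/seo/unique-content.html</loc>
    <lastmod>2008-10-31</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```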
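The crawler-logging tip above can be sketched in a few lines of Python. This is only a sketch, assuming Apache "combined" log format; the user-agent markers and sample paths are illustrative, not an exhaustive crawler list:

```python
import re
from collections import Counter

# Assumes Apache "combined" access log lines.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

# A few well-known crawler user-agent substrings (not exhaustive).
CRAWLERS = {"Googlebot": "Googlebot", "Slurp": "Yahoo! Slurp", "bingbot": "bingbot"}

def crawler_report(lines):
    """Count crawler visits per bot and per requested path from raw log lines."""
    by_bot = Counter()
    by_path = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue
        agent = m.group("agent")
        for marker, name in CRAWLERS.items():
            if marker in agent:
                by_bot[name] += 1
                by_path[m.group("path")] += 1
                break
    return by_bot, by_path
```

A report like this quickly shows which pages the spiders revisit and which ones they never reach — exactly the input you need to adjust internal linking.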
Methods to Support Search Engines in Crawling and Ranking
Friday, October 31, 2008 at 3:55 AM Posted by Vasu