As the number of websites has grown, competition among webmasters to rank at the top of search engine results has intensified. Webmasters use various activities to promote their sites and often resort to deceptive methods, such as copying content from other sites or overusing keywords in their content. In response, Google released the Farmer/Panda update (version 1.1) of its search algorithm to monitor and demote sites that copy content from high-quality sites.
People searching for companies, such as providers of outsourced software development services, were getting duplicate results with poor-quality content, which cast doubt on the standard of Google's results. Google therefore introduced the Panda update, which enables it to trace those pages. Google began disparaging and penalizing sites that served poor-quality web pages and failed the search algorithm's tests; these sites lost a great deal of traffic as their positions dropped. Panda version 1.1 focused mainly on duplicate content and on preventing other sites from earning a bad name because of it. However, while performing all these actions, it tended to affect good-quality sites as well.
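Google has never disclosed the actual signals Panda uses, but the basic idea behind detecting duplicate content can be illustrated with a classic, publicly known technique: w-shingling combined with Jaccard similarity. The sketch below is purely illustrative, with hypothetical page texts, and is not a representation of Google's implementation.

```python
# Illustrative sketch only: Google's real duplicate-detection signals are not
# public. This shows one classic technique, w-shingling plus Jaccard similarity,
# that a search engine could in principle use to flag near-duplicate pages.

def shingles(text: str, w: int = 4) -> set:
    """Return the set of w-word shingles (overlapping word n-grams) of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B|; 1.0 means identical shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical page texts for demonstration.
original = "Q3 Technologies provides outsourced software development services to clients worldwide"
scraped = "Q3 Technologies provides outsourced software development services to many clients worldwide"
unrelated = "A completely different article about warehousing management best practices and logistics"

print(jaccard(shingles(original), shingles(scraped)))    # high score: likely duplicate
print(jaccard(shingles(original), shingles(unrelated)))  # near zero: distinct content
```

A pipeline built on this idea would compare a page's shingle set against an index of already-crawled pages and flag anything above a chosen similarity threshold for further review.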
Google had been aggressively looking for a way to overcome this issue since its last update, and at the SMX Advanced conference held in Seattle it confirmed that solving this problem would be the main priority of Panda 2.2.
According to Matt Cutts, head of the web spam team at Google, this latest version of Panda (the Google Panda 2.2 update) will focus mainly on the major pitfalls of the original Panda, as webmasters have complained that sites re-publishing content taken from other sites are ranking better than the sites from which they copied that content.
Some of the major issues not addressed by the original Panda are:
• "Scraper sites," which take content from other sites on the web and, thanks to stronger SEO and link profiles, actually outrank the original content.
• Google was unable to find enough signals to differentiate low-quality sites from quality ones during the Panda update.
• The Panda update could not keep track of site performance or the time visitors spent on a site.
One thing Google is going to address with Panda 2.2 is the issue of "scraper" websites that republish other sites' content, generating revenue from Google AdSense in the process, and often outranking the content originators as well.
Since Google introduced this manually run algorithm known as the Panda update, no site has fully recovered from Panda's penalties. The question now is whether update 2.2 will help sites recover from the damage done by the earlier versions of Panda. So far there is no hint as to whether or not they will recover, but the answer should become clear after the release of update 2.2.
Feel free to visit: http://casestudies.q3tech.com
Author Resource:
Nick Thomas is the author of this article. He has been demonstrating his writing skills by writing articles for custom application development companies like Q3 Technologies for the last two years. He also has a keen interest in writing for warehousing management firms.
For more details, feel free to visit http://www.q3tech.com