Study on Google Search Algorithms
Chapter 1
INTRODUCTION
Google is a search engine that people use to find accurate information on almost any topic. Search results are drawn from the enormous index that Google maintains. Google takes a very pragmatic approach to searching, and in the process it has developed an algorithm. That algorithm is very complicated and continues to get more complicated as Google tries its best to provide searchers with the information they need. When search engines were first created, early search marketers were able to easily find ways to make the search engine think that their client's site was the one that should rank well. In some cases it was as simple as putting some code on the website called a meta keywords tag, which would tell search engines what the page was about.
As Google evolved, its engineers, who were primarily focused on making the
search engine results as relevant to users as possible, continued to work on ways to stop
people from cheating, and looked at other ways to show the most relevant pages at the top
of their searches. The algorithm now looks at hundreds of different factors. There are some that we know are significant, such as having a good descriptive title (between the <title></title> tags in the code).
In the past, the Google algorithm would change very infrequently. If the site was
sitting at #1 for a certain keyword, it was guaranteed to stay there until the next update
which might not happen for weeks or months. Then, they would push out another update
and things would change. They would stay that way until the next update happened.
Increasingly, the online audience for websites is coming in through the side doors, via links on blogs and social-networking websites like Facebook. Probably the most important tool for reaching large audiences, however, is Google. If you can climb to the top of the search results, you are certain to be rewarded with a huge number of clicks. Most publications these days try to harness the Google algorithm through an arcane process known as search engine optimization. Some are more skilled at it than others.
Google makes over 600 changes to its algorithm in a year, and the vast majority of
these are not announced. But, when Google makes a really big change, they give it a
name, usually make an announcement, and everyone in the SEO world goes crazy trying
to figure out how to understand the changes and use them to their advantage.
Three of the biggest changes that have happened in the last few years are the
Panda algorithm, the Penguin algorithm and Hummingbird.
Chapter 2
GOOGLE PANDA
2.1 Google Panda
Panda first launched on February 23, 2011. It was a big deal. The purpose of Panda was
to try to show high-quality sites higher in search results and demote sites that may be of
lower quality. This algorithm change was unnamed when it first came out, and many of
us called it the "Farmer" update as it seemed to affect content farms. (Content farms are
sites that aggregate information from many sources, often stealing that information from
other sites, in order to create large numbers of pages with the sole purpose of ranking well
in Google for many different keywords.) However, it affected a very large number of
sites. The algorithm change was eventually officially named after one of its creators,
Navneet Panda.
When Panda first happened, a lot of search engine optimization (SEO) forums thought that this algorithm was targeting sites with unnatural backlink patterns. However, it turns out that links are most likely not a part of the Panda algorithm. It is all about on-site quality.
In most cases, sites that were affected by Panda were hit quite hard, but some sites have taken only a slight loss on the date of a Panda update. Panda tends to be a site-wide issue, which means that it doesn't just demote certain pages of a site in the search engine results; instead, Google considers the entire site to be of lower quality. In some cases, though, Panda can affect just a section of a site, such as a news blog or one particular subdomain.
2.2 Google Panda History
• Rolled Out to US sites on February 24, 2011
http://googleblog.blogspot.com/2011/02/finding-more-high-quality-sites-in.html
• Rolled out Globally to English language users on April 11, 2011
• Also began to incorporate data about the sites that users block
• Panda #25 — March 14, 2013; Panda updates moved to a rolling update schedule
• Panda Dance (Panda Recovery), June and July 2013
• Panda 4.0 — May 19, 2014
2.3 Types Of Content
2.3.1 Thin Content
A "thin" page is a page that adds little or no value to someone who is reading it. It doesn't
necessarily mean that a page has to be a certain number of words, but quite often, pages
with very few words are not super-helpful. If you have a large number of pages on your
site that contain just one or two sentences and those pages are all included in the Google
index, then the Panda algorithm may determine that the majority of your indexed pages
are of low quality. Having the odd thin page is not going to cause you to run into Panda problems. But if a big enough portion of your site contains pages that are not helpful to users, then that is not good.
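The idea of an indexed-page audit can be sketched in a few lines. The word-count threshold and the page texts below are illustrative assumptions, not figures Google has published:

```python
# Toy "thin content" audit: flag pages whose text falls below a
# word-count threshold. The 100-word cutoff is an invented heuristic.

def audit_thin_pages(pages, min_words=100):
    """pages maps URL -> page text; return URLs that look thin."""
    thin = []
    for url, text in pages.items():
        if len(text.split()) < min_words:
            thin.append(url)
    return thin

site = {
    "/guide": "useful " * 500,      # stands in for a substantive article
    "/tag/red": "Posts tagged red.",
    "/tag/blue": "Posts tagged blue.",
}
print(audit_thin_pages(site))  # -> ['/tag/red', '/tag/blue']
```

A real audit would, of course, look at rendered page content rather than raw strings, but the proportion of flagged pages to indexed pages is the quantity the paragraph above is concerned with.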
2.3.2 Duplicate Content
There are several ways that duplicate content can cause a site to be viewed as low quality by the Panda algorithm. The first is when a site has a large amount of content that is copied from other sources on the web. If a site has a blog, and that blog is populated with articles taken from other sources, Google is pretty good at figuring out that the blog is not the creator of the content. If the algorithm can see that a large portion of the site is made up of content that exists on other sites, then this can cause Panda to look at the site unfavourably.
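One standard way to detect copied text, sketched here as an assumption about the general technique rather than Google's actual method, is to compare documents by the overlap of their word n-gram "shingles". The example texts and the 0.5 threshold are invented:

```python
# Toy duplicate-content check: Jaccard similarity of word 3-gram
# shingles. A high score suggests one text largely copies the other.

def shingles(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "panda rewards sites that publish original helpful content"
scraped = "panda rewards sites that publish original helpful content daily"
print(jaccard(original, scraped) > 0.5)  # -> True: likely duplicated
```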
2.3.3 Low-Quality Content
If a publisher publishes an article on a website, the only type of information the publisher should want to present to Google is information that is the absolute best of its
kind. In the past, many SEOs have given advice to site owners saying that it was important to blog every day and make sure that publishers are always adding content for Google to index. But if the information is not high-quality content, then it could be doing more harm than good.
2.4 Recovery From Panda Hit
Google refreshes the Panda algorithm approximately monthly. They used to announce
whenever they were refreshing the algorithm, but presently they only do this if there is a
really big change to the Panda algorithm. What happens when the Panda algorithm
refreshes is that Google takes a new look at each site on the web and determines whether
or not it looks like a quality site in regards to the criteria that the Panda algorithm looks
at. If a site was adversely affected by Panda and has made changes such as removing thin and duplicate content, then, when Panda refreshes, the site may see things improve. For some
sites it can take a couple of Panda refreshes to see the full extent of the improvements.
This is because it can sometimes take several months for Google to revisit all of your
pages and recognize the changes that you have made.
Every now and then, instead of just refreshing the algorithm, Google does what
they call an update. When an update happens, this means that Google has changed the
criteria that they use to determine what is and isn't considered high quality. On May 20,
2014, Google did a major update which they called Panda 4.0. This caused a lot of sites to see significant changes in regards to Panda.
Fig 2.1: Hit Graph by Panda
Chapter 3
GOOGLE PENGUIN
3.1 Google Penguin
Google Penguin was released in April 2012. Google's stated purpose for the Penguin update was to decrease search engine rankings for websites with poor-quality backlink profiles, excessive keyword-rich anchor text, and over-optimization for a single term; in general, websites that attempt to manipulate search results by using link-building schemes. Domains with poor-quality backlinks include those with links coming from low-quality sites, sites with no relevance to your domain, paid links, and links with over-optimized anchor text.
The goal of Penguin is to reduce the trust that Google has in sites that have
cheated by creating unnatural back links in order to gain an advantage in the Google
results. While the primary focus of Penguin is on unnatural links, there can be
other factors that can affect a site in the eyes of Penguin as well. Links, though, are
known to be by far the most important thing to look at.
3.2 Importance Of Links
A link is like a vote for your site. If a well respected site links to your site, then this is a
recommendation for your site. If a small, unknown site links to you then this vote is not
going to count for as much as a vote from an authoritative site. Still, if you can get a large
number of these small votes, they really can make a difference. This is why, in the past,
SEOs would try to get as many links as they could from any possible source.
Another thing that is important in the Google algorithms is anchor text. Anchor
text is the text that is underlined in a link. So, in this link to a great SEO blog, the anchor
text would be "SEO blog." If Moz.com gets a number of sites linking to them using the
anchor text "SEO blog," that is a hint to Google that people searching for "SEO blog"
probably want to see sites like Moz in their search results.
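To make "anchor text" concrete, the snippet below extracts it from a link using only the standard library. The URL and surrounding sentence are invented examples; the anchor text is simply the visible text between the opening and closing `a` tags:

```python
# Extract (href, anchor_text) pairs from HTML using the stdlib parser.
from html.parser import HTMLParser

class AnchorTextParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.href = ""
        self.anchors = []  # list of (href, anchor_text) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.href = dict(attrs).get("href", "")

    def handle_data(self, data):
        if self.in_link:
            self.anchors.append((self.href, data))

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

p = AnchorTextParser()
p.feed('Read this <a href="https://moz.com/blog">SEO blog</a> today.')
print(p.anchors)  # -> [('https://moz.com/blog', 'SEO blog')]
```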
It's not hard to see how people could manipulate this part of the algorithm. Let's
say that I am doing SEO for a landscaping company in Orlando. In the past, one of the
ways that I could cheat the algorithm into thinking that my company should be ranked
highly would be to create a bunch of self made links and use anchor text in these links
that contain phrases like Orlando Landscaping Company, Landscapers in Orlando
and Orlando Landscaping. While an authoritative link from a well respected site is good,
what people discovered is that creating a large number of links from low quality sites was
quite effective. As such, what SEOs would do is create links from easy to get places like
directory listings, self made articles, and links in comments and forum posts.
While we don't know exactly what factors the Penguin algorithm looks at, what
we do know is that this type of low quality, self made link is what the algorithm is trying
to detect. In my mind, the Penguin algorithm is sort of like Google putting a "trust factor"
on your links. I used to tell people that Penguin could affect a site on a page or even a
keyword level, but Google employee John Mueller has said several times now that
Penguin is a sitewide algorithm. This means that if the Penguin algorithm determines that
a large number of the links to your site are untrustworthy, then this reduces Google's trust
in your entire site. As such, the whole site will see a reduction in rankings.
While Penguin affected a lot of sites drastically, I have seen many sites that saw a
small reduction in rankings. The difference, of course, depends on the amount of link
manipulation that has been done.
3.3 Recovery From Penguin Hit
Penguin is a filter just like Panda. What that means is that the algorithm is re-run
periodically and sites are re-evaluated with each re-run. At this point it is not run very
often at all. The last update was October 4, 2013 which means that we have currently
been waiting eight months for a new Penguin update. In order to recover from Penguin,
you need to identify the unnatural links pointing to your site and either remove them, or if
you can't remove them you can ask Google to no longer count them by using the disavow
tool. Then, the next time that Penguin refreshes or updates, if you have done a good
enough job at cleaning up your unnatural links, you will once again regain trust in
Google's eyes. In some cases, it can take a couple of refreshes in order for a site to
completely escape Penguin because it can take up to 6 months for all of a site's disavow
file to be completely processed.
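If you do resort to the disavow tool mentioned above, the file you upload is plain text, one entry per line. The domains below are invented examples; the general shape follows Google's documented format, where "#" starts a comment, "domain:" disavows an entire site, and a bare URL disavows a single page:

```
# Disavow file uploaded via Google's disavow links tool.

# Ignore every link from this spammy directory (whole domain):
domain:spammy-directory.example.com

# Ignore one specific page that links to us:
http://forum.example.org/thread/123
```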
If you are not certain how to identify which links to your site are unnatural, here
are some good resources for you:
• What is an unnatural link: an in-depth look at the Google Quality Guidelines
• The link schemes section of the Google Quality Guidelines
It's important to note that when sites "recover" from Penguin, they often don't
skyrocket up to top rankings once again as those previously high rankings were probably
based on the power of links that are now considered unnatural. Here is some information
on what to expect when you have recovered from a link based penalty or algorithmic
issue.
Also, the Penguin algorithm is not the same thing as a manual unnatural links
penalty. You do not need to file a reconsideration request to recover from Penguin. You
also do not need to document the work that you have done in order to get links removed
as no Google employee will be manually reviewing your work. As mentioned previously,
here is more information on the difference between the Penguin algorithm and a manual
unnatural links penalty.
Fig 3.1: Hit by Penguin
Chapter 4
GOOGLE HUMMINGBIRD
4.1 Google Hummingbird
Google released its newest algorithm update, Hummingbird 1.0, in August 2013. Hummingbird is said to be the biggest change in the Google algorithm since the "Caffeine" update in 2009, and probably since 2001. Hummingbird enables the Google search engine to produce better, more relevant results for semantic searches: conversational searches rather than keyword-specific searches. These search terms are often called long-tail queries.
4.2 The Resurgence Of Long Tailed Keywords
The Hummingbird is what Google is calling the latest (greatest?) algorithm that they
slipped in under our radar in August. Hummingbird will take a search engine query using
long-tailed keywords and try to decipher the context of the question rather than chase the
specific keywords within the question.
4.3 Page Rank Algorithm
Hummingbird still looks at Page Rank (how important the links to a page are deemed to be) along with other factors, such as whether Google believes a page is of good quality and the words used on it, among many other things. A webpage's ranking is determined by analyzing the rankings of all the other web pages that link to the webpage in question. For a page A that is linked to by pages T1 ... Tn, it is calculated as:

PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

where:
n = the number of pages on the web that link to A.
C(Tn) = the number of outgoing links on page Tn, by which that page's Page Rank is divided.
d = a damping factor that decreases the influence of all of the pages in relation to the page in question.
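The Page Rank formula can be sketched as a short iterative computation. The three-page link graph below is an invented example, and the damping factor d = 0.85 is the commonly cited default, not a value specific to Hummingbird:

```python
# Minimal iterative Page Rank:
#   PR(A) = (1 - d) + d * sum(PR(T) / C(T)) over pages T linking to A.

def pagerank(links, d=0.85, iterations=100):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}  # common starting value
    for _ in range(iterations):
        new = {}
        for p in pages:
            inbound = (pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - d) + d * sum(inbound)
        pr = new
    return pr

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
# "C" collects links from both A and B, so it ends up with the top score.
print(max(ranks, key=ranks.get))  # -> C
```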
4.4 Architecture Of Google Hummingbird Algorithm
Fig 4.1: Architecture of hummingbird algorithm
Hummingbird is a change in the search algorithm that uses several factors to initiate a conversation with the searcher and provide real answers to queries, instead of returning keyword-matching documents. Hummingbird is all about conversation, and long-tail queries are often involved in conversation. Also, during conversation we involve one or more entities, and this is where the Knowledge Graph and semantics enter. The crux is that, with the introduction of Hummingbird, Google has adapted its search algorithm to handle complex and conversational queries entered by the user. It uses semantics and the Knowledge Graph to a much greater depth than it has in the past. The signals for ranking documents remain the same, and Panda, Penguin, EMD, etc. are all parts of the main algorithm, which is now Hummingbird. Factors like Domain Authority, Page Rank, social popularity, overall content relevancy, Tf-Idf score, domain age, Google Authorship, use of metadata, etc. all contribute towards ranking a specific document. But we can surely use this new model to adapt our existing content based on the manner in which a query gets parsed and identified.
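Among the factors listed above, the Tf-Idf score has a standard definition worth making concrete: a term weighs more when it is frequent in one document but rare across the collection. The three documents below are invented examples:

```python
# Tf-Idf sketch: term frequency within a document, scaled by the
# inverse document frequency of the term across the collection.
import math

def tf_idf(term, doc, docs):
    """doc and each entry of docs are lists of words."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

docs = [
    "hummingbird parses conversational queries".split(),
    "penguin targets unnatural links".split(),
    "panda targets thin content".split(),
]
# "hummingbird" is unique to the first document, so it scores higher
# there than "targets", which appears in two documents.
print(tf_idf("hummingbird", docs[0], docs) > tf_idf("targets", docs[1], docs))  # -> True
```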
4.5 Working Process Of Google Hummingbird Algorithm
Fig 4.2: Processing the user query and displaying the appropriate pages
SEO now requires a keener understanding of your audience. It does not start or end with
keywords; rather, it starts with the user and an understanding of what your user wants.
Your content may have four or five different types of users, who are searching for the
answer to a query. Understanding what is being served to which user and catering to those
important segments with a good user experience on your site is key.
Currently, personas are talked about more than ever in the search marketing world. Traditional marketers have long used this model to better understand their product or service users. This depth of understanding is important as you think about the
topics your users are interested in and how you can be a solution for them with your
content.
Keyword research still guides us to the topics people in our audience are searching
for; but, our responsibility as marketers is to go beyond that data. That means having the
most useful, most engaging, best quality page for a query with the appropriate keywords
on the page. And although keyword optimization often happens best when a topic is
thoughtfully written, and has enough depth to include many variations of a concept,
optimizing your page for specific queries still reinforces the topic of the page. If you have
not spent much effort gathering qualitative data about your users, now is a good time to
start. Surveys, monitoring conversations on social and talking face-to-face with your
customers will help you build those personas to better understand what matters to them,
so you can execute with content. But more on that in another post.
4.5.1 Performance of the Page
At BrightEdge, we have been arming our customers with ways to measure our content's performance at a page level, even before Google's secure search was launched in full. This was not only in anticipation of the change, but also a way to help businesses better understand the metrics that matter.
Post-Hummingbird and post-secure search, measurement is all about the content, not the keyword. Start measuring what pages are generating the most value for you, and what
types of content are generating the greatest ROI.
If you have content that ranks well, but is not driving traffic or engagement on
your site, it’s not doing a good job of satisfying your users. You want to think about
metrics like overall traffic to a page, conversion rate and so on. Then, you can begin to
look at groups of pages on your site that best perform on a traffic and revenue level,
depending on your goals. In the old paradigm, SEOs may have used a “more content is
better” approach. But now, it’s relevancy, credibility, timeliness and quality over
quantity.
Fig 4.3: Keyword Relations: One Word Can Modify Search Results
Once you have a picture of page performance on your site overall, you can then begin to make decisions about where you want to focus time and resources on your website.
4.5.2 Knowledge Graph
The Knowledge Graph is a knowledge base used by Google to enhance its search engine's results with semantic-search information gathered from a wide variety of sources. With this new algorithm, Google is better able to understand the meaning of a sentence and return results for far more complex search queries. In the past, Google analyzed keywords individually and tried to match those individual keywords to the content of the site; but as search queries evolved, so has Google.
In theory, the Knowledge Graph has been collecting data for only a short while; however, most people believe otherwise. To this very moment, knowledge is being gathered, categorized, cross-referenced in thousands upon thousands of ways, and stored. This vast well of knowledge is available to Hummingbird. With such a Knowledge Graph, was
it not inevitable that Google would eventually find a way to utilize this information with
an algorithm that deciphers the context of all the words in a query rather than homing in
on a few key words therein? This is exactly what Hummingbird is designed to do.
Fig 4.4: Search results of Knowledge Graph
Information from Google’s Knowledge Graph also appears more often than it did
previously, which helps Google provide answers directly in their search results. Early
indicators show that medical-related queries appear to show Knowledge Graph info more often than others. Below is an example of a two-word query in which an answer from the Knowledge Graph is shown at the top of the search results.
4.6 Types of Search Techniques

4.6.1 Semantic Search
Google has implemented semantic search into its core algorithm with the recent introduction of Hummingbird. This is a phenomenal change, and one of the biggest to happen since Caffeine. Semantic search involves the study and implementation of semantics in search technology in order to find the searcher's real intent behind the search query, and to present the answers or set of results that most closely relates to what the user is searching for. It takes into account the importance of context and identifies the proper relationship between the terms used in the search query before presenting the final search results.
Semantics

Semantics involves finding the relationship between words, phrases, and symbols and the meaning they denote. It further involves the study of linguistics, syntax, etymology, communication, semiotics, etc.
Application of Semantics

Search engines use semantics to return relevant results for the query. Ambiguous queries (queries with more than one meaning) are broken down and processed via a set of pre-defined words, helping the engine grasp the real context of the query. The use of semantics applies to research-related queries, where the user is looking for answers instead of navigating to a specific web page. Google applies semantics in its Knowledge Graph.
Page Rank and Relevancy Score: Two Basic Factors for Document Ranking

Google applies two basic factors for judging the importance and relevance of any webpage before ranking it. These factors are Page Rank (which measures popularity by analysing backlinks) and relevancy (which analyses the use of keywords or search-query terms in the webpage). But this form of ranking documents does not help to find pages that may be relevant to the searcher's intent, as the popularity factor may
reduce the rankings of semantically relevant documents. This is the reason that Google
uses semantics to identify and prioritize the rankings of pages having semantically
relevant content rather than only counting the keywords and backlinks for analysing any
webpage.
Query Processing in a Semantic Environment

The figure below describes the steps involved in the processing of a query by Google.
The search query received by Google is parsed (using a parser) to identify one or more members (first and second search terms). In this process, synonyms or other replacement terms get identified. These synonyms are known as candidate synonyms, and they are further broken down and processed into qualified synonyms. Then a relationship engine is used to identify the relationship between the members based upon their respective domains. Here a domain simply means a centralized category of similar words. The first search term gets identified by the first domain, which is a semantic category holding a collection of predefined entities. Similarly, the second term gets identified by a second domain, also containing a database of similar entities. This helps Google relate the terms to the closest matching entities. (One essential point to note here is that Google will only find and relate words in the query with those already present in its database, which is the Knowledge Graph; hence some queries, although semantically similar, might not show up.)
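The parsing steps above can be sketched as a toy pipeline: split the query into terms, replace each term with a qualified synonym when one is known, and map it to a domain (a category of similar words). The synonym table and domain labels below are invented stand-ins for the Knowledge Graph data the text describes:

```python
# Toy semantic query parser: synonym substitution + domain matching.
SYNONYMS = {"purchase": ["buy"], "photo": ["picture", "image"]}
DOMAINS = {"buy": "commerce", "iphone": "device", "picture": "media"}

def parse_query(query):
    parsed = []
    for term in query.lower().split():
        # candidate synonyms; the first is treated as the qualified one
        candidates = SYNONYMS.get(term, [term])
        qualified = candidates[0]
        parsed.append((qualified, DOMAINS.get(qualified, "unknown")))
    return parsed

print(parse_query("purchase iphone"))
# -> [('buy', 'commerce'), ('iphone', 'device')]
```

A real relationship engine would then compare the domains of the members to build the semantic query; here the domain labels alone illustrate the idea.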
Fig 4.5: Query parsed in a Semantic Environment
A separate search is conducted by a query engine using domain-matching relationships (do not confuse the word domain with a domain name; here domain means category), and the final results are displayed after a semantic query is identified (the
query engine may pluralize or rephrase the query if required). Hence, in simple words, a complex query entered by the user is broken down and simplified, through several processes, into a semantic query. Thereafter, relevant web pages are identified and displayed as the final set of results.
Many search engine optimizers and internet marketers often miss the crucial part of identifying semantically related queries while doing keyword research, because the main query gets broken down into a semantic query before it is processed by Google. Hence, the chance of ranking increases when the content of the webpage is written keeping the semantic variants in mind and mentioning all the entities matching specific domains.
4.6.2 Conversational Search
Users of Google Chrome may have noticed a small microphone icon in the right hand
corner of Google’s search box (now on Google search as well). If the user clicks on that
microphone (and has configured their computer for it) they may ask aloud the question
they would have typed into the search box. The question is then displayed on the search
screen, along with the results.
If the answer to the query is in Google’s Knowledge Graph, an Information Card
is displayed with the pertinent facts listed along with a list of sites you may visit with
more information and hopefully, the answer to your question. What users of “Google
Speak” have come to realize is that the more conversational the query, the more
information is provided.
“Conversational search” is one of the biggest examples Google gave. People,
when speaking searches, may find it more useful to have a conversation. “What is the
closest place to buy the iPhone 5s to my home?" A traditional search engine might focus on finding matches for the words, finding a page that says "buy" and "iPhone 5s," for example. Hummingbird should better focus on the meaning behind the words.
It may better understand the actual location of your home, if you have shared that
with Google. It might understand that “place” means you want a brick-and-mortar store.
It might get that “iPhone 5s” is a particular type of electronic device carried by certain
stores. Knowing all these meanings may help Google go beyond just finding pages with
matching words.
In particular, Google said that Hummingbird is paying more attention to each word in a query, ensuring that the whole query (the whole sentence, conversation, or meaning) is taken into account, rather than particular words. The goal is that pages
matching the meaning do better, rather than pages matching just a few words.
Hummingbird expands the use of the Knowledge Graph, so that Google answers more complex search queries and also improves the follow-up search process. For example, if you first search "picture of Washington Monument" and then do a second search for "how tall is it?", Google will understand the context of your second query.
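The follow-up behaviour can be illustrated with a deliberately simple sketch: if the new query contains a pronoun like "it", substitute the entity from the previous query. Real coreference resolution is far more involved; this only shows the idea, and the entity string is taken from the example above:

```python
# Toy follow-up query resolver: replace pronouns with the entity
# remembered from the previous search.

def resolve_followup(previous_entity, query):
    pronouns = {"it", "its"}
    words = [previous_entity if w.lower().strip("?.,!") in pronouns else w
             for w in query.split()]
    return " ".join(words)

first_entity = "Washington Monument"   # entity from the first search
print(resolve_followup(first_entity, "how tall is it?"))
# -> how tall is Washington Monument
```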
Fig 4.6: Follow Up query process
4.6.3 Voice Search

Voice search naturally tends to mean more conversational and more natural language. Rather than searching for a one- or two-word phrase, people will be more inclined to use whole sentences, questions, and more complex queries when they speak. Hummingbird will determine the most relevant and highest-quality pages that meet the needs of the searcher.
Fig 4.7: Query Translation
Fig 4.8: Results for Voice Search Based Query
4.6.4 Mobile Search

Analysts believe that these changes are heavily influenced by Google's desire to become more mobile. As well as its mobile search engine pages, Google also owns Android, which has its own voice search capabilities. Hummingbird will have a direct impact on those who employ mobile-friendly landing pages or sites.
Over time, people are going to increasingly gravitate to voice search in
environments where that is acceptable (e.g. environments where speaking to your device
is not seen as intrusive). Voice queries are far more likely to fall into the pattern of the
natural language queries. As in all things search, Google wants to dominate mobile search
too. Google wants to process “real” speech patterns.
Fig 4.9: Mobile Search Technique
4.7 Features

4.7.1 Comparisons

The Knowledge Graph enables more comparisons between search objects (e.g. "compare butter versus olive oil", "compare Saturn versus Jupiter", etc.).
Fig 4.10: Comparison between Jupiter and Saturn
4.7.2 Geo-Location Enhancement

If someone asks "What is the best place to buy an iPhone 5s?" then Google will likely bring results near to their current location.
4.7.3 Improved Mobile Search Design and Functionality

Voice search and Android/iPhone synchronization are improved and will likely continue to improve quickly.
4.8 Advantages

• The search engine parses the full query more quickly.
• Google is better able to rank and identify the content it has indexed against relevant queries.
• The search engine can compile voice-based queries more accurately.
• Relevant results are returned and irrelevant results discarded.
• Search results are determined to best suit the user experience.
• More original pages offer more opportunities to answer search engine queries.
• A wider topic coverage area for your expertise.
• The opportunity to introduce more long-tail keywords.
• Surfing the news websites for your niche and writing creative content from current stories.
• Videos are still hot and alluring for those choosing links with answers to their questions.
• Infographics draw the curious and are a great way to answer search engine queries in a creative and attractive manner.
4.9 Disadvantages
4.9.1 Think Long-Term
The new algorithm also teaches an important lesson on the speed at which the Web evolves: it takes much more time to rank for long-tail keyword searches.
4.9.2 Dedicated Website
If your website is based on one topic, such as mobile gadgets, and you write all the latest information about them but then suddenly start to post about politics, Google will not give preference to your politics-related articles. Before launching the Hummingbird algorithm, Google categorized individual posts, but now Google categorizes the complete website, so this is the time to drop all the posts that are not related to your website's topic.
4.9.3 Using a Copyright Image/Picture On Website
Google has started voice searching and also picture matching, so if you don't want to lose your website's rank, don't use any copyrighted pictures in your articles.
4.9.4 Using Keywords Is Old-Fashioned

Under the old Google search algorithm, one had to target many keyword variants of the same idea, such as "Pitfall of Google Hummingbird Algorithm," "Drawback of Google Search New Engine Technique," and "Disadvantage of Hummingbird Search Algorithm." Now Google works from the user's query, genuine content, and the content's meaning.
CONCLUSION

• The algorithms work behind the search in order to organize all the information available on the web and to give accurate results for the search query conducted.
• Algorithms do not only organize the information; they also rank the pages in order to provide relevant results to the searcher.
• Page Rank tells how important a page is, relatively speaking, compared to other pages.
• Hummingbird pays more attention to each word in a query, ensuring that the whole query (the whole sentence, conversation, or meaning) is taken into account, rather than particular words.
• The goal is that pages matching the meaning do better, rather than pages matching just a few words.
• Google Hummingbird is designed to apply this meaning technology to billions of pages from across the web, in addition to Knowledge Graph facts, which may bring back better results.