All the parts of a search engine are important, but the search algorithm is the cog that makes everything work. It might be more accurate to say that the search algorithm is the foundation on which everything else is built: how a search engine works, and how users ultimately discover data, is determined by its search algorithm.
In very general terms, a search algorithm is a problem-solving procedure: it takes a problem, evaluates a number of possible answers, and returns a solution. A search algorithm for a search engine takes the problem (the word or phrase being searched for), sifts through a database of cataloged keywords and the URLs with which those words are associated, and then returns pages that contain the word or phrase, either in the body of the page or in a URL that points to the page. But it goes one better than that: the algorithm returns those results ranked by the perceived quality of each page, which is expressed as a quality score. Exactly how this neat little trick is accomplished varies from one algorithm to the next. There are several classifications
of search algorithms, and each search engine uses algorithms that are slightly different. That’s why a search for one word or phrase will yield different results from different search engines. Search algorithms are generally divided into three broad categories: on-page algorithms, whole-site algorithms, and off-site algorithms. Each type of algorithm looks at different elements of a web page, yet all three types are generally part of a much larger algorithm.
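To make the lookup-and-rank idea concrete, here is a minimal sketch of a toy search routine. The index structure, the example URLs, and the quality scores are all invented for illustration; real engines use far more elaborate data structures and many more ranking signals.

```python
# Inverted index: each cataloged keyword maps to the URLs it appears on
# (hypothetical data for illustration only).
index = {
    "beading": ["example.com/beading-basics", "example.com/wire-jewelry"],
    "wire":    ["example.com/wire-jewelry"],
}

# A perceived-quality score for each page (assumed values).
quality = {
    "example.com/beading-basics": 0.9,
    "example.com/wire-jewelry":   0.7,
}

def search(query: str) -> list[str]:
    """Return cataloged pages matching the query, best quality score first."""
    matches = set()
    for word in query.lower().split():
        matches.update(index.get(word, []))
    return sorted(matches, key=lambda url: quality[url], reverse=True)

print(search("beading wire"))
# ['example.com/beading-basics', 'example.com/wire-jewelry']
```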
On-page algorithms
Algorithms that measure on-page factors look at the elements of a page that would lead a user to think the page is worth browsing. This includes how keywords are used in the content, as well as how the other words on the page relate to them. For any given topic, certain phrases are common, so if your web site is about beading, an on-page algorithm will determine that from the number of times the term "beading" is used, as well as from the number of related phrases and words that also appear on the page (e.g., wire, patterns, jump rings, string or stringing).
These word patterns are an indicator that the algorithm's conclusion, that beading is the topic of the page, is in fact correct. The alternative, a page with no related patterns of words, suggests that the keywords were scattered across it at random, just for the sake of having them there.
The algorithm is also likely to look at the proximity of related words to one another. Proximity is just another element of the pattern that validates the algorithmic result, but these elements also contribute to the page's quality score.
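As a rough illustration of this kind of pattern check, here is a minimal sketch that combines keyword counts, related-word counts, and a simple proximity rule into one score. The weights, the word lists, and the five-word proximity window are assumptions made up for the example, not the formula any real engine uses.

```python
import re

def on_page_score(text: str, keyword: str, related: set[str]) -> float:
    """Toy on-page score: keyword frequency + related-word patterns + proximity."""
    words = re.findall(r"[a-z]+", text.lower())
    keyword_hits = words.count(keyword)
    related_hits = sum(1 for w in words if w in related)

    # Proximity: related words appearing within 5 words of the keyword
    # reinforce confidence that the keyword reflects the page's real topic.
    key_positions = [i for i, w in enumerate(words) if w == keyword]
    near = sum(
        1
        for i, w in enumerate(words)
        if w in related and any(abs(i - k) <= 5 for k in key_positions)
    )

    # Arbitrary weighting for illustration only.
    return keyword_hits + 0.5 * related_hits + 0.5 * near

page = "Beading basics: choose wire, pick a pattern, and add jump rings before stringing."
print(on_page_score(page, "beading", {"wire", "pattern", "patterns", "rings", "stringing"}))
```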

The on-page algorithm also looks at some elements that human visitors can't see. The back side of a web page contains special content, called meta tags, designed specifically for web crawlers. When a crawler examines your web site, it reads these tags as definitions of what you intend your site to be about, and then weighs that against the other elements of on-page optimization, as well as against whole-site and off-site optimization.
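Here is a small sketch of how a crawler might pull meta tags out of a page's markup. The HTML sample and the particular tag names are illustrative assumptions; actual crawlers read many more tags and weigh them against the other signals described above.

```python
from html.parser import HTMLParser

class MetaTagReader(HTMLParser):
    """Collect the name/content pairs from <meta> tags in a page."""

    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.meta[attrs["name"]] = attrs["content"]

page = """
<html><head>
  <meta name="description" content="Beading patterns and wire jewelry tutorials.">
  <meta name="keywords" content="beading, wire, jump rings, stringing">
</head><body>...</body></html>
"""

reader = MetaTagReader()
reader.feed(page)
print(reader.meta)
# {'description': 'Beading patterns and wire jewelry tutorials.',
#  'keywords': 'beading, wire, jump rings, stringing'}
```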