by Wayne Smith
Navigational Intent Algorithm
The indented navigation links, or navigation panel, displayed under a site are an example of Google learning from user interaction in search which pages are important navigation pages that can serve navigational search intent.
Navigational intent is learned by looking at how users interact with the search results. It would not be scientific to say the interaction or the selected result are the only signals; off-page linking provides a signal as well, along with internal linking and site intent. In an ideal Google world, site intent and page intent match user intent; Google search then matches the user with the page that best matches the shared intentions. We get insight into the algorithm when we look at the indented links, or the panel of links for a site, presented under the brand when somebody searches for the brand without an additional keyword.
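The idea can be sketched in code. This is a hypothetical illustration, not Google's actual method: it infers which pages of a site act as navigation pages from clicks recorded on brand-only searches. The function name, threshold, and URLs are all invented for the example.

```python
from collections import Counter

def sitelink_candidates(brand_clicks, max_links=6, min_share=0.05):
    """Pages clicked often enough on brand-only searches to be plausible
    navigation-panel links, ordered by click share (most clicked first).

    brand_clicks: list of URLs clicked when users searched the brand name
    with no additional keyword. Thresholds here are arbitrary.
    """
    counts = Counter(brand_clicks)
    total = sum(counts.values())
    ranked = counts.most_common(max_links)
    # Keep only pages that earn a meaningful share of brand clicks.
    return [url for url, n in ranked if n / total >= min_share]

# Invented click log: two pages dominate brand searches, one is noise.
clicks = ["/login"] * 40 + ["/pricing"] * 25 + ["/blog/old-post"] * 2
print(sitelink_candidates(clicks))  # ['/login', '/pricing']
```

The rarely clicked blog post falls below the share threshold and is excluded, mirroring how a navigation panel shows only the pages users actually navigate to.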
User Intent Algorithms
Google testimony before Congress included: Google learns user intent and feeds it back into the algorithm for future searches.
"Each Searcher Influences Future Searches"
Information collected by Google is used to persuade the algorithms to improve search.
When one enters a question into search, that information is sent to Google (or whichever search engine is being used). When a follow-up search is made from the results page, the search site is aware the search came from the results page, either by session identification, the referral page, or explicitly by having the previous search be part of the data that is sent. The session data can be used to induce Google in the desired direction for additional searches.
A search from the results page, which adds words that convey user intent, can then be fed back into the algorithm to prevail on it to adjust the first results in the direction commonly asked for by the refined requests.
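A minimal sketch of that feedback loop, under stated assumptions: the class, its methods, and the scoring rule are invented here purely to illustrate how refinement terms added in follow-up searches could be aggregated and used to re-rank the original query's results.

```python
from collections import defaultdict, Counter

class RefinementFeedback:
    """Hypothetical aggregator of query refinements (illustration only)."""

    def __init__(self):
        # original query -> counts of words users added when refining it
        self.added_terms = defaultdict(Counter)

    def record(self, original_query, refined_query):
        """Record which words a follow-up search added to the original."""
        added = set(refined_query.split()) - set(original_query.split())
        self.added_terms[original_query].update(added)

    def boost(self, original_query, doc_terms):
        """Score a document by how many frequently added refinement terms
        it contains, weighted by how often each term was added."""
        counts = self.added_terms[original_query]
        return sum(counts[t] for t in doc_terms if t in counts)

fb = RefinementFeedback()
fb.record("pizza", "pizza recipe")
fb.record("pizza", "pizza dough recipe")
# A page containing "recipe" now outscores one that does not:
print(fb.boost("pizza", {"recipe", "oven"}))  # 2
print(fb.boost("pizza", {"restaurant"}))      # 0
```

As more sessions add "recipe" to "pizza", recipe pages drift toward the top of the original one-word query, which is the adjustment described above.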
Natural Language Processing and Search Entities Help Convey User Intent
A query for "how to make pizza" could fairly be interpreted as asking for a recipe. With Natural Language Processing using entities: pizza is an entity under the entity of food, and related to food is the word recipe, which is closely related to the concept (entity) of "how to make." A search for "how to make pizza" pulls up pizza recipes even when the term "how to make" does not exist on the page.
A search from the results page, which adds another entity, can be fed back and used to adjust the algorithm to give greater importance to related entities in future searches.
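The pizza example can be sketched with a toy entity map. This is an illustration only: the mapping of phrases to entity identifiers is invented, and a real system would use a large learned knowledge graph rather than a hand-written dictionary.

```python
# Invented phrase-to-entity map: "how to make" and "recipe" resolve to
# the same concept, so a query containing one matches pages containing
# the other, as described in the text above.
ENTITY_OF = {
    "how to make": "concept/instructions",
    "recipe": "concept/instructions",
    "pizza": "food/pizza",
}

def to_entities(text):
    """Greedily map known phrases in the text to entity identifiers."""
    found, remaining = set(), text.lower()
    # Match longer phrases first so "how to make" wins over sub-words.
    for phrase, entity in sorted(ENTITY_OF.items(), key=lambda kv: -len(kv[0])):
        if phrase in remaining:
            found.add(entity)
            remaining = remaining.replace(phrase, " ")
    return found

def matches(query, page_text):
    """A page matches when it covers every entity in the query."""
    return to_entities(query) <= to_entities(page_text)

# A recipe page matches "how to make pizza" without containing the
# literal words "how to make"; a history page about pizza does not.
print(matches("how to make pizza", "Easy pizza recipe with fresh dough"))  # True
print(matches("how to make pizza", "History of pizza in Naples"))          # False
```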
Sites that are clicked on are also recorded, and both the number of impressions and the number of clicks are reported to the website through the webmaster tools of either Bing or Google. When a site is the last site clicked in a search session, it is reasonable to assume that the site provided the answer the searcher was looking for, and that data can be fed back into the algorithms.
User-specific preferences can be extracted and then used in later sessions to provide the results that match the searcher's desired intent. A popular vote can also be extracted on which intent and entities should be associated with the initial term for future search results, along with an example site that best answers the question for that search. The example site can then be analyzed to determine which entities should be included, or were missing, from the sites that did not satisfy the searcher.
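The last-click assumption above can be written down directly. This is a sketch under that stated assumption; the session event format is hypothetical.

```python
def satisfying_site(session_events):
    """Return the URL of the last click in a search session, or None.

    Under the assumption in the text: the final click likely satisfied
    the searcher's intent. Each event is an (action, value) tuple,
    e.g. ("query", "pizza") or ("click", "example.com/page").
    """
    clicks = [value for action, value in session_events if action == "click"]
    return clicks[-1] if clicks else None

# Invented session: the user refines the query, then stops searching
# after the second click, so that site is credited with the answer.
session = [
    ("query", "how to make pizza"),
    ("click", "siteA.example/pizza"),
    ("query", "how to make pizza dough"),
    ("click", "siteB.example/dough-recipe"),
]
print(satisfying_site(session))  # siteB.example/dough-recipe
```

Aggregated over many sessions, these last-click "votes" are the popular-vote signal described above.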
Search Volatility - for lack of user consensus on entities
Is it undisputed what a searcher intends for the results when they enter a term into search? Well, no. Any attorney can tell you there is not enough evidence to show the intent of any specific user.
Words have a degree of ambiguity, have multiple meanings, and are used differently by different professions and cultures. Words even take on different meanings based on the season. Context helps in understanding intent, and that context can be other searches in the session or in the search history. I'm not talking about wants and desires but about what is possible, and the reality that language can be a bit ambiguous.
Query intent can work for terms where there is an understanding of a popular meaning. But there are search entities, and collections of search entities, where there is no user consensus. The algorithm's understanding is pulled back and forth as if in a tug-of-war. Not to mention there are some bad influences: people who search for things that should not be found at the top of the search.
Systems that receive feedback from their output are always subject to oscillations; to create an oscillator, you create a feedback loop, and a normal level of oscillation is to be expected. The volatility is not harmful to the search engine. Rather, it is likely to be interpreted as an update, which technically is true. But content creators and websites that attempt to create content by matching the user intent seen in search may find user intent is a moving target, and experience traffic volatility.
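A toy simulation makes the tug-of-war concrete. Everything here is invented for illustration: two candidate interpretations of one query, where the users who wanted the losing interpretation push back harder the more the current leader dominates, so the top result keeps flipping instead of settling.

```python
def simulate(steps=10, gain=0.6):
    """Simulate a ranking feedback loop over two competing intents.

    The user population is split: whichever intent is ranked first,
    the other half of users over-correct it with their clicks (the
    'gain' of the loop). Returns which intent led at each step.
    """
    score = {"A": 0.51, "B": 0.49}  # initial ranking scores
    top_history = []
    for _ in range(steps):
        top = max(score, key=score.get)
        other = "B" if top == "A" else "A"
        # Dissatisfied users boost the other intent in proportion to
        # how dominant the leader is, while the leader's clicks decay.
        score[other] += gain * score[top]
        score[top] *= (1 - gain)
        total = score["A"] + score["B"]
        score = {k: v / total for k, v in score.items()}
        top_history.append(top)
    return top_history

print(simulate())  # the leader alternates: A, B, A, B, ...
```

With no user consensus, the loop never converges, which is one way the "normal level of oscillation" described above shows up as ranking volatility.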
Improvements with query intent possible by identifying the audience
Looking at the trend, content creators can get ahead of the advancement. Some entities have an unambiguous audience; the entity for "return on investment" can fairly be interpreted as serving a business audience. Hence, some of these tug-of-war searches are resolvable by a closer look at the audience.
There is a schema type for audience, but it does not go into depth regarding types of audiences. The content will need to include the related terms that identify its intended audience.
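For concreteness, here is what that markup looks like. "Audience", "BusinessAudience", and "audienceType" are real schema.org terms; the article details are invented. Note how shallow the vocabulary is: audienceType is free text, which is why the related terms in the content itself still matter.

```python
import json

# Hypothetical article using schema.org audience markup (JSON-LD).
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Understanding Return on Investment",
    "audience": {
        "@type": "BusinessAudience",  # a schema.org subtype of Audience
        "audienceType": "small business owners",  # free-text field
    },
}
print(json.dumps(article, indent=2))
```

The markup declares the business audience explicitly, but since audienceType carries no controlled vocabulary, it supplements rather than replaces on-page audience signals.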
Ultimately the trend will improve the ability of search to match users with the content they want. The current level of search volatility suggests we are not there yet.