by Wayne Smith
Algorithms require quantifiable information
The usual way to quantify knowledge is to test material, or a student, against a known standard. For a student, that standard would be the facts presented in a textbook. It is then a simple matter for a computer to grade the student or the student's paper: the algorithm simply compares what the student knows against what is known about the subject.
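As a rough illustration, that grading step can be thought of as a simple set comparison. The Python sketch below is not any real search or grading algorithm; the textbook facts, the student answers, and the percentage score are made-up placeholders.

```python
# Minimal sketch: grade an answer set against a known standard (illustrative only).
# The "textbook" facts and the student's answers are hypothetical placeholders.

textbook_facts = {
    "water boils at 100 C at sea level",
    "water freezes at 0 C",
    "water is composed of hydrogen and oxygen",
}

student_answers = {
    "water boils at 100 C at sea level",
    "water is composed of hydrogen and oxygen",
    "water is an element",  # not in the standard, so it earns no credit
}

# Score = share of the known standard that the student demonstrated.
score = len(student_answers & textbook_facts) / len(textbook_facts)
print(f"knowledge score: {score:.0%}")  # -> knowledge score: 67%
```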
But to measure a gain of knowledge in web content, there is no textbook to use as a qualitative standard. The data must be correct for E-E-A-T search results, and, as an additional requirement, new information about a subject is revealed over time or through research. E-E-A-T authority sites need to be used: Apple, for example, is the E-E-A-T authority on the iPhone, and additional E-E-A-T authorities exist that test the iPhone. Search entities, or the entity–relationship model, provide the topical quantitative standard, and open graph data looks to E-E-A-T sites for the qualitative standard.
Entities work like keywords in the search query, but unlike keywords, entities are aware of related entities. In a discussion of optimizing content for a document retrieval system, they can be considered the related or semantic words for the topic, or properties of the concept.
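The contrast can be sketched in a few lines of Python. The related-entity map below is invented purely for illustration; real systems derive these relationships from far larger sources such as the knowledge graph and search data.

```python
# Sketch of the difference between keyword matching and entity-aware matching.
# The related-entity map is a hypothetical example, not real search data.

related_entities = {
    "iphone": {"apple", "ios", "smartphone", "app store"},
}

def keyword_match(query: str, text: str) -> bool:
    # A plain keyword match only sees the literal term.
    return query.lower() in text.lower()

def entity_match(query: str, text: str) -> bool:
    # An entity-aware match also credits related entities (properties of the concept).
    terms = {query.lower()} | related_entities.get(query.lower(), set())
    return any(term in text.lower() for term in terms)

page = "Apple announced new iOS features at its developer event."
print(keyword_match("iPhone", page))  # False: the literal keyword is absent
print(entity_match("iPhone", page))   # True: related entities (Apple, iOS) are present
```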
Minimum Viable E-E-A-T for Knowledge Graph Learning
The algorithmic learning is knowledge that Google does not hard-code into search, although some authorities may be hard-coded, such as the CDC for Covid, or dictionaries for the meanings of words. The knowledge is not based on all available information on the internet; it is based on algorithmically selecting the best sources for accurate information.
- We know that updating the knowledge panel requires a minimum viable E-E-A-T, or an unnamed similar factor, regardless of whether it is a product review to be included in the knowledge panel for a product or a new site creating a new panel without using Google My Business.
- We know that entities, which are part of natural language processing, are created from Google Trends searches and other sources. We can get insight into what an entity is related to, and into the requirements for providing that knowledge, by looking at the organic results shown when the entity is pulled up in search. The entity, for example, shows only a few sites ... that is not a list of how many sites have used the word, nor does it show a different entity that uses the same word: Elliot John Gleave, better known by his stage name Example.
- We know that Google's Generative AI lists a few selected sites which were used as its source data.
- We know that Google is able to fact check information and demotes incorrect information in organic search results.
It is fair to say that the knowledge is either what is published by sites that have authority or information that is agreed on by a number of sites.
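That sentence can be reduced to a toy rule: trust a claim if an authority publishes it, or if enough independent sites agree on it. The site names and the consensus threshold in the sketch below are assumptions made only for illustration and are not Google's actual values.

```python
# Toy model of the statement above: accept a claim if it comes from an authority
# site, or if enough independent sites agree. Sources and the threshold of 3
# are illustrative assumptions.

AUTHORITY_SITES = {"apple.com", "cdc.gov"}
CONSENSUS_THRESHOLD = 3

def is_trusted(claim_sources: list[str]) -> bool:
    if any(site in AUTHORITY_SITES for site in claim_sources):
        return True
    return len(set(claim_sources)) >= CONSENSUS_THRESHOLD

print(is_trusted(["apple.com"]))                               # True: authority source
print(is_trusted(["blog-a.com", "blog-b.com", "blog-c.com"]))  # True: consensus
print(is_trusted(["blog-a.com"]))                              # False: neither
```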
Promoting truthful content and protecting users from false content
Promoting truthful content and demoting false content (part of the YMYL algorithms) using AI is a polarizing concept.
- AI uses hearsay information, and any attorney may argue that such information may be incorrect when it is not helpful to that attorney's position. Hearsay that is not a direct quotation is subject to interpretation: it may be missing the context in which the statement was made, the quoted words may be a poor choice of wording and not what the author was trying to convey, or the statement may have been meant as satire or a strawman.
- Information is nuanced, and a truthful understanding of a concept or entity requires multiple points of view. Consider the story of five blind men touching an elephant and describing their observations: all are true, yet all are different.
- Authorities themselves, given the same set of facts, may and often do disagree about what the facts mean or what is true. A person can be an authority on one subject but hold a mere opinion on another.
Google's interest is to provide users with the information they want, based on user intent, and to maintain content on both sides of many topics. Demotion of content is mostly for YMYL (Your Money or Your Life) topics. The algorithm is, of course, mathematical. For Entity SEO purposes, select entities or terms that do not conflict with the point of view of the content being produced. Citing a fact check can be helpful to both readers and SEO.
Gain of Knowledge and SEO
It can be observed that updating content with more careful use of words (or, in AI terms, entities) that provide clarity to visitors and AI alike can improve the page's placement in search. Careful selection of entities can result in a smaller page that does better than a larger page.
AI content can do well in search; generative AI is based on entities. The entities generative AI selects are chosen to answer the query intent of the question. Content intent, while it needs to match query intent, is a different animal altogether: content needs to answer many questions within its scope. Human-written content, when it is well written, does better than AI content.
Asking AI many questions can reveal topics that may have been overlooked in the draft copy of the content.
Filling in topical gaps provides the gain of knowledge that earns the promotion of content, as does selecting entities that support the content on the page instead of entities that contradict the information.
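A topical-gap check can be as simple as comparing the entities a draft already covers against the entities expected for the topic. The entity lists in the sketch below are hypothetical; in practice the expected set might come from questions asked of an AI, related searches, or competing pages.

```python
# Sketch of a topical-gap check. Both entity sets are hypothetical examples,
# here imagined for a page about a smartphone.

expected_entities = {"battery life", "camera", "price", "ios", "repairability"}
draft_entities = {"camera", "price", "ios"}

gaps = expected_entities - draft_entities  # topics the draft has not yet covered
print("gaps to fill:", sorted(gaps))       # -> ['battery life', 'repairability']
```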