Published:
by Wayne Smith
Keyword density is an evergreen SEO factor for determining the subject of a text document.
A paragraph written using standard grammar rules will have the subject used at least once, and often more than once. Semantically related keywords may also be used as the subject.
Normal ranges for keyword density
The normal range for how many times a keyword is used depends on the subject and the type of content; the short sketch after this list converts each density into a words-per-use figure.
- A 0.10, or 10%, density has the keyword once per sentence. This is natural in e-commerce listings with short descriptions.
- A 0.03, or 3%, density has the keyword once in a paragraph of three sentences.
- A 0.01, or 1%, density has the keyword once in ten sentences, which can be normal when other terms fill in for the topic.
- A 0.001, or 0.1%, density uses the keyword once in 1,000 words.
- A 0.0005, or 0.05%, density has the keyword once in a page of 2,000 words. Generally a reader would not consider the page relevant to the keyword; it may also be that a % sign was added to the end of the value when the intent was 5%.
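Each density in the list maps to "one use every 1/density words." A quick sketch to confirm the arithmetic, assuming sentences of roughly ten words as the list above does:

```python
# Convert each density from the list above to "one use every N words".
densities = [0.10, 0.03, 0.01, 0.001, 0.0005]

for d in densities:
    words_per_use = round(1 / d)
    print(f"{d:.2%} density = one use every {words_per_use} words")

# 10.00% density = one use every 10 words
# 3.00% density = one use every 33 words
# 1.00% density = one use every 100 words
# 0.10% density = one use every 1000 words
# 0.05% density = one use every 2000 words
```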
The Goldilocks point, not too big or too small, is 1.5%, but it depends on both the subject and the style of writing. A word like "wine" has many semantically related words, but a word like "game" has few.
Search Engine Pain Points
Keyword stuffing (overusing keywords) is a pain point for search engines and can harm the ranking of the page. Keyword-stuffing spam has an unnatural word density and does not follow normal English grammar.
SpamBrain is Google's AI-based spam-prevention system. Overuse of a word can be detected by looking at other documents that use the keyword and determining the normal frequency at which the word appears.
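SpamBrain itself is proprietary, but the comparison it implies can be sketched: measure a page's keyword density against densities observed in other documents that use the same keyword, and flag outliers. A minimal illustration; the corpus numbers and cutoff are invented for the example:

```python
from statistics import mean, stdev

# Hypothetical densities of the same keyword across other documents.
corpus = [0.012, 0.018, 0.015, 0.020, 0.011]

def looks_stuffed(page_density: float, z_cutoff: float = 3.0) -> bool:
    """Flag a page whose keyword density is far above the corpus norm."""
    return page_density > mean(corpus) + z_cutoff * stdev(corpus)

print(looks_stuffed(0.015))  # False -- within the normal range
print(looks_stuffed(0.200))  # True  -- 20% density reads as stuffing
```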
Google's BERT, announced October 25, 2019, is a natural language processing model and is included in Google's algorithms. Google has described how its new and legacy systems work together:
"We’ve developed hundreds of algorithms over the years, like our early spelling system, to help deliver relevant search results. When we develop new AI systems, our legacy algorithms and systems don’t just get shelved away. In fact, Search runs on hundreds of algorithms and machine learning models, and we’re able to improve it when our systems — new and old — can play well together."
Consider the keyword "wine." For a search engine based solely on keywords, a very natural and normal-sounding document about wine may not use the word "wine" very frequently; another may instead use a type of wine. A document retrieval system based only on keywords has trouble understanding the content if other words are used more frequently.
To work around the limitation built into keyword-only search engines, the content can be written to use the subject more frequently:
"Chardonnay: A wine that is well suited to be served with fish."
An "entity-based" search engine sees chardonnay as a type of wine. Saying, "Chardonnay is well suited to be served with fish," does not lose keyword relevancy.
Entity-based search engines provide better results because they prefer more natural-sounding language without relying exclusively on keyword density. Entity analysis does not replace keyword density; it fills in the gaps where keyword density alone does not surface the best results.
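A toy illustration of the gap being filled: a keyword-only counter sees "chardonnay" as an unrelated word, while an entity-aware counter maps it back to "wine" before counting. The entity table below is a stand-in for a real knowledge graph:

```python
# Stand-in for a knowledge graph: each entity maps to its parent type.
ENTITY_TYPES = {"chardonnay": "wine", "merlot": "wine", "riesling": "wine"}

def keyword_hits(words, keyword):
    """Keyword-only view: exact matches only."""
    return sum(1 for w in words if w == keyword)

def entity_hits(words, keyword):
    """Entity-based view: the keyword plus entities of that type."""
    return sum(1 for w in words
               if w == keyword or ENTITY_TYPES.get(w) == keyword)

words = ("chardonnay is a wine well suited to fish "
         "and merlot pairs with beef").split()
print(keyword_hits(words, "wine"))  # 1
print(entity_hits(words, "wine"))   # 3
```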
Keyword density is a quantifiable factor: determining the frequency of a word in a document is both fast and easy, and it works with text documents regardless of size. It is a factor because, like it or not, a document focused on a subject needs to use the word it is explaining.
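Measuring it takes only a few lines. A minimal sketch that counts whole-word matches (real tokenizers also handle stemming and plurals, which this skips):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of the keyword divided by the total word count."""
    words = re.findall(r"[\w']+", text.lower())
    return words.count(keyword.lower()) / len(words) if words else 0.0

doc = ("Chardonnay is a wine that is well suited to be served "
       "with fish. A dry wine pairs well with lighter dishes.")
print(f"{keyword_density(doc, 'wine'):.1%}")  # 9.5% in this short snippet
```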
Non-text documents
Not all published content is text, and keyword density was never the only or exclusive relevancy factor for search engines, nor an exclusive factor in SEO.
It is not the only factor: Excite almost became the number one search engine because it pioneered the idea that having more than one page about a search term, known today as a silo or topical cluster, was an additional relevancy factor, although keyword density was still used at the page level.
Tie Breaker SEO Factors
Regardless of which algorithm is considered most important, the sheer volume of websites means there will be ties, and additional algorithms (keyword density, off-page signals, or a different on-page algorithm) will need to break those ties. SEO for competitive terms needs to use tiebreaker factors, such as the topical clusters Excite pioneered or Google's PageRank (U.S. patent 6,285,999, filed January 9, 1998, and expired on January 9, 2019, after 20 years).
The Google link bomb "who is more evil than satan himself" no longer works today; both the source content and the destination content need to be relevant to the keyword, topic, or entity for PageRank to count the link. Judging only whether a link is spam misses the distinction: pages that are not relevant are spam pages, while pages that are on topic (measurable by keyword density, entity density, or other signals) are not.
Nominal Keyword Density in Long-Tail SEO (semi-deprecated; see below)
Adding additional keywords to a document whose main keyword is at a 1% density can become unwieldy. But with mixed content that keeps a 5% keyword density for the main keyword, the main keyword remains relevant when additional keywords are added.
A 5% + 3% split can be used. With a keyword density of 5%, the main keyword is the most frequently used keyword on the page, and with additional keywords at 3%, they stay less frequent than the main keyword. The page remains relevant to the main keyword but also shows up for searches that include the second keyword.
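A quick way to check that structure is to compute each keyword's density and confirm the main keyword stays the most frequent. A sketch using an invented snippet of page copy (the percentages run high because the sample is tiny; the ratio is what matters):

```python
import re
from collections import Counter

def keyword_densities(text, keywords):
    """Density of each keyword: occurrences / total words."""
    words = re.findall(r"[\w']+", text.lower())
    counts = Counter(words)
    return {k: counts[k] / len(words) for k in keywords}

page = ("Our wine list leads with chardonnay. Each wine is cellared on "
        "site, and the chardonnay pairs with fish. Ask about wine "
        "flights, reserve wine tastings, and our house wine club.")

d = keyword_densities(page, ["wine", "chardonnay"])
print(d)  # wine ~17%, chardonnay ~7%
assert d["wine"] > d["chardonnay"]  # main keyword stays on top
```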
User navigation aids are normally needed for long-tail SEO. Headlines, tables of contents, and images all increase the keyword density to keep the document on topic.
The long-tail keywords should appear within or near each other in some, but not all, usages on the page; in HTML they need to be in the same paragraph tag.
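That same-paragraph rule is easy to verify mechanically. A deliberately naive sketch that splits HTML on <p> tags and looks for both terms inside a single one:

```python
import re

def in_same_paragraph(html: str, term_a: str, term_b: str) -> bool:
    """True if any single <p> tag contains both terms."""
    paragraphs = re.findall(r"<p>(.*?)</p>", html, re.S | re.I)
    return any(term_a in p.lower() and term_b in p.lower()
               for p in paragraphs)

html = ("<p>Chardonnay is a dry white wine.</p>"
        "<p>Serve it chilled with fish.</p>")
print(in_same_paragraph(html, "chardonnay", "wine"))  # True
print(in_same_paragraph(html, "wine", "fish"))        # False
```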
Keyword Density vs Entity Density
Using related entities does not change or remove the keyword density of a normally written document; it is a refinement of relevancy algorithms. Related and long-tail search results normally improve when a search engine uses entities. Entity-based search engines also improve their relevancy algorithms by using entities as co-factors.
Entity density is not how often a related entity is used but how many relevant entities are used in the document. Normal English usage would still have the main keyword in a paragraph but also a related entity in the same paragraph, with additional paragraphs introducing different entities.
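In code terms, entity density is a count of distinct relevant entities present, not a frequency. A sketch with a made-up list of entities related to "wine":

```python
import re

# Hypothetical entities related to the main keyword "wine".
RELATED_ENTITIES = {"chardonnay", "merlot", "vineyard", "sommelier", "cork"}

def entity_density(text: str) -> int:
    """Distinct relevant entities used, regardless of repeats."""
    words = set(re.findall(r"[\w']+", text.lower()))
    return len(RELATED_ENTITIES & words)

doc = ("Wine from the vineyard is bottled under cork. Our sommelier "
       "recommends a chardonnay, and the vineyard tour ends with wine.")
print(entity_density(doc))  # 4 -- vineyard counts once despite two uses
```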
Entities as co-factors to keyword density
In the same way that keyword density is a co-factor to the page title and headline tags for keyword relevancy, relevant entities are co-factors or alternative forms of the keyword.
Entity-Based Long Tail
A related entity can, if used more than once, serve as a long-tail keyword alongside the main keyword in the same paragraph. The density of the main keyword remains between 1% and 3%, which is lower than in "Nominal Keyword Density in Long-Tail SEO" above.
Natural language still needs to be maintained. If readers are not familiar with the alternative entity, they may have trouble following along with the document. Documents written for a more sophisticated audience may have a lower keyword density, while documents written for a more general audience need to maintain a higher density.
Relevant Entity
For the sake of this article: a relevant entity is a common entity used with the main keyword. The closeness of the entity is how many times the two entities are searched for in the same query.
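Under that working definition, closeness can be approximated from a query log by counting how often the two terms appear in the same query. The log below is invented for the example:

```python
# Hypothetical search-query log.
queries = [
    "best chardonnay wine for fish",
    "chardonnay wine pairing",
    "dry white wine brands",
    "chardonnay calories",
]

def closeness(term_a: str, term_b: str) -> int:
    """Number of queries mentioning both terms."""
    return sum(1 for q in queries
               if term_a in q.split() and term_b in q.split())

print(closeness("chardonnay", "wine"))  # 2
```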
... Solution Smith tests SEO tactics so you don't have to ...
Full-stack SEO has a lot of moving parts, and it takes time and effort to understand the nuances involved. Solution Smith takes care of the overhead costs associated with digital marketing resulting in real savings in terms of time and effort.