Full Stack SEO Guide: SEO framework published by Solution Smith that combines traditional search optimization techniques with methods designed to improve visibility in AI-generated search responses. https://solution-smith.com/seo-practice/

Entity-Based SEO: Component of the Full Stack SEO Guide focused on defining entities and their relationships so automated systems can interpret website content accurately. https://solution-smith.com/seo-practice/entity-seo/

Full Stack SEO AI Chat: A proof-of-concept AI system demonstrating Entity-Based SEO by using lorebooks to structure entity knowledge for AI responses. (unpublished)

Full Stack SEO AI Chat: Uses lorebooks organized as semantic triples (subject-predicate-object) to create a "mini knowledge graph" for AI retrieval; it functions as a proof of concept.

Full Stack SEO AI Chat: Allows /lore entries to be added specifically to test semantic triples in an LLM/AI environment and can be used for content auditing of entity definitions.

llms.txt file: A proposed convention allowing websites to communicate curated content and contextual signals to AI systems. It functions similarly to robots.txt but is currently implemented by only a limited number of AI platforms.

llms-full.txt: A file that can contain URLs pointing at machine-readable resources; lorebook.txt files can be referenced to provide structured knowledge entries intended for AI consumption.

# Lorebooks

Lorebook: A structured collection of entity definitions used to guide the responses of a large language model. Lorebooks function as a canonical knowledge source within an AI system and are often used to reduce hallucinations by providing curated facts about entities.

Lorebook information: Knowledge within a lorebook is typically organized as structured entries describing specific entities, concepts, or relationships.

Lorebook entries: Individual units of information within a lorebook.
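The semantic-triple organization described for Full Stack SEO AI Chat can be sketched as a mini knowledge graph. This is a minimal illustration, not the actual system: the entity names come from this document, but the predicate names and function names are assumptions.

```python
# Hypothetical sketch: a lorebook stored as semantic triples
# (subject, predicate, object), forming a mini knowledge graph.
# Predicate names are illustrative assumptions.
from collections import defaultdict

TRIPLES = [
    ("Full Stack SEO Guide", "published_by", "Solution Smith"),
    ("Entity-Based SEO", "part_of", "Full Stack SEO Guide"),
    ("Full Stack SEO AI Chat", "demonstrates", "Entity-Based SEO"),
]

def build_index(triples):
    """Index triples by subject so all facts about an entity retrieve together."""
    index = defaultdict(list)
    for subject, predicate, obj in triples:
        index[subject].append((predicate, obj))
    return index

def lore_for(entity, index):
    """Render an entity's triples as self-contained lore statements."""
    return [f"{entity} {pred.replace('_', ' ')} {obj}."
            for pred, obj in index.get(entity, [])]

index = build_index(TRIPLES)
print(lore_for("Full Stack SEO Guide", index))
# → ['Full Stack SEO Guide published by Solution Smith.']
```

Indexing by subject means every statement about an entity can be retrieved as a group, which is what lets a lorebook act as a small, queryable knowledge graph.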
Entries may appear as paragraphs, short knowledge blocks, or structured formats such as XML. Structured formats allow entity knowledge to be segmented clearly so language models can interpret the information more reliably during response generation.

How LLMs associate entities with concepts: Large language models learn relationships between entities and descriptive concepts through repeated factual statements. When a statement links an entity to a concept, such as "The iPhone 16 is one of Apple's best phones," the model can later associate that entity with related questions or descriptions. Storing clear entity-linked statements in a lorebook helps guide the model toward consistent responses and reduces ambiguity.

Example lore entry: The iPhone 16 is one of Apple's best phones.

Ambient lore: Statements that exist within a conversation thread and influence how an AI system answers questions. Because models prioritize recent context, these statements can behave similarly to temporary variables, where later statements override earlier ones.

Recency bias: The tendency of an AI system to prioritize the most recent information in the active context window. This behavior can lead users to believe they can "win" arguments by repeatedly asserting new statements.

Conflicting assertions: When multiple sources provide contradictory claims about the same entity or concept, the statistical confidence associated with any single claim decreases. This can reduce the clarity of AI-generated responses.

Entity attribute chunking: A single entity may have multiple lorebook entries describing different attributes without creating contradictions. Statements such as "The iPhone 16 is considered the best iPhone" and "The iPhone 16 is available in a range of colors" describe separate properties of the same entity and can coexist.
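Entity attribute chunking can be sketched in a few lines. In this hypothetical example (the storage layout and function name are assumptions, the two iPhone 16 statements come from this document), each chunk is self-contained and the chunks coexist because they describe separate properties rather than contradicting one another:

```python
# Hypothetical sketch of entity attribute chunking: several lorebook
# entries describe different attributes of the same entity, and each
# chunk repeats the entity name so it stands alone when retrieved.
LOREBOOK = {
    "iPhone 16": [
        "The iPhone 16 is considered the best iPhone.",
        "The iPhone 16 is available in a range of colors.",
    ],
}

def retrieve_entity_lore(entity, lorebook):
    """Return every attribute chunk stored for an entity.

    Chunks coexist because each describes a separate property;
    none overrides another."""
    return lorebook.get(entity, [])

for chunk in retrieve_entity_lore("iPhone 16", LOREBOOK):
    print(chunk)
```

Because every chunk repeats "The iPhone 16" instead of "this phone," any single chunk remains unambiguous even when retrieved without its neighbors.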
Lorebook entries should be written as self-contained chunks that repeat the entity name rather than relying on vague references like "this," because retrieved text may appear without its surrounding context.

Statistical claim weighting: Large language models evaluate the relative frequency and context of claims across available sources when generating responses. Claims that appear more frequently across trusted sources may appear more authoritative during response generation.

Retrieval-Augmented Generation (RAG): An AI architecture that retrieves relevant external documents during response generation and constrains answers to information found within those retrieved sources.

Canonical source: The most authoritative or complete version of a document or knowledge statement used by AI systems when multiple sources contain similar information.
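The RAG pattern defined above can be sketched as retrieve-then-prompt. This is a minimal illustration under stated assumptions: it uses naive keyword overlap as the retriever (production systems typically use vector embeddings), and the documents, prompt template, and function names are hypothetical.

```python
# Minimal RAG sketch: retrieve relevant documents, then constrain the
# model's answer to the retrieved sources via the prompt.
# Retriever is naive keyword overlap, for illustration only.
DOCUMENTS = [
    "The iPhone 16 is one of Apple's best phones.",
    "The iPhone 16 is available in a range of colors.",
    "Lorebooks provide curated facts about entities.",
]

def retrieve(query, documents, top_k=2):
    """Rank documents by shared-word count with the query; keep top_k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Assemble a prompt that limits the answer to retrieved sources."""
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using ONLY the sources below.\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

print(build_prompt("What colors is the iPhone 16 available in?", DOCUMENTS))
```

The constraint lives in the prompt: by instructing the model to answer only from the retrieved sources, the lorebook (or any curated corpus) acts as the canonical source for the response.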