{"id":30358,"date":"2026-01-21T17:24:18","date_gmt":"2026-01-21T16:24:18","guid":{"rendered":"https:\/\/wordlift.io\/blog\/en\/?p=30358"},"modified":"2026-01-22T18:09:08","modified_gmt":"2026-01-22T17:09:08","slug":"recursive-language-models-on-kg","status":"publish","type":"post","link":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/","title":{"rendered":"RLM-on-KG: Recursive Language Models and the Future of SEO"},"content":{"rendered":"\n<figure class=\"wp-block-audio\"><audio controls src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Agentic_SEO_Cures_Context_Rot.mp3\"><\/audio><figcaption class=\"wp-element-caption\">NotebookLM Summary <\/figcaption><\/figure>\n\n\n\n<p>We are entering a new phase of the web, what I call the <a href=\"https:\/\/wordlift.io\/blog\/en\/the-reasoning-web\/\"><em><strong>Reasoning Web<\/strong><\/em>.<\/a> And with it comes a turning point for search as we\u2019ve known it. AI systems are no longer passive readers of documents; they\u2019re becoming agents that explore information spaces, navigate relationships, and build understanding through structured reasoning.<\/p>\n\n\n\n<p>That distinction matters.<\/p>\n\n\n\n<p>Retrieval\u2011augmented generation (RAG) grounded LLM outputs by injecting external sources into prompts. But most RAG systems still behave like a <em>single lookup<\/em>: one query, top chunks, one synthesis. This works when the answer lives in a single passage. It fails when the truth emerges only from the connections between pages, entities, and concepts.<\/p>\n\n\n\n<p>And enterprise questions rarely fit in one page.<\/p>\n\n\n\n<p>When Zurich Insurance asks about coverage options that span multiple policy types, the answer isn\u2019t \u201cin an article.\u201d It\u2019s in the <em>structure<\/em>, policy definitions, exclusions, jurisdictions, endorsements, and in how they relate. 
When an automotive transport customer wants to know how routes affect pricing, the answer emerges from the interplay of locations, constraints, seasonal demand, carrier availability, and service tiers.<\/p>\n\n\n\n<p>In other words: <strong>the evidence lives in the graph, not just the text<\/strong>.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>This article introduces <strong>RLM\u2011on\u2011KG<\/strong>, an adaptation of recursive language model inference where the \u201cenvironment\u201d is a WordLift Knowledge Graph accessed via GraphQL. Recursive Language Models (RLMs), introduced in recent research by Alex Zhang, Tim Kraska, and Omar Khattab at MIT (<a href=\"https:\/\/arxiv.org\/abs\/2512.24601?utm_source=chatgpt.com\">arXiv<\/a>), propose a simple but powerful idea: instead of feeding massive amounts of context into a language model and hoping it stays coherent, let the model treat the prompt as an external environment it can inspect, decompose, and navigate recursively.<\/p>\n\n\n\n<p>Using a 150\u2011question benchmark built from a WordLift blog knowledge graph (Articles + FAQPages), we compare three answering modes (Vanilla, Simple RAG, and RLM\u2011on\u2011KG) and surface two key findings:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Multi\u2011hop KG traversal improves evidence quality and citation behavior.<\/strong><\/li>\n\n\n\n<li><strong>It also reveals a failure mode (overreach) that becomes visible only when we add an explicit grounding judge.<\/strong><\/li>\n<\/ol>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">1. Introduction<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">1.1 Motivation: SEO in the era of \u201cinfinite context\u201d<\/h3>\n\n\n\n<p>Modern SEO is increasingly about semantic coverage: entities, relationships, structured data, and satisfying user intent across an interconnected content network. 
For an LLM agent, \u201cinfinite context\u201d is not a massive prompt; it\u2019s a large, evolving corpus of linked knowledge. The central challenge becomes:<\/p>\n\n\n\n<p><strong>How do we turn unlimited context into navigable memory with provenance?<\/strong><\/p>\n\n\n\n<p>In the Reasoning Web, SEO shifts from optimizing individual pages to optimizing <strong>signals for reasoning<\/strong>.<\/p>\n\n\n\n<p>We\u2019re entering a world where AI systems:<\/p>\n\n\n\n<p>\u2022 explore content across multiple steps<br>\u2022 follow entity relationships<br>\u2022 verify claims across sources<br>\u2022 build answers by navigating structure, not by reading a single page<\/p>\n\n\n\n<p>The guiding question becomes:<\/p>\n\n\n\n<p><strong>Is your content explorable by reasoning systems?<\/strong><\/p>\n\n\n\n<p>If your site is well\u2011structured, with semantic markup and explicit entity relationships, AI agents can traverse it deeply. They can connect ideas across pages, disambiguate meaning, and construct accurate answers.<\/p>\n\n\n\n<p>If your site is mostly unstructured text, the agent sees isolated fragments. It can retrieve, but it cannot <em>navigate<\/em>.<\/p>\n\n\n\n<p>This is the shift toward what I call <a href=\"https:\/\/wordlift.io\/blog\/en\/what-is-seo-3-0\/\"><strong>SEO<\/strong> <strong>3.0<\/strong><\/a>: optimizing your information architecture so AI agents can explore it intelligently, not just retrieve it superficially.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1.2 From RLMs to RLM-on-KG<\/h3>\n\n\n\n<p>The RLM framework, introduced by Zhang, Kraska, and Khattab at MIT in December 2025, flips the script by reframing the memory problem (I wrote about the paper <a href=\"https:\/\/www.linkedin.com\/pulse\/reasoning-over-memory-retrieval-andrea-volpini-bkjff\">here<\/a> on LinkedIn). 
<\/p>\n\n\n\n<p>Instead of shoving more information into the context window, <strong>let the model explore its environment recursively<\/strong>, querying, examining, and decomposing information step by step.<\/p>\n\n\n\n<p>Think of it as the difference between:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>giving someone ten random pages and asking them to answer a question<br>vs.<\/li>\n\n\n\n<li>letting them roam an entire library, follow references, and build an understanding iteratively.<\/li>\n<\/ul>\n\n\n\n<p>RLMs treat the prompt as an <em>external environment<\/em> they can navigate, implemented as a Python REPL in the original paper.<\/p>\n\n\n\n<p>I asked myself a simple question:<\/p>\n\n\n\n<p><strong>What if that environment were a Knowledge Graph?<\/strong><\/p>\n\n\n\n<p><strong>Recursive Language Models (RLMs)<\/strong> treat long prompts as an environment and allow the LLM to programmatically examine, decompose, and recursively call itself over snippets.<\/p>\n\n\n\n<p><strong>I took one extra step:<\/strong><\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>If the environment is a knowledge graph, then \u201c<em>infinite context<\/em>\u201d becomes <strong>a graph exploration problem<\/strong>.<\/p>\n<\/blockquote>\n\n\n\n<p>Along with the team, we adapted the RLM approach by replacing the Python REPL with something we know intimately: <strong>a WordLift Knowledge Graph accessed via GraphQL.<\/strong><\/p>\n\n\n\n<p>In the implementation, the model doesn&#8217;t receive a massive context dump. Instead, it navigates the graph iteratively. Each &#8220;hop&#8221; brings back thin evidence (a few FAQs, some article snippets, key entity relationships), typically around 200 bytes rather than a 50KB article.<\/p>\n\n\n\n<p>The model (Gemini Flash 3.0) examines this evidence, decides which related entities to explore next, and continues until it has enough perspective to synthesize an answer. 
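<\/p>\n\n\n\n<p>To make the loop concrete, here is a minimal Python sketch over a toy in-memory graph. Everything in it (the graph, the relevance scoring, the entity names) is an illustrative stand-in, not the actual WordLift implementation:<\/p>\n\n\n\n

```python
# Toy multi-hop exploration loop over a tiny in-memory graph.
# All data and helper names are illustrative stand-ins, not the WordLift API.

TOY_GRAPH = {
    "semantic-seo": {
        "evidence": "Semantic SEO builds on entities and structured data.",
        "neighbors": ["knowledge-graph", "structured-data"],
    },
    "knowledge-graph": {
        "evidence": "A knowledge graph stores typed entities and relations.",
        "neighbors": ["structured-data", "schema-org"],
    },
    "structured-data": {
        "evidence": "Structured data annotates pages with machine-readable facts.",
        "neighbors": ["schema-org"],
    },
    "schema-org": {
        "evidence": "Schema.org defines shared vocabularies like Article and FAQPage.",
        "neighbors": [],
    },
}

def relevance(question: str, entity: str) -> int:
    """Crude lexical overlap between the question and the entity name."""
    q_tokens = set(question.lower().replace("-", " ").split())
    e_tokens = set(entity.replace("-", " ").split())
    return len(q_tokens & e_tokens)

def explore(question: str, seed: str, hop_budget: int = 5) -> list:
    """Gather thin evidence packets hop by hop, avoiding revisits."""
    visited, focus, packets = set(), seed, []
    for _ in range(hop_budget):
        if focus is None or focus in visited:
            break
        visited.add(focus)
        node = TOY_GRAPH[focus]
        packets.append(f"[{focus}] {node['evidence']}")
        # Pick the most question-relevant unvisited neighbor as the next focus.
        candidates = [n for n in node["neighbors"] if n not in visited]
        focus = max(candidates, key=lambda n: relevance(question, n), default=None)
    return packets

packets = explore("How does semantic SEO relate to knowledge graphs?", "semantic-seo")
print(len(packets))  # prints 4
```

\n\n\n\n<p>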
Here it is in short:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Fetch a node<\/strong> (seed entity)<\/li>\n\n\n\n<li><strong>Pull a tiny subgraph<\/strong> (neighbors via KG relations \/ co-occurrence in content)<\/li>\n\n\n\n<li><strong>Recurse<\/strong> with a hop budget<\/li>\n\n\n\n<li><strong>Synthesize<\/strong> an answer while <strong>citing URIs\/URLs<\/strong> observed during traversal<\/li>\n<\/ul>\n\n\n\n<p>The navigation isn&#8217;t random. It&#8217;s guided by question relevance (how well each entity matches the original query) and by diversity (ensuring the model explores different angles rather than drilling into one perspective).<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img decoding=\"async\" width=\"2816\" height=\"1536\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Gemini_Generated_Image_o2jknpo2jknpo2jk.png\" alt=\"\" class=\"wp-image-30366\" style=\"width:900px\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Gemini_Generated_Image_o2jknpo2jknpo2jk.png 2816w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Gemini_Generated_Image_o2jknpo2jknpo2jk-300x164.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Gemini_Generated_Image_o2jknpo2jknpo2jk-1024x559.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Gemini_Generated_Image_o2jknpo2jknpo2jk-768x419.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Gemini_Generated_Image_o2jknpo2jknpo2jk-1536x838.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Gemini_Generated_Image_o2jknpo2jknpo2jk-2048x1117.png 2048w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/Gemini_Generated_Image_o2jknpo2jknpo2jk-150x82.png 150w\" sizes=\"(max-width: 2816px) 100vw, 2816px\" \/><figcaption 
class=\"wp-element-caption\"><strong>Figure 1. System Overview<\/strong>: Question \u2192 entitySearch seed \u2192 hop loop \u2192 evidence packets \u2192 compact context \u2192 answer + citations \u2192 judge.<br><\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">1.3 Positioning vs GraphRAG<\/h3>\n\n\n\n<p>GraphRAG (as introduced by Microsoft) builds a graph index from raw text, then uses community summaries and query\u2011time retrieval to answer questions at scale, particularly well suited for global, corpus\u2011level queries.<\/p>\n\n\n\n<p>RLM\u2011on\u2011KG differs from GraphRAG in three practical ways:<\/p>\n\n\n\n<p><strong>1. Graph source<\/strong><br>\u2022 <em>GraphRAG:<\/em> constructs a graph from documents.<br>\u2022 <em>RLM\u2011on\u2011KG:<\/em> operates on a native, pre\u2011curated RDF knowledge graph with explicit semantic relationships.<\/p>\n\n\n\n<p><strong>2. Query\u2011time behavior<\/strong><br>\u2022 <em>GraphRAG:<\/em> retrieves communities\/summaries to assemble an answer.<br>\u2022 <em>RLM\u2011on\u2011KG:<\/em> runs a multi\u2011hop exploration policy where traversal itself is the primary reasoning loop.<\/p>\n\n\n\n<p><strong>3. Provenance granularity<\/strong><br>\u2022 <em>GraphRAG:<\/em> citations generally point to text chunks or community summaries.<br>\u2022 <em>RLM\u2011on\u2011KG:<\/em> cites specific entity URIs and page URLs discovered hop\u2011by\u2011hop (e.g., Article and FAQ URLs), which aligns more naturally with SEO needs such as traceability and editorial review.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">2. System: WordLift KG as an environment<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">2.1 WordLift KG access<\/h3>\n\n\n\n<p>We use <strong>WordLift\u2019s GraphQL endpoint<\/strong> to query an account\/site knowledge graph. 
<br>The key capabilities are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><code>entitySearch<\/code><\/strong> for semantic\/lexical discovery of candidate entities<\/li>\n\n\n\n<li><strong><code>resource(iri:)<\/code><\/strong> for schemaless access to node properties and relations<\/li>\n\n\n\n<li>Article\/FAQ retrieval patterns that return <strong>URLs + content snippets<\/strong><\/li>\n<\/ul>\n\n\n\n<p>For more context on WordLift KG concepts and API usage, see <a href=\"https:\/\/docs.wordlift.io\/knowledge-graph\/?utm_source=chatgpt.com\">docs.wordlift.io<\/a>. <\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2.2 Evidence types<\/h3>\n\n\n\n<p>Our blog KG is organized around:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Articles<\/strong> (e.g., schema:Article \/ headline, description, url)<\/li>\n\n\n\n<li><strong>FAQPages<\/strong> with Q\/A pairs (schema:FAQPage, schema:mainEntity, schema:acceptedAnswer)<\/li>\n\n\n\n<li>Entities connected via schema relations (e.g., <code>schema:about<\/code>, <code>schema:mentions<\/code>)<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">3. Methods<\/h2>\n\n\n\n<p>To evaluate the approach, we built a 150\u2011question benchmark from the WordLift blog knowledge graph (Articles + FAQPages). We compared three answering modes:<\/p>\n\n\n\n<p><strong>1. Vanilla<\/strong>: Gemini Flash 3.0 answers directly from training data. No retrieval. Fast but with no provenance.<br><strong>2. Simple RAG<\/strong>: single\u2011shot retrieval: search once, retrieve top results, and synthesize.<br><strong>3. RLM\u2011on\u2011KG<\/strong>: multi\u2011hop traversal of the knowledge graph, aggregating evidence from five entities across five hops before synthesis.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3.1 Compared answering modes<\/h3>\n\n\n\n<p><strong>Mode A \u2014 Vanilla<\/strong><br>LLM answers directly with no retrieval. 
Produces fluent responses but no provenance.<\/p>\n\n\n\n<p><strong>Mode B \u2014 Simple RAG (one-shot)<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><code>entitySearch(question)<\/code><\/li>\n\n\n\n<li>Pick top entity<\/li>\n\n\n\n<li>Fetch top FAQs + top Articles (thin snippets)<\/li>\n\n\n\n<li>Answer using only that evidence<\/li>\n<\/ol>\n\n\n\n<p><strong>Mode C \u2014 RLM-on-KG (multi-hop)<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Seed entity from <code>entitySearch(question)<\/code><\/li>\n\n\n\n<li>For each hop (budget = 5):\n<ul class=\"wp-block-list\">\n<li>gather an <strong>EvidencePacket<\/strong> for focus entity (FAQs + article snippets)<\/li>\n\n\n\n<li>expand candidates using related entities from top articles (<code>schema:about<\/code> + <code>schema:mentions<\/code>)<\/li>\n\n\n\n<li>choose next entity via a simple overlap-plus-score policy (avoid revisiting)<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Build a compact context from evidence packets<\/li>\n\n\n\n<li>Generate answer constrained to evidence and asked to cite URLs when possible<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">3.2 The exploration loop as a policy<\/h3>\n\n\n\n<p>We can formalize RLM-on-KG as a lightweight Markov Decision Process (MDP):<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>State<\/strong>: current focus entity (IRI\/name), visited set, hop index<\/li>\n\n\n\n<li><strong>Actions<\/strong>: select next entity among neighbors\/candidates<\/li>\n\n\n\n<li><strong>Transition<\/strong>: next entity becomes focus<\/li>\n\n\n\n<li><strong>Budget<\/strong>: fixed hop limit<\/li>\n\n\n\n<li><strong>Objective<\/strong>: maximize downstream grounded answer quality (approximated by our judge)<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">4. 
Evaluation and Learnings<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">What We Learned: Evidence, Citations, and a Failure Mode Worth Studying<\/h3>\n\n\n\n<p><strong>RLM-on-KG<\/strong> gathered <strong>4-6x more evidence than Simple RAG<\/strong>. It discovered connections that didn&#8217;t exist in any single article, relationships that only become visible when you follow the graph structure from entity to entity.<\/p>\n\n\n\n<p>When asked about semantic SEO, for example, the system didn&#8217;t just return articles tagged with that term. It hopped to Knowledge Graphs, then to Structured Data, then to Schema.org, accumulating evidence at each step. The final synthesis showed how semantic SEO is <em>built on<\/em> knowledge graph principles, a relationship no single document stated explicitly.<\/p>\n\n\n\n<p>The multi-hop approach also improved citation behavior. Instead of relying on one or two sources, answers drew from diverse perspectives across the graph.<\/p>\n\n\n\n<p>But we also discovered a failure mode that deserves attention: <strong>overreach<\/strong>.<\/p>\n\n\n\n<p>When we introduced an explicit grounding judge to evaluate answers, we found cases where the system achieved high intent coverage (it addressed what the user asked) but low faithfulness to what the evidence actually supported. The model was sometimes confident about conclusions the underlying sources didn&#8217;t quite warrant.<\/p>\n\n\n\n<p>This becomes visible <em>only<\/em> when you add rigorous grounding evaluation. Without a judge checking faithfulness, the answers look impressive. With one, you see where enthusiasm outpaced evidence.<\/p>\n\n\n\n<p>We think this is critical for anyone building AI systems that need to be trustworthy. 
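<\/p>\n\n\n\n<p>The overreach signal itself is easy to operationalize: it is the gap between intent coverage and faithfulness in the judge&#8217;s scores. A minimal flagging sketch (the score names mirror the judge metrics described below; the thresholds are arbitrary choices, not values we tuned):<\/p>\n\n\n\n

```python
# Flag "overreach": answers that cover the question's intent
# but are weakly supported by the gathered evidence.
# Score names mirror the judge's 0-10 metrics; thresholds are arbitrary.

def is_overreach(scores: dict, intent_min: int = 8, faith_max: int = 4) -> bool:
    """High on_intent with low faithfulness suggests confident, under-supported claims."""
    return scores["on_intent"] >= intent_min and scores["faithfulness"] <= faith_max

# Two hypothetical judged rows, one per answering mode.
judged = [
    {"mode": "rlm_on_kg", "on_intent": 9, "faithfulness": 3},
    {"mode": "simple_rag", "on_intent": 9, "faithfulness": 7},
]
flagged = [row["mode"] for row in judged if is_overreach(row)]
print(flagged)  # prints ['rlm_on_kg']
```

\n\n\n\n<p>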
The structure gives you more material to synthesize, which is powerful, but that same richness can enable more sophisticated hallucinations if you&#8217;re not careful.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4.1 Dataset<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>150 questions<\/strong> (blog FAQ-style prompts)<\/li>\n\n\n\n<li>For each question: 3 answers (Vanilla, Simple RAG, RLM-on-KG)<\/li>\n\n\n\n<li>Total: <strong>450 rows<\/strong> (150 questions \u00d7 3 modes)<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4.2 Grounding judge <\/h3>\n\n\n\n<p>We use an LLM-based evaluator that outputs structured scores:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>faithfulness<\/code> (0\u201310): are claims supported by evidence?<\/li>\n\n\n\n<li><code>on_intent<\/code> (0\u201310): does it answer the question?<\/li>\n\n\n\n<li><code>citation_coverage<\/code> (0\u201310): are key claims cited?<\/li>\n\n\n\n<li><code>citation_precision<\/code> (0\u201310): are cited URLs among allowed sources?<\/li>\n<\/ul>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>Important limitation:<\/strong> Vanilla has no retrieval \u2192 \u201callowed sources\u201d and \u201cevidence\u201d are absent, so the judge tends to score its faithfulness\/citations near zero by design. This is useful for \u201cgrounding compliance,\u201d but it is not a fair factuality comparison unless Vanilla is also provided evidence for evaluation.<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">4.3 Mean scores (\u00b1 std)<\/h3>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img decoding=\"async\" width=\"1089\" height=\"440\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image.png\" alt=\"Figure 1. Mean judge scores (\u00b11 SD) across 150 questions. 
RLM-on-KG significantly improves citation coverage and precision relative to Simple RAG, while intent remains high across modes. Faithfulness decreases for RLM-on-KG, indicating a higher rate of synthesis beyond the provided evidence excerpts.\" class=\"wp-image-30360\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image.png 1089w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-300x121.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-1024x414.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-768x310.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-150x61.png 150w\" sizes=\"(max-width: 1089px) 100vw, 1089px\" \/><figcaption class=\"wp-element-caption\"><strong>Figure 2. Mean judge scores (\u00b11 SD) across 150 questions.<\/strong> RLM-on-KG significantly improves citation coverage and precision relative to Simple RAG, while intent remains high across modes. Faithfulness decreases for RLM-on-KG, indicating a higher rate of synthesis beyond the provided evidence excerpts.<br><\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img decoding=\"async\" width=\"790\" height=\"440\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-1.png\" alt=\"\" class=\"wp-image-30362\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-1.png 790w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-1-300x167.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-1-768x428.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-1-150x84.png 150w\" sizes=\"(max-width: 790px) 100vw, 790px\" \/><figcaption class=\"wp-element-caption\"><strong>Figure 3. 
Distribution of overall grounded score per mode.<\/strong> RLM-on-KG shifts the distribution upward compared to Simple RAG, but exhibits substantial variance consistent with heterogeneous question types and hop trajectories.<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img decoding=\"async\" width=\"889\" height=\"390\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-2.png\" alt=\"\" class=\"wp-image-30364\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-2.png 889w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-2-300x132.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-2-768x337.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-2-150x66.png 150w\" sizes=\"(max-width: 889px) 100vw, 889px\" \/><figcaption class=\"wp-element-caption\"><strong>Figure 4. Per-question win rate of RLM-on-KG vs Simple RAG (higher is better).<\/strong><br>For each metric, we compute the percentage of questions where RLM-on-KG\u2019s score exceeds Simple RAG\u2019s score (ties excluded or reported separately, depending on the analysis). 
RLM-on-KG wins most often on <strong>citation coverage<\/strong> and <strong>citation precision<\/strong>, and wins a majority of questions on the <strong>overall<\/strong> score, while losing more frequently on <strong>faithfulness<\/strong>, highlighting a tradeoff between improved citation behavior and increased risk of overreach during multi-hop synthesis.<\/figcaption><\/figure>\n\n\n\n<p>Here is the summary:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Mode<\/th><th>overall<\/th><th>faithfulness<\/th><th>on_intent<\/th><th>citation_coverage<\/th><th>citation_precision<\/th><\/tr><\/thead><tbody><tr><td><strong>RLM-on-KG<\/strong><\/td><td><strong>5.813 \u00b1 1.947<\/strong><\/td><td>3.347 \u00b1 3.096<\/td><td>9.273 \u00b1 1.601<\/td><td>7.987 \u00b1 3.328<\/td><td>6.233 \u00b1 3.363<\/td><\/tr><tr><td><strong>Simple RAG<\/strong><\/td><td>4.981 \u00b1 2.932<\/td><td><strong>4.513 \u00b1 3.698<\/strong><\/td><td>8.780 \u00b1 2.273<\/td><td>2.107 \u00b1 4.064<\/td><td>2.133 \u00b1 4.110<\/td><\/tr><tr><td><strong>Vanilla<\/strong><\/td><td>2.897 \u00b1 1.325<\/td><td>0.753 \u00b1 2.567<\/td><td><strong>9.993 \u00b1 0.082<\/strong><\/td><td>0.147 \u00b1 1.013<\/td><td>0.000 \u00b1 0.000<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\"><strong>Table 1. 
Mean scores.<\/strong><\/figcaption><\/figure>\n\n\n\n<p>Interpretation:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>RLM-on-KG strongly improves citation behavior<\/strong> (coverage + precision) versus Simple RAG.<\/li>\n\n\n\n<li><strong>RLM-on-KG improves the overall score<\/strong> versus Simple RAG.<\/li>\n\n\n\n<li><strong>Simple RAG has higher faithfulness than RLM-on-KG<\/strong> under this judge setup, suggesting a meaningful failure mode (overreach).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4.4 Paired deltas (RLM-on-KG \u2212 Simple RAG)<\/h3>\n\n\n\n<p>The paired analysis between RLM-on-KG and Simple RAG shows:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Overall:<\/strong> +0.832 (95% CI \u2248 +0.351 to +1.297), Cliff\u2019s \u03b4 \u2248 0.40, Wilcoxon p \u2248 4.17e-05<\/li>\n\n\n\n<li><strong>Citation coverage:<\/strong> +5.88 (CI \u2248 +4.94 to +6.80), \u03b4 \u2248 0.59, p \u2248 6.97e-18<\/li>\n\n\n\n<li><strong>Citation precision:<\/strong> +4.10 (CI \u2248 +3.13 to +5.05), \u03b4 \u2248 0.59, p \u2248 3.15e-10<\/li>\n\n\n\n<li><strong>On-intent:<\/strong> +0.493 (CI \u2248 +0.10 to +0.893), \u03b4 \u2248 0.153, p \u2248 2.56e-02<\/li>\n\n\n\n<li><strong>Faithfulness:<\/strong> <strong>\u22121.167<\/strong> (CI \u2248 \u22121.747 to \u22120.593), \u03b4 \u2248 \u22120.20, p \u2248 2.04e-04<\/li>\n<\/ul>\n\n\n\n<p>Let&#8217;s review an example from the dataset related to the question <em>\u201cHow does Knowledge Graph-Native content work?\u201d<\/em><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>mode<\/strong><\/td><td><strong>answer<\/strong><\/td><\/tr><tr><td><strong>Simple RAG<\/strong><\/td><td>Knowledge Graph-Native content works by using connected entities to build a &#8220;web of interconnected meanings.&#8221; It combines a knowledge graph approach with JSON-LD representation and structured data\u2014such as taxonomies and schemas\u2014to make content 
modular, findable, and portable. This structured approach allows content to be easily reproduced in different formats or moved between databases. Additionally, it involves creating content through &#8220;intelligent entity AI SERP analysis.&#8221;<\/td><\/tr><tr><td><strong>RLM-on-KG<\/strong><\/td><td>Knowledge graph-native content works by connecting entities to build a &#8220;web of interconnected meanings.&#8221; According to the provided evidence, the process involves several key components:<br><br>* \u00a0 **Interconnected Entities:** The core of the approach is the use of entities that work more effectively when they are connected, reflecting a vision of the web based on shared meanings (<a href=\"https:\/\/wordlift.io\/blog\/en\/knowledge-graph-native-structured-content\/\"><u>https:\/\/wordlift.io\/blog\/en\/knowledge-graph-native-structured-content\/<\/u><\/a>). <br><br>* \u00a0 **Structured Data and JSON-LD:** Content is represented using JSON-LD. This structured data approach makes content modular, allowing it to be easily reproduced in different formats, moved between locations, or populated into various databases (<a href=\"https:\/\/wordlift.io\/blog\/en\/knowledge-graph-native-structured-content\/\"><u>https:\/\/wordlift.io\/blog\/en\/knowledge-graph-native-structured-content\/<\/u><\/a>). <br><br>* \u00a0 **Intelligent Analysis:** Creation of this content involves &#8220;intelligent entity AI SERP analysis,&#8221; often utilizing tools like SEO add-ons to bridge traditional SEO with advanced AI functionalities (<a href=\"https:\/\/wordlift.io\/blog\/en\/knowledge-graph-native-structured-content\/\"><u>https:\/\/wordlift.io\/blog\/en\/knowledge-graph-native-structured-content\/<\/u><\/a>, <a href=\"https:\/\/wordlift.io\/blog\/en\/knowledge-graphs-ai-powered-seo\/\"><u>https:\/\/wordlift.io\/blog\/en\/knowledge-graphs-ai-powered-seo\/<\/u><\/a>). 
<br><br>* \u00a0 **Findability and Portability:** By using taxonomies and schemas, the content becomes more findable for search engines and can be easily picked up by personal digital assistants like Google Assistant (<a href=\"https:\/\/wordlift.io\/blog\/en\/knowledge-graphs-ai-powered-seo\/\"><u>https:\/\/wordlift.io\/blog\/en\/knowledge-graphs-ai-powered-seo\/<\/u><\/a>, <a href=\"https:\/\/wordlift.io\/blog\/en\/make-your-website-talk\/\"><u>https:\/\/wordlift.io\/blog\/en\/make-your-website-talk\/<\/u><\/a>). <br><br>* \u00a0 **Data Integration:** In specific contexts like e-commerce, it involves transforming product feeds (Merchant or Manufacturer feeds) into a product knowledge graph to improve visibility in search results (<a href=\"https:\/\/wordlift.io\/blog\/en\/how-build-product-knowledge-graph\/\"><u>https:\/\/wordlift.io\/blog\/en\/how-build-product-knowledge-graph\/<\/u><\/a>).<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\"><strong>Table 2. Simple RAG vs RLM-on-KG.<\/strong> This is a question about&nbsp;<em>how concepts relate<\/em>&nbsp;(knowledge graph-native content),&nbsp;<strong>graph navigation reliably finds the right supporting nodes<\/strong>, and citations become both&nbsp;<strong>present and correct<\/strong>.<\/figcaption><\/figure>\n\n\n\n<p><strong>Simple RAG (single-pass)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Citation coverage:&nbsp;<strong>0<\/strong><\/li>\n\n\n\n<li>Citation precision:&nbsp;<strong>0<\/strong><\/li>\n\n\n\n<li>Faithfulness:&nbsp;<strong>3<\/strong><\/li>\n\n\n\n<li>On-intent:&nbsp;<strong>10<\/strong><\/li>\n\n\n\n<li>URLs in answer:&nbsp;<strong>0<\/strong><\/li>\n\n\n\n<li>Judge note (summary): the answer was&nbsp;<em>plausible<\/em>, but&nbsp;<strong>not grounded\/cited<\/strong>&nbsp;and only weakly supported by the retrieved evidence.<\/li>\n<\/ul>\n\n\n\n<p><strong>RLM-on-KG (recursive graph exploration)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Citation 
coverage:&nbsp;<strong>10<\/strong><\/li>\n\n\n\n<li>Citation precision:&nbsp;<strong>10<\/strong><\/li>\n\n\n\n<li>Faithfulness:&nbsp;<strong>10<\/strong><\/li>\n\n\n\n<li>On-intent:&nbsp;<strong>10<\/strong><\/li>\n\n\n\n<li>URLs in answer:&nbsp;<strong>6<\/strong><\/li>\n\n\n\n<li>Judge note (summary): the answer&nbsp;<strong>accurately described<\/strong>&nbsp;the concept and&nbsp;<strong>key claims were supported<\/strong>&nbsp;by the allowed sources.<\/li>\n<\/ul>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>RLM-on-KG increases \u201cgrounded behavior\u201d (citations + coverage) and improves overall score, but it currently pays a faithfulness penalty vs one-shot RAG.<\/strong><\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">What This Means for SEO<\/h2>\n\n\n\n<p>Let me translate these findings into the language of search and discovery.<\/p>\n\n\n\n<p><strong>Infinite context isn&#8217;t the answer.<\/strong> The industry has spent years racing toward larger context windows, assuming that more input means better output. Context rot research, and our own experiments, suggests this assumption breaks down. <strong>What matters isn&#8217;t how much context you can theoretically provide, but how efficiently an AI system can navigate to the relevant pieces<\/strong>.<\/p>\n\n\n\n<p><strong>Structure becomes the key to accuracy.<\/strong> When the environment is a knowledge graph rather than raw text, the AI has navigation affordances. It can follow typed relationships, query specific entity properties, and traverse connections intentionally. 
This structure improves both retrieval quality and synthesis accuracy, because the model isn&#8217;t just finding text that pattern-matches the query, it&#8217;s understanding how concepts relate.<\/p>\n\n\n\n<p><strong>We can now measure AI-readiness of structured data.<\/strong> Traditional SEO metrics, rankings, impressions, click-through rates, measure visibility to search engines. But as AI agents become the primary consumers of online information, we need new metrics. By observing how an RLM navigates a knowledge graph, we can identify which entities are well-connected, which relationships enable productive reasoning, and where structural gaps limit discovery.<\/p>\n\n\n\n<p>At WordLift, we&#8217;re calling this <em>SEO 3.0<\/em>: optimizing not for traditional search algorithms but for AI agents that reason over structured information. RLM-on-KG gives us a way to actually measure how that reasoning works, and where it fails.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Path Forward<\/h2>\n\n\n\n<p>RLMs are still new. The original paper focuses on synchronous sub-calls with a maximum recursion depth of one, and explicitly notes that deeper recursion and asynchronous approaches remain unexplored. The authors hypothesize that RLM trajectories can be viewed as a form of reasoning that could be trained explicitly, just as reasoning is currently trained for frontier models.<\/p>\n\n\n\n<p>I am inclined to agree. The combination of structured environments and recursive navigation feels like a natural fit for the next generation of AI agents, systems that don&#8217;t just respond to queries but actively explore knowledge spaces to build comprehensive understanding.<\/p>\n\n\n\n<p>For those of us building knowledge infrastructure, this research validates a core thesis: <strong>the web of tomorrow isn&#8217;t just about content, it&#8217;s about connections<\/strong>. 
The sites that structure their information as navigable graphs, with explicit relationships, typed entities, and semantic annotations, will be the sites that AI agents can reason over effectively.<\/p>\n\n\n\n<p>The sites that remain unstructured blobs of text, however search-engine-optimized, will suffer from the same context rot that plagues today&#8217;s RAG systems.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Structure isn&#8217;t just nice to have. In an AI-first world, it&#8217;s the foundation for being understood at all.<\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">References<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Zhang, A. L. et al. \u201cRecursive Language Models.\u201d arXiv (2025). (<a href=\"https:\/\/arxiv.org\/abs\/2512.24601?utm_source=chatgpt.com\">arXiv<\/a>)<\/li>\n\n\n\n<li>Edge, D. et al. \u201cFrom Local to Global: A Graph RAG Approach to Query-Focused Summarization.\u201d arXiv (2024). (<a href=\"https:\/\/arxiv.org\/abs\/2404.16130?utm_source=chatgpt.com\">arXiv<\/a>)<\/li>\n\n\n\n<li>Microsoft Research. \u201cProject GraphRAG.\u201d (<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/graphrag\/?utm_source=chatgpt.com\">Microsoft<\/a>)<\/li>\n\n\n\n<li>WordLift. \u201cGraphQL support \/ Knowledge Graph documentation.\u201d (<a href=\"https:\/\/docs.wordlift.io\/api\/graphql\/graphql-support\/?utm_source=chatgpt.com\">docs.wordlift.io<\/a>)<\/li>\n\n\n\n<li>WordLift. \u201cContent Evaluations API \/ Agent workflow.\u201d (<a href=\"https:\/\/docs.wordlift.io\/api\/content-evaluations\/wordlift-content-evaluations-api\/?utm_source=chatgpt.com\">docs.wordlift.io<\/a>)<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Recursive Language Models (RLMs) treat prompts as environments to explore, not consume. 
We adapted this for Knowledge Graphs and discovered why structure, not bigger context windows, is the key to AI accuracy and search visibility.<\/p>\n","protected":false},"author":6,"featured_media":30393,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"wl_entities_gutenberg":"","_wlpage_enable":"","footnotes":""},"categories":[8],"tags":[],"wl_entity_type":[30],"coauthors":[4226],"class_list":["post-30358","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-seo","wl_entity_type-article"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v22.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>RLM-on-KG: Recursive Language Models and the Future of SEO - WordLift Blog<\/title>\n<meta name=\"description\" content=\"Recursive Language Models (RLMs) treat prompts as environments to explore, not consume. We adapted this for Knowledge Graphs and discovered why structure, not bigger context windows, is the key to AI accuracy and search visibility.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"RLM-on-KG: Recursive Language Models and the Future of SEO\" \/>\n<meta property=\"og:description\" content=\"Discover why adapting Recursive Language Models to Knowledge Graphs shows that structure\u2014not bigger context windows\u2014is the real driver of AI accuracy and search visibility.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/\" \/>\n<meta property=\"og:site_name\" content=\"WordLift Blog\" \/>\n<meta property=\"article:published_time\" 
content=\"2026-01-21T16:24:18+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-22T17:09:08+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"630\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Andrea Volpini\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:title\" content=\"RLM-on-KG: Recursive Language Models and the Future of SEO\" \/>\n<meta name=\"twitter:description\" content=\"Discover why adapting Recursive Language Models to Knowledge Graphs shows that structure\u2014not bigger context windows\u2014is the real driver of AI accuracy and search visibility.\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Andrea Volpini\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"13 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/\"},\"author\":{\"name\":\"Andrea Volpini\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#\/schema\/person\/574352082cc71dab8d164410f1cabe0a\"},\"headline\":\"RLM-on-KG: Recursive Language Models and the Future of SEO\",\"datePublished\":\"2026-01-21T16:24:18+00:00\",\"dateModified\":\"2026-01-22T17:09:08+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/\"},\"wordCount\":2862,\"publisher\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png\",\"articleSection\":[\"seo\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/\",\"url\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/\",\"name\":\"RLM-on-KG: Recursive Language Models and the Future of SEO - WordLift 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png\",\"datePublished\":\"2026-01-21T16:24:18+00:00\",\"dateModified\":\"2026-01-22T17:09:08+00:00\",\"description\":\"Recursive Language Models (RLMs) treat prompts as environments to explore, not consume. We adapted this for Knowledge Graphs and discovered why structure, not bigger context windows, is the key to AI accuracy and search visibility.\",\"breadcrumb\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#primaryimage\",\"url\":\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png\",\"contentUrl\":\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png\",\"width\":1200,\"height\":630,\"caption\":\"recursive-language-models-on-kg\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Blog\",\"item\":\"https:\/\/wordlift.io\/blog\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"RLM-on-KG: Recursive Language Models and the Future of 
SEO\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#website\",\"url\":\"https:\/\/wordlift.io\/blog\/en\/\",\"name\":\"WordLift Blog\",\"description\":\"AI-Powered SEO\",\"publisher\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/wordlift.io\/blog\/en\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#organization\",\"name\":\"WordLift\",\"url\":\"https:\/\/wordlift.io\/blog\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/mk0wordliftblog7j5te.kinstacdn.com\/wp-content\/uploads\/sites\/3\/2017\/04\/logo-1.png\",\"contentUrl\":\"https:\/\/mk0wordliftblog7j5te.kinstacdn.com\/wp-content\/uploads\/sites\/3\/2017\/04\/logo-1.png\",\"width\":152,\"height\":40,\"caption\":\"WordLift\"},\"image\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#\/schema\/person\/574352082cc71dab8d164410f1cabe0a\",\"name\":\"Andrea Volpini\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#\/schema\/person\/image\/466a1652833e48ca11c81b363eba7c25\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/6b9d3d311b50a8749201fe4b318907a8?s=96&d=mm&r=pg\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/6b9d3d311b50a8749201fe4b318907a8?s=96&d=mm&r=pg\",\"caption\":\"Andrea Volpini\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"RLM-on-KG: Recursive Language Models and the Future of SEO - WordLift Blog","description":"Recursive Language Models (RLMs) treat prompts as environments to explore, not consume. 
We adapted this for Knowledge Graphs and discovered why structure, not bigger context windows, is the key to AI accuracy and search visibility.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/","og_locale":"en_US","og_type":"article","og_title":"RLM-on-KG: Recursive Language Models and the Future of SEO","og_description":"Discover why adapting Recursive Language Models to Knowledge Graphs shows that structure\u2014not bigger context windows\u2014is the real driver of AI accuracy and search visibility.","og_url":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/","og_site_name":"WordLift Blog","article_published_time":"2026-01-21T16:24:18+00:00","article_modified_time":"2026-01-22T17:09:08+00:00","og_image":[{"width":1200,"height":630,"url":"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png","type":"image\/png"}],"author":"Andrea Volpini","twitter_card":"summary_large_image","twitter_title":"RLM-on-KG: Recursive Language Models and the Future of SEO","twitter_description":"Discover why adapting Recursive Language Models to Knowledge Graphs shows that structure\u2014not bigger context windows\u2014is the real driver of AI accuracy and search visibility.","twitter_image":"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png","twitter_misc":{"Written by":"Andrea Volpini","Est. 
reading time":"13 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#article","isPartOf":{"@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/"},"author":{"name":"Andrea Volpini","@id":"https:\/\/wordlift.io\/blog\/en\/#\/schema\/person\/574352082cc71dab8d164410f1cabe0a"},"headline":"RLM-on-KG: Recursive Language Models and the Future of SEO","datePublished":"2026-01-21T16:24:18+00:00","dateModified":"2026-01-22T17:09:08+00:00","mainEntityOfPage":{"@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/"},"wordCount":2862,"publisher":{"@id":"https:\/\/wordlift.io\/blog\/en\/#organization"},"image":{"@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#primaryimage"},"thumbnailUrl":"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png","articleSection":["seo"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/","url":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/","name":"RLM-on-KG: Recursive Language Models and the Future of SEO - WordLift Blog","isPartOf":{"@id":"https:\/\/wordlift.io\/blog\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#primaryimage"},"image":{"@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#primaryimage"},"thumbnailUrl":"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png","datePublished":"2026-01-21T16:24:18+00:00","dateModified":"2026-01-22T17:09:08+00:00","description":"Recursive Language Models (RLMs) treat prompts as environments to explore, not consume. 
We adapted this for Knowledge Graphs and discovered why structure, not bigger context windows, is the key to AI accuracy and search visibility.","breadcrumb":{"@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#primaryimage","url":"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png","contentUrl":"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/recursive-language-models-on-kg-wordlift.png","width":1200,"height":630,"caption":"recursive-language-models-on-kg"},{"@type":"BreadcrumbList","@id":"https:\/\/wordlift.io\/blog\/en\/recursive-language-models-on-kg\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Blog","item":"https:\/\/wordlift.io\/blog\/en\/"},{"@type":"ListItem","position":2,"name":"RLM-on-KG: Recursive Language Models and the Future of SEO"}]},{"@type":"WebSite","@id":"https:\/\/wordlift.io\/blog\/en\/#website","url":"https:\/\/wordlift.io\/blog\/en\/","name":"WordLift Blog","description":"AI-Powered SEO","publisher":{"@id":"https:\/\/wordlift.io\/blog\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/wordlift.io\/blog\/en\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/wordlift.io\/blog\/en\/#organization","name":"WordLift","url":"https:\/\/wordlift.io\/blog\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/wordlift.io\/blog\/en\/#\/schema\/logo\/image\/","url":"https:\/\/mk0wordliftblog7j5te.kinstacdn.com\/wp-content\/uploads\/sites\/3\/2017\/04\/logo-1.png","contentUrl":"https:\/\/mk0wordliftblog7j5te.kinstacdn.com\/wp-content\/uploads\/sites\/3\/2017\/04\/logo-1.png","width":152,"height":40,"caption":"WordLift"},"image":{"@id":"https:\/\/wordlift.io\/blog\/en\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/wordlift.io\/blog\/en\/#\/schema\/person\/574352082cc71dab8d164410f1cabe0a","name":"Andrea Volpini","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/wordlift.io\/blog\/en\/#\/schema\/person\/image\/466a1652833e48ca11c81b363eba7c25","url":"https:\/\/secure.gravatar.com\/avatar\/6b9d3d311b50a8749201fe4b318907a8?s=96&d=mm&r=pg","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/6b9d3d311b50a8749201fe4b318907a8?s=96&d=mm&r=pg","caption":"Andrea 
Volpini"}}]}},"_wl_alt_label":[],"wl:entity_url":"http:\/\/data.wordlift.io\/wl0216\/post\/-30358","_links":{"self":[{"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/posts\/30358"}],"collection":[{"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/comments?post=30358"}],"version-history":[{"count":27,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/posts\/30358\/revisions"}],"predecessor-version":[{"id":30398,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/posts\/30358\/revisions\/30398"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/media\/30393"}],"wp:attachment":[{"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/media?parent=30358"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/categories?post=30358"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/tags?post=30358"},{"taxonomy":"wl_entity_type","embeddable":true,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/wl_entity_type?post=30358"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/wordlift.io\/blog\/en\/wp-json\/wp\/v2\/coauthors?post=30358"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}