{"id":30962,"date":"2026-05-12T11:03:28","date_gmt":"2026-05-12T09:03:28","guid":{"rendered":"https:\/\/wordlift.io\/blog\/en\/?p=30962"},"modified":"2026-05-12T11:04:26","modified_gmt":"2026-05-12T09:04:26","slug":"how-ai-models-perceive-brands","status":"publish","type":"post","link":"https:\/\/wordlift.io\/blog\/en\/how-ai-models-perceive-brands\/","title":{"rendered":"From Frame Semantics to Natural Language Autoencoders: How AI Models Perceive Brands"},"content":{"rendered":"\n<p>In our earlier work on <a href=\"https:\/\/wordlift.io\/blog\/en\/unveiling-monosemanticity-anthropics\/\">monosemanticity<\/a>, we connected <strong>Anthropic\u2019s research on sparse autoencoders<\/strong> with a much older idea from linguistics: <strong><a class=\"wl-entity-page-link\" title=\"FrameNet\" href=\"https:\/\/wordlift.io\/blog\/en\/entity\/frame-semantics\/\" data-id=\"http:\/\/data.wordlift.io\/wl0216\/entity\/frame-semantics;http:\/\/dbpedia.org\/resource\/Frame_semantics_(linguistics);http:\/\/de.dbpedia.org\/resource\/Frame-Semantik;http:\/\/en.dbpedia.org\/resource\/Frame_semantics_(linguistics);http:\/\/it.dbpedia.org\/resource\/Frame_semantico;http:\/\/nl.dbpedia.org\/resource\/Framesemantiek\" >Frame Semantics<\/a><\/strong>. Frame Semantics teaches us that meaning is not just a word or an entity in isolation. Meaning emerges from a structured scene: entities, roles, relationships, expectations, and context.<\/p>\n\n\n\n<p>A brand works in a similar way. When a model sees a brand name, it does not activate a single isolated label. It activates a frame: products, competitors, geography, category, reputation, use cases, risks, and sometimes outdated or overly dominant associations.<\/p>\n\n\n\n<p>This is where <strong>monosemanticity<\/strong> became important. Anthropic\u2019s earlier work showed that some internal model activations can be decomposed into more interpretable features using sparse autoencoders. 
These features can behave like semantic units inside the model, sometimes corresponding to entities, places, behaviors, abstractions, or recurring conceptual patterns. In our previous article, we described this as a bridge between symbolic knowledge representation and the internal geometry of language models. <\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"209\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/renault-sae-1024x209.png\" alt=\"Renault SAE. This Layer-33 feature cluster reveals how the model internally associates Renault with French identity, language, and adjacent automotive concepts before generating any text.\" class=\"wp-image-30985\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/renault-sae-1024x209.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/renault-sae-300x61.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/renault-sae-768x157.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/renault-sae-1536x313.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/renault-sae-150x31.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/renault-sae.png 1808w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Tracing the latent semantic pathway behind \u201cRenault\u201d inside Gemma-3-12B. This Layer-33 feature cluster reveals how the model internally associates Renault with French identity, language, and adjacent automotive concepts before generating any text.<\/figcaption><\/figure>\n\n\n\n<p>The latest step is <strong><a href=\"https:\/\/transformer-circuits.pub\/2026\/nla\/#introduction\">Natural Language Autoencoders<\/a><\/strong>. 
Anthropic describes an NLA as a system with two components: an <strong>Activation Verbalizer<\/strong>, which maps an activation from a target model into a text explanation, and an <strong>Activation Reconstructor<\/strong>, which maps that explanation back into activation space. In simple terms, NLAs make it possible to translate selected internal activations into natural language hypotheses. <\/p>\n\n\n\n<p>This does not mean we are reading the model\u2019s private chain of thought. We are not claiming to recover hidden reasoning. Instead, we are using open-weight models as an interpretability laboratory to ask a more practical <a class=\"wl-entity-page-link\" title=\"content\" href=\"https:\/\/wordlift.io\/blog\/en\/entity\/content-marketing\/\" data-id=\"http:\/\/data.wordlift.io\/wl0216\/entity\/content_marketing;http:\/\/dbpedia.org\/resource\/Content_marketing\" >AI<\/a> Visibility question:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>Before the model answers, what does it seem to associate with a brand, a product, or a competitor?<\/strong><\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Why this matters for AI Visibility<\/h2>\n\n\n\n<p>AI Visibility is often measured at the output layer: does a model mention the brand, cite the website, recommend the product, or include it in a comparison?<\/p>\n\n\n\n<p>That is useful, but incomplete.<\/p>\n\n\n\n<p>By the time an answer is generated, the model has already compressed the prompt into internal representations. Those representations influence what the model retrieves, emphasizes, ignores, or frames as important. For brand visibility, this means we need to look not only at the final response, but also at the latent perception that precedes it.<\/p>\n\n\n\n<p>This is especially relevant if we accept the intuition behind the <strong><a href=\"https:\/\/arxiv.org\/abs\/2405.07987\">Platonic Representation Hypothesis<\/a><\/strong>. 
The hypothesis argues that representations in <strong>AI models are becoming more aligned as models scale, across architectures, domains, and modalities<\/strong>. This does not mean every model thinks the same way. It means that <strong>open-weight models can be useful laboratories for studying patterns that may generalize across parts of the model ecosystem<\/strong>, especially when we treat the results as directional diagnostics rather than universal truth. <\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The workflow we are using<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"524\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/ChatGPT-Image-May-12-2026-10_00_20-AM-1-1024x524.png\" alt=\"\" class=\"wp-image-30965\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/ChatGPT-Image-May-12-2026-10_00_20-AM-1-1024x524.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/ChatGPT-Image-May-12-2026-10_00_20-AM-1-300x154.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/ChatGPT-Image-May-12-2026-10_00_20-AM-1-768x393.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/ChatGPT-Image-May-12-2026-10_00_20-AM-1-1536x786.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/ChatGPT-Image-May-12-2026-10_00_20-AM-1-150x77.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/ChatGPT-Image-May-12-2026-10_00_20-AM-1.png 1753w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Decoding Renault\u2019s latent representation inside Gemma-3-12B using Sparse Autoencoders (SAEs) and Natural Language Activations (NLA).<\/figcaption><\/figure>\n\n\n\n<p>We are testing this workflow on open-weight models, starting with <strong>Gemma 3<\/strong>.<\/p>\n\n\n\n<p>The process is 
intentionally simple and repeatable:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><p><strong>Define the semantic frame<\/strong>: We create prompts across brand, product, competitor, risk, and strategy contexts.<\/p><\/li>\n\n\n\n<li><p><strong>Run the target model<\/strong>: We use Gemma 3 to process each prompt and generate an answer.<\/p><\/li>\n\n\n\n<li><p><strong>Extract internal activations<\/strong>: We capture layer-32 residual vectors from relevant tokens, such as the brand token, product token, competitor token, and final prompt token.<\/p><\/li>\n\n\n\n<li><p><strong>Inspect SAE activations<\/strong>: We use Gemma Scope sparse autoencoders to identify recurrent activation features across prompts.<\/p><\/li>\n\n\n\n<li><p><strong>Verbalize selected activations with NLA<\/strong>: We pass selected residual vectors into the NLA Activation Verbalizer to obtain natural-language descriptions of the latent state.<\/p><\/li>\n\n\n\n<li><p><strong>Compare three layers of evidence<\/strong>: We compare what the model says, what activates internally, and how the NLA describes those activations.<\/p><\/li>\n\n\n\n<li><p><strong>Run counterfactual tests<\/strong>: We improve machine-readable context through Wikidata, Wikipedia, Schema.org, entity pages, provenance links, and Knowledge Graph enrichment, and then rerun the same prompts to measure what changed.<\/p><\/li>\n<\/ol>\n\n\n\n<p>The diagnostic loop becomes:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>latent perception \u2192 structured data \/ Knowledge Graph intervention \u2192 counterfactual test<\/strong><\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Example: Renault as a public automotive brand<\/h2>\n\n\n\n<p>To test the workflow, we used Renault as a public automotive brand example, not as a client.<\/p>\n\n\n\n<p>We asked questions such as:<\/p>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li><em>What comes to mind when you hear the brand \u201cRenault\u201d?<\/em><\/li>\n\n\n\n<li><em>What is Renault known for?<\/em><\/li>\n\n\n\n<li><em>What do you associate with Renault 5 E-Tech, Clio, Scenic E-Tech, and Megane E-Tech?<\/em><\/li>\n\n\n\n<li><em>Compare Renault with BYD, Toyota, Volkswagen, Tesla, and Peugeot.<\/em><\/li>\n\n\n\n<li><em>When would a customer choose Renault over BYD?<\/em><\/li>\n\n\n\n<li><em>What concerns might a customer have about Renault?<\/em><\/li>\n<\/ul>\n\n\n\n<p>The goal was not to evaluate Renault\u2019s marketing strategy. The goal was to observe how an open model internally organizes Renault\u2019s brand frame.<\/p>\n\n\n\n<p>In this run, the latent perception appears to split between two poles:<\/p>\n\n\n\n<p><strong>Renault as a French legacy automaker<\/strong><br>\nThe model activates a traditional frame around Renault Group, French automotive identity, practical mass-market models, Clio, Dacia associations, and European mobility.<\/p>\n\n\n\n<p><strong>Renault as an EV transition player<\/strong><br>\nThe EV layer appears through product-specific associations such as Renault 5 E-Tech, Scenic E-Tech, Megane E-Tech, and E-Tech powertrains. 
This EV perception is present, but it is not yet as dominant as the legacy European OEM frame.<\/p>\n\n\n\n<p>Competitor prompts help sharpen the contrast:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Renault vs BYD<\/strong> activates a legacy European OEM versus Chinese EV-native and battery-scale frame.<\/li>\n\n\n\n<li><strong>Renault vs Toyota<\/strong> activates EV transition versus hybrid reliability and trust.<\/li>\n\n\n\n<li><strong>Renault vs Volkswagen<\/strong> activates French mass-market versus German group-scale associations.<\/li>\n\n\n\n<li><strong>Renault vs Tesla<\/strong> activates traditional automaker transition versus software-native EV disruption.<\/li>\n<\/ul>\n\n\n\n<p><strong>This is precisely the kind of signal that matters for AI Visibility<\/strong>. If a brand wants to be perceived as an EV innovator, but the latent frame still centers on legacy mass-market associations, the intervention cannot simply be more content. The content must be better structured, better linked, and more machine-readable.<\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-1 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\">\n<figure class=\"wp-block-gallery aligncenter has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"568\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/01_renault_decode_scope_coverage-1024x568.png\" alt=\"\" class=\"wp-image-30967\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/01_renault_decode_scope_coverage-1024x568.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/01_renault_decode_scope_coverage-300x167.png 300w, 
https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/01_renault_decode_scope_coverage-768x426.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/01_renault_decode_scope_coverage-1536x853.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/01_renault_decode_scope_coverage-150x83.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/01_renault_decode_scope_coverage.png 1580w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"666\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/02_renault_vector_norm_heatmap-1024x666.png\" alt=\"\" class=\"wp-image-30973\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/02_renault_vector_norm_heatmap-1024x666.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/02_renault_vector_norm_heatmap-300x195.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/02_renault_vector_norm_heatmap-768x499.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/02_renault_vector_norm_heatmap-1536x998.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/02_renault_vector_norm_heatmap-2048x1331.png 2048w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/02_renault_vector_norm_heatmap-150x97.png 150w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"632\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/03_renault_nla_theme_heatmap-1024x632.png\" alt=\"\" class=\"wp-image-30974\" 
srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/03_renault_nla_theme_heatmap-1024x632.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/03_renault_nla_theme_heatmap-300x185.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/03_renault_nla_theme_heatmap-768x474.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/03_renault_nla_theme_heatmap-1536x948.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/03_renault_nla_theme_heatmap-2048x1263.png 2048w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/03_renault_nla_theme_heatmap-150x93.png 150w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"524\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/04_renault_nla_theme_by_scope-1024x524.png\" alt=\"\" class=\"wp-image-30970\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/04_renault_nla_theme_by_scope-1024x524.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/04_renault_nla_theme_by_scope-300x154.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/04_renault_nla_theme_by_scope-768x393.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/04_renault_nla_theme_by_scope-1536x786.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/04_renault_nla_theme_by_scope-150x77.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/04_renault_nla_theme_by_scope.png 1831w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"495\" 
src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/05_renault_product_theme_heatmap-1024x495.png\" alt=\"\" class=\"wp-image-30969\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/05_renault_product_theme_heatmap-1024x495.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/05_renault_product_theme_heatmap-300x145.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/05_renault_product_theme_heatmap-768x371.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/05_renault_product_theme_heatmap-1536x743.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/05_renault_product_theme_heatmap-150x73.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/05_renault_product_theme_heatmap.png 2021w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"498\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/06_renault_competitor_theme_heatmap-1024x498.png\" alt=\"\" class=\"wp-image-30971\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/06_renault_competitor_theme_heatmap-1024x498.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/06_renault_competitor_theme_heatmap-300x146.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/06_renault_competitor_theme_heatmap-768x373.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/06_renault_competitor_theme_heatmap-1536x747.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/06_renault_competitor_theme_heatmap-150x73.png 150w, 
https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/06_renault_competitor_theme_heatmap.png 2010w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"626\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/07_renault_nla_repetition_quality-1024x626.png\" alt=\"\" class=\"wp-image-30972\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/07_renault_nla_repetition_quality-1024x626.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/07_renault_nla_repetition_quality-300x183.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/07_renault_nla_repetition_quality-768x469.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/07_renault_nla_repetition_quality-1536x939.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/07_renault_nla_repetition_quality-150x92.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/07_renault_nla_repetition_quality.png 1978w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"946\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/08_renault_prompt_similarity_nla-1024x946.png\" alt=\"\" class=\"wp-image-30975\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/08_renault_prompt_similarity_nla-1024x946.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/08_renault_prompt_similarity_nla-300x277.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/08_renault_prompt_similarity_nla-768x709.png 768w, 
https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/08_renault_prompt_similarity_nla-1536x1419.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/08_renault_prompt_similarity_nla-150x139.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/08_renault_prompt_similarity_nla.png 1923w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"568\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/09_renault_mean_vector_norm_by_scope-1024x568.png\" alt=\"\" class=\"wp-image-30968\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/09_renault_mean_vector_norm_by_scope-1024x568.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/09_renault_mean_vector_norm_by_scope-300x166.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/09_renault_mean_vector_norm_by_scope-768x426.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/09_renault_mean_vector_norm_by_scope-1536x852.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/09_renault_mean_vector_norm_by_scope-150x83.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/05\/09_renault_mean_vector_norm_by_scope.png 1581w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<figcaption class=\"blocks-gallery-caption wp-element-caption\">Renault AI Visibility Charts<\/figcaption><\/figure>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\">What this tells us<\/h2>\n\n\n\n<p>This workflow gives us a new way to evaluate brand perception in AI systems.<\/p>\n\n\n\n<p>Not just:<\/p>\n\n\n\n<p><strong>Does the model mention the brand?<\/strong><\/p>\n\n\n\n<p>But:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow 
wp-block-quote-is-layout-flow\">\n<p><strong>What internal frame does the model activate when the brand appears?<\/strong><\/p>\n<\/blockquote>\n\n\n\n<p>For SEO 3.0 \/ GEO and AI Visibility, this opens a practical path. We can test whether structured data, Wikidata, Wikipedia, entity pages, product knowledge graphs, and provenance links influence the model\u2019s latent representation over time.<\/p>\n\n\n\n<p>This is not a replacement for ranking analysis, citation tracking, or answer monitoring. It is <strong>a complementary diagnostic layer<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What we should be careful about<\/h2>\n\n\n\n<p>There are important caveats.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>NLA explanations are not ground truth. They are interpretations produced by an auxiliary model trained to verbalize activations.<\/li>\n\n\n\n<li>SAE feature IDs are not automatically meaningful. They become useful when they recur across prompts, align with NLA explanations, and match observable answer behavior.<\/li>\n\n\n\n<li>Open-weight models are laboratories, not perfect proxies for every frontier model. The Platonic Representation Hypothesis gives us a reason to study representational convergence, but it should not be used to claim that one open model fully represents all models.<\/li>\n\n\n\n<li>This is not chain-of-thought extraction. 
We are building a diagnostic layer around observable activations in open models.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">Why this is useful for brands<\/h2>\n\n\n\n<p>For brands, the practical value is a new kind of <a href=\"https:\/\/wordlift.io\/ai-audit\/\">AI audit<\/a> (one that may extend our existing audit in the near future).<\/p>\n\n\n\n<p>Traditional AI Visibility asks:<\/p>\n\n\n\n<p><strong>Where does the brand appear in AI-generated answers?<\/strong><\/p>\n\n\n\n<p>Latent perception analysis asks:<\/p>\n\n\n\n<p><strong>What does the model seem prepared to believe, retrieve, or emphasize about the brand before it answers?<\/strong><\/p>\n\n\n\n<p>That matters because future AI discovery will not only depend on pages being indexed. It will depend on whether the brand\u2019s identity, products, and relationships are legible to machines.<\/p>\n\n\n\n<p>This is where structured data and Knowledge Graphs become strategic. They do not simply decorate web pages. They provide stable semantic anchors that help AI systems connect entities, products, claims, and evidence.<\/p>\n\n\n\n<p>The next step is to make this diagnostic loop repeatable:<\/p>\n\n\n\n<p><strong>measure latent perception \u2192 enrich machine-readable context \u2192 rerun the model \u2192 measure the shift<\/strong><\/p>\n\n\n\n<p>That is where AI Visibility becomes testable.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusions and future work<\/h2>\n\n\n\n<p>We are entering a phase where <strong>AI systems do not simply retrieve information<\/strong>. They compress, organize, and activate latent semantic frames before generating an answer.<\/p>\n\n\n\n<p>This changes how we should think about visibility on the web. <strong>The future of AI Visibility is not only about ranking pages or appearing in citations<\/strong>. <strong>It is about shaping the machine-readable semantic environment<\/strong> that models use to construct meaning. 
Brands that expose clear entities, products, relationships, provenance, and structured evidence will increasingly have an advantage in how they are interpreted by AI systems.<\/p>\n\n\n\n<p>Natural Language Autoencoders and Sparse Autoencoders give us an early glimpse into this hidden layer of perception. They provide a new diagnostic instrument for understanding how models internally organize brands, products, and competitive landscapes.<\/p>\n\n\n\n<p>For the first time, we can begin to test a hypothesis that has long existed in SEO and semantic search:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>better structured knowledge changes not only what machines retrieve, but potentially how they internally frame the world itself.<\/p>\n<\/blockquote>\n\n\n\n<p>That makes AI Visibility measurable, testable, and eventually optimizable.<\/p>\n\n\n","protected":false},"excerpt":{"rendered":"<p>What does a language model internally associate with a brand before it generates an answer? 
Using Natural Language Autoencoders and Gemma 3, we explore how latent semantic representations shape AI Visibility and brand perception for Renault.<\/p>\n","protected":false},"author":6,"featured_media":30980,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"wl_entities_gutenberg":"","_wlpage_enable":"","footnotes":""},"categories":[4304,612,8,4285],"tags":[],"wl_entity_type":[30],"coauthors":[4226],"class_list":["post-30962","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-marketing","category-semantic-seo","category-seo","category-wordlift-lab","wl_entity_type-article"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v22.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How AI Models perceive brands - Natural Language Autoencoders<\/title>\n<meta name=\"description\" content=\"Explore how Sparse Autoencoders and Natural Language Autoencoders reveal the latent semantic perception of brands inside language models, and why this matters for AI Visibility, Knowledge Graphs, and structured data strategy.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/wordlift.io\/blog\/en\/how-ai-models-perceive-brands\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How AI Models perceive brands - Natural Language Autoencoders\" \/>\n<meta property=\"og:description\" content=\"Explore how Sparse Autoencoders and Natural Language Autoencoders reveal the latent semantic perception of brands inside language models, and why this matters for AI Visibility, Knowledge Graphs, and structured data strategy.\" \/>\n<meta property=\"og:url\" 