{"id":30455,"date":"2026-01-29T12:57:40","date_gmt":"2026-01-29T11:57:40","guid":{"rendered":"https:\/\/wordlift.io\/blog\/en\/?p=30455"},"modified":"2026-02-02T15:38:34","modified_gmt":"2026-02-02T14:38:34","slug":"visual-fan-out-in-ai-mode","status":"publish","type":"post","link":"https:\/\/wordlift.io\/blog\/en\/visual-fan-out-in-ai-mode\/","title":{"rendered":"Visual Fan-Out: Make Your Products and Destinations Discoverable in AI Mode"},"content":{"rendered":"\n<p>We have spent the last year talking about <strong><a href=\"https:\/\/wordlift.io\/blog\/en\/query-fan-out-ai-search\/\">query fan-out<\/a><\/strong> (and to be honest, we still are): the moment when Google\u2019s AI Mode takes a single question, decomposes it into sub-questions, runs many searches in parallel, then synthesizes an answer with links. Google describes this explicitly as issuing \u201cmultiple related searches concurrently across subtopics and multiple data sources.\u201d<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>But the inflection point is not only text, it&#8217;s multimodality.<\/strong> <\/p>\n<\/blockquote>\n\n\n\n<p><a href=\"https:\/\/wordlift.io\/optimizing-ai-mode-guide\/\">Google AI Mode is now <strong>multimodal<\/strong><\/a>: you can snap a photo or upload an image, ask a question about what you see, and AI Mode responds with a rich answer and links. Under the hood, Google says it uses Lens to identify objects and then applies the same <strong>query fan-out technique<\/strong> to issue multiple queries about the image as a whole and the objects within it.<\/p>\n\n\n\n<p>That combination creates what we call <strong>Visual Fan-Out<\/strong>.<\/p>\n\n\n\n<p>It is the shift from \u201csearching for an image\u201d to \u201csearching through an image.\u201d A single picture becomes a branching tree of intents: objects, attributes, styles, and actions. 
If you care about <strong>eCommerce<\/strong>, <strong><a class=\"wl-entity-page-link\" title=\"Travel Industry\" href=\"https:\/\/wordlift.io\/blog\/en\/entity\/travel\/\" data-id=\"http:\/\/data.wordlift.io\/wl0216\/entity\/travel;http:\/\/rdf.freebase.com\/ns\/m.014dsx;http:\/\/dbpedia.org\/resource\/Travel;http:\/\/de.dbpedia.org\/resource\/Reise;http:\/\/pt.dbpedia.org\/resource\/Viagem;http:\/\/lt.dbpedia.org\/resource\/Kelion\u0117;http:\/\/hu.dbpedia.org\/resource\/Utaz\u00e1s;http:\/\/uk.dbpedia.org\/resource\/\u041f\u043e\u0434\u043e\u0440\u043e\u0436;http:\/\/en.dbpedia.org\/resource\/Travel;http:\/\/it.dbpedia.org\/resource\/Viaggio;http:\/\/es.dbpedia.org\/resource\/Viaje;http:\/\/et.dbpedia.org\/resource\/Reisimine;http:\/\/ro.dbpedia.org\/resource\/C\u0103l\u0103torie;http:\/\/nl.dbpedia.org\/resource\/Reis;http:\/\/no.dbpedia.org\/resource\/Reise;http:\/\/ru.dbpedia.org\/resource\/\u041f\u0443\u0442\u0435\u0448\u0435\u0441\u0442\u0432\u0438\u0435;http:\/\/bg.dbpedia.org\/resource\/\u041f\u044a\u0442\u0435\u0448\u0435\u0441\u0442\u0432\u0438\u0435;http:\/\/fr.dbpedia.org\/resource\/Voyage;http:\/\/sk.dbpedia.org\/resource\/Cestovanie;http:\/\/ca.dbpedia.org\/resource\/Viatge;http:\/\/sv.dbpedia.org\/resource\/Resa;http:\/\/cs.dbpedia.org\/resource\/Cestov\u00e1n\u00ed;http:\/\/pl.dbpedia.org\/resource\/Podr\u00f3\u017c;http:\/\/da.dbpedia.org\/resource\/Rejse;http:\/\/tr.dbpedia.org\/resource\/Seyahat\" >travel<\/a><\/strong>, and any experience where users decide with their eyes first, this is the new architecture of AI discovery. 
<\/p>\n\n\n\n<p>To make this concrete (and to make the invisible visible), I built a <strong>Visual Fan-Out Simulator<\/strong>: a prototype that takes an image, decomposes it into candidate intents, fans out parallel searches, and shows you the branching structure as a navigable tree.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>Try the Visual Fan-Out Simulator<\/strong><br>This post is easier to understand if you can see the branching structure live. Below I embedded my <strong>Visual Fan-Out Simulator<\/strong>, a prototype that turns a single image into an explorable tree of intents: it decomposes the scene into entities and attributes, fans out parallel searches, then grounds and prunes branches so you only see actionable paths.<\/p>\n<\/blockquote>\n\n\n\n<div style=\"width: 100%; height: 800px; border-radius: 20px; overflow: hidden; border: 1px solid #333;\">\n  <iframe \n    src=\"https:\/\/visual-fan-out-explorer-934556142424.us-west1.run.app\/\" \n    width=\"100%\" \n    height=\"100%\" \n    style=\"border: none;\"\n  ><\/iframe>\n<\/div>\n\n\n\n<div class=\"hero-btn-wrapper \">\n                      <a class=\"btn btn-bg-primary text-white\"\n  href=\"https:\/\/wordlift.io\/visual-fan-out-simulator\/\"  target=\"_self\"          >\n\n  \n  <p>\n          Turn Your Results Into Action\n        <\/p>\n\n  \n  <\/a>\n      <\/div>\n\n\n\n<p>Google has also introduced <strong>Agentic Vision capabilities in Gemini 3 Flash<\/strong>, which suggests we should expect broader adoption of multimodal experiences across the board.<\/p>\n\n\n\n<figure class=\"wp-block-embed aligncenter is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">Introducing Agentic Vision \u2014 a new frontier AI 
capability in Gemini 3 Flash that converts image understanding from a static act into an agentic process.<br><br>By combining visual reasoning with code execution, one of the first tools supported by Agentic Vision, the model grounds\u2026<\/p>&mdash; Google AI (@GoogleAI) <a href=\"https:\/\/twitter.com\/GoogleAI\/status\/2016267526330601720?ref_src=twsrc%5Etfw\">January 27, 2026<\/a><\/blockquote><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n<\/div><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">From keywords to scenes: what changed in AI Mode<\/h2>\n\n\n\n<p>Google\u2019s public documentation and <a href=\"https:\/\/blog.google\/products-and-platforms\/products\/search\/ai-mode-multimodal-search\/\">product posts<\/a> give us a reliable baseline:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AI Mode uses query fan-out<\/strong> to run multiple related searches concurrently and then synthesize a response.<\/li>\n\n\n\n<li><strong>AI Mode is now multimodal<\/strong> and can \u201cunderstand the entire scene\u201d in an image, including objects, materials, colors, shapes, and how objects relate to one another.<\/li>\n\n\n\n<li>After Gemini 3.0 Flash identifies objects, <strong>AI Mode issues multiple queries about the image and the objects within it<\/strong> using query fan-out.<\/li>\n<\/ul>\n\n\n\n<p>So the image is not treated as a single blob. 
It is treated as a <strong>scene<\/strong> with multiple candidate entry points.<\/p>\n\n\n\n<p>If you want a mental model, think of a scene as a graph:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Nodes: objects (chair), attributes (leather), styles (mid-century), contexts (living room), actions (buy, compare, book, navigate).<\/li>\n\n\n\n<li>Edges: relations (chair-in-room, lamp-next-to-chair), similarity (style-near-style), intent transitions (identify \u2192 compare \u2192 purchase).<\/li>\n<\/ul>\n\n\n\n<p>This is why Visual Fan-Out is fundamentally a <strong>graph problem<\/strong>, which is exactly where the <a href=\"https:\/\/wordlift.io\/blog\/en\/the-reasoning-web\/\">Reasoning Web<\/a> framing becomes useful.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video autoplay loop muted src=\"https:\/\/storage.googleapis.com\/gweb-uniblog-publish-prod\/original_videos\/Lens_PR_1.mp4#t=0.001\" playsinline><\/video><figcaption class=\"wp-element-caption\">Bringing multimodal search to AI Mode &#8211; From Google&#8217;s Blog<br><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Visual Fan-Out: the pipeline (decompose \u2192 branch \u2192 ground \u2192 synthesize)<\/h2>\n\n\n\n<p>\u201cVisual Fan-Out\u201d is a pattern that emerges when you combine multimodal understanding with fan-out retrieval. 
Google does not publish all internal implementation details, so <strong>treat the following as an engineering interpretation anchored in what Google <em>does<\/em> describe publicly<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1) Visual decomposition: turning pixels into candidate intents<\/h3>\n\n\n\n<p>Visual Fan-Out starts by extracting multiple \u201chandles\u201d from the scene:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>primary subject(s)<\/li>\n\n\n\n<li>secondary objects<\/li>\n\n\n\n<li>attributes (materials, colors, patterns)<\/li>\n\n\n\n<li>relationships (layout, composition)<\/li>\n\n\n\n<li>style cues (vibe, era, aesthetic)<\/li>\n<\/ul>\n\n\n\n<p>Google explicitly says AI Mode can understand objects and their <strong>materials, colors, shapes, arrangements<\/strong>, and that Lens <strong>identifies each object<\/strong>.<\/p>\n\n\n\n<p>This is not just \u201cgenerate a caption.\u201d It is \u201cgenerate a set of clickable hypotheses.\u201d<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2) Parallel branching: fan-out across objects, attributes, and styles<\/h3>\n\n\n\n<p>Once you have candidate intents, the system can branch:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Identify the product (exact match)<\/li>\n\n\n\n<li>Find similar products (approximate match)<\/li>\n\n\n\n<li>Interpret the style (aesthetic match)<\/li>\n\n\n\n<li>Provide supporting context (history, care, compatibility)<\/li>\n\n\n\n<li>Move to action (where to buy, price, availability, nearby stock)<\/li>\n<\/ul>\n\n\n\n<p>Google\u2019s AI Mode, in general, runs many queries concurrently across sources.<br>In the multimodal case, Google says it issues multiple queries about the image and objects within it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3) Grounding: pruning branches that do not map to reality<\/h3>\n\n\n\n<p>A visual system that hallucinates is useless for commerce.<\/p>\n\n\n\n<p>Google\u2019s shopping experience in AI Mode is explicitly backed by the 
<strong>Shopping Graph<\/strong>, which Google describes as having <strong>more than 50 billion product listings<\/strong>, with <strong>2 billion updated every hour<\/strong>.<\/p>\n\n\n\n<p>That matters because it enables a simple rule: <strong>only show branches that can be grounded to real inventory, real places, real entities.<\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4) Synthesis: answering with links, then inviting the next hop<\/h3>\n\n\n\n<p>Finally, AI Mode synthesizes a response and provides links for exploration.<br>Crucially, Google also notes that AI Mode queries are often longer and used for exploratory tasks like comparing products and planning trips.<\/p>\n\n\n\n<p>This is not a single answer. It is an interactive reasoning loop.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Why Visual Fan-Out is a big deal for eCommerce<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Discovery becomes multi-object, not single-product<\/h3>\n\n\n\n<p>In a visual-first journey, the user rarely wants \u201cthe thing.\u201d They want <strong>the set<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>the chair <em>and<\/em> the lamp<\/li>\n\n\n\n<li>the outfit <em>and<\/em> the shoes<\/li>\n\n\n\n<li>the backpack <em>and<\/em> the hiking poles<\/li>\n\n\n\n<li>the sunglasses <em>and<\/em> the helmet that matches the vibe<\/li>\n<\/ul>\n\n\n\n<p>Visual Fan-Out makes that natural: <strong>a scene decomposes into multiple purchasable nodes.<\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">\u201cVibe\u201d becomes queryable<\/h3>\n\n\n\n<p>A growing fraction of commerce intent is aesthetic. 
AI Mode can move from \u201cwhat is this object\u201d to \u201cwhat is this style\u201d because style cues are present in the image and can become branching intents.<\/p>\n\n\n\n<p>You can already see this direction in Google\u2019s own description of AI Mode shopping results: if you ask for visual inspiration, you get <strong>shoppable images<\/strong>; if you ask to compare, you get a structured comparison.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Grounding favors structured merchants<\/h3>\n\n\n\n<p>Here is the uncomfortable part:<\/p>\n\n\n\n<p>If your product exists only as a flat JPEG and a vague title, you have fewer \u201chooks\u201d when the system decomposes a scene. If your product is described with <strong>machine-readable attributes<\/strong> (material, color, pattern, size, GTIN, offers, availability), you become a better candidate for a branch that needs grounding.<\/p>\n\n\n\n<p>This is where <a class=\"wl-entity-page-link\" title=\"Schema &amp; Structured Data for WP &amp; AMP\" href=\"https:\/\/wordlift.io\/blog\/en\/entity\/schema-structured-data-for-wp-amp\/\" data-id=\"http:\/\/data.wordlift.io\/wl0216\/entity\/schema-structured-data-for-wp-amp-20577;https:\/\/wordpress.org\/plugins\/schema-and-structured-data-for-wp\/;https:\/\/structured-data-for-wp.com\/\" >structured data<\/a> stops being \u201crich results\u201d and becomes <strong>retrieval infrastructure<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Why Visual Fan-Out matters for travel (and every place-based experience)<\/h2>\n\n\n\n<p>Travel is inherently visual: rooms, views, landmarks, beaches, trailheads, museums, restaurants.<\/p>\n\n\n\n<p>Google explicitly positions AI Mode for complex tasks like <strong>planning a trip<\/strong>.<br>And Google has been extending visual search beyond static images into <strong>video questions<\/strong>, which is a natural accelerator for travel and local discovery.<\/p>\n\n\n\n<p>A single travel photo can fan out into:<\/p>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li>\u201cWhere is this?\u201d<\/li>\n\n\n\n<li>\u201cBest time to visit\u201d<\/li>\n\n\n\n<li>\u201cSimilar places\u201d<\/li>\n\n\n\n<li>\u201cHow do I get there from here?\u201d<\/li>\n\n\n\n<li>\u201cBook a hotel like this\u201d<\/li>\n\n\n\n<li>\u201cWhat is the hike difficulty?\u201d<\/li>\n\n\n\n<li>\u201cWhat are the rules, permits, safety constraints?\u201d<\/li>\n<\/ul>\n\n\n\n<p>In the Reasoning Web framing: the picture is not content; it is an entry point into a <strong>context graph<\/strong> that the agent can traverse.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">A quick note on the research trend: decomposition is becoming standard<\/h2>\n\n\n\n<p>Even outside Google, the entire multimodal field is moving toward <strong>region and slice-based perception<\/strong> because a single global view is not enough for real scenes.<\/p>\n\n\n\n<p>A few examples worth highlighting:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><a href=\"https:\/\/arxiv.org\/abs\/2408.00714\" target=\"_blank\" rel=\"noreferrer noopener\">SAM 2 (Segment Anything Model 2)<\/a><\/strong> pushes promptable segmentation for images and videos, with real-time streaming memory for video processing. This matters because segmentation is the cleanest bridge from pixels to \u201cthings you can reason about.\u201d<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/arxiv.org\/abs\/2403.11703\" target=\"_blank\" rel=\"noreferrer noopener\">LLaVA-UHD<\/a><\/strong> introduces an \u201cimage modularization strategy\u201d that divides native-resolution images into variable-sized slices, then compresses and organizes them with spatial structure for an LLM. 
This is basically decomposition as an input primitive.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/openaccess.thecvf.com\/content\/CVPR2025\/html\/Huang_HiRes-LLaVA_Restoring_Fragmentation_Input_in_High-Resolution_Large_Vision-Language_Models_CVPR_2025_paper.html\" target=\"_blank\" rel=\"noreferrer noopener\">HiRes-LLaVA (CVPR 2025)<\/a><\/strong> focuses on the downside of naive slicing (\u201ccontext fragmentation\u201d) and proposes mechanisms to preserve global-local coherence across slices. This is important because Visual Fan-Out must keep the whole scene coherent while exploring parts.<\/li>\n<\/ul>\n\n\n\n<p>The takeaway: whether you call it region-aware encoding, slicing, or segmentation, <strong>visual decomposition is becoming the default strategy<\/strong> for systems that must reason over real-world images.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Introducing my Visual Fan-Out Simulator<\/h2>\n\n\n\n<p>I built the simulator for one reason: the conversation about AI Mode often stays abstract. 
\u201cFan-out\u201d sounds like a metaphor until you can <em>see<\/em> the branching structure.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><a href=\"https:\/\/visual-fan-out-explorer-934556142424.us-west1.run.app\/\"><img decoding=\"async\" width=\"2574\" height=\"1368\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-3.png\" alt=\"\" class=\"wp-image-30462\" style=\"object-fit:cover\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-3.png 2574w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-3-300x159.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-3-1024x544.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-3-768x408.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-3-1536x816.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-3-2048x1088.png 2048w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2026\/01\/image-3-150x80.png 150w\" sizes=\"(max-width: 2574px) 100vw, 2574px\" \/><\/a><figcaption class=\"wp-element-caption\"><strong>WordLift <a href=\"https:\/\/visual-fan-out-explorer-934556142424.us-west1.run.app\/\" target=\"_blank\" rel=\"noreferrer noopener\">Visual Fan-Out Simulator<\/a>.<\/strong><\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">What the simulator does<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Decomposes an input image<\/strong> into candidate entities and intents (objects, attributes, style cues).<\/li>\n\n\n\n<li><strong>Builds a tree of next-step questions<\/strong> that a user is likely to ask, even if they did not type them.<\/li>\n\n\n\n<li><strong>Executes branches in parallel<\/strong> (where possible), so the experience feels like a mind map unfolding rather than a linear 
workflow.<\/li>\n\n\n\n<li><strong>Grounds each branch<\/strong> by attaching verified sources (product pages, guides, booking pages, references), then prunes dead ends.<\/li>\n\n\n\n<li><strong>Persists context across depth<\/strong>, so the original image remains the root memory even after multiple hops.<\/li>\n<\/ol>\n\n\n\n<p>If you want to describe it in one line:<\/p>\n\n\n\n<p><strong>It turns an image into an explorable context graph.<\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Why I call it a \u201csimulator\u201d<\/h3>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Because it is not trying to replicate Google\u2019s internal stack. It is simulating the <em>interaction pattern<\/em> that Google publicly describes: multimodal scene understanding plus query fan-out across multiple sources.<\/p>\n<\/blockquote>\n\n\n\n<h3 class=\"wp-block-heading\">Feature mapping (prototype \u2192 real-world pattern)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Visual Decomposition<\/strong> \u2192 object identification + scene understanding (Lens + multimodal model)<\/li>\n\n\n\n<li><strong>Parallel Branching<\/strong> \u2192 query fan-out issuing multiple related searches concurrently<\/li>\n\n\n\n<li><strong>Verified Matches<\/strong> \u2192 grounding against authoritative inventories and sources (Shopping Graph for commerce)<\/li>\n\n\n\n<li><strong>Tree Memory<\/strong> \u2192 persistent context across multi-step exploration (what users experience as \u201cI can keep refining\u201d)<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">What this means for SEO in the Reasoning Web<\/h2>\n\n\n\n<p>In classic SEO, we optimized documents to rank for queries.<\/p>\n\n\n\n<p>In AI Mode, especially with Visual Fan-Out, we are increasingly optimizing for <strong>paths<\/strong>:<\/p>\n\n\n\n<p><strong>image \u2192 entity \u2192 attributes \u2192 constraints \u2192 action<\/strong><\/p>\n\n\n\n<p>Your job is to make 
sure that when the system decomposes a scene, it can reliably attach your content (and your products) to the right node in the graph.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Practical checklist for eCommerce and travel teams<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Make entities explicit<\/strong><br>Product, brand, place, lodging, attraction, itinerary item. Use Schema.org markup that matches what users see.<\/li>\n\n\n\n<li><strong>Enrich attributes that matter visually<\/strong><br>Materials, color, pattern, style, size, compatibility, seasonality. These are the hooks Visual Fan-Out branches on.<\/li>\n\n\n\n<li><strong>Connect images to entities, not just pages<\/strong><br>Treat images as first-class objects (ImageObject) and link them to the entity they represent.<\/li>\n\n\n\n<li><strong>Ensure \u201cactionability\u201d is machine-readable<\/strong><br>Offers, availability, price, inventory, return policy for commerce. Booking details, location, opening hours, policies for travel.<\/li>\n\n\n\n<li><strong>Build internal graph cohesion<\/strong><br>Strong internal linking and consistent identifiers help the agent traverse your site like a graph, not a list of URLs.<\/li>\n<\/ol>\n\n\n\n<p>Google\u2019s own guidance still says there are no special optimizations required beyond good SEO fundamentals, but it also explicitly calls out structured data alignment and high-quality images as important.<\/p>\n\n\n\n<p>The difference today is <em>why<\/em> those fundamentals matter: they are no longer only for ranking, they are for <strong>being selectable during decomposition and grounding<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Closing: Visual Fan-Out changes the unit of search<\/h2>\n\n\n\n<p><a class=\"wl-entity-page-link\"  href=\"https:\/\/wordlift.io\/blog\/en\/entity\/google-lens\/\" data-id=\"http:\/\/data.wordlift.io\/wl0216\/entity\/google-lens;http:\/\/www.wikidata.org\/entity\/Q30309023\" >Google Lens<\/a> alone processes 
visual searches at massive scale; public reporting cites tens of billions per month.<br>Add AI Mode\u2019s multimodal fan-out, and the unit of search is no longer the query string.<\/p>\n\n\n\n<p>It is the <strong>scene<\/strong>.<\/p>\n\n\n\n<p><strong>Visual Fan-Out is how agents turn a scene into a plan:<\/strong> decompose, branch, verify, synthesize, then invite the next hop.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>And that is the <strong>Reasoning Web<\/strong> in practice: not a bigger prompt, but an explorable context graph where meaning is navigated, not retrieved. If you are building for eCommerce, travel, or any visual-first journey, the question is simple:<\/p>\n\n\n\n<p><strong><em>When AI Mode decomposes your customer\u2019s world, will your products and pages be the obvious nodes to pick?<\/em><\/strong><\/p>\n<\/blockquote>\n\n\n<div class=\"hero-btn-wrapper \">\n                      <a class=\"btn btn-bg-primary text-white\"\n  href=\"https:\/\/wordlift.io\/visual-fan-out-simulator\/\"  target=\"_self\"          >\n\n  \n  <p>\n          Tried the Fan-Out Simulator? 
See what\u2019s next\n        <\/p>\n\n  \n  <\/a>\n      <\/div>\n\n\n\n<h5 class=\"wp-block-heading\">References<\/h5>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google Search, \u201cExpanding AI Overviews and introducing AI Mode\u201d <a href=\"https:\/\/blog.google\/products-and-platforms\/products\/search\/ai-mode-search\/\">https:\/\/blog.google\/products-and-platforms\/products\/search\/ai-mode-search\/<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google Search, \u201cBringing multimodal search to AI Mode\u201d <a href=\"https:\/\/blog.google\/products-and-platforms\/products\/search\/ai-mode-multimodal-search\/\">https:\/\/blog.google\/products-and-platforms\/products\/search\/ai-mode-multimodal-search\/<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google Search Help, \u201cGet AI-powered responses with AI Mode in Google Search\u201d <a href=\"https:\/\/support.google.com\/websearch\/answer\/16011537\">https:\/\/support.google.com\/websearch\/answer\/16011537<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google Search PDF, \u201cAI Overviews and AI Mode in Search\u201d <a href=\"https:\/\/search.google\/pdf\/google-about-AI-overviews-AI-Mode.pdf\">https:\/\/search.google\/pdf\/google-about-AI-overviews-AI-Mode.pdf<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google Shopping, \u201cShopping on Google: AI Mode and virtual try-on updates\u201d <a href=\"https:\/\/blog.google\/products-and-platforms\/products\/shopping\/google-shopping-ai-mode-virtual-try-on-update\/\">https:\/\/blog.google\/products-and-platforms\/products\/shopping\/google-shopping-ai-mode-virtual-try-on-update\/<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google Shopping, \u201cLet AI do the hard parts of your holiday shopping\u201d <a 
href=\"https:\/\/blog.google\/products-and-platforms\/products\/shopping\/agentic-checkout-holiday-ai-shopping\/\">https:\/\/blog.google\/products-and-platforms\/products\/shopping\/agentic-checkout-holiday-ai-shopping\/<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Vertex AI Docs, \u201cGrounding overview\u201d <a href=\"https:\/\/docs.cloud.google.com\/vertex-ai\/generative-ai\/docs\/grounding\/overview\">https:\/\/docs.cloud.google.com\/vertex-ai\/generative-ai\/docs\/grounding\/overview<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Vertex AI Docs, \u201cGrounding with Google Search\u201d <a href=\"https:\/\/docs.cloud.google.com\/vertex-ai\/generative-ai\/docs\/grounding\/grounding-with-google-search\">https:\/\/docs.cloud.google.com\/vertex-ai\/generative-ai\/docs\/grounding\/grounding-with-google-search<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Vertex AI Docs, \u201cVertex AI RAG Engine overview\u201d <a href=\"https:\/\/docs.cloud.google.com\/vertex-ai\/generative-ai\/docs\/rag-engine\/rag-overview\">https:\/\/docs.cloud.google.com\/vertex-ai\/generative-ai\/docs\/rag-engine\/rag-overview<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google DeepMind, \u201cIntroducing Agentic Vision in Gemini 3 Flash\u201d <a href=\"https:\/\/blog.google\/innovation-and-ai\/technology\/developers-tools\/agentic-vision-gemini-3-flash\/\">https:\/\/blog.google\/innovation-and-ai\/technology\/developers-tools\/agentic-vision-gemini-3-flash\/<\/a>;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>WordLift, \u201cQuery Fan-Out: A Data-Driven Approach to AI Search Visibility\u201d <a 
href=\"https:\/\/wordlift.io\/blog\/en\/query-fan-out-ai-search\/?utm_source=chatgpt.com\">https:\/\/wordlift.io\/blog\/en\/query-fan-out-ai-search\/<\/a>.<\/li>\n<\/ul>\n\n\n","protected":false},"excerpt":{"rendered":"<p>Visual Fan-Out is the shift from \u201csearching for an image\u201d to \u201csearching through an image.\u201d In Google AI Mode, an image is treated as a scene: objects and attributes are detected, the intent is decomposed, and multiple related queries run in parallel to produce grounded results.<\/p>\n","protected":false},"author":6,"featured_media":30490,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"wl_entities_gutenberg":"","_wlpage_enable":"","footnotes":""},"categories":[8],"tags":[],"wl_entity_type":[30],"coauthors":[4226],"class_list":["post-30455","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-seo","wl_entity_type-article"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v22.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Visual Fan-Out: Make Your Products Discoverable in AI Mode - WordLift Blog<\/title>\n<meta name=\"description\" content=\"Visual Fan-Out shows how Google AI Mode changes image search. Learn how to make your products and destinations discoverable \u2014 try the simulator.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/wordlift.io\/blog\/en\/visual-fan-out-in-ai-mode\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Visual Fan-Out: Make Your Products Discoverable in AI Mode\" \/>\n<meta property=\"og:description\" content=\"Visual Fan-Out shows how Google AI Mode changes image search. 
Written by Andrea Volpini · Estimated reading time: 9 minutes