RedLink GmbH

RedLink is at the forefront of semantic technology, offering advanced solutions that empower businesses to harness the full potential of their data. With a deep understanding of the semantic web, RedLink provides tools and services that enable organizations to create, link, and manage their information with unparalleled precision and ease.


At the core of RedLink’s mission is a steadfast commitment to innovation. The company continually explores new frontiers in technology to deliver state-of-the-art solutions that address the complex challenges of today’s digital landscape. RedLink’s dedication to pushing the boundaries of what’s possible ensures its clients are always equipped with cutting-edge tools to stay ahead of the curve.


A testament to their innovative spirit is RedLink’s strategic partnership with WordLift. This collaboration combines RedLink’s semantic expertise with WordLift’s AI-driven SEO capabilities, offering a comprehensive digital solution that enhances online visibility and user engagement. By integrating WordLift’s technology, RedLink helps clients optimize their content for search engines and human readers, driving success in the dynamic online ecosystem.


Together, RedLink and WordLift are shaping the future of digital communication, making it more intelligent, connected, and influential. Whether through semantic data management or SEO optimization, RedLink is dedicated to providing solutions that meet and exceed their clients’ expectations in an ever-evolving digital world.

The Power Of Knowledge Graphs In Modern SEO: Helping The Editorial Industry Optimize Content For The New Search [KGC 23 – Presentation]

With the rapid rise of generative AI, the landscape for digital publishers and SEO professionals has undergone a profound transformation. Our work, goals, and objectives have evolved in response to these changes. 

Search engines have become stricter in defining quality criteria as online content grows. Therefore, it is crucial for digital publishers, now more than ever, to establish authority and maintain a consistent level of quality.

Look at Beatrice Gamba’s presentation at the Knowledge Graph Conference 2023 – The Power of Knowledge Graphs in Modern SEO.

Facing the Content Tsunami of AI-Written Content

The role of SEO experts has adapted to state-of-the-art technologies to help clients achieve online visibility. 

Until now, my job as an SEO expert has been to enable digital businesses to achieve good visibility and build an excellent online reputation for their websites.

Research by Ahrefs on 920 million websites found that 91% of online content gets no traffic from Google. We expect this number to grow with the rise of generative AI tools, especially for low-quality content that human editors haven’t reviewed.

But what about optimization for AI…

To answer this, I asked for guidance directly from Bing Chat.

I attended SMX Munich in March, and during his keynote, Fabrice Canel gave some advice to SEOs about optimizing for the new search experience.

The essential advice was to focus on quality content and semantic markup.

I went on to ask what semantic markup is:

And it was no surprise to find a mention of WordLift, since we have embraced this approach from the start: delivering high-quality content enriched with semantic markup to meet user needs.

Securing the Value of Digital Content in the Era of AI

One of the critical challenges for content producers in today’s fast-paced industry is to demonstrate expertise and credibility. In this context, knowledge graphs play a vital role in securing and validating the authenticity of online content.

With the abundance of online information, users (and search engines) have become more cautious about the credibility and accuracy of the content they consume and recommend. Structured data can help build trust with the target audience by demonstrating expertise in a particular field.

While AI-generated content can be valuable and efficient, it lacks the human touch and contextual understanding that come with genuine expertise. The information in a knowledge graph can help showcase the experience and in-depth knowledge that differentiate humanly-crafted content from AI-generated content.

Demonstrating expertise and credibility is crucial for building trust, standing out, establishing authority, engaging an audience, and improving visibility. By addressing the authenticity of content with structured data, it’s possible to provide hard-to-replicate value while strengthening your online presence.

The role of WordLift in this picture is to leverage knowledge graphs to generate and validate content at scale; in this way, we have helped clients like Ippen Digital, one of Germany’s largest online publishers, to enhance their content with 9,000 enriched tags and 29,000 semantic connections (triples) published to the web.

With this process, millions of articles have been marked up and connected to the knowledge graph, generating linked mini-graphs with intelligent features. 

The test worked exceptionally well because it provided context where there previously was none. We did so by adding information to each piece of content, delivering a more meaningful and engaging experience to users.

Providing More Information With Structured Data

Let’s dive into the main Schema.org properties and markups that can help assess authority and build trust online for digital publishers.

Targeting Local Queries with Structured Data

Structured data plays a crucial role in targeting local queries and meeting the informational needs of specific regions. For example, adding Place markup to news articles about a particular location ensures comprehensive coverage and relevancy for local searches. 
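As a hedged illustration (the headline, place, and values below are invented for the example), a news article about a specific location could carry Place information via the contentLocation property:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "New tram line opens in the city center",
  "contentLocation": {
    "@type": "Place",
    "name": "Salzburg",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "Salzburg",
      "addressCountry": "AT"
    }
  }
}
</script>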

Citing beloved SEO expert Bill Slawski:

“If you use structured data, you’re presenting more precise information to search engines, using data in formats that they expect people to use the search for.”

The Era of Person Schema and Demonstrating Expertise

The emergence of the Person Schema Type aligns with Google’s framework for assessing the quality of content on websites, known as E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness). 

To prove E-E-A-T, we can rely on Schema markup for authors, incorporating properties such as sameAs, citations, awards, and credentials. 

Embedding all the information about our experience in a specific field will help search engines and AI ecosystems recognize and credit our hands-on expertise in the industry.

An example of the markup of my Author page.
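A minimal sketch of what such markup could look like, assuming an author profile with social links and credentials (the profile URLs and award below are placeholders, not the real ones):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Beatrice Gamba",
  "jobTitle": "SEO Expert",
  "worksFor": {
    "@type": "Organization",
    "name": "WordLift"
  },
  "knowsAbout": ["SEO", "Knowledge graphs", "Structured data"],
  "sameAs": [
    "https://www.linkedin.com/in/placeholder-profile",
    "https://twitter.com/placeholder"
  ],
  "award": "Placeholder industry award"
}
</script>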

The more Google recognizes an author as an expert on a topic, the greater the authority of their content online.

Several other Schema.org properties also nurture the E-E-A-T context.

Exceeding Search Engines’ Expectations

The NewsMediaOrganization markup is an industry-specific type that provides supplementary background information about a news publisher; in this case, the most important properties to compile thoroughly are the founder and address.

The founder proves that there is a physical person behind the business, while the address demonstrates that the organization physically exists, adding to its authority.
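A hedged sketch of such a markup, with every value invented for illustration:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsMediaOrganization",
  "name": "Example Daily News",
  "url": "https://www.example-daily-news.com",
  "founder": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "Berlin",
    "postalCode": "10115",
    "addressCountry": "DE"
  }
}
</script>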

Is this work of data sourcing for Google only?

The answer is NO.

It’s for everyone.
It’s for us as a digital community of publishers and users, and it aims at:

  • A better and safer AI to which everyone can contribute
  • Spreading good information and avoiding fake news
  • Providing more value to editorial teams

Structuring Hands-On Experience Semantically

Incorporating structured data in the form of metadata has several benefits in terms of SEO for news publishers:

  • Traffic boost
  • Comprehensive coverage of topics
  • Content recognizable as a reliable source for AI ecosystems. 

At WordLift, our strategy includes search demand analysis, planning, validation, and publishing content enriched with structured data. By monitoring the performance, we continuously optimize and improve the content, proving to our customers that a published knowledge graph drives the ROI of their digital marketing campaigns.

When updating existing articles to demonstrate E-E-A-T, we focus mainly on the following schema properties within the Article markup (a combined sketch follows the list):

  • author – information about the person who wrote the article; this is a Person type markup
  • datePublished – Article property that states when the article was published and how old it is
  • dateModified – Article property that proves content is fresh and updated periodically
  • inLanguage – Article property that gives information about the language of the content
  • publisher – Organization markup that leads to information about the business making the content available online
  • about & mentions – Article properties that provide contextual information about the topics touched on inside the article
  • Questions & Answers – FAQPage markup nested inside the Article markup
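Here is a minimal combined sketch of these properties inside one Article markup (all values are illustrative; the nested FAQ block is omitted for brevity):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article headline",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "datePublished": "2023-05-10",
  "dateModified": "2023-06-01",
  "inLanguage": "en",
  "publisher": {
    "@type": "Organization",
    "name": "Example Publisher"
  },
  "about": {
    "@type": "Thing",
    "name": "Search engine optimization",
    "sameAs": "https://www.wikidata.org/wiki/Q180711"
  },
  "mentions": {
    "@type": "Thing",
    "name": "Knowledge graph",
    "sameAs": "https://en.wikipedia.org/wiki/Knowledge_graph"
  }
}
</script>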

Conclusion

In the rapidly evolving digital landscape, optimizing content for AI and differentiating it from AI-generated content is vital to improving SEO for news publishers.

Knowledge graphs are beneficial because they speak to search engines and AI ecosystems in the same language: every piece of information, in the form of metadata, that we include in our content demonstrates expertise, helps achieve better rankings, improves ROI, and establishes digital publishers as reliable sources of information.

Structuring data benefits organizations and contributes to a better and safer AI ecosystem where good information is valued and utilized effectively.

Look at Beatrice Gamba’s presentation at the Knowledge Graph Conference 2023 – The Power of Knowledge Graphs in Modern SEO 

FAQPage, QAPage, AskAction, ReplyAction Schema Markups: Which One Should I Use And When?

Table of contents:

  1. FAQPage schema markup
  2. QAPage schema markup
  3. AskAction schema markup
  4. ReplyAction schema markup
  5. On Using Action-based Schema Markups

The line charts in Google Search Console are not just lines or simple SEO reporting metrics. They reflect actual searches performed by real people: users looking for answers to their problems or gathering information before buying a product or service online. This happens directly on your website or in Google Search, where your content shows up as a possible solution to their problems. Therefore, it’s important that you mark up your pages with the most appropriate schema markups from Schema.org’s lists, so that you can capture the audience’s interest as quickly as possible and hope that they convert.

At WordLift, we have participated in more than INSERT _NUMBER schema markup experiments and implementations, and we know from experience that website owners like you sometimes struggle to find the right schema markup. Sometimes this is due to a lack of knowledge about good schema markup practices, sometimes it’s due to insufficient explanations on the Schema.org website, and sometimes it’s because there are too many options to choose from, so they are not sure which schema markup types are best for their site.

When marking up a web page that contains questions and answers, the Frequently Asked Questions (FAQ) schema markup is the one to reach for, but the QAPage, AskAction, and ReplyAction schema markups can also be useful. So when should you use each schema, and for which SEO scenario? We will explain it to you. Let’s get started!

FAQPage Schema Markup

If your web page contains questions and answers on a particular topic, with each question having only one answer, or if you have a product category page with a Q&A section that does not allow users to submit multiple answers, then you are dealing with an FAQ page. Pages that qualify for the FAQPage schema markup and use it correctly can perform well on Google, even though they may look different across business cases. It may seem obvious, but we still remind you that all content (both questions and answers) must be present on the web page.

Here’s one example of a legit FAQPage schema markup:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Who's the founder of Google?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Sergey Brin and Larry Page"
      }
    },
    {
      "@type": "Question",
      "name": "What's the parent company of Google?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Alphabet"
      }
    }
  ]
}
</script>

As you know, properly marked-up FAQs give you access to broader Google visibility, such as the FAQ and People Also Ask sections. When you use WordLift, you add FAQPage markup without having to manually add code to your site. In addition, you can use the WordLift Looker Studio Connector to measure the performance of your FAQs on specific web pages and optimize them to rank better on Google and bring more organic traffic to your website.

You can use the FAQ schema to build conversational AI systems, such as chatbots. As the WordLift team itself has tested, you can train a model to answer users’ questions using data from the knowledge graph and the questions and answers you have included in your content. With a chatbot backed by a knowledge graph plus FAQs, you can provide a better experience: when users get the information they are looking for, they will not have to go back to Google.

QAPage Schema Markup

If your web page contains questions followed by one or more answers, such as a forum page or a product support page where users can submit answers to a single question, then you are dealing with a page eligible for the QAPage schema markup. Of course, all the required content must be publicly available on this page as well; the schema markup should only identify sections that are already visible to the online audience.

Example QAPage schema markup:

<script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "QAPage",
      "mainEntity": {
        "@type": "Question",
        "name": "FAQPage, QAPage, AskAction, ReplyAction schema markup - which one should I use and when?",
        "text": "",
        "answerCount": 3,
        "upvoteCount": 26,
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "FAQPage for single answers and single questions, QAPage for multiple answers for a given question and AskAction + ReplyAction on QA-based webpages.",
          "upvoteCount": 1337,
          "url": "https://example.com/question1#acceptedAnswer"
        }
      }
    }
 </script>

AskAction Schema Markup

Over the last decade, our main focus has been on entities, things like places, people, companies, etc., and on describing them as accurately as possible using schema markup or advanced knowledge graph techniques.

However, the web is not static, and entities are more complex than just information about them: they also interact with each other, forming relationships and performing actions, e.g. [“Dan Brickley works at Google helping the SEO community”]. For this reason, Jason Douglas, Sam Goto (Google), Steve Macbeth, Jason Johnson (Yahoo), Alexander Shubin (Yandex), and Peter Mika developed the Actions vocabulary, which allows websites to describe what kinds of actions are possible on them.

AskAction is one of them. If users have the ability to ask questions on a particular web page, you should describe this activity with the AskAction schema markup. It usually fits well with ReplyAction. Some notable properties that you can use to construct your AskAction markup are:

  • about
  • inLanguage
  • potentialAction
  • question
  • recipient

An example of AskAction implementation as a JSON-LD markup:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "AskAction",
  "agent": {
    "@type": "Person",
    "name": "Emilia Gjorgjevska"
  },
  "recipient": {
    "@type": "Google",
    "name": "Dan Brickley"
  },
  "question": {
    "@type": "Question",
    "text": "What are the most advanced schema markups for SEO?"
  }
}
</script>

ReplyAction Schema Markup

This schema markup goes hand in hand with the AskAction schema markup and is used when someone gives an answer. By and large, we can use the same schema attributes as in the AskAction schema markup. We recommend using it whenever all the basic schema markup criteria are met.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ReplyAction",
  "agent": {
    "@type": "Person",
    "name": "Dan Brickley"
  },
  "recipient": {
    "@type": "Person",
    "name": "Emilia Gjorgjevska"
  },
  "resultComment": {
    "@type": "Answer",
    "parentItem": {
      "@type": "Question",
      "text":  "What are the most advanced schema markups for SEO?"
    },
    "text": "AskAction, ReplyAction, FAQPage and QAPage"
  }
}
</script>

On Using Action-based Schema Markups

Action schema markups help disambiguate intentions, so we advise you to play around with them and use them on your websites in an A/B test. We hope we have helped you understand the right use cases for implementing the FAQPage, QAPage, AskAction, and ReplyAction schema markups to gain more visibility on search engine results pages. Are you ready to learn more about advanced schema markups with us?

There is nothing more powerful than utilizing what you have on your side in the first place. Do you want to learn how you can bring your business to the next level? Book a demo.

Semantic Markup In SEO: HTML5, Structured Data And Beyond With AI Power

Table of contents:

  1. What is Semantic Markup?
  2. Semantic markup HTML
  3. Microdata for HTML5
  4. From Microdata to JSON-LD
  5. The power of Knowledge Graphs
  6. Entity Markup with the power of AI
  7. Why is semantic markup so important for SEO?
  8. How to Take Semantic Markup a Step Further

What is Semantic Markup?

With the advent of the concept of Web 3.0, it is becoming increasingly important to create web pages that carry meaning beyond the code that composes them, meaning that search engines can understand. Semantic markup serves this purpose: semantic markup in modern SEO is the process of adding semantic value to the content of a web page. There are two groups of semantic markup: semantic HTML tags and structured data.

Semantic markup HTML

Let’s start with HTML, a markup language used to create the hierarchical structure of a web document. Many of the HTML tags we know carry meaning naturally. For example, the <ul> tag and its nested <li> tags compose the form and content of a list; the <p> tag represents the grammatical concept of a paragraph; and the <h1>, <h2>, <h3>, and <h4> tags title a text according to a hierarchical order of importance.

Semantic HTML elements are one of the first approaches to structuring a web page’s information. Semantic HTML markup differs from tags used solely for layout purposes and styled through CSS (such as <div> and <span>) because it adds new elements that describe areas of a web page. Thus was born the latest version of HTML, HTML5, which introduces several tags with a semantic purpose, such as <section>, <article>, <aside>, <header>, <footer>, etc., each with a meaning and a markup role. To understand the difference between a generic <div> and a <section>, the W3C, the World Wide Web Consortium founded by Tim Berners-Lee, explains that:

“The section element is not a generic container element. When an element is needed for styling purposes or as a convenience for scripting, authors are encouraged to use the div element instead. (…) The section element represents a generic section of a document or application. A section, in this context, is a thematic grouping of content, typically with a heading.”

World Wide Web Consortium

Let’s see the most common semantic HTML tags and their meaning.
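As an illustrative sketch (the content is invented), a page built with these semantic tags could look like this:

<header>
  <h1>Example Magazine</h1>
  <nav><a href="/articles">Articles</a></nav>
</header>
<main>
  <article>
    <h2>What is semantic markup?</h2>
    <section>
      <h3>Semantic HTML tags</h3>
      <p>Tags such as article and section describe the role of their content.</p>
    </section>
    <aside>
      <p>Related reading on structured data.</p>
    </aside>
  </article>
</main>
<footer>
  <p>© Example Magazine</p>
</footer>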

Microdata for HTML5

Defining the role of some areas of a web page is not enough to structure information so that it is understandable to a search engine. Instead, it is necessary to use markup that can convey the same understanding a human reader has. To achieve this, HTML5 introduced the ability to add Microdata: descriptive attributes that can be added to regular HTML tags.

Microdata is organized into groups of items declared via the itemscope attribute assigned to an HTML tag. The itemtype attribute can specify a data vocabulary, which is a dictionary that defines terms for types of things, properties, and relationships. To add a property to an item, there is the itemprop attribute. Let’s see an example of using the Schema.org vocabulary to structure the information of a breadcrumb:

<ol itemscope itemtype="https://schema.org/BreadcrumbList">
  <li itemprop="itemListElement" itemscope
      itemtype="https://schema.org/ListItem">
    <a itemprop="item" href="https://example.com/shoes">
    <span itemprop="name"> Shoes </span> </a>
    <meta itemprop="position" content="1"/>
  </li>
  <li itemprop="itemListElement" itemscope itemtype="https://schema.org/ListItem ">
    <a itemprop="item" href ="https://example.com/shoes/sneaker ">
    <span itemprop="name"> Sneaker </span> </a>
    <meta itemprop="position" content="2"/>
  </li>
</ol>

Let’s see an example using Schema.org vocabulary to describe a company.

<div itemscope itemtype="https://schema.org/Organization">
<span itemprop="name"> WordLift </span>
  <img src="logo.jpg" itemprop="logo" alt="WordLift's logo" />
 WordLift's home page:
  <a href="https://wordlift.io" itemprop="url">wordlift.io </a>
</div>

Microdata is a markup system for HTML that relies on a data vocabulary containing the rules for an object’s properties. In the last example, we saw semantic HTML markup of the type “organization”, whose properties are organized hierarchically and expressed as key-value pairs understandable by a search engine:

  • Type = Organization;
    • Name = WordLift;
    • Logo = https://wordlift.io/logo.jpg
    • Url = https://wordlift.io

From Microdata to JSON-LD

Although semantic markup with Microdata represents a clear step forward in structuring information, it has several disadvantages for code maintenance and updates: attribute-based markup is a weak construct because of its dependence on an HTML tag. The evolution of semantic markup is the JSON-LD annotation (still recommended by Google as a best practice for structuring data). JSON-LD can be described as a headless system that works independently of the DOM’s <body>, because it lives in standalone <script> tags inserted in the <head> of the document. Microdata can still be mixed with JSON-LD annotation.

NOTE: According to Google’s guidelines, JSON-LD markup always needs to be backed by the information visible within the content of the page.
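For comparison, the same Organization described earlier with Microdata can be expressed as a standalone JSON-LD block placed in the <head> of the document:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "WordLift",
  "logo": "https://wordlift.io/logo.jpg",
  "url": "https://wordlift.io"
}
</script>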

The Power Of Knowledge Graphs

In recent years, search engines have developed an accurate archive of related concepts called knowledge graphs. These huge relational archives are organized with a graph structure of entities (nodes) and relationships between entities (arcs). The nodes in a knowledge graph represent real-world entities such as people, companies, cities, events, etc., and the arcs represent the relationships between the nodes, i.e., the relationships between different entities.

The most important knowledge graph for SEO is Google’s Knowledge Graph. Thanks to the entities contained within its Knowledge Graph, Google can answer factual questions such as “When was Tom Cruise born?”, “How long is the Great Wall of China?”, “When was America discovered?” or “How old is the CEO of Meta?”.

The Knowledge Graph information comes from various sources collecting factual data, public databases, and information in Google Knowledge Panels. Another example of a semantic network is WordNet, one of the most popular lexical databases for the English language, which is often used for NLP frameworks. Other important public knowledge graphs are:

  • DBpedia takes advantage of the Wikipedia infobox structure to create a large data set, often used to improve the performance of NLP and search applications;
  • GeoNames, a database with over 25 million geographical entities: states, regions, cities, municipalities, and places of interest such as villas, monuments, etc.

In summary, knowledge graphs help us organize content into interrelated concepts and map objects to known entities through a shared vocabulary. Search engines can consult these entities to better understand our expertise and mastery of content, and thus provide more relevant search results to users.

Entity Markup with the power of AI

We have analyzed how it is possible to structure the information of a web document using languages that machines can understand. This task has always involved the manual injection of JSON-LD, an approach that is not scalable and requires constant human intervention. Thanks to new machine learning and artificial intelligence technologies, it is now possible to automate a large part of the semantic markup process.

How To Implement Semantic Markup Using WordLift

WordLift is an AI-powered SEO tool to automate the semantic markup process. Its technology captures, marks, and integrates structured data into any website by reading and analyzing the content of your page.

WordLift organizes entities in 4 categories: Who, What, When and Where, and with its AI builds a customized knowledge graph for businesses, with entities marked by different topics, categories, and regions. You can accept the suggested entities, adding contextual info for the user and efficiently selecting internal links for your content.

WordLift is the perfect assistant for semantic publishing because it also helps to enrich your structured data with specific properties and valuable content. For instance, events can be displayed chronologically by adding the Timeline widget, locations in your article can quickly be mapped by adding the Geomap widget, and with the Navigator feature you can display relevant articles to your readers, obtaining better user engagement. These are only a few of the features that come built-in with WordLift.

By using WordLift, search engines can understand the structure of your content faster and more accurately and avoid ambiguities. This way you’ll get more organic traffic and user activity on your website, leading to more conversions and leaving your competition behind.

Add structured data to your content without needing technical expertise and without much effort. Try WordLift and start boosting your SEO today! Book a demo!

Why Is Semantic Markup So Important For SEO?

Semantic markup makes it possible to structure the information on the page and make it understandable for search engines. This activity is crucial for SEO for three main reasons:

  • Improvement of E-A-T (Expertise, Authoritativeness and Trustworthiness) thanks to the sending of relational signals (the sameAs, memberOf, and isPartOf properties, etc.)
  • Improvement of the appearance of search results thanks to rich snippets in the SERP that improve CTR performance (FAQ, HowTo, Reviews, etc.)
  • Consolidation of the data within the Google Knowledge Graph (also called the data reconciliation process).

How to Take Semantic Markup a Step Further

So far, we have seen the process of structuring web page data with a self-referencing approach. In a competitive system like SERPs, it is necessary to change the perspective from the inside to the outside and understand how to improve the structuring of the data compared to the competitors that are dealing with our topic.

Questions arise, such as “What entities have our competitors mentioned?”, “Are there entities that expand the horizon of my content?”, and “What relationships have I not considered?”. We have entered the age of semantic publishing, which is now a fact. In a webinar with the SEO expert Max Geraci, he used his application Entities’ Swiss Knife to perform an example of entity gap analysis. Let’s take a look at how it’s done in a few steps.

First, connect to the web application and fill in the URL field with the target web page you wish to analyze.

Then select the following option for data extraction:

  1. Check the first option to extract entities only from the relevant HTML tags;
  2. Check the second option to process a content analysis with SpaCy, an open-source software library for advanced natural language processing;
  3. Check the third option to extract categories and topics according to the Media Topics taxonomies developed by the IPTC;
  4. Check the last option to scrape all the entity’s descriptions from Wikipedia.

Now you should see an interesting output: all the entities mentioned within the content, the categories and topics covered, and the content audit with SpaCy. All this information can be used in your content to drive more value and to close the entity gap between your content and your competitors’ content. Below is an example based on WordLift’s blog post on schema markup for Local SEO.

(Screenshots: entities and aggregated data; top entities by frequency; categories and topics gathered with the TextRazor API.)

How To Properly Use The About And Mentions Properties

We already saw how to extract entities from a competitor’s page, but how do we use those entities to enrich our content? Let’s spend a few words on two properties crucial for an SEO entity gap strategy: the about and mentions properties.

We use the about property correctly when we refer to the one or two entities that constitute the main topic of our content. The entities included in the about property should appear in the relevant semantic HTML tags, or at least in the <h1> (the main header) and in meta tags such as the title tag.

The mentions property is also crucial because it describes all the subtopics we touch on in our content. The ideal number of mentioned entities is 3-5 per article, and it strictly depends on the article’s length; for long articles it is not unusual to see five mentioned entities in the property value. The mentions property must be used only when we explicitly refer to the entity in a significant portion of our content.
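As a minimal sketch, an article whose main topic is SEO and which touches on a few subtopics could be annotated roughly like this (the headline and the mentioned entities are illustrative):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Entity-based SEO for publishers",
  "about": {
    "@type": "Thing",
    "name": "Search engine optimization",
    "sameAs": "https://www.wikidata.org/wiki/Q180711"
  },
  "mentions": [
    {
      "@type": "Thing",
      "name": "Knowledge graph",
      "sameAs": "https://en.wikipedia.org/wiki/Knowledge_graph"
    },
    {
      "@type": "Thing",
      "name": "Schema.org",
      "sameAs": "https://en.wikipedia.org/wiki/Schema.org"
    }
  ]
}
</script>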

To learn more about semantic publishing, watch the webinar Entity-based SEO: Semantic Publishing and Entities Gap Analysis with Max Geraci, or check out his presentation here 👇

The Impact Of Semantic Annotation: Poem Analysis Case Study

How can we organize the taxonomy of a website to make it easier to find in search engines? What can we do with entities and what can we expect when we annotate content with them? This is the story of William Green, who founded Poem Analysis in 2016 with the goal of giving poetry a home online. William had noticed that there were not enough online resources about poetry and that people were still struggling to understand and appreciate it because they were not being helped to study it in depth.

The first important step was to create a well-organized website. But, as we know, a website without visibility is like an empty house, so the biggest challenge at that time was to gain more and more visibility on Google and the other search engines. While a website must be designed for the user, the content that fills it must also be understandable to search engines. In this way, Google can index it and the website can earn a ranking that brings it more visitors.

When William came across WordLift, he knew there was untapped potential here. He knew Schema.org and had already tried solutions that allowed him to annotate content and add structured data to the website, but he had not yet found the solution that could really make a difference. Let’s take a closer look at this SEO case study.

WordLift brought its innovation, both in approach and technology

“The ability to add entities on a scale like no other is something that every website owner should get excited about. Helping Google understand content better, and make the links, will only benefit the website, where schema is becoming a more dominant force for search engines, and into the future.

It was and is a pleasure to work with WordLift on cutting-edge SEO, particularly with such a quick-thinking and agile team. In doing so, we are able to test and create experiments that produce incredible results before most have even read new SEO schemas.”

William Green – Poem Analysis

The challenge

The goal of Poem Analysis was to achieve better ranking on Google and search engines and get more organic traffic to the website. The real challenge was to find a solution that would scale quickly and systematically given the large amount of content on the site. 

It was also about increasing relevancy, making sure that Google was able to capture the right queries. This seems obvious at first but, really, it is not when you look at each individual poem. Google might know who Allen Ginsberg is but might struggle to connect ‘Howl’ (one of Ginsberg’s poems) to him.

The solution

For PoemAnalysis.com we used a version of WordLift specifically created for publishers in the CafeMedia/AdThrive network. 

The solution proposed by the WordLift team started from the idea of using the pre-existing taxonomy to integrate and inject structured data through “match terms”. This means that you can enrich your site by using categories and tags without having to use WordLift’s content classification panel to add markup. This way, a tag or category is treated like a recognized entity in the content classification box.

For PoemAnalysis.com we created a custom solution to achieve this goal: we used the websites’ well-organized taxonomy to associate each poet category with its corresponding entity (e.g., the category of William Shakespeare has been associated with the corresponding Wikidata entity). Specifically, this was done here with Authors.

In a second testing phase, we decided to add the sameAs of the poem to the poem’s page (e.g., Shakespeare’s Sonnet 19 associated with the corresponding Wikidata and DBpedia entities). The items we marked in the sameAs field are converted by WordLift into meaningful links to Wikidata and DBpedia. Subsequently, the Poem Analysis editorial team added sameAs to all entities on the site.
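A hedged sketch of what this could look like on a poem’s page; the Wikidata identifier below is a placeholder for the real entry, and the DBpedia URI follows the usual naming convention:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "CreativeWork",
  "name": "Sonnet 19",
  "author": {
    "@type": "Person",
    "name": "William Shakespeare"
  },
  "sameAs": [
    "https://www.wikidata.org/wiki/Q000000",
    "https://dbpedia.org/resource/Sonnet_19"
  ]
}
</script>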

We are now working to add “Quiz” markup for Education Q&A, a format that helps students better find answers to educational questions. With structured data, the content is eligible to appear in the Education Q&A carousel in Google Search, Google Assistant, and Google Lens results. Stay tuned!

The results

As we saw in the previous section, the WordLift team worked on two actions that had a positive and measurable impact on Poem Analysis’ SEO strategy.

In the first case, the existing, well-organized taxonomy was used with the “Match Terms” feature to convert categories into entities, which were then annotated in the site content. To measure the impact of this first experiment, a control group was created that included a set of Poet pages not annotated with WordLift markup.

The semantic annotation brought +13% clicks and +29.3% impressions year over year. In this case, we did not include the control group in the comparison because its data is no longer statistically relevant (as the experiment expanded, we moved most of the control group URLs into the variant group, so only 5 URLs remain in the control group to date).

In the second case, the WordLift team worked on adding the poem’s sameAs to the poem’s page. Again, to measure impact, we created a control group containing a set of poems that were not annotated. Using sameAs brought +59.3% clicks and +82.9% impressions.

The semantic annotation of the content also produced another result: if you type a whole poem into Google, the first result that appears is Poem Analysis 🤩

Example: William Shakespeare’s Sonnet 19 

Conclusion

The story of William shows the impact of semantic annotation and how a well-organized taxonomy helps the semantic annotation of content have a positive impact on a website’s impressions and traffic.

In this case, we used the well-structured and indexed tags and categories as entities to annotate the website content. Generally, it is important that the entities are relevant and indexed by search engines, to create the right path that guides both Google and the user.

Let us not forget that semantic content annotations add value to the user by providing them with information and insights that they would otherwise have to look for elsewhere. And, of course, it also creates a semantic path for Google, which can thus more easily assemble the concepts surrounding a piece of content and rank it by reinforcing the topical authority on that topic.

How to Build SEO Demand Strategies By Using Knowledge Graphs

Table of contents:

  1. Serving and creating SEO demand strategies
  2. The problem with third party data aggregators and brokers
  3. How to build proper SEO demand strategies by using knowledge graphs
  4. Developing an experimentation mindset with AI and semantic technologies
  5. Other frequent questions

Serving And Creating SEO Demand Strategies

In the world of marketing, almost all tactics fall into two strategic groups: serving existing demand and developing awareness to create demand. What does this actually mean, and how can you distinguish between the two in order to act strategically?

When people want to meet the needs of searchers (and we can attest to this from experience), they usually want to rely on some third-party data tools to get keywords, entities, related concepts, questions, and contextual analysis to match them with their search personas. You can also accomplish this by scraping and analyzing data from online forums like Reddit or Quora to help you match existing user queries for more specific problems. This is the traditional approach.

While this may be true for many business cases and industries, a great opportunity is missed when there is a lack of a holistic approach that includes developing awareness to create demand for your product or service. Why is this?

The Problem With Third Party Data Aggregators And Brokers

Because in order to provide data to end users, third-party data aggregators and brokers need to collect enough data to be of interest to end users like you.

The first problem? Targeting the data. That’s difficult if you want to attract more potential customers online, because:

  1. The data is not 100% accurate, and this includes both web analytics and campaign tracking. So, basically, the typical directional data will tell you the following:
    • If you are spending time and money on the right sites and keywords, you secure business results; otherwise, you should pull your money out of non-performing keywords (or keyword groups);
    • Directional data also forces the mindset that “if you invest in X keywords, you will get Y organic traffic back”. This is a big problem because it limits your thinking and makes expectations hard to manage: these projections do not necessarily overlap with real-life scenarios (in a bad way but also in a good way, so there is a real chance that you are missing out on some great opportunities out there).
  2. Your searchers can come from small places where there is not enough data for keyword research, because the collected keywords cannot be assigned a satisfactory search volume (usually at least 10-100 monthly searches for a given keyword and area);
  3. Your searchers may be researching things in a way that is too specific and unique and does not resemble the way the global audience searches, so their searches are underrepresented in these keyword research platforms. Oops.

Now, while you may prefer a keyword or customer research strategy based on sufficient demand data (meeting the requirements halfway), you can see that this is a big missed opportunity for two reasons:

  1. It is very likely that your competition is also focused on meeting demand rather than creating demand, following the usual keyword research process;
  2. You are missing out on huge audiences because you are taking an outdated, unholistic approach. For example: 10,000 high conversion keywords (based on intent analysis or Google Ads testing) that have less than 10 searches a month are more valuable than 1 million generic low or medium competition keywords that have questionable or hard to reach conversion rates. Besides, your competition is after these keywords anyway because they use the same SEO stack for keyword research as you do.

The idea of keyword search volume pushes the assumption that you should optimize a page for a keyword. And if you do not know the search volume for a particular keyword, how can you develop an appropriate demand strategy?

How To Build Proper SEO Demand Strategies By Using Knowledge Graphs

WRONG. To build demand among online searchers for whom there is no user data to draw on, it’s smart to use content-based hubs and features that help you understand how your topic relates to other topics of interest, and how you can use those connections to build an intelligent, data-driven content strategy.

We humans do not have the ability to look holistically at the Web, with its more than 1 billion pages produced on a regular basis. We cannot recognize tightly interconnected concepts to the extent that machines can. It is simply not in our nature to analyze this amount of data in a short period of time. That’s why we have to enlist the help of machines and their artificial intelligence capabilities.

Developing An Experimentation Mindset With AI And Semantic Technologies

At WordLift, for example, we experimented with advanced AI and semantic technologies some time ago and leveraged their power to create appropriate title tags by analyzing the semantic similarity between the intent and the content we could capture (this is important when building for appropriate demand).

There are two interesting perspectives here.

The first one is leveraging the power of the “semantic tree”, or, better, a directed graph, by using the Wikidata property P279 (subclass of). This is what the SPARQL statement looks like -> https://w.wiki/5qPm

We work with the entity Q180711 (SEO), and here is what the resulting directed graph looks like -> link. This way, we can practically use this output to start sculpting demand for the disambiguated SEO entity.
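As an assumption-labeled sketch (the exact statement behind the short link is not reproduced here and may differ), a query of this shape walks the P279 hierarchy from the SEO entity on the Wikidata Query Service:

# Concepts connected to SEO (Q180711) through the subclass-of (P279) hierarchy
SELECT ?item ?itemLabel WHERE {
  # any item whose chain of "subclass of" statements reaches SEO
  ?item wdt:P279* wd:Q180711 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}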

In summary, we intend to provide an initial concept and then build the directed graph from that concept to obtain all the connected concepts that exist in the world of knowledge graphs. This provides a unique way to obtain related concepts in a more structured, intelligent and reliable way than traditional keyword search platforms.

Another option is to use embeddings, where we can use the power of semantics to find the relevant context using a web service for querying an embedding of entities in the Wikidata knowledge graph. By using both approaches, we should be able to answer two questions:

  • How are things and concepts connected in giant graphs like Wikidata and DBpedia?
  • What do we have in stock – what content is available in our content knowledge graph and where are we truly authoritative?

Both approaches scale well regardless of the knowledge domain and help us decode the semantic tree. That’s why we, WordLift, regularly invest in building internal capacity and tools that support our work in research and content production.

Here is an example from our own business that we can analyze together with you. If you search for [“seo knowledge graphs”], [“search engine optimization knowledge graphs”] and [“marketing knowledge graphs”], hardly anybody has been searching for them at a worldwide level since 2004. According to this data and traditional keyword research processes, these concepts are not attractive enough and definitely not something worth working on and investing in.

However, we have clients in our portfolio that have invested (and are still investing) a lot of money to get ahead in applying knowledge graphs for marketing purposes. This is a perfect example of why traditional keyword research tools simply do not work well unless you incorporate content-based features and knowledge graphs into the process.

Long story short, a major flaw of traditional keyword research tools is that they fail to identify future trending terms. Most keyword research tools are not good at capturing this sudden surge in search demand, and it takes a while for search volume data to show up in them.

That’s why you need to prioritize your ideal customer profile over the search volumes and metrics that traditional keyword research software forces on you.

We first create the search persona, determine the basic topic (source topics) we will write about in our niche industry, and then use the power of content features to further enhance our content by connecting it to semantically similar articles or those that are close to our topic of interest in the graph-based world. This is a data-driven way to simultaneously consider intent, content, and context.

Ready to try this for your business today? Book a Demo with one of our SEO experts.

Start performing Semantic Keyword Research with the New SEO Add-on for Google Sheets™

Other Frequent Questions

Is SEO a demand generation?

If executed properly, search engine optimization (SEO) can be the highest return-on-investment (ROI) channel in demand generation. It can bring in more customers at a lower customer acquisition cost (CAC). It is great for creating awareness (top of the funnel) but also useful when moving toward the bottom of the funnel.

What are the main strategies in SEO?

There are many creative SEO strategies worth mentioning but we can say that on-page SEO, off-page SEO, technical SEO and content marketing are among the main ones.

What is the best SEO strategy in 2022?

Modern SEO developments will move more towards AI, linked-data and knowledge engineering. Therefore, it is smart to invest in knowledge graph-based approaches that utilize the power of AI and linked data at the same time.