Let’s start with the end. In the experiment I am sharing today we measured the impact of a specific improvement to the structured data of a website that references 500+ LocalBusiness entities (more specifically, the site promotes LodgingBusiness entities such as hotels and villas for rent). Before diving into the solution, let’s have a look at the results we obtained using a Causal Impact analysis. If you are a marketing person or an SEO, you constantly struggle to measure the impact of your actions in the most precise and irrefutable way; Causal Impact, a methodology originally introduced by Google, helps you with exactly this. It’s a statistical analysis that builds a Bayesian structural time series model to isolate the impact of a single change made on a digital platform.
In a week after improving the existing markup, we saw a +5.09% increase in clicks coming from Google Search. This improvement is statistically significant: it is unlikely to be due to random fluctuations, and the probability of obtaining this effect by chance is very small 🔥🔥
We made two major improvements to the markup of these local businesses:
Improve the quality of NAP (Name, Address and Phone number) by reconciling the entities with entities in Google My Business (via the Google Maps APIs) and by making sure we had the same data Google has, or better;
Add, for all the reconciled entities, the hasMap property with a direct link to the Google CID (Customer ID) number. This is an important identifier that business owners and webmasters should know: it helps Google match entities found by crawling structured data with entities in GMB.
Google My Business is indeed the simplest and most effective way for a local business to enter the Google Knowledge Graph. If your site operates in the travel sector or provides users with immediate access to hundreds of local businesses, what should you do to market your pages using schema markup against fierce competition made of the businesses themselves or large brands such as booking.com and tripadvisor.com?
How can you be more relevant for both travelers abroad searching for their dream holiday in another country and for locals trying to escape from large urban areas?
The approach, in most of our projects, is the same regardless of the vertical we work in: knowledge completion and entity reconciliation; these really are two essential building blocks of our SEO strategy.
By providing more precise information in the form of structured linked data we are helping search engines find the searchers we’re looking for, at the best time of their customer journey.
Another important aspect is that, while we’re keen on automating SEO (and data curation in general), we understand the importance of the continuous feedback loop between humans and machines: domain experts need to be able to validate the output and to correct any inaccurate predictions that the machine might produce.
There is no way out: tools like WordLift need to facilitate the process and scale it to the web, but they cannot replace human knowledge and human validation (not yet, at least).
LocalBusiness markup works for different types of businesses from a retail shop to a luxury hotel or a shopping center and it comes with sub-types (here is the full list of the different variants from the schema.org website).
All the sub-types, when it comes to SEO and Google in particular, should contain the following set of information:
Name, Address and Phone number (consistency plays a big role here: we want to ensure that the same entity shows the same data on Yelp, Apple Maps, Google, Bing and all the other directories that clients might use)
Reference to the official website (this becomes particularly relevant if the publisher does not coincide with the business owner)
Reference to the Google My Business entity using the hasMap property (the +5.09% lift we have seen above is indeed related to this specific piece of information)
Location data (and here, as you might imagine, we can do a lot more than just adding the address as a string of text)
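As a sketch, the four items above can be expressed in JSON-LD along the following lines. Every value here (name, phone, address, CID) is a hypothetical placeholder, not data from the actual site:

```python
import json

# Minimal LodgingBusiness JSON-LD covering the four items above.
# All values are hypothetical placeholders.
lodging_business = {
    "@context": "https://schema.org",
    "@type": "LodgingBusiness",
    "name": "Villa Example",                  # NAP: name
    "telephone": "+39 06 1234567",            # NAP: phone
    "address": {                              # NAP: address
        "@type": "PostalAddress",
        "streetAddress": "Via Roma 1",
        "addressLocality": "Rome",
        "addressCountry": "IT",
    },
    "url": "https://www.example.com/villa-example",          # official website
    "hasMap": "https://maps.google.com/maps?cid=123456789",  # GMB link via the CID
}

print(json.dumps(lodging_business, indent=2))
```

The key detail is the hasMap value: a maps.google.com URL carrying the CID of the reconciled Google My Business listing.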
In order to improve the markup and add the hasMap property on hundreds of pages, we added a new feature to WordLift’s WordPress plugin (it also works for non-WordPress websites) that helps editors:
Trigger the reconciliation using Google Maps APIs
Review/Approve the suggestions
Improve structured data markup for Local Business
From the screen below the editor can either “Accept” or “Discard” the provided suggestions.
WordLift reconciles an entity using a loose match on the business name, the address and/or the phone number.
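The idea of a loose match can be sketched with a simple string-similarity check. This is only an illustration of the concept, not WordLift’s actual implementation, and the business data below is made up:

```python
from difflib import SequenceMatcher

def normalize(s: str) -> str:
    """Lowercase and keep only alphanumerics, so formatting differences don't count."""
    return "".join(ch for ch in s.lower() if ch.isalnum())

def loose_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """True when two strings are similar enough after normalization."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# The on-site record vs. a candidate returned by the Places API (hypothetical data).
site = {"name": "Hotel Bella Vista", "phone": "+39 06 1234 567"}
api = {"name": "Hotel Bellavista", "phone": "+390612345 67"}

match = loose_match(site["name"], api["name"]) and loose_match(site["phone"], api["phone"])
print(match)
```

A match above the threshold becomes a suggestion that the editor can then accept or discard, keeping a human in the loop.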
Adding location markup using containedInPlace/containsPlace and linked data
As seen in the JSON-LD above, we had already added, in a previous iteration (and independently from the testing done this time), two important properties:
the containedInPlace property (on the pages of the local businesses) and its inverse containsPlace (on the pages related to villages and regions), to help search engines clearly understand the location of the local businesses.
This data is also very helpful for composing breadcrumbs, as it helps the searcher understand and confirm the location of a business. Most of us still make searches like “WordLift, Rome” to find a local business, and we are more likely to click on results where we can confirm that, yes, the WordLift office is indeed located in Italy > Lazio > Rome.
To extract this information along with the sameAs links to Wikidata and GeoNames (one of the largest geographical databases with more than 11 million locations) we used our linked data stack and an extension called WordLift Geo to automatically populate the knowledge graph and the JSON-LD with the containedInPlace and containsPlace properties.
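A sketch of what such a Place entity might look like, with containedInPlace pointing at the region and sameAs links to Wikidata and GeoNames for disambiguation. The GeoNames ID shown is illustrative; always verify identifiers against the actual Wikidata/GeoNames entries:

```python
import json

# A city Place entity linked to its region and to external knowledge bases.
# IDs are illustrative placeholders; verify them before publishing.
city = {
    "@context": "https://schema.org",
    "@type": "Place",
    "name": "Rome",
    "containedInPlace": {
        "@type": "AdministrativeArea",
        "name": "Lazio",
        "sameAs": ["https://www.wikidata.org/wiki/Q1282"],
    },
    "sameAs": [
        "https://www.wikidata.org/wiki/Q220",
        "https://sws.geonames.org/3169070/",
    ],
}
print(json.dumps(city, indent=2))
```

On the region’s own page you would publish the inverse, e.g. a containsPlace property listing Rome, so that the relationship can be read from both directions.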
We have seen a +5.09% increase in clicks (after only one week) on pages where we added the hasMap property and improved the consistency of NAP (business name, address and phone number) on a travel website listing 500+ local businesses
We did this by interfacing with the Google Maps Places APIs and by providing suggestions for the editor to validate or reject
Using containedInPlace/containsPlace is also a good way to improve the structured data of a local business, and you should also add sameAs links to Wikidata and/or GeoNames to facilitate disambiguation
As most searches for local businesses (at least in travel) take the form of “[business name] [location]”, we have seen in the past an increase in CTR when the BreadcrumbList markup uses this information from containedInPlace/containsPlace (see below 👇)
One key aspect of SEO, if you are a local business (or deal with local businesses), is to have the correct location listed in Google Maps and to link your website with Google My Business. The best way to do that is to properly mark up your Google Maps URL using schema markup.
What is the hasMap property and how should we use it? In 2014 (schema.org v1.7) the hasMap property was introduced to link the web page of a place with the URL of a map. To facilitate the link between a web page and the corresponding entity on Google Maps, we can use the following snippet in the JSON-LD: "hasMap": "https://maps.google.com/maps?cid=YOURCIDNUMBER"
What is the Google CID number? The Google CID (Customer ID) is a unique number that identifies a business listing on Google Maps (not to be confused with the customer ID of a Google Ads account). This number can be used to link a website with the corresponding entity in Google My Business.
How can I find the Google CID number using Google Maps?
1. Search for the business in Google Maps using the business name
2. View the source code (use view-source: followed by the URL in your browser)
3. Press CTRL+F and search the source code for “ludocid”
4. The CID is the string of numbers after “ludocid\u003d” and before #lrd
You can alternatively use this Chrome extension.
If you are an SEO you constantly struggle to measure the impact of your strategy in the most precise and irrefutable way; Causal Impact, a methodology originally introduced by Google, helps you with exactly this. In this blog post I will share a Colab notebook that will help you run a Causal Impact analysis, starting from data downloaded from the Google Search Console. The data you will find in Colab is related to a LocalBusiness markup optimization task.
What is Causal Impact Analysis?
It’s a statistical analysis that builds a Bayesian structural time series model that helps you isolate the impact of a single change being made on a digital platform.
Let’s say you have decided to improve the structured data markup of a local business and you want to know how this particular change has actually impacted the traffic coming from Google.
This sounds simple: you could just compare the measurements before and after adding the new markup. But it’s actually hard to measure in the real world, because there are so many factors that can influence the end result (e.g. clicks from Google). This so-called “noise” makes it hard to say: yes, this actually created a positive impact.
Google had the same problem, and Kay Brodersen and his team at Google built an algorithm called CausalImpact to address this very challenge and open-sourced it as an R package. In the code I provide here I am using a Python port called pycausalimpact, developed by William Fuks 🙌
Let’s look at the code
1. Install libraries
2. Download data from Google Search Console and publish it using Google Sheets
3. Plot data
Here we only want to make sure we’re getting the right data into the analysis.
4. Configure pre and post periods
Here you will need to be careful and configure the dates before (pre_period) and after (post_period) the change.
Keep in mind that CI will create a prediction by analyzing the data in the pre_period and it will subtract the prediction from the post_period to see the actual impact.
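To make the mechanics concrete, here is a deliberately naive, standard-library-only toy version of the idea: build a prediction from the pre_period and subtract it from the post_period. The real analysis uses pycausalimpact’s Bayesian structural time series model rather than a flat mean, and all the click counts below are made up:

```python
from statistics import mean

# Daily clicks from Search Console (hypothetical numbers).
clicks = [100, 98, 103, 101, 99, 102, 100,    # pre_period: before the markup change
          106, 104, 107, 105, 106, 103, 108]  # post_period: after the change

pre_period, post_period = clicks[:7], clicks[7:]

# Naive counterfactual: what the post-period would have looked like with no change.
# (CausalImpact fits a full time series model with uncertainty intervals instead.)
predicted = mean(pre_period)
actual = mean(post_period)
lift = (actual - predicted) / predicted * 100
print(f"estimated lift: {lift:.2f}%")
```

What CausalImpact adds on top of this toy version is exactly what makes the result trustworthy: a model of trend and seasonality in the pre-period, plus credible intervals that tell you whether the lift could be noise.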
5. Run the analysis and get the response
With a few simple instructions you will get:
the graphical response as well as
the detailed summary of your experiment in written form.
In the context of search, structured data is information organized according to a predefined schema, helping search engines better understand and classify the content of a web page and thus making it more accessible to machines. It can also be used as an SEO technique to improve your traffic.
What is structured data from a technical standpoint?
Structured data is data created using a predefined (fixed) schema and is typically organized in a tabular format. Think of a table where each cell contains a discrete value. The schema represents the blueprint of how the data is organized: the heading row of the table describes the value and the format of each column. The schema also imposes the constraints required to make the data consistent and computable.
A relational database is an example of structured data: tables are linked using unique IDs and a query language like SQL is used to interact with the data.
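A minimal illustration of that point, using Python’s built-in sqlite3 module: two tables linked by a unique ID and joined with an SQL query (table and row contents are invented for the example):

```python
import sqlite3

# Two tables linked by a unique ID, queried with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE business ("
    "id INTEGER PRIMARY KEY, name TEXT, city_id INTEGER REFERENCES city(id))"
)
conn.execute("INSERT INTO city VALUES (1, 'Rome')")
conn.execute("INSERT INTO business VALUES (1, 'Hotel Example', 1)")

# The JOIN resolves the city_id foreign key into the city's name.
row = conn.execute(
    "SELECT business.name, city.name "
    "FROM business JOIN city ON business.city_id = city.id"
).fetchone()
print(row)
```

The schema (column names, types, the foreign key) is fixed up front, which is exactly what makes the data consistent and computable.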
Structured data is the best way for computers to interact with information, as opposed to semi-structured and unstructured data.
Semi-structured data is characterized by the lack of a rigid, formal structure. Typically, it contains tags or other types of markup to separate textual content from semantic elements. Semi-structured data is “self-describing”: tags are a good example, as the schema is part of the data and evolves with the content, but it lacks consistency.
Unstructured data can be found in different forms: from web pages to emails, from blogs to social media posts, etc. It is often estimated that around 80% of all data is unstructured. Regardless of the format used for storing the data, we are talking, in most cases, about textual documents made of sequences of words.
Structured data on the web
Structured data is a standardized format for providing information about a page and classifying that content on the page; for example, on a recipe page, what are the ingredients, the cooking time, the temperature, the calories, and so on.
Imagine a book available in three different formats: ebook, paperback, and hardcover. Each has a different weight, size and so on, yet the content is the same. Schema.org works in a similar way: the same information can be expressed in different formats and levels of detail.
The Semantic Web movement, the creation of the Schema.org vocabulary and the importance that these technologies have for semantic search engines like Google, Bing, and Yandex have resulted in structured data being published online on an unprecedented scale.
Structured Data Growth from the Common Web Crawl
Why does structured data matter in SEO?
In the context of SEO, structured data is an effective tactic for passing critical information on a web page to search engines. In particular, in a recent update, Google clarified:
Content in structured data are eligible for display as rich results in search.
In short, the search engine is able to provide additional features on the search results pages that enhance the visibility of your content. For instance, when asked about structured data, this is how the search engine might extract content from a web page and place it into an answer box, called a featured snippet:
Example of a featured snippet coming from the WordLift blog; with the help of structured data, the search engine can extract critical information. This sort of feature has a high click-through rate: a large number of the users who find it will land on your site, thanks to the better real estate on Google’s pages.
A Knowledge Panel is a visualization that appears on top of search results (on mobile) or at the right side of them (on desktop) which provides authoritative information about any entity or concept. Structured data help trigger this feature, by enabling Google to pull critical data from your web pages, thus making your brand more visible on its search results.
Other rich elements triggered by structured data are event snippets, which can pull critical information about an event directly onto the search results, making your brand the most authoritative source on that specific event and creating an association in the mind of users between that event and your brand:
Example of an event snippet. The WordLift team created an event page using our software’s mapping to pass key information about the event, which the search engine took as an authoritative source of information on that specific event.
From a technical standpoint, structured data follows a predefined (fixed) schema and is typically organized in a tabular format that helps machines understand how the data is organized.
From a marketing standpoint, structured data, by leveraging the Schema.org vocabulary, can help search engines better understand, interpret and process the information provided on a web page, making it easier for the search engine (Google in particular) to show that data directly in its search results as a rich element.
Rich elements come in various types: featured snippets, knowledge panels, event snippets, top stories, Google News, People Also Ask, reviews and more. These rich elements can become a key driver of qualified traffic and visibility for your website.
We are talking about the importance of topical hubs because of the Hummingbird algorithm, which has profoundly changed the way Google works, simplifying the way we find results on the SERP.
Not so long ago, working on long-tail keywords was the go-to SEO trick and a true goldmine, but since the advent of Hummingbird this no longer works.
Now, in order to optimize our website for semantic SEO and conversational queries, we have to think in entities, understand the connections between them, and be really sure about the context we are creating.
This new mindset will guide our keyword research.
Start your research focusing on entities and not on keywords;
The categories on our website are our topical hubs;
Create your content for the audience you’re serving.
Start focusing on entities
First of all, we need to focus on the concepts related to the topic we want to explore. Let’s say we’re going to talk about books because we are a bookshop. What kind of books do we deal with? We can explore the history behind each book, their context, everything about the authors and about book fairs all around the world.
We can focus on many different facts related to the concept of book.
Also, if we own a bookshop, this means we have a local site, so we should also start thinking about entity searches related to our neighborhood. Once we have all these entities, we can start thinking about the ontology we want to build from them, then decide how to group them and create the categories for our website.
Our categories on the website are, basically, our topical hubs
On any given website, we usually have the homepage, then the category and product pages. How can we make our category pages rank? Let’s consider them as topical hubs.
Think of your site as if it was a composition of contextually connected microsites. So each category page could be considered as a new site.
So now we can start optimizing our page, our content hub, with the keywords Google itself extracts from the best-ranking SERPs for those entities. We are creating a page that is well optimized not only in terms of keywords but also in relation to entities.
When we talk about topical hubs, we talk about the context, and the context is not just related to old classic SEO. Focusing on the context means giving value to the category page.
Create content for the audience you’re serving
The third step is to do a good audience analysis, in order to understand that in our area we might have a certain demographic, so we can organize the content on the hub page around that demographic. We can add value for our audience so that all our content also adds contextual and semantic value for Google.
Now we can start focusing on the content we can create, in order for it to be contextually relevant both for our entity search and for our audience.
So, back to our bookshop: we can start adding a category for recipes, extracting recipes from famous and not-so-famous books and tagging them with schema.org/Recipe markup. We can record videos and mark them up with schema.org/VideoObject. We can start writing guides, or publishing Q&As.
We could especially start to leverage events about the location we’re in, which is the neighborhood where our bookshop is located.
By doing all these things we are creating something truly relevant in a semantic way, because we are really targeting our site to all of the entities related to our micro-topic. Having optimized it both at the keyword level and at the semantic search level, we have created content that is relevant and responds both to our audience and to Google.
A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge.
(Lisa Ehrlinger and Wolfram Wöß – University of Linz in Austria)
Useful concepts, places, people, organizations, and so on are organized into entities that can be surfaced to display important information drawn from a wide database. These entities are connected to each other through various relationships. Knowledge graphs also power machine learning applications that collect and display custom search engine results programmatically. Some knowledge graphs rely on specific technologies: Google, for instance, uses JSON-LD and schema.org markup to create structured data that enables website content to be embedded in the SERP.
The term knowledge graph has been frequently used in research and business, in close association with Semantic Web technologies, linked data, web-scale data analytics, and cloud computing. At SEMANTiCS, a few years ago, a research paper titled “Towards a Definition of Knowledge Graphs” by the Institute for Application Oriented Knowledge Processing of the University of Linz was presented to propose a definition of the knowledge graph that focuses on data modeling and reasoning.
“Towards a definition of a Knowledge Graph”
The popularity of the term is strictly connected with the launch of the Google Knowledge Graph in 2012 and with the introduction of other large databases by major tech companies, such as Yahoo, Microsoft, Airbnb and Facebook, which have created their own “knowledge graphs” to power semantic searches and enable smarter processing of data.
In the context of Semantic Web, a knowledge graph is a way of representing knowledge. In short, you start from a few triples and those triples are put in relationship to build a graph. For instance, let’s have a closer look – using Semantic Web technologies – at the Apology of Socrates entity on this blog:
As you can see we have a set of triples that tell us a story: the Apology of Socrates is about Socrates, was written by Plato, and mentions the concepts of Daemon and Socratic dialogue.
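Those triples can be represented and queried with a few lines of code. This is just an illustration of the subject-predicate-object model with simplified labels, not a real RDF/SPARQL store:

```python
# The story above as subject-predicate-object triples (labels simplified).
triples = [
    ("Apology of Socrates", "about", "Socrates"),
    ("Apology of Socrates", "author", "Plato"),
    ("Apology of Socrates", "mentions", "Daemon"),
    ("Apology of Socrates", "mentions", "Socratic dialogue"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None acts as a wildcard)."""
    return [
        (s, p, o) for s, p, o in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    ]

# "What does the Apology of Socrates mention?"
mentioned = [o for _, _, o in query(triples, predicate="mentions")]
print(mentioned)
```

Chaining such pattern queries is exactly how a graph gets traversed: the object of one triple becomes the subject of the next.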
A knowledge graph doesn’t speak any particular language. Language is human; a knowledge graph is expressed in linked open data, which is the language of machines.
Imagine your entire website built upon a large knowledge graph made of all the metadata that describes the things you write about. That knowledge graph becomes part of a larger graph that comprises the new web. That is the power of the Semantic Web.
Popular Knowledge Graphs
There are many different types of knowledge graphs developed by different companies and used for different purposes. While many companies use an internal or smaller knowledge graph for online functions, some of the biggest ones are being used by people all over the world. Below is a selection of some of the largest knowledge graphs to date, from Microsoft, Google, Facebook, IBM and eBay.
Microsoft: uses its knowledge graph for the Bing search engine, LinkedIn data and academics. Actively used in products.
Google: the knowledge graph works as a massive categorization function across Google’s devices and is directly embedded in the search engine. Actively used in products.
Facebook: develops connections between people, events and ideas, mainly focusing on news, people and events related to the social network. Actively used in products.
IBM: provides a framework for other companies and/or industries to develop internal knowledge graphs. Actively used by clients.
eBay: currently developing a knowledge graph that connects users and the products offered on the website. Early stage of development.
Most people conducting SEO will tend to focus on the Google Knowledge Graph, as it’s the most frequently used and the most relevant knowledge graph for SEO. Since Google is the most popular search engine and the driver behind a lot of search engine innovation, it’s important to focus on developing entities and embedding them into its knowledge graph. Microsoft’s knowledge graph is still something to pay close attention to: while not as many people use Bing, plenty of people do use Microsoft’s services, including LinkedIn. So while Google may be the primary focus of SEO and entity development, it’s important not to forget about Microsoft. Thankfully, they both use schema markup, so developing entries for both of them shouldn’t be too difficult.
Other knowledge graphs may be useful in SEO in certain circumstances. For example, Facebook’s knowledge graph might be useful for branding, local businesses, and people hosting events for embedding in their social network. IBM’s knowledge graph might be useful in working within the internal knowledge graphs of other companies but may still hold value for SEO. The same goes for eBay’s knowledge graph, though it is more uncertain as their knowledge graph is still in the early stages of implementation and development. There are also many more knowledge graphs not listed above that are used by many publishers and developers across many different platforms.
The History of Knowledge Graphs
Believe it or not, the history of knowledge graphs and their use predates both search engines and the internet as a whole. Many of you may remember the graphs used in geometry, algebra and calculus classes in high school or college. Knowledge graphs function in a similar manner, only instead of lines and shapes connecting points on a graph, entities are connected to other entities through structured data and schema markup.
In 1735, the famous mathematician Leonhard Euler was presented with a problem in the city of Königsberg, Prussia, in what is now Kaliningrad in modern-day Russia. There were seven bridges in the city, connecting the banks of the Pregel River with a central island, and the challenge was whether it was possible to find a walk through the city that crossed every bridge exactly once. Rather than trying every possible walk, he chose to chart the city and the endpoints of every bridge. While his conclusion was that no such walk was possible, he ended up inventing graph theory in the process.
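Euler’s argument can be checked in a few lines: count how many bridges touch each land mass and apply his criterion that a walk crossing every bridge exactly once exists only if at most two land masses have an odd bridge count. The vertex labels below are conventional, not historical:

```python
from collections import Counter

# The seven bridges of Königsberg as edges between the four land masses
# (A, B = the two river banks; C = the central island; D = the eastern land mass).
bridges = [("A", "C"), ("A", "C"), ("A", "D"),
           ("B", "C"), ("B", "C"), ("B", "D"),
           ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# Euler's criterion: a walk crossing every edge exactly once exists
# only if 0 or 2 vertices have odd degree.
odd = [node for node, d in degree.items() if d % 2 == 1]
print(sorted(odd))  # all four land masses have odd degree, so no such walk exists
```

Since all four land masses have an odd number of bridges, the criterion fails and the walk is impossible, which is exactly the conclusion Euler reached.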
The famous Königsberg bridge problem; trying to solve it led Euler to develop graph theory.
While graph theory continued to develop throughout the centuries, the first major step towards knowledge graphs came in 1966 with Joseph Weizenbaum’s ELIZA computer program. The program used a script known as DOCTOR to simulate conversation through pattern matching, allowing it to communicate with humans as though it were an empathetic therapist. This enabled the software to process a primitive form of queries and deliver results. Although Weizenbaum came to question the nature of his program and the role of artificial intelligence, ELIZA laid important foundations for the knowledge graphs to come.
Building on graph theory and on the conversational technology pioneered by ELIZA, the enterprise graph was the next major step in the development of knowledge graphs. Enterprise graphs were developed to organize and identify all available information on a specific topic, field, person, product, etc. within an organization. Think of it as an internal knowledge graph or database used within a specific company, organization or industry. It’s not known exactly when the first enterprise graph was developed, though the knowledge graphs that Google and Facebook later built to organize their content and entities grew out of the same idea. While it took a while before knowledge graphs could properly get off the ground, they’re certainly starting to shine today.
The Development of Knowledge Graphs
The first knowledge graph was launched by Google in May 2012 as a means of enhancing the value of the knowledge provided by the search engine. By using structured data through schema markup, a webmaster could provide information in HTML code that could then be picked up and used in knowledge cards and the newly developing featured snippets. This meant that websites could gain traction by providing answers to queries directly on the SERP.
Other search engines and companies began to also develop knowledge graphs. Some, like Microsoft, developed them for a similar purpose to Google; while others, like Facebook, have been developing them for different reasons, namely people and events as opposed to general knowledge about the world. When it comes to SEO, schema markup has expanded to be compatible with Yahoo and Microsoft Bing in addition to Google, making it easy to provide markup for the top search engines.
Today, knowledge graphs are being utilized in many companies and provide a knowledge network for a variety of different functions. As knowledge graph technology continues to develop and evolve, it faces many new challenges. Issues like changing knowledge, potential consequences of machine learning, managing identities, as well as concerns regarding security and privacy are all ongoing issues facing many knowledge graph developers. These issues will continue to be a challenge for some time to come and perhaps more so in the future.
In this blog-post we will discuss how the semantic web has changed our experience on the internet, both as users and editors, and why building a vocabulary of concepts for your website can be essential for your business and very easy to do, with one single WordPress plug-in: WordLift.
Semantic What? What’s In It For Me
The semantic web can be defined as a web of data. It originates from the transformation of the Web into an environment in which published documents carry a “hidden side”, an inner layer of data commonly known as metadata. It was 2009 when the inventor of the web, Tim Berners-Lee, asked everyone to publish the semantic information needed to help machines understand the information being published.
The context around each set of metadata is what reveals to search engines the intent of the users. The same word can mean different things to different people or at different times: for example, typing french fries at 8am on a laptop can load different results than if searched at 8 pm from a smartphone. The context gives the hint of a clear intent: in the morning you might just be curious about the word itself, why it is called french etc. whereas in the evening you might want to look for the place that serves the best french fries in your town or order them online. The metadata structure behind both pages allows search engines to match the contextualized users’ intent with the most useful results.
The semantic web significantly increased the possibilities offered by the online world, making it easier for software to organize and classify content. Search engines are a perfect example as they leverage metadata to provide results from the most relevant onwards: relevance is now chosen according to the metadata that each page embodies, so you must structure your page correctly if you want to rank on Google, with the right information and the perfect connection to other pages’ themes.
In 2001 Tim Berners-Lee expected the web to become semantic soon; 15 years later we are witnessing the change the semantic web is bringing and enjoying its benefits. The content of your website must follow this new concept of an organized web as well; properly organized content provides a better navigation experience, higher search engine rankings and stronger readers’ engagement. But how do you do it, and how can it be useful to a website’s target audience?
Here is where WordLift comes in handy. WordLift is a plug-in for WordPress that helps you create, organize and beautify the content of your websites, blogs and any digital editorial product. The plug-in takes you by the hand during the whole process of creating, writing and publishing your content, and the metadata attached to it.
While you are writing, WordLift analyses the terms used and identifies the most meaningful ones based on the context of the post; these keywords are suggested in the form of entities, concepts you should focus on that are crucial for your target audience. For each entity you select, a specific page with text and images is created, so that readers can explore the topic further; the plug-in draws this material from the universal encyclopedia that is the web, more precisely from the wealth of open data available, and structures it in the form of web pages enriched with Schema.org markup, the classification system used by Google and all the other search engines in the world. WordLift adds the schema.org markup to the page to make it SEO-friendly and readable by computers.
The sum of all the entity pages you create forms the vocabulary of your website, already linked to open data vocabularies on the web.
The aim of the vocabulary is to organize content for a referring audience composed of personas; to connect each piece of content to the others depending on the context; and to optimize it for search engines in order to rank for interested readers, or more precisely, for the target audience of the page.
How to Create a Vocabulary
A good vocabulary should contain 70-80 entities to start with, but where to extract it from? Which entities or keywords should be part of the vocabulary?
Think about your audience first – good old-fashioned keyword research can help you understand your readers; analyzing search patterns and identifying search intents on Google helps you identify not only your targets but also the 10, 100 or 1,000 readers a month you need to make a difference in your business (find out more on this subject in this article from our consultancy blog and the work we did in the travel industry for the Salzburg State Board of Tourism). Once the right audience is targeted, it is easy to build a set of recurring concepts that should eventually become entities in your vocabulary.
The devil is in the details – look closely at your business model and at what really makes a difference. A real estate agent working in a fancy neighborhood must know not only the properties’ value and the crime rate in the area, but also the mood of the little café down the street and the teachers’ skills at the local high school – these details help potential clients understand your offer; turning them into entity pages creates context, and context builds trust.
The vocabulary is your content hub – think of each entity as a hub around which a set of content on your site revolves. If you have enough material on a specific concept, or if you plan to add such content to your editorial plan, start with an entity that describes it. These pages can become search magnets, and in some cases they can also be designed as clear answers to specific questions and enter the realm of Rank #0, the featured snippet.
You are what you share – entities are a great means of explaining a complex topic to your audience; when doing so, you are also creating great content to promote your business on social networks.
Moreover when creating a vocabulary remember that:
We are all in the relationship business – each entity should be connected to blog posts and to other entities in the same knowledge domain; we always recommend linking each entity to at least one other. These relationships between entities translate into specific links in your graph and can be used to discover more content on your website. When linking data, be deliberate and follow a strategy, with the most value for your users in mind.
The real value is the “hidden side” – curating the metadata box behind each entity is good for humans as well as machines.
Example: the entity for Rudolph Schindler (an Austrian modernist architect) should be linked to Frank Lloyd Wright’s, since in 1918 Wright asked him to work together on a project in L.A.; this can be done by filling in the schema.org knows property in the metadata box of Schindler’s entity page, providing new ways to discover content you might have on both Schindler and Frank Lloyd Wright.
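A minimal sketch of how that relationship could look as JSON-LD (hypothetical values, again built with Python’s json module rather than taken from WordLift’s actual output):

```python
import json

# Hypothetical JSON-LD for Schindler's entity page: the schema.org
# "knows" property links him to Frank Lloyd Wright as a nested Person.
schindler = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Rudolph Schindler",
    "jobTitle": "Architect",
    "knows": {
        "@type": "Person",
        "name": "Frank Lloyd Wright",
    },
}

schindler_jsonld = json.dumps(schindler, indent=2)
print(schindler_jsonld)
```

In WordLift you would fill in the property through the metadata box rather than writing JSON by hand; the point here is simply that the relationship becomes explicit, machine-readable data in the page.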
Be the Wikipedia of your niche – always curate your entities and customize their content to fit your offers and your targets. If you create an entity only to add the Schema.org markup to your page, add a noindex to the entity page to avoid SEO issues with duplicate content.
Keep it super simple – use your properly structured vocabulary to shape the architecture of your website and make sure your readers have quick access to the information they need. Entities can be grouped in your navigation by type (e.g. all schema.org Places) or using custom WordPress taxonomies that fit your needs.
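To illustrate the grouping idea, here is a small sketch (the entity names and types are made up) that buckets vocabulary entries by their schema.org type, the way a type-based navigation menu would:

```python
from collections import defaultdict

# Hypothetical vocabulary entries as (entity name, schema.org type) pairs.
vocabulary = [
    ("Salzburg", "Place"),
    ("Rudolph Schindler", "Person"),
    ("Hollyhock House", "Place"),
    ("Frank Lloyd Wright", "Person"),
]

# Group entities by type so each type becomes a navigation section.
menu = defaultdict(list)
for name, schema_type in vocabulary:
    menu[schema_type].append(name)

for schema_type in sorted(menu):
    print(f"{schema_type}: {', '.join(menu[schema_type])}")
```

In WordPress the same effect is usually achieved with taxonomy archive pages rather than code like this; the sketch only shows the underlying grouping logic.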
Why Should You Care?
Organizing and enriching content is becoming more and more of a necessity in what has been called the birth of Web 3.0. Users are no longer just typing queries into the web; they are finding answers. At this point, only the linked will survive! Connect your website to the rest of the web, within itself and with your target audience. It is very easy to do, and you only need one plug-in: WordLift.
What if I already have pillar articles that could become entities?
You can now convert your existing articles or pages into entities with a single click. This lets you reuse your pillar content to reorganize your website and improve the search rankings of those pages.