If you are confused about meta descriptions in SEO, why they matter, and how to nail them with the help of artificial intelligence, this article is for you. If you are eager to start experimenting with an AI writer, read the full article: at the end, I will give you a script to help you write meta descriptions at scale using BERT, Google's pre-trained, unsupervised language model that has recently gained great momentum in the SEO community after both Google and Bing announced that they use it to provide more useful results. I used to underestimate the importance of meta descriptions myself: after all, Google uses them in only 35.9% of cases (according to a Moz analysis from last year by the illustrious @dr_pete). In reality, these brief snippets of text greatly help to entice more users to your website and, indirectly, might even influence your ranking thanks to a higher click-through rate (CTR). While Google can overrule the meta descriptions added in the HTML of your pages, you can greatly improve your chances if you properly align:
the main intent of the user (the query you are targeting),
the title of the page, and
the meta description
When these three elements are aligned, there are many possibilities to improve the CTR on Google's result pages. In the course of this article we will investigate the following aspects; since it's a long article, feel free to jump to the section that interests you the most. The code is available at the end.
What are meta descriptions?
As usual, I like to "ask" the experts online for a definition to get started. With a simple query on Google, we can get this definition from our friends at WooRank: "Meta descriptions are HTML tags that appear in the head section of a web page. The content within the tag provides a description of what the page and its content are about. In the context of SEO, meta descriptions should be around 160 characters long."
Here’s an example of what a meta description usually looks like (from that same article):
How long should your meta description be?
We want to be, as with any other content on our site, authentic, conversational, and user-friendly. Having said that, in 2020 you will want to stick to the 155-160 character limit (this corresponds to roughly 920 pixels). Keep in mind that the "optimal" length might change based on the query of the user. This means that you should really do your best in the first 120 characters and think in terms of creating a meaningful chain by linking the query, the title tag, and the meta description. In some cases, it is also important to consider the role of breadcrumbs within this chain. In the example above from WooRank, I can quickly see that the definition comes from an educational page of their site: this fits very well with my information request.
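Since these limits are easy to forget when writing at scale, here is a tiny, illustrative Python helper that flags candidate descriptions against them. The function name and thresholds are my own choices based on the numbers above, not part of any library:

```python
# Sanity-check a candidate meta description against the limits discussed
# above: stay within 155-160 characters and front-load the first 120.
def check_meta_description(text, max_chars=160, front_load=120):
    """Return a list of issues; an empty list means the text passes."""
    issues = []
    if len(text) > max_chars:
        issues.append(f"too long: {len(text)} chars (limit {max_chars})")
    # The opening should carry the message in case Google truncates it.
    head = text[:front_load]
    if "." not in head and len(text) > front_load:
        issues.append(f"no full sentence within the first {front_load} characters")
    return issues
```

You can run this over a whole CSV of candidates before importing them into your CMS, so only the flagged rows need manual attention.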
What meta descriptions should we focus on?
SEO is a process: we need to set our goals, analyze the data we're starting with, improve our content, and measure the results. There is no point in looking at a large website and saying, "I need to write a gazillion meta descriptions since they are all missing." It would simply be a waste of time.
Besides, in some cases we might decide not to add a meta description at all. For example, when a page covers different queries and the text is already well structured, we might leave it to Google to craft the best snippet for each query (they are very good at it). We need to look at the critical pages we have. Let's not forget that writing a good meta description is just like writing ad copy: driving clicks is not a trivial game.
As a rule of thumb I prefer to focus my attention on:
Pages that are already ranking on Google (position > 0): adding a meta description to a page that is not ranking will not make a difference.
Pages that are not in the top 3 positions: if they are already highly ranked, I prefer to leave them as they are unless I can see some real opportunities.
Pages that have business value: on the WordLift website (the company I work for), there is no point in adding meta descriptions to landing pages that have no organic potential; I would rather focus on content from our blog. This varies, of course, but it is very important to understand what type of pages you want to focus on.
These criteria can be especially useful if you plan to programmatically crawl your website and choose where to focus your attention using crawl data. Keep on reading and we'll get there, I promise.
A quick introduction to single-document text summarization
Automatic text summarization is a challenging NLP task: providing a short and accurate summary of a long text. With the growing amount of online content, the need for understanding and summarizing content is very high. In purely technological terms, the challenge of creating well-formed summaries is huge, and results are, most of the time, still far from perfect (or human-level). The first research work on automatic text summarization dates back over 50 years, and various techniques have since been used to extract relevant content from unstructured text. "The different dimensions of text summarization can be generally categorized based on its input type (single or multi document), purpose (generic, domain specific, or query-based) and output type (extractive or abstractive)." — A Review on Automatic Text Summarization Approaches, 2016.
Extractive vs Abstractive
Let's have a quick look at the different methods we have for compressing a web page. "Extractive summarization methods work by identifying important sections of the text and generating them verbatim; […] abstractive summarization methods aim at producing important material in a new way. In other words, they interpret and examine the text using advanced natural language techniques in order to generate a new shorter text that conveys the most critical information from the original text" — Text Summarization Techniques: A Brief Survey, 2017. In simple words, with extractive summarization we use an algorithm to select and combine the most relevant sentences in a document, while with abstractive summarization we use sophisticated NLP techniques (i.e. deep neural networks) to read and understand a document in order to generate novel sentences. In extractive methods, a document can be seen as a graph where each sentence is a node and the relationships between these sentences are weighted edges. These edges can be computed by analyzing the similarity between the word sets of each sentence. We can then use an algorithm like PageRank (called TextRank in this context) to extract the most central sentences in our document graph.
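To make the extractive idea concrete, here is a minimal, self-contained TextRank-style sketch in Python. It is a simplified illustration (naive sentence splitting, word-set overlap as similarity), not the BERT-based implementation used later in the article:

```python
# A minimal TextRank-style extractive summarizer: sentences are nodes,
# word-overlap similarities are weighted edges, and a PageRank-like
# iteration scores each sentence by centrality.
import math

def sentence_similarity(a, b):
    """Word-set overlap, normalized by sentence lengths."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / (math.log(len(wa) + 1) + math.log(len(wb) + 1))

def textrank_summary(text, top_n=2, damping=0.85, iterations=50):
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    n = len(sentences)
    if n <= top_n:
        return ". ".join(sentences)
    # Weighted adjacency matrix of the sentence graph (no self-loops).
    weights = [[sentence_similarity(sentences[i], sentences[j]) if i != j else 0.0
                for j in range(n)] for i in range(n)]
    scores = [1.0] * n
    for _ in range(iterations):
        new_scores = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                out = sum(weights[j])
                if weights[j][i] and out:
                    rank += weights[j][i] / out * scores[j]
            new_scores.append((1 - damping) + damping * rank)
        scores = new_scores
    # Keep the top-N most central sentences, in document order.
    best = sorted(range(n), key=lambda i: scores[i], reverse=True)[:top_n]
    return ". ".join(sentences[i] for i in sorted(best))
```

Real implementations use better sentence splitting and similarity measures (TF-IDF, embeddings), but the structure, a graph of sentences ranked by centrality, stays the same.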
The carbon footprint of NLP and why I prefer extractive methods to create meta descriptions
In a recent study, researchers at the University of Massachusetts, Amherst performed a life-cycle assessment for training several common large AI models, with a focus on language models and NLP tasks. They found that training a complex language model can emit five times the lifetime emissions of the average American car (including whatever is required to manufacture the car itself!). While automation is key, we don't want to contribute to the pollution of our planet by misusing the technology we have. In principle, abstractive methods and deep learning techniques offer a higher degree of control when compressing articles into 30-60 word paragraphs but, considering our end goal (enticing more clicks from organic search), we can probably find a good compromise without spending too many computational (and environmental) resources. I know it sounds a bit naïve, but it is not: we want to be sustainable and efficient in everything we do.
What is BERT?
BERT: The Mighty Transformer
Now, given that a significant amount of energy has already been spent to train BERT (1,507 kWh according to the paper mentioned above), I decided it was worth testing it for extractive summarization. I also have to admit that it has been quite some time since I entertained myself with automatic text summarization of online content, and I experimented with a lot of different methods before getting to BERT. BERT is a pre-trained, unsupervised natural language processing model created by Google and released as open source (yay!) that does magic on 11 of the most common NLP tasks. BERTSUM is a variant of BERT designed for extractive summarization that is now state-of-the-art (here you can find the paper behind it). Derek Miller, leveraging this progress, has done terrific work bringing this technology to the masses (myself included) by creating a sleek and easy-to-use Python library that we can use to experiment with BERT-powered extractive text summarization at scale. A big thank you also goes to the HuggingFace team, since Derek's tool uses their PyTorch transformers library.
Long live AI, let’s scale the generation of meta descriptions with our adorable robot [CODE IS HERE]
So here is how everything works in the code linked to this article.
We start with a CSV that I generated using WooRank's crawler (you can tweak the code and use any CSV that helps you detect where on the site meta descriptions are missing and where it can be useful to add them); the file provided in the code has been made available on Google Drive (this way we can always look at the data before running the script).
We analyze the data from the crawler and build a dataframe using Pandas.
We then choose which URLs are most critical: in the code provided, I work on the analysis of the wordlift.io website and focus only on content from the English blog that already has a ranking position. Feel free to play with the Pandas filters and to infuse your own SEO knowledge and experience into the script.
We then crawl each page (here you might want to define the CSS class that the site uses in the HTML to detect the body of the article, preventing you from analyzing menus and other unnecessary elements on the page).
We ask BERT (with a vanilla configuration that you can fine-tune) to generate a summary for each page and to write it to a CSV file.
With the resulting CSV we can head back to our beloved CMS and find the best way to import the data (you might want to curate BERT's suggestions before actually going live; once again, in most cases we can do better than the machine).
Super easy, not too intensive in computational terms, and environmentally friendly. Have fun playing with it! Always remember, it is a robot friend and not a real replacement for your precious work. BERT can do the heavy lifting of reading the page and highlighting what matters most, but it might still fail at getting the right length or at adding the proper CTA (i.e. "read more to find …").
Final thoughts and future work
The beauty of automation, and of what I like to call agentive SEO in general, is that you gain superpowers while still remaining in full control of the process. AI is far from being magic or becoming (at least in this context) a replacement for content writers and SEOs; rather, AI is a smart assistant that can augment our work. There are some clear limitations with extractive text summarization related to the fact that we deal with sentences: if a web page contains long sentences, we will end up with a snippet that is far too long to become a perfect meta description. I plan to keep working on fine-tuning the parameters to get the best possible results in terms of expressiveness and length, but so far only 10-15% of the summaries are good enough to require no extra update from our natural intelligence. The vast majority look good and are substantial, but still go beyond the 160-character limit. There is, of course, a lot of potential in these summaries beyond the generation of meta descriptions for SEO: we can, for instance, create a "featured snippet" type of experience to provide relevant abstracts to readers. Moreover, if the tone of the article is conversational enough, the summary might also become a speakable paragraph that we can use to introduce the content on voice-enabled devices (i.e. "what is the latest WordLift article about?"). So, while we can't let the machine run the show alone, there is concrete value in using BERT for summarization.
Now that you have arrived at the end of this long article, it is time to remind us all that none of this would be possible without the work of many people and enlightened organizations that are committed to open-source technologies and that are enabling and encouraging practitioners around the world to make the web (well, hopefully) a better place! It is also thanks to mavericks and SEOs with a data-driven mindset like Paul Shapiro and Hamlet that I got interested in the topic and ready to experiment with new tools! Give the code a spin on Google Colab and send me any comments or suggestions over Twitter or LinkedIn! Want to scale your marketing efforts with WooRank and WordLift's SEO management service? I can't wait to learn more about your challenges!
A rich snippet is a specialized form of snippet result that seeks to provide information and deliver answers to queries directly in the SERP. Rich snippets are generally considered more reliable and engaging than regular blue links. These snippets can be interacted with and provide a variety of different functions. While these snippets are more convenient and useful, they can also be more complicated and require some work to implement. Often, they require structured data markup in order for a website's content to appear in a featured snippet.
How a Rich Snippet differs from a Regular Blue Link
While standard blue links can be found all over the SERP and contain little more than a title, URL, and meta description, a rich snippet provides much more specialized results. It can feature more information, a longer description, pictures, ratings, sitelinks, and more. Rich snippets almost always appear at the very top of the SERP, even above the first blue-link results. Rich snippets are more engaging and appealing to users, as they both answer queries directly and are more trusted by Google compared to standard blue links.
Types of Rich Snippets
There is a wide variety of rich snippet types, and even more variations of these types that perform different functions. There are a few primary ones that carry over into many subcategories. These include:
People Also Ask – A question-and-answer type that lists questions commonly asked by other users and answers them using information from third-party websites.
FAQ – A Question and Answer type that provides questions and direct answers from a specific website.
HowTo – A box providing step-by-step instructions for a problem; it commonly provides technical answers and advice.
Knowledge Card – A card displaying the entity of a search query from the Google Knowledge Graph. Applies to people, brands, companies, organizations, sports teams, events and media properties.
Carousel – A selection of scrollable cards displaying entities of people, locations, dishes, or other objects tied together by a shared entity or piece of information.
SiteLinks – Links to different sections on a single website.
A SERP featuring many different types of rich snippets, like a knowledge card, video carousel, image carousel, and people also ask.
There are also many more types of featured snippets for more specialized functions. Examples include Movie Carousels for movies of a specific genre or featuring the same actress, Recipes to display different online recipes for a specific dish, or Flights, which display a series of flights to a specific destination or similar destinations. Each has its own use and its own requirements for your content to be featured on the SERP.
The Relationship between Rich Snippets and Structured Data
While some content can be featured on its own or through information from existing entities, other content requires structured data in order to be utilized. In the latter case, a specific type of structured data must be added using the required markup from schema.org. You can do this with WordLift, which makes things far easier than having to code everything yourself.
Different types of rich snippets may require different structured data markup. Some types match the name of the rich snippet: the HowTo snippet, for example, uses HowTo markup. However, others may use less obvious types or multiple markup types at once. What you need depends on the kind of content you want to provide and which rich snippet you want to target. You can search for different vocabulary types in the Schema glossary.
If you would like to learn more about rich snippets and how to implement them using the schema markup on your website using WordLift, check out our spectacular guide here on the WordLift blog.
In WordLift, a Context Card is a preview of an entity's page, shown on the page that links to it.
What is it for and how does it work?
The Context Card opens when the mouse hovers over the link and provides users with some initial information before they even click, helping them decide whether or not to investigate the topic.
An optimal context card includes a short, direct definition of the entity and a representative image. In this way, users can get an idea of the content at a glance and immediately identify exactly which concept, person, or place the link is about.
Just as semantic markup has the function of disambiguating content for search engines, the context card offers the reader a more precise contextualization. Moreover, thanks to its captivating format, the context card encourages the reader to dig deeper into the topic to which the entity refers.
How are the context cards produced?
To show users context cards, WordLift automatically extracts the featured image (the main image of the entity) and the first lines of content of the entity page. For this reason, to take full advantage of this tool on a website, it is important to take care of the entity pages and to provide a useful definition in the first few lines, one that can also stimulate users to go further and visit the entity's page.
In the context of search, structured data is a predefined schema that helps search engines better understand and classify the information provided on a web page, thus making it more accessible to machines. It can also be used as an SEO marketing technique to improve your traffic.
What is structured data from a technical standpoint?
Structured data is data created using a predefined (fixed) schema and is typically organized in a tabular format. Think of a table where each cell contains a discrete value. The schema represents the blueprint of how the data is organized: the heading row of the table, used to describe the value and the format of each column. The schema also imposes the constraints required to make the data consistent and computable.
A relational database is an example of structured data: tables are linked using unique IDs and a query language like SQL is used to interact with the data.
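As a minimal illustration of that point, here is the same idea with Python's built-in sqlite3 module; the table and values are invented for the example:

```python
# Structured data in its classic form: a relational table whose schema
# (column names and types) makes every value consistent and computable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, url TEXT, title TEXT)")
conn.execute("INSERT INTO pages (url, title) VALUES (?, ?)",
             ("https://example.com/", "Homepage"))

# Because the schema is fixed, a query language like SQL can interact
# with the data predictably.
rows = conn.execute("SELECT title FROM pages WHERE url = ?",
                    ("https://example.com/",)).fetchall()
```

The contrast with semi-structured and unstructured data, discussed next, is exactly this: here the schema exists before the data and every value has a known place.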
Structured data is the best way for computers to interact with information, as opposed to semi-structured and unstructured data.
Semi-structured data is characterized by the lack of a rigid, formal structure. Typically, it contains tags or other types of markup to separate textual content from semantic elements. Semi-structured data is "self-describing": tags are a good example, as the schema is part of the data and evolves with the content, but it lacks consistency.
Unstructured data can be found in many forms: web pages, emails, blogs, social media posts, etc. An estimated 80% of the data we have is unstructured. Regardless of the format used to store the data, we are talking, in most cases, about textual documents made of sequences of words.
Structured data on the web
Structured data is a standardized format for providing information about a page and classifying that content on the page; for example, on a recipe page, what are the ingredients, the cooking time, the temperature, the calories, and so on.
Imagine a book published in three different formats: ebook, paperback, and hardcover. Each has a different weight, size, and so on, yet they are all the same book. Schema.org works the same way: the same vocabulary describes the same thing across different pages and presentations.
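For instance, the recipe page mentioned above could expose its structured data as JSON-LD, the format Google recommends, built here with Python's standard json module; every value below is invented for illustration:

```python
# Build a hypothetical Recipe markup as JSON-LD using Schema.org types.
import json

recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Classic Tomato Soup",
    "cookTime": "PT30M",  # ISO 8601 duration: 30 minutes
    "recipeIngredient": ["4 tomatoes", "1 onion", "500 ml vegetable stock"],
    "nutrition": {"@type": "NutritionInformation", "calories": "120 calories"},
}

# The resulting string is what would go inside a
# <script type="application/ld+json"> tag on the recipe page.
json_ld = json.dumps(recipe, indent=2)
```

The property names (`cookTime`, `recipeIngredient`, `nutrition`) come from the Schema.org Recipe type; the point is that ingredients, cooking time, and calories become machine-readable fields rather than free text.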
The Semantic Web movement, the creation of the Schema.org vocabulary, and the importance these technologies have for semantic search engines like Google, Bing, and Yandex have resulted in the publication of structured data online on a previously unprecedented scale.
Structured Data Growth from the Common Web Crawl
Why does structured data matter in SEO?
In the context of SEO, structured data is an effective tactic for passing critical information about a web page to search engines. In particular, in a recent update, Google clarified:
Content in structured data are eligible for display as rich results in search.
In short, the search engine is able to provide additional features on the search results pages that enhance the visibility of your content. For instance, when asked about structured data, this is how the search engine might extract content from a web page and place it into an answer box, called a featured snippet:
Example of a featured snippet coming from the WordLift blog, where structured data helps the search engine extract critical information. This sort of feature has a high click-through rate: a large number of users finding it will land on your site thanks to better real estate on Google's pages.
A Knowledge Panel is a visualization that appears on top of search results (on mobile) or to the right of them (on desktop) and provides authoritative information about an entity or concept. Structured data helps trigger this feature by enabling Google to pull critical data from your web pages, thus making your brand more visible in its search results.
Other rich elements triggered by structured data are event snippets, which pull critical information for an event directly onto the search results, making your brand the most authoritative source on that specific event and creating an association in the minds of users between the event and your brand:
Example of an event snippet. The WordLift team created an event page using our software's mapping to pass key information about the event, which the search engine took as an authoritative source of information on that specific event.
From a technical standpoint, structured data is a predefined (fixed) schema, typically organized in a tabular format, that helps machines understand how data is organized.
From a marketing standpoint, structured data, by leveraging the Schema.org vocabulary, can help search engines better understand, interpret, and process the information provided on a web page, making it easier for the search engine (Google in particular) to show that data directly on its search results as a rich element.
Rich elements come in various types: featured snippets, knowledge panels, event snippets, top stories, Google News, People Also Ask, reviews, and more. These rich elements can become a key driver of qualified traffic and visibility for your website.
Some Schema.org types are beneficial for most businesses out there. If you have a website, you want to help search engines index its content in the simplest and most effective way, and to do that you can start from, well, the most important page: your homepage. Technical SEO experts like Cindy Krum describe schema markup (as well as XML feeds like the one you can provide to feed Google Shopping via the Google Merchant Center) as your new sitemap. And it is true: when crawling a website (whether you are Google or any other automated crawler you might think of), getting the right information about that website is a goldmine.
Let’s get started with our homepage. We want to let Google know from our homepage the following:
The organization behind the website (Publisher)
The logo of this organization
The URL of the organization
The contact information of the organization
The name of the website
The tagline of the website
The URL of the website
How to use the internal search engine of the website
The Sitelinks (the main links of the website)
We can do all of this by implementing the WebSite structured data type on the homepage of our website. A few more indications from Google on this front:
Add this markup only to the homepage, not to any other pages
This is very important: unfortunately, on a lot of websites you still find this markup on every single page. It should not happen; it is unnecessary.
Always add one SearchAction for the website, and optionally another if supporting app search (if you have a mobile app, this will help users searching from a mobile device continue their journey in the app).
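Putting the list above together, a WebSite markup with a nested Organization publisher and a SearchAction might look like the sketch below, again built with Python's json module. All names and URLs are placeholders to be replaced with your own:

```python
# A WebSite structured data block for the homepage, covering the
# publisher (Organization), logo, contact information, site name,
# URL, and the internal search engine (SearchAction).
import json

website = {
    "@context": "https://schema.org",
    "@type": "WebSite",
    "name": "Example Site",
    "alternateName": "An example tagline",
    "url": "https://example.com/",
    "publisher": {
        "@type": "Organization",
        "name": "Example Org",
        "url": "https://example.com/",
        "logo": {"@type": "ImageObject", "url": "https://example.com/logo.png"},
        "contactPoint": {
            "@type": "ContactPoint",
            "telephone": "+1-000-000-0000",
            "contactType": "customer support",
        },
    },
    # One SearchAction telling Google how the internal search works.
    "potentialAction": {
        "@type": "SearchAction",
        "target": "https://example.com/?s={search_term_string}",
        "query-input": "required name=search_term_string",
    },
}

markup = json.dumps(website, indent=2)
```

Per Google's indications quoted above, this block belongs on the homepage only, embedded in a `<script type="application/ld+json">` tag.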
The Google Knowledge Graph is a system added to Google Search and launched by Google on May 16, 2012. It is a knowledge base used to provide more useful and relevant search results through semantic-search techniques. Knowledge Graph Panels have been added to Google's SERP to provide links to external websites, general information (such as locations, reviews, opening hours, etc.), and direct answers to questions. Information embedded in the Google Knowledge Graph is extracted from multiple sources, including structured data encoded in web pages using Microdata and JSON-LD formats.
The primary function of the Google Knowledge Graph is to learn general facts about the world, organize these pieces of information together, and understand how they connect with each other. These pieces of information are organized into entities, each presenting different kinds of information and connecting to other entities. For example, if you look up the search query "Roman Forum," you will find a knowledge panel that provides a plethora of relevant information about the ancient forum: photos, reviews, the option to buy tickets online, operating hours, an address, and even related search queries for other major attractions in the city of Rome like the Trevi Fountain and the Pantheon. For Google to understand the Roman Forum as an entity, it connects it to other entities like Rome (as a city), Ancient Rome (as a subject), Place (as a visitable location), as well as LocalBusiness (where you can buy tickets, attend events, and find operating hours).
These entities, and the Google Knowledge Graph's use of them, are not only a means for Google to organize this information; they are also essential to presenting it in mobile results and through voice search. The Google Knowledge Graph organizes this information to be not only informational but conversational as well. On mobile, this information can be delivered in a more presentable, easier-to-digest manner than a series of blue links and small text. For voice, a user can tell Google, "Book three tickets to the Roman Forum and Colosseum for next Wednesday." Google can understand what the user is asking by knowing that the "Roman Forum" and "Colosseum" are both attractions and local businesses, as well as that next Wednesday means April 17th. From there, Google can take this information to complete a task, or ask the user for more information or clarification if not enough has been provided.
From "strings" to "things": Google's endless quest for clarity
No one really knows for sure when or why language arose in human history. As British anthropologist Robin Dunbar suggests, language may have emerged to allow humans to build, foster, and maintain relationships. That is what the "Gossip Hypothesis" states. Whether or not this theory holds true, we cannot deny that we live in a world filled with small talk. Yet, through chit-chatting, we strive for clarity. When we ask a question, we don't want answers: we want the answer! That is what the Google Knowledge Graph has been attempting to deliver since 2012. If out of the blue I say "you're like Socrates," chances are you'll be thinking about the Greek philosopher. Yet if I said that while playing soccer with my friends, that would change everything. Why? Because Socrates + soccer = Brazilian soccer player. That may sound trivial for humans, as we do it naturally, but not for search engines!
The Google Knowledge Graph leverages the relationships between words and concepts to understand context, thus assigning a specific meaning to a word.
How to get on the Google Knowledge Graph
To get your entities embedded in the Google Knowledge Graph, you will need to establish an online presence and become an authority on your subject. Establishing an online presence is easy: create website content with structured data, register your site with Google My Business and Search Console, and join any other important platforms such as Facebook and YouTube. To become an expert on your subject (in the eyes of the Google Knowledge Graph team), create content using keyword research and structured data. Doing so can help Google create entities using your content and the knowledge you've provided.
Google has introduced a new way for any person, brand, company, organization, sports team, event, or media property with an existing Knowledge Panel to get verified and to suggest edits to the information presented by Google. It is a very simple process and the most direct way to suggest edits that will be added to the Google Knowledge Graph.
You can start by clicking on the phrase that starts with "Do you manage the online presence for xyz?"
How to claim your entity on Google
You will be required to share with Google your social profiles (by taking a screenshot of the browser window where it appears you are logged in) and a photo of yourself holding an identity card. Once this step has been completed, the Google Search Team will verify the information and you will receive an email from them (mine arrived after two weeks) that will allow you to keep the data on Google always up to date.
The email from The Google Search Team after being verified.
Updating your entity in the Google Knowledge Graph
Once you're on the knowledge graph, you will be able to click the Suggest an edit link (or Suggest edits if on mobile) at the top of the knowledge panel, and from there click the information that you want to change and propose the changes. In the response box, write a short description in which you:
Clearly state your suggested change
Explain why your suggestion is correct and should replace the existing content
Include a publicly accessible URL that confirms your suggested change
If there are multiple changes or suggestions you’d like to make, make sure you submit the necessary feedback for each change you plan on making.
You can update your featured image and some of the properties being displayed according to each entity type (read here for more information).
The Google Knowledge Graph Search API
The Google Knowledge Graph Application Programming Interface (API) enables you to find entities in the Google Knowledge Graph using schema.org types. It can be used to get a ranked list of the most notable entities matching certain criteria, to autocomplete entities in a search box, and to annotate or organize content. The API can be found in Google's API explorer.
Using the API to find entities is relatively easy: providing values for the different parameters helps you find the thing you're looking for. For example, you can fill in the query "Pope" and the type "Person" to find information about Pope Francis, previous Popes, the Vatican, and the Catholic Church. The API then provides data on all the different, relevant entities that exist for the Pope.
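A sketch of that query in Python follows. Only the request URL is constructed here; the endpoint and parameter names come from the public API documentation, and YOUR_API_KEY is a placeholder for your own credentials:

```python
# Build a request URL for the Knowledge Graph Search API.
from urllib.parse import urlencode

ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def kg_search_url(query, api_key, types=None, limit=5):
    """Compose the entities:search URL; `types` takes schema.org
    type names such as "Person"."""
    params = {"query": query, "key": api_key, "limit": limit, "indent": True}
    if types:
        params["types"] = types
    return ENDPOINT + "?" + urlencode(params)

url = kg_search_url("Pope", api_key="YOUR_API_KEY", types="Person")
# Fetch with urllib.request.urlopen(url) and decode the JSON response:
# each element of "itemListElement" carries an entity plus a resultScore
# indicating how notable the match is.
```

With a valid key, the decoded response lists the ranked entities (Pope Francis, previous Popes, and so on) together with their schema.org types and descriptions.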
Finding entities with the Google Knowledge Graph Search API.