Let’s start with the end. In the experiment I am sharing today we measured the impact of a specific improvement to the structured data of a website that references 500+ LocalBusiness entities (more specifically, the site promotes LodgingBusiness entities such as hotels and villas for rent). Before diving into the solution, let’s have a look at the results we obtained using a Causal Impact analysis. If you are a marketer or an SEO, you constantly struggle to measure the impact of your actions in the most precise and irrefutable way; Causal Impact, a methodology originally introduced by Google, helps you with exactly this. It is a statistical analysis that builds a Bayesian structural time series model to isolate the impact of a single change made on a digital platform.
Within a week of improving the existing markup, we saw a +5.09% increase in clicks coming from Google Search. This improvement is statistically significant: it is unlikely to be due to random fluctuations, and the probability of obtaining this effect by chance is very small 🔥🔥
We made two major improvements to the markup of these local businesses:
Improving the quality of NAP (Name, Address and Phone number) data by reconciling the entities with their counterparts in Google My Business (via the Google Maps APIs) and by making sure we had the same data Google has, or better;
Adding, for all the reconciled entities, the hasMap property with a direct link containing the Google CID number (Customer ID number). This is an important identifier that business owners and webmasters should know: it helps Google match entities found by crawling structured data with entities in GMB.
Google My Business is indeed the simplest and most effective way for a local business to enter the Google Knowledge Graph. If your site operates in the travel sector or provides users with immediate access to hundreds of local businesses, what should you do to market your pages using schema markup against fierce competition made up of the businesses themselves and of large brands such as booking.com and tripadvisor.com?
How can you be more relevant for both travelers abroad searching for their dream holiday in another country and for locals trying to escape from large urban areas?
The approach, in most of our projects, is the same regardless of the vertical we work in: knowledge completion and entity reconciliation; these really are two essential building blocks of our SEO strategy.
By providing more precise information in the form of structured linked data, we help search engines connect us with the searchers we are looking for, at the best moment of their customer journey.
Another important aspect is that, while we’re keen on automating SEO (and data curation in general), we understand the importance of the continuous feedback loop between humans and machines: domain experts need to be able to validate the output and to correct any inaccurate predictions that the machine might produce.
There is no way around it – tools like WordLift need to facilitate the process and scale it across the web, but they cannot replace human knowledge and human validation (not yet, at least).
LocalBusiness markup works for different types of businesses from a retail shop to a luxury hotel or a shopping center and it comes with sub-types (here is the full list of the different variants from the schema.org website).
When it comes to SEO, and to Google in particular, all the sub-types should contain the following set of information:
Name, Address and Phone number (and here consistency plays a big role and we want to ensure that the same entity on Yelp shows the same data on Apple Maps, Google, Bing and all the other directories that clients might use)
Reference to the official website (this becomes particularly relevant if the publisher does not coincide with the business owner)
Reference to the Google My Business entity (the 5% lift – we have seen above is indeed related to this specific piece of information) using the hasMap property
Location data (and here, as you might imagine, we can do a lot more than just adding the address as a string of text)
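Putting the four points above together, a minimal LodgingBusiness markup looks like the sketch below, built and serialized from Python. All values (name, address, phone, URLs, CID) are placeholders to swap for your own consistent NAP data:

```python
import json

# Minimal sketch of the LocalBusiness/LodgingBusiness markup described above.
# Every value here is a placeholder, not real business data.
lodging_business = {
    "@context": "https://schema.org",
    "@type": "LodgingBusiness",
    "name": "Villa Example",                          # same name used on GMB, Yelp, etc.
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "Via Esempio 1",
        "addressLocality": "Rome",
        "addressCountry": "IT",
    },
    "telephone": "+39-06-0000000",                    # consistent with all directories
    "url": "https://www.example.com/villa-example/",  # the official website
    # Link to the Google My Business entity via the CID number:
    "hasMap": "https://maps.google.com/maps?cid=YOURCIDNUMBER",
}

print(json.dumps(lodging_business, indent=2))
```

The hasMap value is the Google Maps URL format discussed later in this article; the rest is standard schema.org vocabulary.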
In order to improve the markup and add the hasMap property on hundreds of pages, we added a new feature to WordLift’s WordPress plugin (which also works for non-WordPress websites) that helps editors:
Trigger the reconciliation using Google Maps APIs
Review/Approve the suggestions
Improve structured data markup for Local Business
From the screenshot below, the editor can either “Accept” or “Discard” the provided suggestions.
WordLift reconciles an entity when it finds a loose match on the name of the business, the address and/or the phone number.
Adding location markup using containedInPlace/containsPlace and linked data
As seen in the json-ld above, we had already added – in a previous iteration, and independently from the testing done this time – two important properties:
containedInPlace (on the business pages) and its inverse property containsPlace (on the pages related to villages and regions), to help search engines clearly understand the location of the local businesses.
This data is also very helpful for composing the breadcrumbs, as it helps the searcher understand and confirm the location of a business. Most of us still make searches like “WordLift, Rome” to find a local business, and we are more likely to click on results where we can confirm that – yes, the WordLift office is indeed located in Italy > Lazio > Rome.
To extract this information along with the sameAs links to Wikidata and GeoNames (one of the largest geographical databases with more than 11 million locations) we used our linked data stack and an extension called WordLift Geo to automatically populate the knowledge graph and the JSON-LD with the containedInPlace and containsPlace properties.
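The containedInPlace/containsPlace pairing described above can be sketched as two JSON-LD node objects, one for the business page and one for the region page that lists it. URLs and identifiers below are placeholders (replace the sameAs links with the real Wikidata and GeoNames IDs for your location):

```python
import json

# Sketch of the two sides of the relationship. All @id values, URLs and
# sameAs identifiers are placeholders for illustration only.
business = {
    "@context": "https://schema.org",
    "@type": "LodgingBusiness",
    "@id": "https://www.example.com/villa-example/",
    "name": "Villa Example",
    "containedInPlace": {
        "@type": "AdministrativeArea",
        "@id": "https://www.example.com/lazio/",
        "name": "Lazio",
        # sameAs links to Wikidata and GeoNames facilitate disambiguation;
        # use the real identifiers of the place here.
        "sameAs": [
            "https://www.wikidata.org/wiki/QXXXXX",
            "https://sws.geonames.org/XXXXXXX/",
        ],
    },
}

# The region page declares the inverse property, containsPlace:
region = {
    "@context": "https://schema.org",
    "@type": "AdministrativeArea",
    "@id": "https://www.example.com/lazio/",
    "name": "Lazio",
    "containsPlace": {"@id": "https://www.example.com/villa-example/"},
}

print(json.dumps([business, region], indent=2))
```

Note how the two pages reference each other through the same @id values, which is what lets a linked-data consumer stitch the graph together.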
We have seen a +5.09% increase in clicks (after only one week) on pages where we added the hasMap property and improved the consistency of NAP (business name, address and phone number) on a travel website listing 500+ local businesses
We did this by interfacing with the Google Maps Places APIs and by providing suggestions for the editor to validate or reject
Using containedInPlace/containsPlace is also a good way to improve the structured data of a local business; you should also add sameAs links to Wikidata and/or GeoNames to facilitate disambiguation
As most searches for local businesses (at least in travel) are in the form of “[business name] [location of the business]”, we have seen in the past an increase in CTR when the Breadcrumb markup uses this information from containedInPlace/containsPlace (see below 👇)
One key aspect in SEO, if you are a local business (or deal with local business), is to have the correct location listed in Google Maps and link your website with Google My Business. The best way to do that is to properly markup your Google Map URL using schema markup.
What is the hasMap property and how should we use it? In 2014 (schema.org v1.7) the hasMap property was introduced to link the web page of a place to the URL of a map. To facilitate the link between a web page and the corresponding entity on Google Maps, we can use the following snippet in the JSON-LD: "hasMap": "https://maps.google.com/maps?cid=YOURCIDNUMBER"
What is the Google CID number? The Google CID (Customer ID) is a unique number that identifies a business listing in Google Maps. This number can be used to link a website with the corresponding entity in Google My Business.
How can I find the Google CID number using Google Maps?
Search for the business in Google Maps using the business name
View the source code (use view-source: followed by the URL in your browser)
Press CTRL+F and search the source code for “ludocid”
The CID will be the string of numbers after “ludocid\\u003d” and before #lrd
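The manual steps above can also be automated with a small helper: given the page source of a Google Maps result, pull out the digits that follow “ludocid\u003d”. The escaping in the page source can vary, so this sketch accepts both the escaped and the plain form (the sample fragment is fabricated):

```python
import re

# Matches the CID digits after "ludocid\u003d" (escaped "=") or "ludocid=".
CID_PATTERN = re.compile(r"ludocid(?:\\u003d|=)(\d+)")

def extract_cid(page_source):
    """Return the Google CID found in Maps page source, or None."""
    match = CID_PATTERN.search(page_source)
    return match.group(1) if match else None

# Fabricated fragment of page source, for illustration:
sample = r'...\"ludocid\u003d1234567890123456789#lrd\"...'
print(extract_cid(sample))  # → 1234567890123456789
```

Once extracted, the number drops straight into the hasMap URL shown above.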
SERP analysis is an essential step in the process of content optimization to outrank the competition on Google. In this blog post I will share a new way to run SERP analysis using machine learning and a simple python program that you can run on Google Colab.
SERP (Search Engine Result Page) analysis is part of keyword research and helps you understand if the query that you identified is relevant for your business goals. More importantly by analyzing how results are organized we can understand how Google is interpreting a specific query.
What is the intention of the user making that search?
What search intent is Google associating with that particular query?
The investigative work required to analyze the top results provides an answer to these questions and guides us in improving (or creating) the content that best fits the searcher.
While there is an abundance of keyword research tools that provide SERP analysis functionalities, my particular interest lies in understanding the semantic data layer that Google uses to rank results and what can be inferred using natural language understanding from the corpus of results behind a query. This might also shed some light on how Google does fact extraction and verification for its own knowledge graph starting from the content we write on webpages.
Falling down the rabbit hole
It all started when Jason Barnard and I began chatting about E-A-T and what techniques marketers could use to “read and visualize” Brand SERPs. Jason is a brilliant mind with a profound understanding of Google’s algorithms; he has been studying, tracking and analyzing Brand SERPs since 2013. While Brand SERPs are a category of their own, the process of interpreting search results remains the same whether you are comparing the personal brands of “Andrea Volpini” and “Jason Barnard” or analyzing the different shades of meaning between “making homemade pizza” and “make pizza at home”.
Hands-on with SERP analysis
In this pytude (a simple Python program, as Peter Norvig would call it), the plan goes as follows:
we will crawl Google’s top (10-15-20) results and extract the text behind each webpage,
we will look at the terms and the concepts of the corpus of text resulting from the download, parsing, and scraping of web page data (main body text) of all the results together,
we will then compare two queries “Jason Barnard” and “Andrea Volpini” in our example and we will visualize the most frequent terms for each query within the same semantic space,
After that we will focus on “Jason Barnard” in order to understand the terms that make the top 3 results unique compared to all the other results,
Then, using a sequence-to-sequence model, we will summarize all the top results for Jason into a featured-snippet-like text (this is indeed impressive),
Finally, we will build a question-answering model on top of the corpus of text related to “Jason Barnard” to see what facts we can extract from these pages that could extend or validate information in Google’s knowledge graph.
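The first step of the plan — downloading each result and extracting its main text — can be sketched with the standard library alone. A real pipeline would typically use a dedicated scraper and a boilerplate-removal library, plus retries and robots.txt handling; this is a minimal version, and `top_result_urls` is a placeholder for your own list of SERP URLs:

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script>/<style>/<noscript> blocks."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def page_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

def fetch_text(url):
    # Some sites block scrapers, so extraction can fail for certain results.
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req, timeout=10) as resp:
        return page_text(resp.read().decode("utf-8", errors="ignore"))

# corpus = [fetch_text(u) for u in top_result_urls]  # top_result_urls: your SERP URLs
```

Everything downstream (term comparison, summarization, question answering) then works on this `corpus` of plain-text documents.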
Text mining Google’s SERP
Our text data (Web corpus) is the result of two queries made on Google.com (you can change this parameter in the Notebook) and of the extraction of all the text behind these webpages. Depending on the website, we may or may not be able to collect the text. The two queries I worked with are “Jason Barnard” and “Andrea Volpini”, but you can of course query whatever you like.
One of the most crucial tasks in text mining, once the Web corpus has been created, is to present the data visually. Using natural language processing (NLP) we can explore these SERPs from different angles and at different levels of detail. Using Scattertext we can immediately see which terms (from the combination of the two queries) differentiate the corpus from a general English corpus – in other words, the most characteristic keywords of the corpus.
Here, besides the names (volpini, jasonbarnard, cyberandy), you can see other relevant terms that characterize both Jason and myself. Boowa, a blue dog, and Kwala, a yellow koala, will guide us throughout this investigation, so let me first introduce them: they are two cartoon characters that Jason and his wife created back in the nineties. They are still prominent, as they appear in Jason’s Wikipedia article as part of his career as a cartoon maker.
Visualizing term associations in two Brand SERPs
In the scatter plot below we have on the y-axis the category “Jason Barnard” (our first query), and on the x-axis the category for “Andrea Volpini”. On the top right corner of the chart we can see the most frequent terms on both SERPs – the semantic junctions between Jason and myself according to Google.
Not surprisingly there you will find terms like: Google, Knowledge, Twitter and SEO. On the top left side we can spot Boowa and Kwala for Jason and on the bottom right corner AI, WordLift and knowledge graph for myself.
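Scattertext uses a scaled F-score under the hood, but the underlying idea — comparing per-category term frequencies to find what is shared and what is distinctive — can be sketched with the standard library. The two corpora below are toy stand-ins for the scraped SERP texts:

```python
import re
from collections import Counter

def term_freqs(text):
    """Lowercased word counts; a real pipeline would lemmatize and drop stopwords."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def characteristic_terms(freqs_a, freqs_b, top=5):
    """Terms most skewed toward corpus A, by smoothed frequency ratio."""
    scores = {
        term: (freqs_a[term] + 1) / (freqs_b[term] + 1)
        for term in set(freqs_a) | set(freqs_b)
    }
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Toy corpora standing in for the two Brand SERP text collections:
jason = term_freqs("brand serp boowa kwala seo google twitter boowa")
andrea = term_freqs("knowledge graph wordlift ai seo google twitter wordlift")

print(characteristic_terms(jason, andrea))   # terms skewed toward Jason
print(characteristic_terms(andrea, jason))   # terms skewed toward Andrea
```

Terms with a ratio near 1 (here: seo, google, twitter) are the “semantic junctions” that sit in the top-right corner of the scatter plot.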
Comparing the terms that make the top 3 results unique
When analyzing the SERP, our goal is to understand how Google is interpreting the intent of the user and what terms Google considers relevant for that query. To do so, in the experiment, we split the corpus of results related to Jason between the content that ranks in positions 1, 2 and 3 and everything else.
Summarizing Google’s Search Results
When creating well-optimized content, professional SEOs study the top results in order to understand the search intent and get an overview of the competition. As Gianluca Fiorelli, whom I personally admire a lot, would say: it is vital to look at it directly.
Since we now have the web corpus of all the results, I decided to let the AI do the hard work of “reading” all the content related to Jason and creating an easy-to-read summary. I have experimented quite a lot lately with both extractive and abstractive summarization techniques, and I found that, when dealing with a heterogeneous multi-genre corpus like the one we get from scraping web results, BART (a sequence-to-sequence text model) does an excellent job of understanding the text and generating abstractive summaries (for English).
Let’s see it in action on Jason’s results. Here is where the fun begins. Since I was working with Jason Barnard, a.k.a. the Brand SERP Guy, Jason was able to update his own Brand SERP as if Google was his own CMS 😜 and we could immediately see from the summary how these changes were impacting what Google was indexing.
Here below is the transition from Jason the marketer, musician and cartoon maker to Jason the full-time digital marketer.
Can we reverse-engineer Google’s answer box?
As Jason and I were progressing with the experiment, I also decided to see how close a Question Answering system running Google’s pre-trained BERT models could get to Google’s answer box for the Jason-related question below.
Quite impressively – as the web corpus was indeed the same one that Google uses – I could get exactly the same result.
This is interesting as it tells us that we can use question-answering systems to validate if the content that we’re producing responds to the question that we’re targeting.
Lessons we learned
We can produce semantically organized knowledge from raw unstructured content much like a modern search engine would do. By reverse engineering the semantic extraction layer using NER from Google’s top results we can “see” the unique terms that make web documents stand out on a given query.
We can also analyze the evolution of these terms over time and space (the same query in a different region can have a different set of results).
While keyword research tools always show us a ‘static’ representation of the SERP, by running our own analysis pipeline we realize that these results are constantly changing as new content surfaces in the index and as Google’s neural mind improves its understanding of the world and of the person making the query.
By comparing different queries we can find aspects in common and uniqueness that can help us inform the content strategy (and the content model behind the strategy).
Are you ready to run your first SERP Analysis using Natural Language Processing?
All of this wouldn’t happen without Jason’s challenge of “visualizing” E-A-T and brand serps and this work is dedicated to him and to the wonderful community of marketers, SEOs, clients and partners that are supporting WordLift. A big thank you also goes to the open-source technologies used in this experiment:
In several cases you might need to mix structured data using different formats like microdata and json-ld; in this article we review the do’s and don’ts for these edge cases.
Can I mix microdata and json-ld?
Yes, it is totally fine to use both syntaxes side by side on the same page, but Google will not be able to merge attributes for the same entity using the item ID unless you are using json-ld ONLY.
Let’s get into the details:
I can have both syntaxes (microdata and json-ld) on the same page; for instance, I might use microdata to render WebPage and use json-ld for Organization;
I can also merge attributes related to the same entity when all the data is available in json-ld but …
I cannot combine information related to the same entity by item ID when this information is written in microdata and json-ld. While this is possible in principle, and a pure RDF application would be able to do it, Google does not support it, which means properties won’t be merged and, most importantly, this won’t satisfy the Rich Snippets‘ requirements.
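What “merging by ID” means in practice can be illustrated in a few lines: a pure RDF-style consumer combines node objects that share the same @id into one entity. This is a toy sketch (placeholder URLs, not Google’s actual pipeline):

```python
import json

def merge_by_id(nodes):
    """Toy RDF-style merge: combine JSON-LD node objects sharing an @id."""
    merged = {}
    for node in nodes:
        # Later nodes add (or overwrite) properties on the same entity.
        merged.setdefault(node.get("@id"), {}).update(node)
    return list(merged.values())

# Two blocks describing the same Organization, e.g. from two json-ld scripts:
a = {"@id": "https://example.com/#org", "@type": "Organization", "name": "WordLift"}
b = {"@id": "https://example.com/#org", "url": "https://wordlift.io/"}

print(json.dumps(merge_by_id([a, b]), indent=2))
```

When both blocks are json-ld, Google performs this kind of merge; when one of them is microdata, the two descriptions stay separate and the rich-result requirements may not be met.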
This topic is particularly relevant as microdata remains today the most widely used format for structured data (see the data below, collected by Aaron Bradley from the 2019 Common Crawl sample) and there is a huge demand for improving structured data to gain additional visibility on Google’s SERP.
Before engaging with the community we created two example HTML pages:
json-ld + microdata: here is the result validated with the Google Structured Data Testing Tool (where you will see the “Unspecified Type” error since GSDTT cannot merge the two syntaxes);
json-ld + json-ld: here we can see that GSDTT supports the merge by item ID when data is written in json-ld
Interestingly enough, the first example is properly rendered by the Structured Data Linter, a tool designed to help webmasters validate structured data markup. Here follows the information from the Twitter thread and the messages by Dan Brickley and Jarno van Driel:
in general you can use both syntaxes side by side, but you won’t get the fine-grained merging of triples by ID that a pure RDF application might expect
Here are my SEO predictions for 2020. In a nutshell: we need to re-think content marketing from the ground up and we – as tool makers – really need to design features that help you cope with an ever-changing search landscape; organic opportunities on mobile shrunk by 9% in 2019 (according to Mary Meeker’s 2019 Internet Trends Report) and in 2020 competition will be even fiercer.
The success of our company depends on our vision and here is what we are truly betting on.
What are the top trends for SEO in 2020?
The trends for SEO in 2020, one way or another, are related to Google becoming more and more of a digital walled garden: a closed ecosystem with full control over all the applications. Here are the top 10 trends you need to watch in 2020:
A Giant Panda just walked into our office (courtesy of Google 3D and Augmented Reality results)
1. Google’s SERP gets richer
With Cameos on Google, mini-apps, 3D images and AR within search, we expect the SERP to become a true multimedia hub. We also learned that each new element of this media-rich SERP is driven by its own specific ranking algorithm (see what Jason Barnard calls Darwinism in Search to learn more about it) and that the core algorithm combines all of these rankings into one holistic overview. Following Google I/O, support for 3D objects and mini-apps has been announced; this will expand further in 2020. There are already many apps supporting AR, and letting consumers see how 3D objects look in their home, or how a new pair of shoes will look on them, will become more popular (have a look at our snowman 3D example to get a taste of it). This is a completely new perspective – one that requires a savvy use of structured data (3D objects use the so-called 3D markup). We will also see even more interactivity with Mini Apps – custom-built applications that you can build within Google Search. We’re developing a first prototype and yes – this is also a game changer for SEO.
Hijacking Google SERP with Mini Apps
WHAT CAN YOU DO ABOUT IT?
Forget about being #1 on the SERP; keep on innovating with high-quality content that can appeal to your audience across different search channels (from images to videos, from tweets to news articles). Use structured data to improve your SERP visibility and get ready to experiment with Mini-Apps and Google’s new ways of engaging with users.
2. Voice search and Voice apps are here to stay
Voice is no longer a new trend: voice-enabled device interaction is becoming part of consumers’ day-to-day information diet.
While the vast majority of users are still using voice primarily for basic tasks (i.e. “call my mum”) we’ve seen concrete opportunities in two areas that we believe will keep on growing also in 2020:
Long tail informational queries. We call it Voice Search SEO, and it is about optimizing content for long-tail queries likely to be spoken aloud, providing answers using FAQ markup and creating content like recipes and news articles that the Google Assistant can interact with. Once you see traffic coming in for these long-tail queries on your website, you might want to consider creating your own skill on Alexa or an application for the Google Assistant. See a practical example below.
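The FAQ markup mentioned above is plain FAQPage JSON-LD. A minimal sketch, with a made-up question and answer for illustration:

```python
import json

# Minimal FAQPage example for a long-tail question likely to be spoken aloud.
# The question and answer text are invented placeholders.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best season to visit the Amalfi Coast?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Late spring and early autumn offer mild weather and smaller crowds.",
            },
        }
    ],
}

print(json.dumps(faq_markup, indent=2))
```

Each additional question–answer pair is simply another entry in the mainEntity array.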
WHAT CAN YOU DO ABOUT IT?
Start by analyzing long tail queries that matter for your business, make sure (if you have a local business) to optimize your presence on Google My Business (GMB) and improve the interplay with your website. Focus on calls and direct messages and always improve consistency. Remember your offline presence is strategic for your online visibility.
3. Intent-focused content optimization is the new mantra
Long tail queries will keep gaining momentum as voice search becomes more pervasive and Google gets better at understanding the intent behind each query. The game here is to create content that works for your audience by leveraging intent-focused content optimization, entity-centric content modeling and in-depth analysis of user personas. On-point and authoritative content that responds to a specific information need will win. This leads – in SEO terms – to a clear understanding of your target audience (for this you might want to use our Web Analytics Dashboard) and a massive restructuring of already existing content to ensure that only your best articles, for a given topic, survive. Get to work – analyze your strengths, spend time understanding your readers and prune anything that doesn’t fit their needs.
WHAT CAN YOU DO ABOUT IT?
Spend time analyzing your readers, their behaviour and their journey. SEO is about creating an engaging experience that fits the needs of the users. Make sure you only keep the best for them, consolidate content and spend time improving your content model. Want to learn more about our entity-based content model? Book a call with us; we’re here to help you grow your business.
4. Video keeps growing
Video will keep growing, and YouTube, besides being the second-largest search engine after Google, has become your new TV (6 out of 10 people in 2019 preferred YouTube over TV). Current internet users (especially from younger generations) tend to prefer getting information from online videos. In pure SEO terms video is a terrific channel, and whether you use YouTube or your own video platform, getting videos on Google Search, Google Images and Google Discover is strategic. Using structured data here is a must and opens the door to an engaging user experience.
WHAT CAN YOU DO ABOUT IT?
It’s time to set up your killer video studio and start filming. You don’t need to break the bank; things can be done on a budget and with high quality. Use structured data to promote your content on the website and define a healthy strategy to engage with new users on YouTube.
5. Branding and reputation are essential
Branding and reputation are essential in modern SEO; earning your presence in the knowledge graph has a tremendous impact across multiple platforms (from Google Search to Google Images, from Google Discover to Bing Search) – and requires consistency, strategy, some understanding of linked data publishing and content quality (for all your E-A-T challenges and SEO questions Lily Ray is the right person to engage with).
Creating your digital brand means cultivating, nurturing and optimizing your presence in the Google and Bing Knowledge Graphs. Verifying and claiming the entity, using structured data and helping the gatekeepers (Google, Bing, Facebook etc.) let you interact with your audience. It also means monitoring the changes; in this regard you might want to read the recent study Jason Barnard did while tracking changes in the Google Knowledge Graph.
WHAT CAN YOU DO ABOUT IT?
Get into Google’s Knowledge Graph and Bing’s Graph, curate your entity on your own websites using structured linked data and remember: SEO is about branding as much as it is about disseminating high-quality content. Define a clear KPI to make sure you keep improving your brand visibility. Need help? Give us a call, take the time to visit us in Rome and start improving your branding.
6. Queryless search goes mainstream
We saw publishers’ traffic skyrocket in 2019 because of Google Discover; we expect these peaks to be normalized and trimmed as more sites get into Google’s massive content recommendation machine. Remember the prophetic tweet from @methode (Google’s Gary Illyes) on this topic and the SEO debate that followed it.
The basic idea is that things will change very quickly on this front. We have seen websites that in less than 10 months have accumulated a staggering amount of clicks and sites that got no clicks at all. I do expect this to change and the traffic to be distributed across a larger spectrum of websites in 2020.
Google Discover Report
WHAT CAN YOU DO ABOUT IT?
Once again here it is all about proper structured data implementation, AMP support and hi-res images. Here is my updated checklist to optimize your content for Google Discover: read it all up and get ready for a completely new stream of traffic.
7. Structured data is your new sitemap and…a lot more
Using schema is not only vital to let search engines present your content via featured snippets; it has also become a way to help Google understand how the content on your site is connected. Learn schema markup and start thinking like a crawler (or let WordLift do the work for you): translate whatever content you think is important into a structured linked graph. We’ve seen literally magic happening this year with sites using our SEO-designed knowledge graphs (e.g. 68% organic growth on the Salzburgerland website, in an ultra-competitive landscape such as travel and with a growing number of queries ending in zero clicks). I also expect to see more concrete use cases where structured data is not only used for SEO but becomes a building block of the content strategy (our Semantic Web Analytics Dashboard has been a success and we expect more people to use on-page structured data to improve analytics, improve user experience, train new recommendation systems and structure content).
WHAT CAN YOU DO ABOUT IT?
Do what Google does: build your own knowledge graph and automate structured data markup with WordLift. Structured data is no longer about rich results, it is an essential building block of your content strategy. Contact us to learn more
8. Cutting-edge language models boost your writing creativity and provide further help with SEO
Programming is no longer the same: machine learning and natural language processing are a natural fit with creativity, content writing and content optimization in general. 2019 will be remembered as the year when NLP literally exploded; we witnessed one innovation after another (from GPT-2 to BERT, from DistilBERT to ALBERT) and an unprecedented level of improvement across the most daring NLP tasks such as content understanding, question answering, content generation and more. Unlike simpler language-generation approaches like the good ol’ Markov chains, which only work with a limited vocabulary, ML models using the transformer architecture can learn larger patterns of grammar and semantics and re-apply them in completely different contexts.
The improvements of transfer learning from large-scale language models in 2019
Smaller, faster, cheaper and improved language models have revolutionized the world of NLP in 2019 (read this article from one of the teams driving this revolution in large-scale language models).
Finally, the night has came and I can play with Question Answering using #Bert Large whole-word version on quotes from Umberto Eco, “The Name of the Rose” #NLP@TheodoraPetkova isn’t this new wave of language models an incredible creative opportunity for content writers? pic.twitter.com/BSwN7elmAh
Needless to say, I see this trend evolving in 2020 and we’re doing our very best to bring these innovations to WordLift (have a look at how we’re planning to help you summarize blog posts using BERT). This is by far the area where I see most of the opportunities in 2020, not only in terms of SEO automation but also in terms of content creation and optimization. Stay tuned, follow me on Twitter and get ready to improve your publishing workflow with the help of AI.
9. Page Load Time continues to rule your world
The speed of websites and the different tasks involved in loading a webpage will remain a key factor in SEO. Google has pushed websites – since the summer of 2018 – to improve their loading speed and will continue to do so in 2020. The more we interact with content through Google Search, Google Discover, Google Images and the Google Assistant, the more our websites need to be top performing. From improving Time To First Byte to trimming complexity from HTML files, we will continue to spend a lot of energy making our websites blazing fast. Nothing new really – this was exactly the target we had last year – it remains highly important, but we now have more technology and more metrics to keep improving (GSC introduced a brand-new speed report in 2019).
WHAT CAN YOU DO ABOUT IT?
Always remember: Google will see and evaluate your website as a user with an average mobile phone connected to a 3G network. Do your best to reduce complexity, optimize content delivery and improve loading performance (from compressing images to shrinking CSS, from web-font loading to server response time). Start by analyzing your website with tools like GTMetrix and Web.dev and make sure to use plugins like WP Rocket. It’s a technical investment, it requires effort, but it really pays off.
10. New search engines emerge
As Google increasingly plays the Walled Garden strategy to grow its market share, literally devouring the user experience across multiple verticals (from flights to hotels, from cooking to jobs, etc.), the need for an unbiased and privacy-friendly search experience keeps growing and paves the way for new search engines. Not only did DuckDuckGo keep growing in 2019 (and it will continue to do so in 2020), but Ahrefs (a well-known SEO tool) also announced the launch of a new search engine to encourage innovation (and to leverage the rising complaints against Google from publishers). I also expect more SEO-related techniques to improve content visibility on the Amazon platform for anyone selling products online.
Do your best inside and outside Google’s Walled Garden. It’s pure fun: Google still brings a tremendous amount of value to web publishers and, with most of our clients in 2019, we have seen double-digit organic growth. So, are you ready to innovate on SEO in 2020? Still have a question? Book a call with us and join our list of happy customers!
Google Cameos is a new, invite-only Google Search experiment that lets people who already appear in Google’s Knowledge Graph record videos answering simple questions related to their work and/or life experience.
Video-answers appear in Google Search right below the knowledge panel; the same answers are also presented on Google Discover and can pop up on the Google Assistant when a user asks something about that person.
This is how Google explains it: “Cameos on Google lets you be the authority on you.”
Here is how Google Cameos works
Quite simply, here is how it worked for me:
I received an invitation by email
I downloaded the Google Cameos app on my phone (it is available for both Android and iOS)
Upon starting the app (and here is where the fun begins), it generates questions by looking at the information Google has in its Knowledge Graph; these questions are divided into two categories:
– For the fans (things that are more closely related to the information Google has about you)
– Trending topics (the most frequent questions on topics that relate to you)
By simply choosing a question you can start recording and, if you like the preview, send the video
The video gets published within an hour or so on Google Search, below the knowledge panel
The app gives you a quick overview in terms of “Total Impressions” and “Watches”
Cameos on Google
What do you need to use Google Cameos?
Right now the experiment is limited and you will need an invite to participate, but here is what I did before getting the invitation:
Get your name/entity in Google’s Knowledge Graph (not trivial but these days not so difficult either)
Get your entity verified – if your name appears with a knowledge panel you can start the verification process from there; alternatively, you can start from Posts on Google
Make sure the content about you is always fresh and up to date (you can suggest edits on the information available on the Knowledge Panel)
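A common first step toward the Knowledge Graph is publishing schema.org Person markup with sameAs links pointing at your established profiles, so Google can reconcile the entity. Here is a minimal sketch that assembles such markup as JSON-LD; the name, URLs and job title are placeholders, not real profiles:

```python
# Minimal sketch: build schema.org Person JSON-LD with sameAs links.
# All values below are hypothetical placeholders; substitute your own.
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "CEO",
    "url": "https://example.com/about",
    "sameAs": [
        "https://twitter.com/janedoe",
        "https://www.linkedin.com/in/janedoe",
        "https://en.wikipedia.org/wiki/Jane_Doe",
    ],
}

# Wrap the JSON-LD in a <script> block ready to paste into the page <head>.
markup = (
    '<script type="application/ld+json">\n'
    + json.dumps(person, indent=2)
    + "\n</script>"
)
print(markup)
```

The sameAs links are what do the reconciliation work: they tie the entity on your page to profiles Google already trusts.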
From Cameos to Google Discover
Is Google trying again to become a Social Network?
Well, yes, in a way the medium is similar: the user is enticed to invest in their own personal branding and to engage with their audience on Google‘s channels. While we only see it happening for people in the Knowledge Graph, it is easy to expect it to extend further, since almost anyone is already in Google’s Knowledge Graph one way or another. Try asking Google to call a friend from Twitter and you will find yourself in the awkward position of accessing the phone number of a person whom, yes, you know, but who is not in your phone’s contact list – all of this displayed on a nice-looking material card containing a photo of that person taken from… well, the web.
The 3 things I learnt from the Google Cameos experiment
The SERP is getting richer and richer, letting people interact with each other in all sorts of ways;
Google‘s interactions and activation channels are built on top of its ever-growing Knowledge Graph; the more data you provide, the easier it gets for Google to let you connect with your audience – this holds for a small business, an individual and a brand. In this particular case the most exciting piece of technology is the machinery used to generate the questions by looking at the data in the Knowledge Graph. Let me give you an example: since Google knows that I have co-founded several companies and currently hold a CEO position, my questions all gravitate around being a CEO, starting up a company and acting as a founder. Generating novel questions from a knowledge graph is one of those tech challenges the ML/DL community is very excited about; as a marketer, this means that the more I expand the data related to my entities, the more occasions I get to interact with my audience;
Google is investing heavily in its own walled garden by providing an AI-driven communication platform where everyone can also buy ads – it’s interesting to see how these experiments on the “organic” front tend to have their own “paid” counterpart (think, for instance, of the lead-generation forms recently introduced in Google Ads). This means that a lot is changing in the way organic search works, and the stronger your brand is, the more chances you have of capturing users’ attention.
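The question-generation idea in the second point above can be pictured with a toy template system (this is only an illustration of the concept, not Google’s actual machinery; the entity facts and templates are invented):

```python
# Toy sketch (not Google's actual system): turn Knowledge Graph style
# facts about an entity into "For the fans" questions via templates.
# Both the facts and the templates below are hypothetical examples.
facts = {
    "jobTitle": "CEO",
    "founderOf": "ExampleCo",
}

templates = {
    "jobTitle": "What does a typical day as a {} look like?",
    "founderOf": "What inspired you to start {}?",
}

# Each fact that matches a template yields one candidate question.
questions = [templates[k].format(v) for k, v in facts.items() if k in templates]
for q in questions:
    print(q)
```

The takeaway is the same as in the text: every extra fact attached to your entity is one more template that can fire, hence one more occasion to engage your audience.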
Now, it’s really time to get famous and start playing with Cameos!