We had the opportunity to interview Bill Slawski, Director of SEO Research at Go Fish Digital and creator and author of SEO by the Sea. Bill Slawski is among the most authoritative people in the SEO community, a hybrid between an academic researcher and a practitioner, and he has been looking at how search engines work since 1996. Together with Andrea Volpini, we took the chance to ask Bill a few questions to understand how SEO is evolving and why you should understand the current picture to keep implementing a successful SEO strategy!
When did you start with SEO?
Bill Slawski: I started doing SEO in 1996. I also made my first site in 1996. The sister of one of the people I worked on that site with (she was selling computers for Digital Equipment Corp at the time) sent us an email saying, “Hey, we just started this new website. You guys might like it.” It was the time in which AltaVista was a primary search engine. This was my first chance to see a search engine in action. My client said, “We need to be in this.” I tried to figure out how, and that was my first attempt at doing SEO!
After the launch of Google Discover, it seems that we live in a query-less world. How has SEO changed?
Bill Slawski: It has changed, but it hasn’t changed that much. I remember in 2007 giving a presentation at an SEO meetup on named entities. These things have been in the atmosphere; we just haven’t really brought them to the forefront and talked about them much. An example of a query-less search? You’re driving down the road at 50 miles an hour, you wave your phone around in the air, and that’s a signal to your phone asking where you’re going. “Give me navigation: what’s ahead of us? What’s the traffic like? Are there detours?” And your phone can tell you that. It can say there’s a five-minute delay up ahead. You really don’t need a query for that.
What do you do, then, if you don’t need a query?
Bill Slawski: Well, for the Google Now, for it to show you search suggestions, it needs to have some idea of what your search history is like, what you’re interested in. In Google Now, you can feed it information about your interests, but it can also look at what you’ve searched for in the past, what you look like you have an interest in. If you want to see certain information about a certain sports team or a movie or a TV series, you search for those things and it knows you have an interest in them.
Andrea Volpini: It’s a context that gets built around the user. In one analysis that we ran for one of our VIP customers, looking at the data from Google Search Console, I found it extremely interesting that traffic from Discover had reached 42%! You can see that this big bump is due to the fact that Google started to account for this data. This fact might scare a lot of people in the SEO industry: if we live in a query-less world, how do you optimize for it?
Can we do SEO in a query-less world?
Bill Slawski: They (SEO practitioners) should be happy about it. They should be excited about it.
Andrea Volpini: I was super excited. When I saw it, for me, it was like a revelation, because I have always put a lot of effort into creating data and metadata. Even before we arrived at structured data, it has always been a very important aspect of the websites that we build. I used to build CMSs, so I was really into creating data. But I underestimated the impact of content recommendation through Google Discover when it comes to the traffic of a new website. Did you expect something like this?
Bill Slawski: Watch how Google is tracking trends and entity search. You can identify which things are entities by their having an entity type associated with them, something other than just “search term.” So you search for a baseball team or a football team, and you see “search term” as one category associated with it, and the other category might be “professional Chicago baseball team.” The professional Chicago baseball team is the entity. Google’s tracking entities. What this means is that when they identify interests you may have, they may do that somewhat broadly, and they may show you, as a searcher in Google Now and Discover, things related to that. If you write about some things with some level of generalization that might fit some of the broader categories that match a lot, you’re gonna show up in some of those discovery feeds.
It’s like when Google used to show headers in search results, “Search news now” or “Top news now,” and identified your site, or something you wrote as a blog post, as something that fits the “top news now” category. You didn’t apply to have that; you were a beneficiary of Google’s recommendation.
Andrea Volpini: Yes. When I saw this, I started to look a little bit at the data in the Google Search Console of this client, and then another client, and then another client again. What I found out by comparing these first sites is that Google tends not to create an overlap between Google Search and Discover, meaning that if a page is bringing traffic from Google Search, it might not be featured on Discover. There are pages featured on Discover that also rank high on Google Search, but I found it extremely interesting that pages that didn’t receive any organic traffic had been picked up by Google Discover, as if Google is trying to differentiate these channels.
Is this two-level search effect widening?
Bill Slawski: I think they’re trying to broaden, we might say, broaden our experience: give us things that we’re not necessarily searching for, but that are related. There’s at least one AI program I’ve worked with that looks at my Twitter stream and recommends stories for me based upon what I’ve been tweeting. I see Google taking a role like that: “These are some other things they might be interested in that they haven’t been searching for. Let me show them to them.”
There’s a brilliant Google contributor video about the Semantic Search Engine. The first few minutes, he starts off saying, “Okay, I had trouble deciding what to name this video. I thought about The Discover Search Engine. Then I thought about A Decision Search Engine and realized Bing had already taken that. A Smart Search Engine. Well, that’s obvious.”
But capturing what we’re interested in is something Google’s seeming to try to do more of with the related questions that people also ask. We’re seeing Google trying to keep us on search results pages, clicking through, question after question, seeing things that are related that we’re interested in. Probably tracking every click that we make as to what we might have some interest in. With one box results, the same type of thing. They’ll keep on showing us one box results if we keep on clicking on them. If we stop clicking on them, they’ll change those.
Andrea Volpini: Where are we going with all of this? How do you see the role of SEO changing? What would you recommend to someone who starts SEO today; what should they become? You told us how you started in ’96 with someone asking you to be on AltaVista. I remember AltaVista quite well; I worked with AltaVista myself, and we started to use AltaVista for intranets.
What would you recommend to someone that starts SEO today?
Bill Slawski: I’m gonna go back to 2005, to a project I worked on then. It was for Baltimore.org, the visitor’s center of Baltimore, the conference center. They wanted people to visit the city and see everything it had to offer. They were trying to rank well for terms like “Baltimore bars” and “Baltimore sports.” They got it into their heads that they wanted to rank well for “Baltimore black history.” We tried to optimize a page for it; we put the words “Baltimore black history” on the page a few times. But there were too many other good sites talking about Baltimore’s black history, and we were failing miserably to rank well for that phrase. So I turned to a copywriter and said, “There are great places in Baltimore to see that have something to do with this history. Let’s write about those. Let’s create a walking tour of the city. Let’s show people the famous black churches and black colleges and the nine-foot-tall statue of Billie Holiday, the six townhomes that Frederick Douglass bought in his 60s.
“He was an escaped slave at one point in time, came back to Baltimore as he got older and a lot richer and started buying properties and became a businessman. Let’s show people those places. Let’s tell them how to get there.”
We created a page that was a walking tour of Baltimore. After three months, it was the sixth most visited page on that site, a site of about 300 pages or so. That was really good. That was successful. It got people to actually visit the city of Baltimore; they wanted to see those things.
Aaron Bradley ran this series of tweets the other day where one of the things he said was, “Don’t get worried about the switch in search engines to entities. Entities are all around us. They surround us. They’re everywhere. They’re everything you can write about. They’re web pages. They’re people. They’re places.”
It’s true. If we don’t switch from a search based on matching words in documents against words in queries to a search about things, we’re missing the opportunity to write about things, to identify the attributes and properties associated with those things, to tell people about what’s in the world around us, because they’re gonna search for those things. That’s the movement the search engine is making: being able to understand that you’re talking about something in particular and to return information about that thing.
Andrea Volpini: The new SEO should basically become a contextual writer, someone who intercepts intents and can create good content around them.
Is there something else in the profession of SEO in 2020?
Bill Slawski: One of the things I read about recently was something called entity extraction: a search engine being able to read a page and identify all the things on that page that are being written about, and all the context that surrounds those things, all the classes. The example in the post I wrote about was a baseball player, Bryce Harper. Bryce Harper was a Washington National. Bryce Harper hits home runs. That’s the context: he’s hit so many home runs over his career. Having a search engine able to take facts on a page, understand them, make a collection of those facts, and compare them to what’s said on other pages about the same entities means it can fact-check. It can do the fact-checking itself; it doesn’t need some news organization to do that.
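To make the idea concrete, here is a toy sketch of entity extraction and cross-page comparison. It uses a tiny hand-made gazetteer of known entities; real systems like the one Bill describes use learned models over a full knowledge graph, so everything here is a simplified illustration.

```python
# Toy gazetteer: entity name -> entity type (hand-made for illustration).
GAZETTEER = {
    "Bryce Harper": "Person",
    "Washington Nationals": "SportsTeam",
}

def extract_entities(text):
    """Return the known entities mentioned in a page's text."""
    return {name: etype for name, etype in GAZETTEER.items() if name in text}

def cross_check(pages):
    """Collect, per entity, which pages mention it, so facts stated
    about the same entity on different pages can be compared."""
    mentions = {}
    for url, text in pages.items():
        for name in extract_entities(text):
            mentions.setdefault(name, []).append(url)
    return mentions

pages = {
    "pageA": "Bryce Harper played for the Washington Nationals.",
    "pageB": "Bryce Harper hits home runs.",
}
print(cross_check(pages))
```

Both pages mention Bryce Harper, so claims made about him on one page can be checked against the other, which is the essence of the fact-checking Bill describes.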
Andrea Volpini: Well, this is the reason why, when we started our project, my initial idea was to create a semantic editor to let people create linked data. I didn’t look at SEO as a potential market, but then I realized immediately that all the interest was coming from, indeed, the SEO community. For instance, we created your entity on the WordLift website. This means that when we annotate content with our tool, we have a permanent linked data ID. In the beginning, I thought it was natural to have permanent linked data IDs, because this was the way the semantic web worked. But then I suddenly realized there is a very strong SEO effect in doing that, because Google is also crawling the RDF that I’m publishing.
I saw a few months back that Google actually uses a different class of IPs for crawling this data.
Do you think it still makes sense to publish your own linked data IDs, or is it okay to use other IDs? Do you see value in publishing data with your own systems?
Bill Slawski: It’s something I haven’t really thought about too much, but it’s worth considering. I’ve seen people publishing those. I’ve tried to put one of those together, and I asked myself, “Why am I doing this? Is there gonna be value to it? Is it gonna be worthwhile?” But when I put together my homepage, a page about me, I wanted to try it, to see what it was capable of, to see what it might show in search engines. Some of it showed; some of it didn’t. It was interesting to experiment with and to see what the rest of the world is catching onto when you create that stuff.
Andrea Volpini: This is actually how the entity of Gennaro Cuofano was born in the Knowledge Graph. We started to add a lot of references, telling Google, “Here is Gennaro; he is also the author of these books.” As soon as we injected this information into our Knowledge Graph and into the pages, it was easier for Google to collect the data, fact-check it, and say, “Okay, this is the guy who wrote the book and now works for this company,” and so on and so forth.
Gennaro Cuofano: And Google provided a Knowledge Panel with a complete description. It was something that before was not showing up in search, or at least only as partial information. It felt like, by providing this kind of information, we allowed the search engine, Google in this case, to have better context and to fact-check the information, which gave authority to the information that I provided.
Bill Slawski: Have you looked at Microsoft’s Concept Graph?
Andrea Volpini: Yes! I found it even more advanced in a way. It’s also very quick at getting the information in. The experience is much easier for someone who wants to be in Bing, because as soon as we publish such data, it gets into the panel.
Bill Slawski: It surprised me because, for a while, the stuff that Microsoft Research in Asia was doing was disappearing. They put together Probase and then it stopped; nothing happened for a couple of years. It’s been revived into the Microsoft Concept Graph, which is good to see. It’s good to see they did something with all that work.
Gennaro Cuofano: Plus, we don’t know how much integration there is between Bing and the LinkedIn APIs.
Andrea Volpini: It’s pretty strong! Probably the quickest way to get into Satori, the Knowledge Graph of Microsoft, is now for a person to be on LinkedIn, because it looks like they’re using this information.
In what other ways can we use structured data currently for SEO?
Bill Slawski: One of the things I would point to is augmentation queries; I mentioned those in the presentation. Google will not only look at queries associated with pages about a particular person, place, or thing, but it will also look at query log information and at structured data associated with the page, and it will run queries based upon those. It’s doing some machine learning to try to understand what else might be interesting about your pages. If these augmentation queries, the test queries it runs about your page, tend to do as well as the original queries for your page in terms of people selecting and clicking on things, it might combine the augmentation query results with the original query results when it shows them to people for your page.
One of the new additions in the latest version of Schema.org, 3.5, is the “knowsAbout” attribute. As I mentioned, with the knowsAbout attribute, you could be a plumber and you could know about drain repair. Some searchers will search for plumbers and expect to see information just about Los Angeles plumbers, and they may see a result from a Los Angeles plumber that talks about drain repair. That may be exactly what they’re looking for. That may expand search results, surfacing something relevant from your site that you’ve identified as an area of expertise, which I think is interesting. I like that structured data is capable of a result like that.
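A minimal sketch of what such markup could look like, built here as a Python dictionary and serialized to JSON-LD. The business name, URL pattern, and service list are invented for illustration; only the schema.org types and properties (Plumber, areaServed, knowsAbout) come from the vocabulary discussed.

```python
import json

# Hypothetical plumber profile using the "knowsAbout" property from
# Schema.org 3.5; all the concrete values below are made up.
plumber = {
    "@context": "https://schema.org",
    "@type": "Plumber",
    "name": "Example LA Plumbing",       # invented business name
    "areaServed": "Los Angeles",
    "knowsAbout": ["drain repair", "pipe replacement"],
}

# This JSON-LD string would normally go into a <script> tag on the page.
print(json.dumps(plumber, indent=2))
```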
What is your favorite new addition to Schema 3.5?
Bill Slawski: FAQ page!
On Schema.org there’s such a wide range, and they’re gonna update it every month now. But just having things like bed type is good.
What do you think is the right balance, when I add structured data to my pages, between over-complicated data structuring and simplicity?
Bill Slawski: I did SEO for a site a few years ago that was an apartment complex that was having trouble renting units. It was a four-page site, and it showed off the dog park really well. It didn’t show off things like the fact that if you took the elevator to the basement, you got let out at the DC Metro, where you could travel all throughout Washington DC, northern Virginia, and southern Maryland and visit all 31 Smithsonian museums, and a lot of other things that are underground, underneath that part of Virginia. It was right next to what’s called Pentagon City, the largest shopping mall in Virginia: four stories tall, all underground, so you can’t see it from the street. Adding structured data to your page to identify those things is something you can do, and it’s probably something you should include on the page itself.
Maybe you want to include more information on your pages about entities, and include it in structured data too, in a way that is really precise. You’re using the language identified in Schema, vocabulary that subject matter experts describe as something people might want to know. It defines it well. It defines it easily.
So what you’re saying is: do with your data what you do with your content. If you put emphasis on an aspect content-wise, then you should also do the proper markup for it?
Bill Slawski: Right! With the apartment complex I was talking about, location sells. It gets people to decide, “This is where I want to live.” Tell them about the area around them. Put that on your page and put that in your data. Don’t just show pictures of the dog park; tell them what the area schools are like, what the community’s like, what businesses are around, what opportunities there are. From the basement of this apartment complex you can ride to the local baseball stadium or the local football stadium; you’re blocks away. DC traffic is a nightmare. If you ride the Metro everywhere, you’re much better off…
Andrea Volpini: That’s big. Also, in real estate we say that a metro station close by always increases the value of the property by 30%. It’s definitely relevant. Something that is relevant for the business should also be taken into consideration when structuring the page.
Is it also worth exploring Schema that is not yet officially used by Google?
Bill Slawski: You can anticipate things that never happen; that’s possible. But sometimes anticipating things correctly can be a competitive advantage if they come to fruition. You mentioned real estate. Have you seen things like walkability scores being used on realty sites? The idea that somebody can give you a metric that lets you easily compare one location to another based on what you can do without a car is a nice feature. Being able to find out data about a location could be really useful.
Andrea Volpini: This is why, getting back to the linked data IDs, having a linked data ID for the articles and for the entities that describe the article becomes relevant: you can then query the data yourself, make an analysis of which neighborhood has the least amount of traffic, and see, “Okay, did I write about this neighborhood or not?” One of the experiments we run these days is bringing the entity data from the page into Google Analytics, to help the editorial team think about what traffic entities are generating across multiple pages. Entities can also be used internally for organizing things and for saying, “Yes, in this neighborhood, for instance, we have the least amount of crime,” or things like that. You can start cross-checking the data instead of only waiting for Google to use it. You can use the data yourself.
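The per-entity traffic analysis described here can be sketched in a few lines: once each page is annotated with entities, pageview counts can be aggregated by entity across pages. The page paths, entity names, and pageview figures below are all invented for illustration.

```python
# Which entities each (hypothetical) page is annotated with.
page_entities = {
    "/article-1": ["Trastevere"],
    "/article-2": ["Trastevere", "Colosseum"],
    "/article-3": ["Colosseum"],
}
# Pageview counts per page (invented figures, e.g. from an analytics export).
pageviews = {"/article-1": 120, "/article-2": 80, "/article-3": 300}

# Aggregate traffic by entity across all pages.
entity_traffic = {}
for page, entities in page_entities.items():
    for entity in entities:
        entity_traffic[entity] = entity_traffic.get(entity, 0) + pageviews[page]

print(entity_traffic)
```

This is the kind of view an editorial team could use to see which entities generate traffic across multiple pages, regardless of which analytics tool holds the raw numbers.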
Is there any other aspect worth mentioning about how to use structured data for SEO?
Bill Slawski: Mike Blumenthal wrote an article based upon something I wrote about, the thing about entity extraction. He said, “Hotels are entities, and if you put information about hotels, about bookings, about locations, about amenities onto your pages so that people can find them, so people can identify those things, you’re making their experience searching for things richer and more …”
Andrea Volpini: We had a case where we did exactly this for a lodging business. We saw that as soon as we started to add amenities as structured data, and most importantly, as soon as we started to add geographic references to the places these locations were in, we saw an increase, and not just in pure traffic terms. The traffic went up, but we also saw an interesting phenomenon of queries becoming broader. Before having structured data on the hotels and the lodging business, the site received traffic from very few keywords. As soon as we started to add the structured data, marking up amenities and services, and also added the Schema action for booking, we saw that Google was bringing a lot more traffic on long-tail keywords for a lot of different locations where this business had hotels but had not been visible on Google.
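A minimal sketch of the kind of lodging markup described here: amenities expressed as LocationFeatureSpecification entries plus explicit geo coordinates. The hotel name, amenity list, and coordinates are invented; the types and properties (Hotel, amenityFeature, GeoCoordinates) are standard schema.org vocabulary.

```python
import json

# Hypothetical hotel with amenities and a geographic reference,
# serialized as JSON-LD for embedding in the page.
hotel = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Example Hotel",                     # invented name
    "amenityFeature": [
        {"@type": "LocationFeatureSpecification", "name": "Free WiFi", "value": True},
        {"@type": "LocationFeatureSpecification", "name": "Pool", "value": True},
    ],
    "geo": {"@type": "GeoCoordinates", "latitude": 41.9, "longitude": 12.5},
}

print(json.dumps(hotel, indent=2))
```

Marking up amenities and location this explicitly is what, in the account above, let the search engine match the site against broader, long-tail queries.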
Bill Slawski: It wasn’t just matching names of locations on your pages to names of locations in queries; it was Google understanding where you were located.
What do you think Schema Actions are useful for?
Bill Slawski: There was a patent that came out a couple of years ago where Google said, “You can circle an entity on a mobile device and you can register actions associated with those entities.” Somebody got the idea right and the concept wrong: they were thinking about touchscreens instead of voice. They never really rewrote it so that it was voice-activated, so that you could register actions with spoken queries instead of these touch queries. But I like the idea. Alexa has its skills; being able to register actions with your entities is not too different from what existed in Google before. Think about how you would optimize a local search page, where you would make sure your address was in a postal format so that it was more likely to be found and used. If you wanted people to drive to a location, you’d want to give them driving directions, and that’s something you can register an action for now, but it’s already in there. It feels like you’re helping Google implement things that it should be implementing anyway, or is likely to.
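For reference, registering an action with an entity in structured data looks roughly like the following sketch: a "potentialAction" attached to the hotel entity, here a ReserveAction for the booking case mentioned earlier. The hotel name and the target URL template are hypothetical.

```python
import json

# Hypothetical hotel entity with a booking action registered via
# "potentialAction" (standard schema.org ReserveAction / EntryPoint types).
hotel_action = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Example Hotel",   # invented name
    "potentialAction": {
        "@type": "ReserveAction",
        "target": {
            "@type": "EntryPoint",
            # Invented URL template; a real one would point at the booking flow.
            "urlTemplate": "https://example.com/book?room={room_id}",
        },
    },
}

print(json.dumps(hotel_action, indent=2))
```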
Andrea Volpini: Of course. I think that’s a very beautiful point, that we’re doing something that we should do. We’re now doing it for Google, but that’s the way it should be done. I like it. I like it a lot.
How much do you think structured data is gonna help with voice search?
Bill Slawski: I can see Schema not being strictly necessary because of other things going on, like entity extraction, where Google is trying to identify entities itself. But Google tends to do things in a redundant way. They tend to have two different channels to get the same thing done: if one gets something correct and the other fails to, they still have it covered. I think Schema gives them that chance. It gives site owners a chance to include things that Google might have missed. And behind Schema there is an organization which isn’t the search engine; it’s a bunch of volunteers who are subject matter experts in a lot of areas, or play those on TV. Some are really good at that; some of them miss some things. If you are a member of the Schema community mailing list, conversations take place where people call each other on things: “Wouldn’t you do this for this? Wouldn’t you do that? Why aren’t you doing this?” It’s interesting to read those conversations.
Andrea Volpini: Absolutely. I always enjoy the Schema mailing list because, as you said, you have different perspectives and different subject matter experts who, of course, need to declare what their content is about. I see Schema as a sitemap for data. Even though Google can crawl the information, it always values the fact that there is someone behind it curating the data, who might add something they have missed, as you say, but also give them a chance to check and say, “Okay, is this true or not?”
Bill Slawski: You want a scalable web. It does make sense to have editors curating what gets listed, but that is potentially an issue for Wikipedia at some point in the future: there’s only so much human-edited knowledge it can handle. When some event changes the world overnight and some facts about important things change, you don’t want human editors trying to catch up as quickly as they can to get it correct. You want some automated way of having that information updated. Will we see that? We have organizations like DeepMind mining sites like the Daily Mail and CNN. They chose those not necessarily because they’re the best sources of news, but because they’re structured in a way that makes the information easy to find.
What should SEOs be looking at as of now? What do they need to be very careful about?
Bill Slawski: Not to be intimidated by the search engine grabbing content from web pages and publishing it in knowledge panels. Look for the opportunities when they’re there. Google is a business, and as a business, they base what they do on advertising. But they’re not trying to steal your business. They may take advantage of business models that maybe need to be a little more sophisticated than “How tall is Abraham Lincoln?” You could probably build something a little bit more robust than that as a business model. But if Google is stealing your business model with what they publish in knowledge panels, you should work around their business model and not be intimidated by it. Consider how much of an opportunity it potentially is to have a channel where you’re being focused upon and located easily by people who might value your services.
When I started Four-Week MBA back in 2015, I had in mind a portal where people could find practical business insights they could readily apply: the opposite of the traditional business school concept, where you invest two years of your life with a full-time commitment and massive financial resources, and step out of the job market.
I wanted to create a place where people could find easily executable, practical advice from other practitioners. I wanted it to be the farthest thing I could imagine from the purely academic world. Yet, even though I had managed to bring some traffic and awareness to the site, it didn’t generate enough organic reach, and in December 2017 the situation was quite depressing.
I made up my mind: as a New Year’s resolution, I decided I needed to create some traction for the blog. Yet, as often happens with New Year’s resolutions, I didn’t do anything about it for three months.
Until, around March-April 2018, I started to experiment a bit to figure out how to turn an amateurish blog into a professional one. In the last six months I accelerated the experimentation, and after many trials and errors, this is where I got.
What lessons did I learn along the way? Let me show you the framework I used to get where I am.
In a Ph.D. program, you need to focus on a single area of expertise for years. At that stage you won’t be satisfied anymore with a superficial knowledge of a subject; you’ll look for understanding. The step from knowledge to understanding is not an easy one. It requires years of research, study and thinking about that subject.
The reason my blog hadn’t been successful was lack of focus. Thus, I asked myself a simple question, one that would have massive implications for my editorial strategy: “What topic would I be so passionate about that it would be worth at least five to ten years of my time in research?”
In short, I thought in terms of completing a Ph.D. program. When you decide to go for it, it isn’t a simple choice; you need to love the topic to the point of knowing you’ll be spending the next years researching it. In my case, after a few weeks of thinking about it, it came up, and it was “business modeling.” I knew I wanted to know everything about the topic and that I’d be willing to devote hours of research to it, to master it to the point of being as good as a Ph.D.
The reason for narrowing down so much is to allow your blog to gain traction more quickly. Indeed, you have to think of a blog just as you would a startup or small business. For instance, when PayPal started out, it didn’t go for the whole market right away; it identified a niche, a thousand power users, that could help it gain traction quickly.
From that focus, I started moving toward an editorial strategy.
Branding vs. traction
In early 2018, I was looking for potential keywords that could be part of my editorial strategy, and I fell into the “search volume trap.” In short, this consists of going after large-volume keywords that, though they look hot, in reality are not worth the effort in terms of real traction. Those are keywords with a search intent that is purely informational. While this distinction is clear in theory, it’s often hard to catch in practice.
In short, when it comes to informational keywords, people look for the how, what, when, where, and why of things. And if you’re building a publishing business, or a blog around your company’s products, many of the keywords might be informational.
Yet not all informational keywords are born equal. Indeed, as we move into the voice search world, many of the traffic opportunities that existed in the past are getting lost. A trivial example is asking Google for “today’s weather.”
In the past, weather websites gained millions of visits a day from Google. Those days are long gone. Thus, if you want to target an informational keyword that has real value for the business, you need to think in terms of what can and can’t be answered fully via Google’s search results pages.
Therefore, your content will need to:
Give a short answer accessible via Google, as a branding strategy
Entice users to click through via long, in-depth content
For instance, when I created a guide about business models, I targeted several keywords, in particular the keyword “types of business models.” I did that for a simple reason: if I could trigger Google’s featured snippet on that query with a list, it was also a significant traffic opportunity. This is what happened:
Can you notice anything particular about this snippet? One critical aspect is that it does offer a list, but a limited one. In fact, in the guide I offer 30 business models, while Google only shows seven of them. For those who want to learn more about the topic, that means clicking through the snippet. Not surprisingly, this query has a 5.6% click-through rate (CTR) so far, which is more than double the overall average CTR of my site, at 2.6%.
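The CTR arithmetic behind those numbers is simple: clicks divided by impressions. The click and impression counts below are invented to illustrate the formula; only the 5.6% and 2.6% rates come from the text.

```python
def ctr(clicks, impressions):
    """Click-through rate as a percentage."""
    return 100 * clicks / impressions

# Invented counts chosen to reproduce the rates mentioned above.
snippet_query = ctr(clicks=56, impressions=1000)   # 5.6%
site_average  = ctr(clicks=26, impressions=1000)   # 2.6%

# The snippet query's CTR is more than double the site average.
print(snippet_query > 2 * site_average)
```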
Yet even for those who don’t land on the page, this is still a critical keyword for branding my site in that topical area. It is as though Google is removing the noise for me: only those most interested in the topic and most in line with my target audience will land on the site. The rest will see my website and have a touchpoint that helps build my brand.
Understanding the difference between branding and traction is critical, as it allows you to structure content so that you can use Google as your tool for leveraging the brand and bringing qualified opportunities to your site. That connects to the next point.
A barbell-based editorial strategy
Back in the late 1990s, when Google started out, it was an overnight success, as it was 10x better than most existing search engines in the space. What made Google’s business model successful was not just its ability to give better results; it was its distribution strategy. On the one hand, Google relied on a free tool used by a growing number of people. On the other hand, Google made money via businesses that wanted more visibility for their brand, bidding on specific keywords.
This is what I call a barbell strategy. You have two opposite targets, which might seem unrelated in the short run, but which are tied together by an overall long-term strategy. Indeed, the more free users joined Google, the more the search engine acquired valuable data that it could sell back to businesses. Those massive network effects made Google the tech giant we know today, which as of 2017 still generated 86% of its revenue from advertising.
Going back to editorial strategy, a barbell approach works this way:
Target established keywords with a large volume and a low click-through rate, but only if they matter to your brand. In this case, success is measured in terms of impressions
Win keywords with little or medium established volume but a high click-through rate, as those will allow your blog to gain momentum. In this case, success is measured primarily by how many clicks you get
In other words, you need to identify established, large-volume, low-CTR keywords that identify with the brand you want to build.
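To make the split concrete, here is a minimal sketch in Python of how you might bucket keywords into the two ends of the barbell. The volume and CTR thresholds are my own illustrative assumptions, not figures from the article; tune them to your own niche and data.

```python
# A minimal sketch of the barbell keyword split described above.
# The thresholds (10,000 monthly searches, 2% and 5% CTR) are
# illustrative assumptions, not recommendations from the article.

def classify_keyword(monthly_volume, ctr):
    """Assign a keyword to a barbell bucket.

    'branding' -> high volume, low CTR: measure success in impressions.
    'traction' -> low/medium volume, high CTR: measure success in clicks.
    'neither'  -> everything else; probably not worth targeting.
    """
    if monthly_volume >= 10_000 and ctr < 0.02:
        return "branding"
    if monthly_volume < 10_000 and ctr >= 0.05:
        return "traction"
    return "neither"

# Examples loosely modeled on the article's own numbers:
print(classify_keyword(18_100, 0.009))  # a "business model"-style keyword
print(classify_keyword(500, 0.30))      # a "duckduckgo business model"-style keyword
```

The point of the sketch is simply that the two buckets call for different success metrics, so they should be tracked separately.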
For instance, in my case “business model” identified with the brand I wanted to build. Therefore, I ranked what a few months earlier had been a dead blog right after Wikipedia, Investopedia, and Harvard Business Review for that keyword:
Ranking my blog there required a certain amount of effort, far higher than ranking for other long-tail keywords. However, as of now, it has a meager click-through rate of 0.9%, well below the average for my blog.
Yet I don't judge this keyword by the amount of traffic it brings, but by how many times people see my blog on that page, associated with Investopedia and HBR. In short, according to my barbell editorial strategy, this keyword is not meant to bring much traffic but to generate awareness of my brand. To give you a bit of context, in the last three months almost eighteen thousand people saw my blog for the keyword “business model,” right after Investopedia and HBR!
Other keywords are meant to do the opposite. Even though they might not have any search volume yet, they will allow you to gain momentum. For instance, when I covered the “DuckDuckGo business model,” I knew I wouldn't create much buzz, but I knew I would create qualified traffic from people highly interested in the topic.
The keyword “duckduckgo business model” – which as of now doesn't yet have established volume on Google – is among the ones that brought the most traction to the blog, with an astonishing 52.7% click-through rate!
If you asked an average SEO, they would tell you to avoid targeting these kinds of keywords, as they have no volume and might not be necessary for your editorial strategy. However, a smart SEO expert would first ask, “What's your gut feeling about this keyword? Do you think people will search for it in the coming years?” And if so, they'll suggest going after it.
Indeed, first, you will be the earliest in the space to be there. Second, you will avoid competition and gain traction. Third, people coming to your blog via those keywords might be your real audience base.
Voice search is here to stay
One morning back in March 2017 seemed like a regular day, if not for a scene that changed forever the way I thought about the web. Walking through the office door, I saw Andrea Volpini, founder of WordLift, talking to a Google Home device we had at the office.
He was talking to it just as one speaks to a child, slowly, so it could understand and process the information. After a few tries, at the question “what is WordLift,” Google Home finally answered in a nice, clear voice: “WordLift is a start-up founded in 2017 and based in Rome, Italy. The company has developed the homonymous WordPress plugin which, through the use of semantic technologies…”
At that point, I asked him – just like you would with a magician – to show me the trick. How did he do that? Andrea’s answer puzzled me; he said: “it’s the snippet!”
In short, a so-called featured snippet shown in Google's search results was also being used by Google to produce the answer on the voice device. While I was already familiar with featured snippets, I hadn't realized how powerful they were, especially moving toward voice search. That day I became obsessed with them. I wanted to understand what made them possible, why Google triggered them, and most importantly how they could help make the jump from traditional search to voice!
Going back to the example of “duckduckgo business model,” when I ask the Google Assistant “how does DuckDuckGo make money?” this is what I get:
After over a year and a half of studying, implementing, and gaining featured snippets, I learned a few key lessons, summarized below:
Use an entity-based content model, where primary pages become entities
Identify long-tail keywords opportunities that can trigger featured snippets
Use structured data as the foundation for your featured snippet strategy
Use visuals and infographics to make your content more appealing and steal featured snippets opportunities
Set up redirections from those images toward the blog post to which they belong
Brand those infographics to generate search volume around your branded keyword
These guides we put together will help you through the process:
The search experience in the coming years might look completely different. Rather than a person going to a blank Google page looking for something, search will probably skew toward a device that pushes information to you before you even type. As algorithms become better and better at predicting what we want, they might give us an answer before we ask.
In that scenario, voice search will play a vital role in the transition from search to discovery!
Bet on the future with a Moonshot thinking approach
If you look under the hood of Google (now Alphabet), you'll find that the company isn't just a massive advertising machine. It has been investing widely in other bets, comprising companies that span from life sciences to self-driving cars. In short, Google isn't just waiting for the future to happen; it is shaping it.
This is what they call a moonshot thinking approach, developed at the Google X factory, which tries to “create radical new technologies to solve some of the world's hardest problems.” Going back to your business and editorial strategy: if you just follow what Google tells you is relevant right now, you'll end up in a competitive space where everyone is fighting for the same piece of land.
Instead, to be successful in the long run, you want to be creative, look beyond metrics and Google data alone, and also trust your gut instinct and your understanding of an industry. And you might want to go for those key results that give you a 10x advantage rather than an incremental one.
Thus, the question to ask over and over is not “how do I get a 10% increase?” but rather “how do I gain that featured snippet?” or “how do I go from position 100+ to page one in four weeks on a competitive keyword?” (which is what I did, but we'll leave that story for a later article).
When you change the mindset, you’ll also change the way you tackle the issue.
When I made a New Year's resolution back in 2017, I thought revamping my dead blog would be an easy win. It took me three months to understand that if I really wanted to make it, I needed to be on top of the game in the area I picked, and I needed to commit and focus. In short, to make a blog work you need to be one of the following:
In the top 5% of content producers who blog in your field / to your audience
Able to work for months or years to become in the top 5% of those producers
In a field with very few decent, online content producers
In possession of a large, loyal fanbase that will consume what you produce even if it’s not particularly good
Overall, I think the effort is worth it for a simple reason: a blog is still the one place on the web where you have total control. Social media and other distribution channels are good to integrate into your digital strategy, but you don't control any of them. Also, a blog is the place from which you'll be able to transition toward voice search!
As search engines move toward voice search, adoption of mobile personal assistants is growing at a fast rate. While that transition is already happening, there is another interesting phenomenon to notice: the SERP has changed substantially in the last couple of years. As Google rolls out new features above the fold (featured snippets, knowledge panels, and featured snippet filter bubbles), these give us a glimpse of what voice search might look like.
In this article, we’ll focus mainly on the knowledge panel, why it is critical and how you can get it too.
The Knowledge Panel: Google's above the fold worth billions
The knowledge panel is a feature that Google uses to provide quick and reliable information about brands (be they personal or company brands). For instance, in the case above you can see that for the query “who's Gennaro Cuofano” on the US search results, Google gives both a featured snippet (on the left) and a knowledge panel (on the right).
While the featured snippet's aim is to provide a practical answer, fast, the knowledge panel's aim is to provide a reliable answer (coming from a more authoritative source) plus additional information about that brand. In many cases, the knowledge panel is also a “commercial feature” that allows brands to monetize their products. For instance, you can see how my knowledge panel used to point toward books that could be purchased on Amazon.
This space on the SERP, which I like to call “above the fold,” has become the most important asset on the web. While Google's first page remains an objective for most businesses, it is also true that, moving toward voice search, traffic will be eaten more and more by those features that appear on the search results page even before the first organic position.
How does Google create knowledge panels? And how do you get one?
Knowledge panel: the key ingredient is Google’s knowledge vault
When people search for a business on Google, they may see information about that business in a box that appears to the right of their search results. The information in that box, called the knowledge panel, can help customers discover and contact your business.
In most cases, you’ll notice two main kinds of knowledge panels:
While brand panels provide general information about a person's or company's brand, local panels instead offer information that is local. In the example above, you can see how the local panel provides the address, hours, and phone number of the local business. In short, it is a touchpoint Google provides between the user and the local business.
Where does Google get the information from the knowledge panel? Google itself specifies that “Knowledge panels are powered by information in the Knowledge Graph.”
What is a knowledge graph?
Back in 2012, Google started to build a “massive Semantics Index” of the web called the Knowledge Graph. In short, a knowledge graph is a logical way to organize information on the web. While in the past Google could not rely on the direct meaning of words on a web page, a knowledge graph allows the search engine to collect information from the web and organize it into simple logical statements, called triples (for example, “I am Gennaro” or “Gennaro knows Jason”).
Those triples are combined according to logical relationships, and those relationships are built on top of a vocabulary called Schema.org. In short, Schema.org defines the possible relationships available among things on the web.
Thus, two people that Schema defines as entity type “Person” can be associated via a property called “knows.” That is how we might make it clear to Google that the two people know each other.
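As a hand-written illustration (the names come from the example above; this is a sketch of Schema.org's JSON-LD syntax, not markup taken from an actual page), the “Person knows Person” relationship could be expressed like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Gennaro Cuofano",
  "knows": {
    "@type": "Person",
    "name": "Jason"
  }
}
```

Each key-value pair here is, in effect, one triple: the subject is the person, the predicate is the property (`name`, `knows`), and the object is the value.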
From those relationships among things (which can be people, organizations, events or any other thing on the web) a knowledge graph is born:
Example of a knowledge graph shaped on a web page from FourWeekMBA that answers the query “Who’s Gennaro Cuofano”
Where does Google get the information it includes in its knowledge graph? As pointed out by Go Fish Digital, some of the sources are:
In short, there isn't a single source from which Google mines the information it includes in its knowledge panels.
Is a knowledge panel worth your time and effort?
A knowledge panel isn't only the avenue toward voice search but also an organic traffic hack. It's interesting to see that a good chunk of Wikipedia's traffic comes from Google's knowledge panels. Of course, Wikipedia is a trusted and authoritative website. One consequence of knowledge panels, though, might be the so-called no-click searches (those that don't produce a click-through from the search results pages).
Yet, as of now, a knowledge panel is an excellent opportunity to gain qualified traffic from search and get ready for voice search.
As search evolves toward AEO, the way you need to look at content structuring changes too. As Google's SERP adds features such as featured snippets and knowledge panels, those end up capturing a good part of the traffic. Thus, as a company, person, or business, you need to understand how to gain traction via knowledge panels. The key is Google's Knowledge Graph, which leverages Google's knowledge vault.
It is your turn now to start experimenting to get your knowledge panel!
When I started blogging back in 2015, I thought I only had to produce the so-emphasized quality content to rank on Google.
Quality content seems almost a utopia in the digital marketing world. Everyone talks about it, and we all agree on it. Yet we all mean different things by it.
So what do I mean when I say “quality content?”
In my mind, quality content can be summarized in three simple traits: in-depth yet essential, useful and well researched, educational yet actionable.
In short, you don't need to write a 2,000+ word article just because Google is thirsty for content. You need to use the length that suits the topic you're covering.
Of course, quality content also means – at least for me – something that is useful for an audience and well researched. Many small business owners are too busy keeping their enterprise profitable to spend time researching SEO or related topics.
Last but not least, you want to make sure people learn something they can apply quickly. That doesn't mean it has to be dull: even a “how-to” can be compelling if properly written.
Long story short, when I started to write content that fit those guidelines, I didn’t get any traffic. Nothing at all! What was going on?
I simply lacked the proper mindset. In this article, I want to show you the SEO hacking mindset: one based on continuous experimentation, curiosity, and a lack of preconceived ideas about what works and what doesn't, so you can compete against large publishing outlets!
A few weeks back in one of our daily conversations with Andrea Volpini, WordLift‘s CEO, we were discussing a few SEO strategies.
With Andrea, we often discuss SEO at great length – the future of the internet and how search engines, Google in particular, react to it.
Our conversations are a way to brainstorm ideas. That day we were walking through Via Giulia, an old street in the historic center of Rome; a road that runs parallel to the Tiber River.
One of the most famous streets in Rome during the Renaissance, Via Giulia became the home of antique dealers in past decades. Yet today, due to the crisis of the antique trade, the street has become home to modern shops.
Among those shops is WordLift, a small startup that operates at the cutting edge of semantic technologies applied to the web. That is where I work as Head of Business Development.
When we walk through Via Giulia, a feeling of being part of something greater permeates us; it feels like going back to the past, when Rome was the most powerful empire that ever existed. These feelings make ideas flow incessantly.
In that scenario, Andrea had just revealed to me a secret about Google.
I'll summarize it this way: images have a life of their own in Google's SERP, and if you take the time to produce original images and redirect them to your blog, that can become an effective SEO strategy to bring traffic back to your site.
It all started from there. When I heard that I began to run a few experiments.
This strategy is beginning to pay off. In this featured snippet for the keyword “cash conversion cycle,” with a volume of 18,100, Google is picking the content from Investopedia and the image from my blog.
You’ll notice that the image redirects back to my blog. In this article, I show you what I did and how you can do it too.
Let’s take a few steps back.
It all starts with an editorial strategy
Many think of an editorial strategy as a calendar filled up with articles for the next year. That is not the way I see it.
An editorial strategy, for me, is about being clear about the two or three topics you want to cover at great length.
Based on that, you need to be flexible and opportunistic. In short, you want to keep an eye open for windows of opportunity that allow you to rank for large-volume keywords.
Long story short I’ve implemented an editorial strategy on FourWeekMBA.com by creating content that targeted specific keywords around business modeling.
Yet those articles would hardly rank for those keywords as I was competing against large websites like Investopedia.
What to do then? Either I had to change my editorial strategy or be doomed to failure. Unless…
All you need is passion and an audience
I didn’t want to change my editorial strategy. In fact, I’m passionate about business modeling, and I know I can keep researching this topic for years.
Also, that is a topic with a broad audience. Thus, I had the two conditions I believe are critical to building a profitable website. But I still needed a secret recipe to start ranking for those keywords.
I began creating companion infographics for my articles, sized like a LinkedIn post. The aim was to provide a snapshot of a business, quickly.
Those graphics would target the same keywords in the article.
Images have their own life on Google SERP, take advantage of that
If you look at the percentage of search results from Google there is one phenomenon to notice:
In this analysis by SparkToro in collaboration with Jumpshot, one thing is straightforward: images play a key role in Google search results.
As pointed out by Rand Fishkin, there are two things to take into account. Number one, “Google Images shrunk, but almost entirely because Google web search took that traffic for themselves (dropping the tabs to image search, embedding more image results in the web SERPs, etc.).”
Number two “given that Google Images is sending out an even more significant portion of traffic (due to their recent changes on “view image”), investing in visual content that can perform there (and appear in a web search) feels like a no-brainer for content creators.”
In other words, Google is integrating more and more images into the search results, making them part of the user experience. Therefore, you should not be surprised to see your pictures floating around the web, disjoined from your content and inserted into other contexts.
In fact, more and more often, images appear in so-called featured snippets. Other times they are included in knowledge panels.
That opens up an exciting opportunity: original images can be a critical way to power up your SEO strategy. And I want to show you how to use them to gain featured snippets on very competitive keywords.
However, it also opens up a challenge: as Google is thirsty for relevant content – be it text, image, or video – it will happen more and more often that this content gets stripped out of the context of your website and served in different formats.
Search results, featured snippets, knowledge panels, or voice assistants: it doesn't matter how Google serves that content; Google will make the rules of the game unless you follow two strategies.
Number one, bring all the traffic generated through those graphics or infographics toward the original blog post that features them with a simple redirection.
Second, make sure to label your images with your brand, website or whatever can help you build awareness and help you build up search volume around the so-called “branded keyword.”
We’ll see those two aspects more in detail in a few paragraphs.
How to find featured snippet opportunities for your images
The first step I took when I was looking for featured snippet opportunities by using original graphics was to look at featured snippets where Google only had text. Some examples below:
As you can see from this example, this featured snippet is very competitive, as you might be going after Wikipedia; the same applies to the other examples:
One thing you might have noticed, though, is that this featured snippet comprises only text. There is no image in it. Why?
One possible reason might be that Google didn’t find relevant images to include in those featured snippets.
In short, even though Wikipedia is a trusted source of information, it doesn’t seem to provide original, and valuable images that Google can use within the featured snippet.
That is where the opportunity to get in that featured snippet comes in!
Let’s go back to our case study and what I did to get there.
Target a competitive keyword, but with your original infographic
When I was looking for featured snippet opportunities, I had identified a long-tail keyword with a large volume: amazon cash conversion.
That keyword, at the time of this writing, has a monthly volume of 18,100. It was already taken, though, and it looked like this:
In other words, Investopedia had a text featured snippet, but there was no image. The reason is that they didn't have any compelling infographic in the text, as you can see below:
That is where I could have an opportunity, even though there was no way Google would take my content in place of Investopedia's. So I wrote the piece and worked on an original “companion” infographic.
Make your infographic relevant and compelling by targeting the featured snippet
The infographic is informative, it is quick, and it is branded.
I also picked a color that resembled The Economist infographics. Thus, making it more trusted at first glance (at least for people that know The Economist).
Structured data is the foundation of your featured snippet strategy
Structured data is about converting your content into a format that search engines can efficiently process. In other words, to make your content stickier for Google's algorithms, structured data has become a must.
For instance, in this specific case, I used WordLift to mark up the content on my blog posts:
The great part is that WordLift converts my content into structured data in a few clicks and with no coding, so that Google can better process it. And it does that using a format called JSON-LD, which doesn't affect the performance of the page:
WordLift passes a set of metadata to search engines that describes the context of the page. That also includes relevant information about the infographic featured in the article!
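For illustration only – WordLift generates this markup automatically, and the headline, URL, and dimensions below are invented – the JSON-LD describing a post and its companion infographic might look roughly like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What Is The Cash Conversion Cycle?",
  "author": { "@type": "Person", "name": "Gennaro Cuofano" },
  "image": {
    "@type": "ImageObject",
    "url": "https://example.com/wp-content/uploads/cash-conversion-cycle.png",
    "width": 1200,
    "height": 628
  }
}
```

In practice, this JSON is embedded in the page inside a `<script type="application/ld+json">` tag, which is why it doesn't affect how the page renders or performs.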
Redirect traffic from your infographic back to your blog post
As I use WordPress as my CMS, I used a simple plugin to redirect traffic from the image – in case it ranked on Google – to the original blog post it belonged to.
In short, when you have a website, you have a list of pages that you can prioritize through a so-called sitemap. Put shortly, the sitemap is how you want Google and other search engines to look at your website.
That doesn't mean Google will stick to it, but it is an indication that helps it understand your website.
The sitemap also needs to include your images. This way, you allow Google to index them more easily and thus rank them (again, Google will decide whether it makes sense, but you give it an indication).
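As a sketch of what that looks like, an image sitemap entry follows Google's image sitemap extension; the URLs below are placeholders, not real addresses:

```xml
<!-- Illustrative image sitemap entry (placeholder URLs).
     Each <url> lists a page plus the images it contains. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/cash-conversion-cycle/</loc>
    <image:image>
      <image:loc>https://example.com/wp-content/uploads/cash-conversion-cycle.png</image:loc>
    </image:image>
  </url>
</urlset>
```

Most WordPress SEO plugins generate this file for you; the point is simply that each page entry can declare its images so Google can find and index them.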
Once I had the images in the sitemap, I made sure those images would be redirected via the Yoast plugin:
Within “Search Appearance” in Yoast, go to “Media” and answer “Yes” to the question “Redirect attachment URLs to the attachment itself?”
Here you go:
When you click on the image of the featured snippet it will open it up:
If you click on that image, see where it goes:
Exactly, back to my blog post!
Create search volume for your branded keyword
Another critical aspect of your infographic is about generating search volume for your branded keyword. A branded keyword is merely a keyword that represents your brand.
For instance, in my case, that would be FourWeekMba, or four-week-mba and other possible variations.
Why is that important? A branded keyword is significant for several reasons.
First, gaining search volume on a branded keyword might tell Google that your brand/website is relevant.
Second, when you build up volume over time, you also start diversifying your marketing mix.
Thus, you’ll notice more direct traffic to your blog. That’s good as you don’t need to rely solely on Google for a consistent stream of traffic.
With those infographics I’ve been building a bit of search volume around my brand:
Of course, that is still very small. Yet when I look at my search console you can see some interesting findings:
For the sake of simplicity, I’m just showing you the main branded keyword. In fact, in my search console, I have other variations (like fourweekmba).
What's interesting here is that for this branded keyword the click-through rate is pretty high (47.83%).
The click-through rate shows how many times, relative to the number of impressions in the search results, users click through to my page.
That is critical, as it signals to Google that this is what the user was looking for. Thus – in theory – it should make my website more trusted (this is wholly speculative).
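The math behind that number is simple; here is a small Python sketch. The click and impression counts are invented to roughly reproduce the 47.83% figure, since Search Console only reports the ratio:

```python
# CTR as reported by Search Console: clicks divided by impressions,
# expressed here as a percentage rounded to two decimals.
# The counts below are invented for illustration.

def ctr_percent(clicks, impressions):
    if impressions == 0:
        return 0.0
    return round(100 * clicks / impressions, 2)

print(ctr_percent(11, 23))      # a high-CTR branded keyword
print(ctr_percent(160, 18100))  # a low-CTR, high-volume keyword
```

The same two numbers, clicks and impressions, are exactly what the barbell strategy asks you to track separately for traction and branding keywords.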
There is also another aspect that is critical for this story.
SEO is not a short-term game
I published the article on February 26th, 2018, and for a while I just let it rank organically.
A few days ago, at the beginning of July, I noticed I was getting some organic traffic for the keyword “cash conversion cycle,” yet I could not find it on the SERP:
In fact, I was on the third page. That made me think, so I went to check right away what had happened, and this is what I saw:
In short, after about three months, my small blog had gained half a featured snippet – the image – while competing with Investopedia, one of the largest and most trusted sites when it comes to business.
If you implement a strategy based on positioning your images in featured snippets, how can you make sure you're doing it right?
How to check the traffic coming from images
One way to keep track of this strategy is with Google Search Console. All you need to do is go to Search Traffic > Search Analytics and filter by images:
In this way, you can track your marketing effort in gaining organic traffic via original infographics.
As search engines evolve new opportunities arise. Thus, by keeping an open eye, you can take advantage of those opportunities even if you have a small blog.
In fact, in this case study, we saw how you can take advantage of existing text featured snippets by following this process:
look for featured snippet opportunities: when you see a featured snippet that has only text there is an excellent opportunity to position your infographic
create content on that featured snippet opportunity together with a compelling infographic targeting the same keyword
use structured data as the foundation for your featured snippet strategy
set up redirections from those images toward the blog post to which they belong
brand those infographics to generate search volume around your branded keyword
be patient and wait for your content to be positioned in the featured snippet. SEO is not a short-term game!
from time to time, check whether your effort is paying off by filtering search traffic to see organic traffic from images
when that happens, double down on that strategy to gain more visibility
One key aspect to keep in mind. SEO hacking is not about finding the latest trick to win some traffic.
SEO hacking is a mindset – that mixed with limited resources, experimentation, and creativity – allows you to gain traction even on competitive terms. You just need to think unconventionally and experiment quickly.
Oh, wait! There is another critical aspect. That pertains to voice search. What do I mean? Look at this short video:
When you shift your mindset and start targeting featured snippets, interesting things can happen.
Indeed, this is even more interesting than the featured snippet itself. In the featured snippet, the image is too small for users to click on. Yet in the voice search assistant on your smartphone, the opposite happens.
The infographic “eats up” the text coming from Wikipedia!
In fact, not only does the image seem part of Wikipedia (Google is tricking you), but when you tap on it, you land on my blog!
But that is another story we’ll tackle in another case study 😉
Looking back ten years from now, we'll probably say: “it all started with a hair salon reservation.” What seemed a simple conversation in reality opened a Pandora's box, for better or worse. Socio-cultural evaluations aside, this will have an enormous impact on businesses, online and offline.
Many other articles about Google Duplex have taken various perspectives into account. In this article, instead, I want to give you a different angle on why, from a business standpoint, it makes sense for Google to move in that direction. When companies like Google, whose most important asset is their users' data, make a move, I believe it is essential to understand why.
The Turing Test is a thing of the past
When, in the 1950s, Alan Turing was thinking about machine intelligence, he started with what seemed a simple question: “Can machines think?” However, this question carries many hidden philosophical problems. Not least: how would you define thinking? That is why Alan Turing turned the question upside down. Rather than defining thinking, he decided to look at the problem from another perspective: “Can machines do what we (as thinking entities) can do?”
Listening to this conversation, would you ever guess that it is a conversation between a human and a machine? I didn't, and I bet you wouldn't either. But how did we get here, and what implications does this have for the future?
The digital divide of small localized businesses vs. large tech conglomerates
When Google works on practical applications for its Google Assistant, deciding where to focus its effort is critical. If we look at the data on small business digitalization, we realize how slow small businesses are at adapting to the modern technological landscape.
In other words, although things like AI and machine learning resonate in the marketing world and are the primary concern of tech giants like Google, Facebook, or Amazon, small business owners are not only unconcerned about those topics, they are still in the process of understanding why they need digitalization at all for their businesses. If you think about a small restaurant or a hair salon – the businesses taken as examples in Google's Duplex experiment – you realize it is easier for Google to go offline than for those small businesses to join the online world.
As those local activities mainly rely on word of mouth and traditional media, it would be tough for Google to reach them online (although Google has already moved in that direction with Google My Business). What to do then?
If a small business doesn’t go online, Google goes offline
We take Google for granted. Yet it's easy to forget that Google, as a digital business, monetizes thanks to the data of people who are always online. What about people offline? Google Duplex might be a way to close the digital divide: leverage the people who are always connected to start gathering data about businesses that are offline:
In other words, Google Duplex becomes the middleman that allows Google’s Assistants to collect critical data about offline businesses for voice search.
While technologies pass, data stays, and Google Duplex can be the growth engine toward voice search
AI, machine learning and the plethora of new technological applications springing up thanks to them are at the center of today’s debate. However, although technologies play a crucial role, what truly matters is data. On the one hand, new machine learning models allow the processing of large amounts of data. Thus, while in the past the data gathered couldn’t be of much use to companies or governments, as we didn’t have the computing power and intelligence to process it, now this is possible.
On the other hand, we have to keep in mind that data is what matters. When Google and Facebook offer free services to users, they are not volunteering; they are building up a business. As voice search is expected to become a $40 billion market (in the US alone) by 2022, Google Duplex can really become the growth engine that allows Google to gather the most important data through voice and take over the market.
“Hey Google,” this is a country for old men!
If you think about it, this might be the most ingenious business strategy. While in the last two decades Google used the data of its users to build up a business that, as of 2017, made over $95 billion from advertising, there was still a disconnect. While Google’s gathering of data depended, and still does, on the level of digitalization of its users, in the future it will not.
If you think about digital assistants, like Google Home, those are consumer products ready to be in any home, independently of the use of a computer. In fact, in just a few months over six million home speakers were sold.
You might expect, though, that voice search will disrupt – after all – the usage of computers. Isn’t voice search, in a way, the natural evolution of traditional search? In reality, if we look at the statistics, voice search will not just take some market share from traditional search; it will take over old media:
In short, when people were asked which media Smart Speakers were replacing, the answers were staggering. Of the top seven media replaced by Smart Speakers, four (Radio, TV, Printed Press and Sonos) are traditional media.
Why is this important at all? For a few reasons, I argue. First, Voice Search might disrupt old media once and for all. In fact, while the web is still in a race against traditional media (it was only in 2017 that digital ad spending surpassed TV ad spending), Voice Search has the potential to disrupt them, as it will have many practical applications for households worldwide.
Second, the web created a greater divide between generations. This might not be the case for Voice Search. Smart Speakers can be activated with something humans have been using forever: spoken language.
Third, as the Mark Zuckerberg Senate Testimony showed, these tech giants’ business models aren’t easy to understand. We saw the scenes of struggling adult men and women trying to make sense of Facebook. Many on the web read this as a lack of intelligence on the politicians’ part. In reality, it seems clear to me that companies like Facebook and Google, thanks to their asymmetric business models, make it hard for people to understand how they operate. As Voice Search will make it hard for Google to monetize on ads (imagine the only answer given by the Smart Speaker were an ad; would you trust it?), would they be willing to experiment with alternative, symmetric business models?
In this article, we saw how Google Duplex might open new business scenarios for Google. We also saw how Google Duplex would help the tech giant from Mountain View tackle a few things at once. The critical aspects are:
Close the digital divide between tech giants and small offline businesses
Start collecting critical data by using connected users to gather information from offline small businesses
Although AI and machine learning are critical technologies that allow Google to become more sophisticated, the real asset is data
While the web is still competing with traditional media, Voice Search isn’t only taking market share from the web, but mostly from media like TV, Radio and Print
While computers are still hard to understand for older or less tech-savvy people, voice search is a technology that can be used by pretty much anyone
As Voice Search will make it harder for Google to monetize with ads alone, would this be an opportunity to experiment with more symmetrical business models?
Those are open questions. It is clear, though, that the power of Voice Search is its ability to unite the digital with the non-digital, millennials with baby boomers, the tech-savvy with the non-tech-savvy. From a business standpoint, it will be all about Voice Search domination!