The way we communicate and interact online is constantly changing. Users have come to expect a much more personal and tailored experience, the type that can’t be provided using traditional ways of interaction.
Some people may wonder what exactly conversational marketing is. In essence, it is a strategy that gives customers the personalized value they are looking for while allowing businesses to scale and save time and resources. We found out that through conversational marketing, and therefore through live chats, chatbots, and social monitoring, it is possible to promote genuine conversations and build real relationships. The goal, of course, is to enhance the user’s experience while minimizing friction.
Long gone are the days when consumers were passive recipients of marketing messages who had to be bombarded with a blatantly pushy sales pitch in order to be convinced to make a purchase. New, interactive technologies enabled them to break the fourth wall and have their say about how they feel about brands and what they expect from them. This means that the time has come for brands to learn how to listen actively while their customers do the talking. Marketing is a two-way street, and that’s the essence of conversational marketing.
What’s Conversational Marketing?
Unlike traditional marketing which heavily relied on TV commercials, billboards, newspaper ads, direct mail, and similar methods which customers learned to ignore successfully, conversational marketing enables brands to have relevant, meaningful, one-on-one conversations with their audiences across different channels of communication.
Live chat and chatbots are the first things that come to mind when it comes to conversational marketing. However, this strategy is much more than these two tools, and it can be extended to social media, phone calls, SMS, and IMs – pretty much any channel that your customers prefer.
Some of the benefits of such an approach include:
Being available 24/7. This is something your customers will appreciate: you are putting their needs first rather than limiting them to your regular working hours. AI-powered bots can answer customers’ questions in real time, be it 7 a.m. or midnight. No wonder it has been predicted that by 2020, more than 85% of customer support interactions will be handled by chatbots.
Getting to know your audience on a more profound level. These chats and conversations are a gold mine of customer information, and they can help you understand your audience better and start using their language in your messaging.
Humanizing your brand. By combining live chat, bots, and social media, your outreach will be much more natural, and you’ll avoid using generic request forms which your customers don’t consider particularly promising in terms of providing them with timely responses.
1. Sephora’s Virtual Artist
The upscale beauty retailer stepped up its marketing game by introducing the Sephora Virtual Artist feature in their Facebook Messenger bot.
This innovative AR functionality allows the brand’s customers to “try on” makeup by uploading their selfies and applying different lipstick shades, eyeshadows, and false lashes.
Besides being fun and making it easy for customers to share their makeover photos with friends to get valuable feedback, or to add them to Facebook Stories, Virtual Artist offers something much more important – a try-before-you-buy experience without having to visit a physical store.
What’s even better, once a prospective shopper makes their purchasing decision, they can order the products they want directly from the thread, which additionally streamlines and improves the customer journey. The brand reports that Sephora Assistant, a similar Messenger bot for booking makeovers in one of its stores, accounts for an 11% conversion rate increase.
2. eBay’s Google Assistant App
eBay’s Google Assistant App greatly simplifies browsing the company’s vast online shopping inventory. Customers can start their search by saying “Ok, Google, ask eBay to find me…”, and the smart app will ask additional questions to narrow down the search and surface the most relevant results. Once it finds the best deal, the chatbot will ask whether to send the results to your smartphone so that you can complete your purchase.
Given that Siri, Alexa, Amazon Echo, and other voice-based assistants are increasingly popular, it’s clear that implementing such a tool can significantly boost customer engagement.
This widget comes after the online retailer’s Facebook Messenger ShopBot, which uses AI and Machine Learning in order to personalize the shopping experience based on a deeper understanding of customer intent.
Planning and executing such an effective conversational marketing strategy can be a complex endeavor, which is why it’s a good idea to consult experts from digital marketing agencies and see what the best approach will be for your company and how to make it work within your budget.
3. Domino’s AnyWare
Domino’s wants to make the process of ordering pizza as easy as pie.
Back in 2015, the company encouraged its customers to tweet or text a pizza emoji and have a pizza sent their way.
This concept evolved further, so that now with Domino’s AnyWare it’s possible to order your favorite items from the menu through a number of channels – Google Home, Alexa, Slack, Facebook Messenger, Twitter, or even a smart TV. This versatility and abundance of different channels of communication is of vital importance to today’s picky customers, and Domino’s does everything it can to meet its patrons’ preferences.
Again, personalization and an in-depth understanding of customers’ needs are exactly what help Domino’s build loyalty, ensuring that its clients will come back knowing that they can easily reorder their favorite item from the menu with a single click, tweet, or word, as well as track their order and see when it will be delivered.
4. General Motors and Social Media
Although conversational marketing is mostly related to innovative chatbots powered by the latest tech, social media is another tool that can make this strategy work.
One of the best examples of this approach is General Motors and the way it dealt with the 2014 ignition switch recalls, a crisis which threatened to ruin not only the company’s finances but also its reputation.
Over the course of several months, more than 30 million cars worldwide were recalled, while the ignition switch flaw resulted in the deaths of 124 people. G.M. was transparent about the issue and owned it, raising the bar on customer support and experience along the way.
Customers flooded the company’s social media channels with distressed comments and negative feedback, and the auto giant had its customer support reps address each and every individual complaint and offer to help on the spot.
Some customers got loaner cars until their problems were solved, while others were given a refund for the travel expenses caused by the malfunction of their vehicles. Instead of trying to hush things up and switching to traditional tactics such as emails, phone calls, and other more private communication channels, General Motors chose to listen to its customers, hear their objections, and proactively handle this huge blunder in the public eye.
It’s time to jump on the conversational marketing bandwagon, if you haven’t already, and take a cue from these companies that have mastered the art of customer experience and satisfaction with the help of this powerful strategy.
Nina is a technical researcher & writer at DesignRush, a B2B marketplace connecting brands with agencies. She loves to share her experiences and meaningful content that educates and inspires people. Her main interests are web design and marketing. In her free time, when she's away from the computer, she likes to do yoga and ride a bike. You can also find her on Twitter.
In this article, we explore how to evaluate the correspondence between title tags and the keywords that people use on Google to reach the content they need. We will share the results of the analysis (and the code behind) using a TensorFlow model for encoding sentences into embedding vectors. The result is a list of titles that can be improved on your website.
“A title tag is an HTML element that defines the title of the page. Titles are one of the most important on-page factors for SEO. […]
They are used, combined with meta descriptions, by search engines to create the search snippet displayed in search results.”
Every search engine’s most fundamental goal is to match the intent of the searcher by analyzing the query to find the best content on the web on that specific topic. In the quest for relevancy, a good title influences search engines only partially (it takes a lot more than matching the title with the keyword to rank on Google), but it does have an impact, especially on top ranking positions (1st and 2nd, according to a study conducted a few years ago by Cognitive SEO). This is also because a searcher is more inclined to click when they find a good semantic correspondence between the keyword used on Google and the title (along with the meta description) displayed in the search snippet of the SERP.
What is semantic similarity in text mining?
Semantic similarity defines the distance between terms (or documents) by analyzing their semantic meanings as opposed to looking at their syntactic form.
“Apple” and “apple” are the same word, and if I compute the difference syntactically using an algorithm like Levenshtein distance they will look practically identical. By analyzing the context of the phrase where the word is used, on the other hand, I can “read” its true semantic meaning and find out whether it references the world-famous tech company headquartered in Cupertino or the sweet forbidden fruit of Adam and Eve.
A search engine like Google uses NLP and machine learning to find the right semantic match between the intent and the content. This means search engines are no longer looking at keywords as strings of text; they are reading the true meaning that each keyword has for the searcher. As SEOs and marketers, we can now use AI-powered tools to create the most authoritative content for a given query.
There are two main ways to compute the semantic similarity using NLP:
we can compute the distance between two terms using semantic graphs and ontologies by looking at the distance between the nodes (this is how our tool WordLift is capable of discerning whether apple – in a given sentence – is the company founded by Steve Jobs or the sweet fruit). A trivial but interesting example is to build a “semantic tree” (or, better, a directed graph) using the Wikidata P279 property (subclass of).
we can alternatively use a statistical approach and train a deep neural network to build – from a text corpus (a collection of documents) – a vector space model that will help us transform terms into numbers, analyze their semantic similarity, and run other NLP tasks (e.g. classification).
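As a toy illustration of the graph-based approach, the shortest chain of “subclass of” hops can serve as a distance between two meanings. The graph below is hand-made for illustration only; a real version would fetch these edges from Wikidata’s P279 property.

```python
from collections import deque

# Hand-made "subclass of" graph for illustration only; a real version
# would fetch these edges from Wikidata's P279 property.
SUBCLASS_OF = {
    "apple (fruit)": ["fruit"],
    "fruit": ["plant organ"],
    "Apple Inc.": ["technology company"],
    "technology company": ["company"],
    "company": ["organization"],
}

def graph_distance(start, goal):
    """Shortest number of subclass-of hops from start up to goal (None if unreachable)."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for parent in SUBCLASS_OF.get(node, ()):
            if parent not in seen:
                seen.add(parent)
                queue.append((parent, dist + 1))
    return None
```

In this toy graph the two senses of “apple” never meet under a nearby class, which is exactly the kind of signal a graph-based tool can use for disambiguation.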
There is an essential debate behind these two approaches. The question is: is there a path by which our machines can possess any true understanding? After all, our best AI efforts only create an illusion of understanding. Both rule-based ontologies and statistical models are far from producing real thought as it is known in cognitive studies of the human brain. I am not going to expand here but, if you are in the mood, read this blog post on the Noam Chomsky / Peter Norvig debate.
Text embeddings in SEO
Word embeddings (or text embeddings) are a type of algebraic representation of words that allows words with similar meaning to have similar mathematical representation. A vector is an array of numbers of a particular dimension. We calculate how close or distant two words are by measuring the distance between these vectors.
In this article, we’re going to extract embeddings using the tf.Hub Universal Sentence Encoder, a pre-trained deep neural network designed to convert text into high-dimensional vectors for natural language tasks. We want to analyze the semantic similarity between hundreds of combinations of titles and keywords from one of the clients of our SEO management services. We are going to focus our attention on only one keyword per URL, the keyword with the highest ranking (of course we could also analyze multiple combinations). While a page might attract traffic on hundreds of keywords, we typically expect to see most of the traffic coming from the keyword with the highest position on Google.
We are going to start from the original code developed by the TensorFlow Hub team and we are going to use Google Colab (a free cloud service with GPU supports to work with machine learning). You can copy the code I worked on and run it on your own instance.
Our starting point is a CSV file containing Keyword, Position (the actual ranking on Google), and Title. You can generate this CSV from Google Search Console or use any keyword tracking tool like WooRank, Moz, or SEMrush. You will need to upload the file to the session storage of Colab (there is an option you can click in the left tray) and update the file name on the line that starts with:
df = pd.read_csv( … )
Here is the output.
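As a minimal sketch of this loading step (the rows below are made up; in the notebook the client’s actual CSV is uploaded to Colab and read with pd.read_csv):

```python
import io

import pandas as pd

# Made-up rows standing in for the exported ranking data.
csv_data = io.StringIO(
    "Keyword,Position,Title\n"
    "villas in costa rica,2,Luxury Villas | Example Rentals\n"
    "barbados beach house,5,Beach Houses | Example Rentals\n"
)
df = pd.read_csv(csv_data)

# Keep one keyword per title: the one with the best (lowest) position.
df = df.sort_values("Position").drop_duplicates("Title")
print(df.shape)
```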
Let’s get into action. The pre-trained model comes in two flavors: one trained with a Transformer encoder and another trained with a Deep Averaging Network (DAN). The first one is more accurate but has higher computational resource requirements. I used the Transformer, considering that I only worked with a few hundred combinations.
In the code below we initiate the module, open the session (it takes some time so the same session will be used for all the extractions), get the embeddings, compute the semantic distance and store the results. I did some tests in which I removed the site name, this helped me see things differently but in the end, I preferred to keep whatever a search engine would see.
The semantic similarity – the degree to which the title and the keyword carry the same meaning – is calculated as the inner product of the two vectors.
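For unit-normalized vectors, this inner product is the cosine similarity. A minimal NumPy sketch of that step (the short vectors here stand in for the high-dimensional sentence embeddings produced by the encoder):

```python
import numpy as np

def semantic_similarity(u, v):
    """Inner product of two unit-normalized embedding vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    # Normalize so the inner product equals the cosine similarity.
    return float(np.inner(u / np.linalg.norm(u), v / np.linalg.norm(v)))
```

Identical directions score 1.0 and orthogonal ones 0.0; the title/keyword pairs whose vectors score lowest are the candidates for a rewrite.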
An interesting aspect of using word embeddings from this model is that – for English content – I can easily calculate the semantic similarity of both short and long text. This is particularly helpful when looking at a dataset that might contain very short keywords and very long titles.
The result is a table of combinations from rankings between 1 and 5 that have the least semantic similarity (Corr).
It is interesting to see that it can help, for this specific website, to add to the title the location (i.e. Costa Rica, Anguilla, Barbados, …).
With a well-structured data markup we are already helping the search engine disambiguate these terms by specifying the geographical location, but for the user making the search, it might be beneficial to see at a glance the name of the location he/she is searching for in the search snippet. We can achieve this by revising the title or by bringing more structure in the search snippets using schema:breadcrumbs to present the hierarchy of the places (i.e. Italy > Lake Como > …).
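A breadcrumb markup of that kind could look like the following JSON-LD (the URLs and place names are hypothetical):

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "name": "Italy",
      "item": "https://example.com/italy/"
    },
    {
      "@type": "ListItem",
      "position": 2,
      "name": "Lake Como",
      "item": "https://example.com/italy/lake-como/"
    }
  ]
}
```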
In this scatter plot we can also see that the highest semantic similarity between titles and keywords has an impact on high rankings for this specific website.
Semantic Similarity between keywords and titles visualized
Start running your semantic content audit
Crawling your website with natural language processing and machine learning to extract and analyze the main entities greatly helps you improve the findability of your content. Adding semantically rich structured data to your web pages helps search engines match your content with the right audience. Thanks to NLP and deep learning, I could see that to reduce the gap between what people search for and the existing titles, it was important for this website to add the breadcrumbs markup with the geographical location of the villas. Once again AI, while still incapable of true understanding, helps us become more relevant for our audience (and it does so at web scale, on hundreds of web pages).
Solutions like the TF-Hub Universal Encoder put the same AI machinery that modern search engines like Google use to compute the relevancy of content into the hands of SEO professionals and marketers. Unfortunately, this specific model is limited to English only.
Are you ready to run your first semantic content audit?
In this article, I will share my findings while attempting to use neural networks to describe the content of images. Images greatly contribute to a website’s SEO and improve the overall user experience. Fully optimizing images is about helping users, and search engines, better understand the content of an article.
The SEO community has always been quite keen to recommend that publishers invest in visual elements, and this has become even more important in 2019 as Google keeps revamping Google Image Search by adding new filters and new functionalities.
Google’s Image Search user interface
There are several aspects that Google mentions in its list of best practices for images, but the work I’ve been focusing on for this article is about providing alt text and captions in a semi-automated way. Alt text and captions improve accessibility for people who use screen readers or have limited connectivity, and help search engines understand what the content of an article is about.
“Google Images and Video search is often overlooked, but they have massive potential.”
“We simply know that media search is way too ignored for what it’s capable of doing for publishers so we’re throwing more engineers at it as well as more outreach.”
– Gary Illyes, Google’s Chief of Sunshine and Happiness & trends analyst
Let’s start with the basics of image SEO with this historical video from Matt Cutts who, back in 2007, explained to webmasters worldwide the importance of descriptive alt text in images.
Agentive SEO: AI that works for webmasters…sort of
The work we do at WordLift with our partner WooRank aims at building agentive technologies for digital marketers. I had the pleasure of meeting Christopher Noessel in San Francisco and learning from him the principles of agentive technology (Chris has written a terrific book that I recommend reading, called Designing Agentive Technology). One of the most important aspects of designing agentive tech is to focus on efficient workflows that augment human intelligence with the power of machines, taking into account the strengths and the limitations of today’s AI.
Make Your Website Smarter with AI-Powered SEO: just focus on your content and let the AI grow the traffic on your website!
The workflow to enrich image metadata in WordPress
In this experiment we proceed as follows:
we start by downloading the XML export feed for media files using the WordPress Export tool
we send a request to the Microsoft Vision APIs
we store the results in a CSV file that we can later use to check and validate the outcome of the analysis with Google Sheets (or Excel) using the power of our natural intelligence 😎
we add the descriptions back into the CMS with an importer (I haven’t developed this part yet, but there are already plugins that import data stored in CSV files into the WordPress database).
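The workflow above can be sketched in a few Python functions. The endpoint and header follow the v2.0 Computer Vision API, and the tag namespace matches the WXR 1.2 export format; the describe_image call is shown for completeness but not executed here, since it needs a real subscription key.

```python
import csv
import io
import json
import urllib.request
import xml.etree.ElementTree as ET

# Tag name used by the WordPress WXR 1.2 export format.
ATTACHMENT_URL = "{http://wordpress.org/export/1.2/}attachment_url"

def attachment_urls(wxr_xml):
    """Extract media file URLs from a WordPress export (WXR) document."""
    return [el.text for el in ET.fromstring(wxr_xml).iter(ATTACHMENT_URL)]

def describe_image(image_url, api_key, region="westeurope"):
    """Ask the Microsoft Computer Vision 'describe' endpoint for a caption.

    Not called in this sketch: it needs a real subscription key."""
    endpoint = "https://%s.api.cognitive.microsoft.com/vision/v2.0/describe" % region
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"url": image_url}).encode(),
        headers={
            "Ocp-Apim-Subscription-Key": api_key,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def write_results(rows, fh):
    """Store (url, caption, confidence) rows as CSV for editorial review."""
    writer = csv.writer(fh)
    writer.writerow(["url", "caption", "confidence"])
    writer.writerows(rows)

# In-memory demo of the CSV step.
buf = io.StringIO()
write_results([("https://example.com/a.jpg", "a giraffe standing", 0.82)], buf)
```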
Purely relying on machines is not really an option to improve your image SEO and I will show you why. Nevertheless, a strong-willed editor with the code described in this article can curate hundreds of images in a few hours.
Keep on reading if you are interested in ML experiments or simply jump at the end of the article to get the code I finally used to enrich the media library of one of the clients of our SEO managed services.
Get Comfortable with experiments
Machine learning requires a new mindset, way different from the one we have in traditional programming. You tend to write less code and to focus most of the attention on the data being used for training the model, but … in the end, will the model you are building be usable in a real-world environment? Can you really rely on it to improve your search rankings? Hard to say from the start.
The advantages of setting up your own pipeline for training an ML model are obvious – especially if, like us, you are building a product that thousands of people will use:
You are totally independent of external providers (this usually means you keep control of the costs)
You can fine-tune the data as well as the model for the needs of your users
The implementation is based on a combination of two different networks:
A pre-trained resnet-152 model that acts as an encoder. It transforms the image into a vector of features that is sent to the decoder.
A decoder that uses an LSTM network (LSTM stands for Long Short-Term Memory, a type of Recurrent Neural Network) to compose the phrase that describes the feature vector received from the encoder. LSTMs, I learned along the way, are used by Google and by Amazon’s Alexa for speech recognition; Google also uses them in the Google Assistant and in Google Translate.
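To make the decoder’s role concrete, here is a toy greedy decoding loop. In the real model the step function is an LSTM conditioned on the resnet-152 features; here a simple lookup table stands in so the control flow is visible.

```python
# Toy vocabulary: each token deterministically points to the next one.
NEXT_TOKEN = {
    "<start>": "a",
    "a": "giraffe",
    "giraffe": "standing",
    "standing": "<end>",
}

def greedy_caption(step, max_len=20):
    """Emit tokens one at a time until <end> (or max_len) is reached."""
    tokens, token = [], "<start>"
    for _ in range(max_len):
        token = step(token)
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)
```

Swapping the lookup table for a trained LSTM step is, conceptually, all that separates this loop from the real caption generator.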
One of the main datasets used for training in image captioning is called COCO. It is made of a vast number of images, and each image has 5 different captions that describe it.
I quickly realized that training the model on my laptop would have required almost 17 days non-stop with the CPU running at full throttle. I had to be realistic, so I downloaded the pre-trained model that was available.
Needless to say, I remained speechless as soon as everything was in place and I was ready to make the model talk for the first time. When I provided the image below, the result was encouraging.
Unfortunately, as I moved forward with the experiments and left the giraffes for more mundane scenery (the team in the office), the results were bizarre, to use a euphemism, and far from usable in our competitive SEO landscape.
Don’t settle for less than the best model
As I kept experimenting with different images, while happy that I was now able to fully control all the parameters, I had to accept that this implementation of the Show and Tell paper was not good enough for our users. Great for generative poetry, perhaps, but no good for SEO.
While I am still evaluating new alternatives (there is a very promising attention model implementation in TensorFlow that I would love to test), I had to focus on what the industry considers state-of-the-art for this specific task: the Microsoft Vision API. You can play with it directly online on the http://captionbot.com website, and you will see that the results are significantly different from my homebrewed image captioning model in PyTorch.
Microsoft wisely offers a freemium model, and you have up to 5,000 API calls per month to get started without opening your wallet.
Fasten your seatbelts and run the analysis
In order to optimize the description of images for anyone running WordPress, I prepared a script in Python that uses the Microsoft Computer Vision API and that you can find on GitHub.
You will need an API key from Microsoft and the export of your WordPress Media Library in XML that can be generated using the WordPress Export Tool.
The result of running the script is a CSV file that contains the URL of the image, the title of the image, the proposed description of the image, and a confidence score. This confidence score is very useful to quickly filter the results and focus your attention where it is needed most (as you can see from the image below, there is a big difference between the first image, which has a score of 0.5, and the image right after, which has a score of 0.8).
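Filtering by that confidence score can be done in a few lines of Python (the column names and rows below are illustrative, not necessarily those produced by the script):

```python
import csv
import io

def low_confidence_rows(fh, threshold=0.6):
    """Rows whose caption confidence is below the threshold, weakest first."""
    rows = [r for r in csv.DictReader(fh) if float(r["confidence"]) < threshold]
    return sorted(rows, key=lambda r: float(r["confidence"]))

# Made-up output rows for the demo.
sample = io.StringIO(
    "url,caption,confidence\n"
    "https://example.com/a.jpg,a giraffe standing,0.8\n"
    "https://example.com/b.jpg,a group of people,0.5\n"
)
flagged = low_confidence_rows(sample)
```

Sorting weakest-first lets the editor review the least trustworthy captions before anything else.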
Once the data is validated by an editor using Excel or Google Sheets, it can be imported back into WordPress using any plugin that imports CSV into the database, or a custom script (which I still need to write).
Follow the instructions on GitHub or write me an email if you are interested in doing image SEO with the help of machine learning. The code is far from perfect and has only been tested on a couple of websites (please use it at your own risk).
Experimenting in ML is essential. A great wealth of resources, including pre-trained machine learning models, is available and can encode knowledge to help us with SEO tasks.
While the state-of-the-art neural network from Microsoft still interprets a young Bill Slawski (alongside an even younger Neil Patel) as … yes, a woman, with a proper workflow you can still get very useful results and scale up your SEO productivity for image tagging.
Bill Slawski and Neil Patel
In the coming weeks, we will keep on testing this approach and hopefully measuring some positive impact in terms of organic traffic (this blog post is still very much a work in progress). It is also worth continuing to test new ML networks that take advantage of hierarchical neural attention; these new approaches are superseding models based on RNNs / LSTMs (here is a good article on the topic).
Spoken Languages: Arabic (Native Speaker), Italian (C2), English (C1).
Bio: An SEO Expert and Digital Marketing Specialist based in Rome. His expertise includes Digital Marketing, Search Engine Optimization, Search Engine Marketing, Keywords Research, and Conversion Rate Optimization. He can’t say no to pizza.
Let’s Get to Know Doreid
What’s your Superpower? Analysis and numbers: studying the main web metrics, keyword research and discovery, data analysis, competitor analysis, and content optimization to get results, as well as managing the development process.
Where have you lived? Where did you grow up? I was born and grew up in Syria, then I moved to Lebanon, where I spent some time before settling in Italy in 2013. I worked in the hospitality and tourism sector, moving from hotels in my country to the Royal Group of Rome and finally to Marriott International, alongside my work in digital marketing.
What do you like to do in your free time? Football, computer, TV & traveling.
If you could describe yourself with an app, what would it be and why? The Google Ads app, which keeps campaigns running smoothly no matter where your business takes you, because I am results-oriented, constantly checking in on the goal to determine how close or how far away we are and what it will take to make it happen.
If you could be in the movie of your choice, what movie would you choose and what character would you play? La Casa de Papel “Money Heist”, I think I would be a perfect fit for the role of “The Professor”.
3 things you love the most about being a Wordlifter: Working with a highly skilled, passionate, and well-organized team. Doing SEO in all the languages I speak for WordLift’s international clients. The variety: it is always changing and evolving, and I enjoy watching the process of a creative idea growing into a successful business.
Technology is all around us, and there is no escaping it. The best thing is that it is continually evolving, with technological breakthroughs seen almost daily. One of the technologies that is getting better every day and making our lives easier is voice search.
When Did It All Begin?
More than half a century ago, IBM introduced the IBM Shoebox, the first speech recognition tool. The father of voice recognition devices was able to recognize 16 words and the digits from 0 to 9. As you will see in the infographic below by SEO Tribunal, voice recognition technology has come a long way since its beginnings to become what it is today. Mostly implemented by mobile device manufacturers, today’s voice technology gives users the ability to do online searches, find information about products, ask questions, ask for directions or the weather forecast, and many other things just by talking to a device.
How Does Voice Search Work?
First of all, it processes and transcribes human speech into text before analyzing it to detect questions and commands. After that, it connects to external data sources such as search engines to find the relevant information, and translates that information into a digestible format that fulfills the user’s intent.
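Those steps can be mirrored in a stub pipeline. Every stage below is a placeholder standing in for the real speech recognition, intent detection, and search components:

```python
def transcribe(audio):
    """Placeholder: a real system runs speech recognition here."""
    return audio["text"]

def detect_intent(text):
    """Placeholder intent detection based on a simple prefix check."""
    if text.lower().startswith("what is the weather"):
        return {"intent": "weather", "query": text}
    return {"intent": "search", "query": text}

def fetch(intent):
    """Placeholder for the call out to a search engine or weather service."""
    if intent["intent"] == "weather":
        return "Sunny, 24°C"
    return 'Top result for "%s"' % intent["query"]

def answer(audio):
    """Chain the stages: transcribe, detect intent, fetch, and format."""
    return fetch(detect_intent(transcribe(audio)))
```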
What Are The Best Voice Search Engines?
There is a continuous battle between big players to make the best voice search engine. This is good news as voice search assistants are becoming more and more sophisticated, thus making our lives easier. Let’s take a look at the top brands manufacturing these devices.
Google Assistant is powered by AI and is primarily available on mobile and smart home devices. It was launched in 2016, and the thing which makes it different from its predecessors is that it can engage in two-way conversations.
Microsoft’s Cortana was released in April 2014. Available in multiple languages, it can set reminders, recognize natural speech, and answer questions using information found on Bing.
Amazon’s Echo has many capabilities, such as voice interaction, music playback, creating to-do lists, and streaming podcasts, to name just a few. The best thing about it is that it can be extended by installing other functions which are developed by third-party vendors.
Samsung’s Bixby is a voice-powered digital assistant introduced in 2017. It is a major reboot for S Voice. Aside from being used on smartphones and other mobile devices, Bixby is included in Samsung’s Family Hub 2.0 refrigerators, which are the first non-mobile products to include a virtual assistant.
Apple’s Siri is part of the iOS, watchOS, macOS, HomePod, and Apple TV operating systems. It works by using voice queries and a natural-language user interface to perform actions such as answering questions, checking for information, and navigating, among many other things.
What Lies Ahead?
As of January 2018, around 1 billion voice searches were made per month, and in the next couple of years 50% of searches are expected to be made using voice-enabled technology. It is also predicted that over the next years the voice recognition market will experience huge growth, reaching an estimated $601 million in 2019 alone.
We can definitely say that the future of voice-enabled technology is bright. The constant need for improvement is the driving force that pushes companies to produce the best voice assistants out there, and both the companies and their customers reap the benefits.
As the Marketing Manager at SEO Tribunal, part of Tina’s daily engagements involve raising awareness of the importance of digital marketing when it comes to the success of small businesses. As her first step towards this journey was in the field of content marketing, she’s still using every opportunity she gets to put her thoughts into educational articles.
Helping editors organize, monitor and optimize search rankings
An entity-centric approach that uses the knowledge graph to help editorial teams improve the organic visibility of their content. In this presentation, you will see how a knowledge graph can become a powerful new digital marketing asset to improve the organic traffic on your website.
Small and mid-sized editorial teams may struggle to identify and prioritize topics within their editorial planning. What if they don’t know what to write next? Or which topics performed best in the past 3 months? How can they be sure that their pieces match the interests of their target audience? And how could the organic traffic on their site be improved? These are the questions we want to answer by bringing actionable data into the hands of web editors.