{"id":10366,"date":"2019-03-11T17:32:10","date_gmt":"2019-03-11T17:32:10","guid":{"rendered":"https:\/\/wordlift.io\/blog\/en\/?p=10366"},"modified":"2021-11-02T17:33:13","modified_gmt":"2021-11-02T16:33:13","slug":"title-tag-seo-using-ai","status":"publish","type":"post","link":"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/","title":{"rendered":"Title tag optimization using deep learning"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">In this article, we explore how to evaluate the correspondence between title tags and the keywords that people use on Google to reach the content they need. We will share the results of the analysis (and the code behind) using a TensorFlow <\/span><span style=\"font-weight: 400;\">model for encoding sentences into embedding vectors. The result is a list of titles that can be improved on your website. <\/span><\/p>\n<table>\n<tbody>\n<tr>\n<td><img decoding=\"async\" class=\"aligncenter wp-image-10367\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/colab_favicon_256px.png\" alt=\"\" width=\"83\" height=\"73\"><\/td>\n<td><span style=\"font-weight: 400;\">Jump directly to the code: <\/span><a href=\"https:\/\/colab.research.google.com\/drive\/1c8LkZXQNe_9nGwSO5Ofp4kptGV61lIgZ\"><b>Semantic Similarity of Keywords and Titles &#8211; a SEO task using TF-Hub Universal Encoder<\/b><\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2><span style=\"font-weight: 400;\">Let\u2019s start with the basics. What is the title tag? <\/span><\/h2>\n<p style=\"padding-left: 30px;\"><span style=\"font-weight: 400;\">We read <\/span><a href=\"https:\/\/www.woorank.com\/en\/edu\/seo-guides\/title-tag-seo\"><span style=\"font-weight: 400;\">on Woorank<\/span><\/a><span style=\"font-weight: 400;\"> a simple and clear definition.<\/span><\/p>\n<p style=\"padding-left: 30px;\"><span style=\"font-weight: 400;\">\u201cA title tag is an HTML element that defines the title of the page. 
Titles are <\/span><b>one of the most important on-page factors for SEO<\/b><span style=\"font-weight: 400;\"><\/span>. [&#8230;]<\/p>\n<p style=\"padding-left: 30px;\"><span style=\"font-weight: 400;\">They are used, combined with meta descriptions, by search engines to create the search snippet displayed in search results.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Every search engine\u2019s most fundamental goal is to match the intent of the searcher by analyzing the query to find the best content on the web on that specific topic. In the quest for relevance, a good title influences search engines only partially (it takes a lot more than just matching the title with the keyword to rank on Google) but it does have an impact, especially on the top ranking positions (1st and 2nd according to a study conducted a few years ago by <\/span><a href=\"https:\/\/cognitiveseo.com\/blog\/11701\/title-influence-on-rankings\/#2\"><span style=\"font-weight: 400;\">Cognitive SEO<\/span><\/a><span style=\"font-weight: 400;\">). This is also because a searcher is more inclined to click when they find <\/span><b>good semantic correspondence<\/b><span style=\"font-weight: 400;\"> between the keyword used on Google and the title (along with the meta description) displayed in the search snippet of the SERP.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">What is semantic similarity in text mining? <\/span><\/h2>\n<p><b>Semantic similarity<\/b><span style=\"font-weight: 400;\"><\/span> defines the distance between terms (or documents) by <b>analyzing their semantic meanings<\/b><span style=\"font-weight: 400;\"> as opposed to looking at their syntactic form. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cApple\u201d and \u201capple\u201d are the same word and if I compute the difference syntactically using an algorithm like Levenshtein they will look identical, on the other hand, by analyzing the context of the phrase where the word apple is used I can \u201cread\u201d the true semantic meaning and find out if the word is referencing the world-famous tech company headquartered in Cupertino or the sweet forbidden fruit of Adam and Eve.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A search engine like Google uses <a class=\"wl-entity-page-link\" title=\"Natural language processing\" href=\"https:\/\/wordlift.io\/blog\/en\/entity\/natural-language-processing\/\" data-id=\"http:\/\/data.wordlift.io\/wl0216\/entity\/natural_language_processing;http:\/\/rdf.freebase.com\/ns\/m.05flf;http:\/\/dbpedia.org\/resource\/Natural_language_processing;http:\/\/be.dbpedia.org\/resource\/\u0410\u043f\u0440\u0430\u0446\u043e\u045e\u043a\u0430_\u043d\u0430\u0442\u0443\u0440\u0430\u043b\u044c\u043d\u0430\u0439_\u043c\u043e\u0432\u044b;http:\/\/ru.dbpedia.org\/resource\/\u041e\u0431\u0440\u0430\u0431\u043e\u0442\u043a\u0430_\u0435\u0441\u0442\u0435\u0441\u0442\u0432\u0435\u043d\u043d\u043e\u0433\u043e_\u044f\u0437\u044b\u043a\u0430;http:\/\/pt.dbpedia.org\/resource\/Processamento_de_linguagem_natural;http:\/\/bg.dbpedia.org\/resource\/\u041e\u0431\u0440\u0430\u0431\u043e\u0442\u043a\u0430_\u043d\u0430_\u0435\u0441\u0442\u0435\u0441\u0442\u0432\u0435\u043d_\u0435\u0437\u0438\u043a;http:\/\/lt.dbpedia.org\/resource\/Nat\u016bralios_kalbos_apdorojimas;http:\/\/fr.dbpedia.org\/resource\/Traitement_automatique_du_langage_naturel;http:\/\/uk.dbpedia.org\/resource\/\u041e\u0431\u0440\u043e\u0431\u043a\u0430_\u043f\u0440\u0438\u0440\u043e\u0434\u043d\u043e\u0457_\u043c\u043e\u0432\u0438;http:\/\/id.dbpedia.org\/resource\/Pemrosesan_bahasa_alami;http:\/\/ca.dbpedia.org\/resource\/Processament_de_llenguatge_natural;http:\/\/sr.dbpedia
.org\/resource\/Obrada_prirodnih_jezika;http:\/\/en.dbpedia.org\/resource\/Natural_language_processing;http:\/\/is.dbpedia.org\/resource\/M\u00e1lgreining;http:\/\/it.dbpedia.org\/resource\/Elaborazione_del_linguaggio_naturale;http:\/\/es.dbpedia.org\/resource\/Procesamiento_de_lenguajes_naturales;http:\/\/cs.dbpedia.org\/resource\/Zpracov\u00e1n\u00ed_p\u0159irozen\u00e9ho_jazyka;http:\/\/pl.dbpedia.org\/resource\/Przetwarzanie_j\u0119zyka_naturalnego;http:\/\/ro.dbpedia.org\/resource\/Prelucrarea_limbajului_natural;http:\/\/da.dbpedia.org\/resource\/Sprogteknologi;http:\/\/tr.dbpedia.org\/resource\/Do\u011fal_dil_i\u015fleme\" >NLP<\/a> and <a class=\"wl-entity-page-link\" title=\"machine-learning\" href=\"https:\/\/wordlift.io\/blog\/en\/entity\/machine-learning\/\" data-id=\"http:\/\/data.wordlift.io\/wl0216\/entity\/machine_learning;http:\/\/dbpedia.org\/resource\/Machine_learning;https:\/\/www.wikidata.org\/wiki\/Q2539\" >machine learning<\/a> to find the right semantic match between the intent and the content. This means the search engines are no longer looking at keywords as strings of text but they are reading the true meaning that each keyword has for the searcher. As SEO and marketers, we can also now use AI-powered tools to create the most authoritative content for a given query.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">There are two main ways to compute the semantic similarity using NLP: <\/span><\/p>\n<ol>\n<li><span style=\"font-weight: 400;\">we can compute the distance of two terms <\/span><b>using semantic graphs<\/b> <b>and ontologies<\/b><span style=\"font-weight: 400;\"> by looking at the distance between the nodes (this is how our tool WordLift is capable of discerning if apple &#8211; in a given sentence &#8211; is the company founded by Steve Jobs or the sweet fruit). 
A trivial but interesting example is to build a \u201c<\/span><i><span style=\"font-weight: 400;\">semantic tree<\/span><\/i><span style=\"font-weight: 400;\">\u201d (or, more precisely, <\/span><i><span style=\"font-weight: 400;\">a<\/span><\/i><i><span style=\"font-weight: 400;\"> directed graph<\/span><\/i><span style=\"font-weight: 400;\">) by using the Wikidata P279 property (subclass of)<\/span><span style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">.<\/span><\/span>\n<p><figure id=\"attachment_10369\" aria-describedby=\"caption-attachment-10369\" style=\"width: 1496px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" class=\"size-full wp-image-10369\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/semantic-tree-apple-wikidata.png\" alt=\"semantic tree for Apple by Wikidata\" width=\"1496\" height=\"630\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/semantic-tree-apple-wikidata.png 1496w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/semantic-tree-apple-wikidata-300x126.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/semantic-tree-apple-wikidata-1024x431.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/semantic-tree-apple-wikidata-768x323.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/semantic-tree-apple-wikidata-150x63.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/semantic-tree-apple-wikidata-1080x455.png 1080w\" sizes=\"(max-width: 1496px) 100vw, 1496px\" \/><figcaption id=\"caption-attachment-10369\" class=\"wp-caption-text\">You can run the query on Wikidata and generate a P279 graph for \u201capple\u201d (the fruit) <a href=\"http:\/\/tinyurl.com\/y39pqk5p\">http:\/\/tinyurl.com\/y39pqk5p<\/a><\/figcaption><\/figure><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 
we can al">
400;\">we can alternatively use <\/span><b>a statistical approach<\/b><span style=\"font-weight: 400;\"> and train a <\/span><b>deep neural network<\/b><span style=\"font-weight: 400;\"> to build &#8211; from a text corpus (a collection of documents) &#8211; a <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Vector_space_model\"><span style=\"font-weight: 400;\">vector space model<\/span><\/a><span style=\"font-weight: 400;\"> that will help us transform terms into numbers to analyze their semantic similarity and run other NLP tasks (e.g. classification).<\/span><\/li>\n<\/ol>\n<p>There is a&nbsp;<span>crucial debate behind these two approaches. The essential question&nbsp;being: is there a path by which our machines can possess any true understanding?&nbsp;Our best AI efforts, after all, only create an illusion of understanding. Both rule-based ontologies and statistical models are far from producing real thought as it is understood in cognitive studies of the human brain. I am not going to expand here but, if you are in the mood, read this<a href=\"https:\/\/medium.com\/@danlovy1961\/noam-chomsky-has-weighed-in-on-a-i-where-do-you-stand-f478d1b0e0ea\"> blog post on the Noam Chomsky \/ Peter Norvig debate<\/a>.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Text embeddings in SEO<\/span><\/h2>\n<p>[box] Word embeddings (or text embeddings) are a type of algebraic representation of words that allows words with similar meanings to have similar mathematical representations. A vector is an array of numbers of a particular dimension. 
We calculate how close or distant two words are by measuring the distance between these vectors.[\/box]<\/p>\n<p><span style=\"font-weight: 400;\">In this article, we\u2019re going to extract embeddings using the <\/span><a href=\"https:\/\/tfhub.dev\/google\/universal-sentence-encoder-large\/3\"><b>tf.Hub Universal Sentence Encoder<\/b><\/a><span style=\"font-weight: 400;\">, <\/span><b>a pre-trained deep neural network<\/b><span style=\"font-weight: 400;\"> designed to convert text into high-dimensional vectors for natural language tasks. We want to analyze the semantic similarity between hundreds of combinations of Titles and Keywords from one of the clients of our <\/span><a href=\"https:\/\/wordlift.io\/seo-management-service\/\"><span style=\"font-weight: 400;\">SEO management services<\/span><\/a><span style=\"font-weight: 400;\">. We are going to focus our attention on <\/span><b>only one keyword per URL<\/b><span style=\"font-weight: 400;\"><\/span>, the keyword with the highest ranking (of course, we can also analyze multiple combinations). While a page might attract traffic on hundreds of keywords, we typically expect to see most of the traffic coming from the keyword with the highest position on Google.<\/p>\n<p><span style=\"font-weight: 400;\">We are going to start from <\/span><a href=\"https:\/\/github.com\/tensorflow\/hub\/blob\/master\/examples\/colab\/semantic_similarity_with_tf_hub_universal_encoder.ipynb\"><span style=\"font-weight: 400;\">the original code<\/span><\/a><span style=\"font-weight: 400;\"> developed by the TensorFlow Hub team and we are going to use Google Colab (a free cloud service with GPU support to work with machine learning). 
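To make the idea of measuring distance between vectors concrete, here is a minimal sketch of cosine similarity with toy three-dimensional vectors. The values are invented for illustration; the Universal Sentence Encoder produces 512-dimensional vectors.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional "embeddings" (made up; real models use hundreds of dimensions).
king, queen, banana = [0.9, 0.8, 0.1], [0.85, 0.82, 0.15], [0.1, 0.2, 0.95]

print(round(cosine_similarity(king, queen), 3))   # close to 1.0
print(round(cosine_similarity(king, banana), 3))  # much lower
```

The same measure works regardless of how the vectors were produced, which is why it applies equally to short keywords and long titles.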
You can copy <\/span><a href=\"https:\/\/colab.research.google.com\/drive\/1c8LkZXQNe_9nGwSO5Ofp4kptGV61lIgZ#scrollTo=co7MV6sX7Xto\"><span style=\"font-weight: 400;\">the code<\/span><\/a><span style=\"font-weight: 400;\"> I worked on and run it on your own instance.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Our starting point is a CSV file containing Keyword, Position (the actual ranking on Google) and Title. You can generate this CSV from the GSC or use any keyword tracking tool like Woorank, MOZ or Semrush. You will need to upload the file to the session storage of Colab (there is an option you can click in the left tray) and you will need to update the file name on the line that starts with:<\/span><\/p>\n<pre><span style=\"font-weight: 400;\">df = pd.read_csv( \u2026 )<\/span><\/pre>\n<p><span style=\"font-weight: 400;\">Here is the output. <\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter size-full wp-image-10372\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/table-starting-csv.png\" alt=\"\" width=\"1396\" height=\"182\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/table-starting-csv.png 1396w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/table-starting-csv-300x39.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/table-starting-csv-1024x134.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/table-starting-csv-768x100.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/table-starting-csv-150x20.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/table-starting-csv-1080x141.png 1080w\" sizes=\"(max-width: 1396px) 100vw, 1396px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">Let\u2019s get into action. 
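The loading step above can be sketched as follows. The inline rows are purely illustrative; in the notebook you call `pd.read_csv` on the file you uploaded to the Colab session storage, with the same `Keyword`, `Position`, `Title` columns.

```python
import io
import pandas as pd

# Illustrative data: in the notebook this comes from pd.read_csv on the
# uploaded CSV (Keyword, Position, Title). These rows are made up.
csv_data = io.StringIO(
    "Keyword,Position,Title\n"
    "barbados villa rental,1,Beachfront Villas in Barbados | Example Site\n"
    "villas in costa rica,3,Luxury Villa Rentals | Example Site\n"
    "anguilla beach villas,8,Our Caribbean Collection | Example Site\n"
)
df = pd.read_csv(csv_data)

# Look at the best-ranking rows first (Position 1 is the top of Google).
df = df.sort_values("Position")
print(df.head())
```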
The pre-trained model comes in two flavors: one trained with a <\/span><b>Transformer encoder<\/b><span style=\"font-weight: 400;\"><\/span> and another trained with a <b>Deep Averaging Network<\/b><span style=\"font-weight: 400;\"> (DAN). The first one is more accurate but has higher computational resource requirements. I used the Transformer since I only worked with a few hundred combinations. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the code below we initialize the module, open the session (it takes some time, so the same session is used for all the extractions), get the embeddings, compute the semantic distance and store the results. I did some tests in which I removed the site name; this helped me see things differently but, in the end, I preferred to keep whatever a search engine would see.<\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter size-full wp-image-10373\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/machine-learning-seo-experiment-tfhub.png\" alt=\"\" width=\"1866\" height=\"974\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/machine-learning-seo-experiment-tfhub.png 1866w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/machine-learning-seo-experiment-tfhub-300x157.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/machine-learning-seo-experiment-tfhub-1024x534.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/machine-learning-seo-experiment-tfhub-768x401.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/machine-learning-seo-experiment-tfhub-1536x802.png 1536w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/machine-learning-seo-experiment-tfhub-150x78.png 150w, 
https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/machine-learning-seo-experiment-tfhub-1080x564.png 1080w\" sizes=\"(max-width: 1866px) 100vw, 1866px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">The <\/span><span style=\"font-weight: 400;\">semantic similarity &#8211; the degree to which the title and the keyword carry the same meaning &#8211; is calculated as the inner product of the two vectors. <\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-10374\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/Universal-Sentence-Encoder.png\" alt=\"\" width=\"670\" height=\"178\"><\/p>\n<p><span style=\"font-weight: 400;\">An interesting aspect of using word embeddings from this model is that &#8211; for English content &#8211; I can easily calculate the semantic similarity of both short and long text. This is particularly helpful when looking at a dataset that might contain very short keywords and very long titles. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The result is a table of the combinations ranking between positions 1 and 5 that have the least semantic similarity (Corr). 
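The core of the pipeline can be sketched like this. Note that `embed` here is a stand-in for the TF-Hub encoder (random unit vectors, purely illustrative) and the rows are made up; the notebook computes the real embeddings with the Universal Sentence Encoder and the same inner product.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(texts):
    """Stand-in for the tf.Hub Universal Sentence Encoder: one unit-length
    512-dimensional vector per input text (random here, purely illustrative)."""
    vectors = rng.normal(size=(len(texts), 512))
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

rows = [  # (keyword, position, title) -- illustrative data
    ("barbados villa rental", 1, "Beachfront Villas | Example Site"),
    ("villas in costa rica", 3, "Luxury Villa Rentals | Example Site"),
    ("anguilla beach villas", 8, "Our Caribbean Collection | Example Site"),
]

keywords = embed([r[0] for r in rows])
titles = embed([r[2] for r in rows])

# Semantic similarity = row-wise inner product of the two unit vectors.
corr = np.sum(keywords * titles, axis=1)

# Pages ranking 1-5 whose title is least similar to the winning keyword
# come first: these are the titles worth improving.
to_improve = sorted((c, r) for c, r in zip(corr, rows) if 1 <= r[1] <= 5)
for c, (kw, pos, title) in to_improve:
    print(f"{c:+.3f}  pos {pos}  {kw!r} vs {title!r}")
```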
&nbsp;<\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter size-full wp-image-10375\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/pages-to-work-on.png\" alt=\"\" width=\"1290\" height=\"698\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/pages-to-work-on.png 1290w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/pages-to-work-on-300x162.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/pages-to-work-on-1024x554.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/pages-to-work-on-768x416.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/pages-to-work-on-150x81.png 150w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/pages-to-work-on-1080x584.png 1080w\" sizes=\"(max-width: 1290px) 100vw, 1290px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">It is interesting to see that, for this specific website, it can help to add the location to the title (e.g. Costa Rica, Anguilla, Barbados, \u2026). <\/span><\/p>\n<p><span style=\"font-weight: 400;\">With well-structured data markup we are already helping the search engine disambiguate these terms by specifying the geographical location, but for the user making the search, it might be beneficial to see at a glance the name of the location they are searching for in the search snippet. We can achieve this by revising the title or by <strong>bringing more structure to the search snippets using<\/strong><\/span><span style=\"font-weight: 400;\"><strong> schema breadcrumbs<\/strong><\/span><strong> to present the hierarchy of the places<\/strong> (e.g. 
Italy &gt; Lake Como &gt; \u2026).<\/p>\n<p><img decoding=\"async\" class=\"aligncenter size-full wp-image-10377\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/image-15.png\" alt=\"\" width=\"1026\" height=\"708\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/image-15.png 1026w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/image-15-300x207.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/image-15-1024x707.png 1024w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/image-15-768x530.png 768w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/image-15-150x104.png 150w\" sizes=\"(max-width: 1026px) 100vw, 1026px\" \/><\/p>\n<p><span style=\"font-weight: 400;\">In this scatter plot we can also see that the highest semantic similarity between titles and keywords has an impact on high rankings for this specific website.<\/span><\/p>\n<figure id=\"attachment_10378\" aria-describedby=\"caption-attachment-10378\" style=\"width: 715px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" class=\"wp-image-10378 size-full\" src=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/download-1.png\" alt=\"Semantic Similarity between keywords and titles visualized\" width=\"715\" height=\"612\" srcset=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/download-1.png 715w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/download-1-300x257.png 300w, https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/download-1-150x128.png 150w\" sizes=\"(max-width: 715px) 100vw, 715px\" \/><figcaption id=\"caption-attachment-10378\" class=\"wp-caption-text\">Semantic Similarity between keywords and titles visualized<\/figcaption><\/figure>\n<h2><span style=\"font-weight: 400;\">Start running your semantic content 
audit<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Crawling your website using natural language processing and machine learning to extract and analyze the main entities greatly helps you improve the findability of your content. Adding <strong>semantically rich structured data<\/strong><\/span> to your web pages helps search engines match your content with the right audience. Thanks to&nbsp;NLP and deep learning, I could see that, to reduce the gap between what people search for and the existing titles, it was important for this website to add the breadcrumbs markup with the geographical location of the villas. Once again AI, <em>while still incapable of true understanding<\/em>, <strong>helps us become&nbsp;more relevant <\/strong>for our audience (and it does it at web scale on hundreds of web pages).<\/p>\n<p><span style=\"font-weight: 400;\">Solutions like the TF-Hub Universal Encoder put, in the hands of SEO professionals and marketers, the same AI machinery that modern search engines like Google use to compute the relevance of content. Unfortunately, this specific&nbsp;model is limited to English only.<\/span><\/p>\n<blockquote><p><span style=\"font-weight: 400;\">Are you ready to run your first semantic content audit? <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Get in contact with our&nbsp;<\/span><a href=\"https:\/\/wordlift.io\/seo-management-service\/\"><span style=\"font-weight: 400;\">SEO management service<\/span><\/a><span style=\"font-weight: 400;\"> team now!<\/span><\/p><\/blockquote>\n\n","protected":false},"excerpt":{"rendered":"<p>In this article, we explore how to evaluate the correspondence between title tags and the keywords that people use on Google to reach the content they need. We will share the results of the analysis (and the code behind) using a TensorFlow model for encoding sentences into embedding vectors. 
The result is a list of &hellip; <a href=\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/\">Continued<\/a><\/p>\n","protected":false},"author":6,"featured_media":10391,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"wl_entities_gutenberg":"","_wlpage_enable":"","footnotes":""},"categories":[28,8],"tags":[],"wl_entity_type":[30],"coauthors":[],"class_list":["post-10366","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-world-summit-ai","category-seo","wl_entity_type-article"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v22.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Title tag SEO using deep learning and TensorFlow<\/title>\n<meta name=\"description\" content=\"We explore how to evaluate the correspondence between title tags and keywords using a TensorFlow Hub model for encoding sentences into embedding vectors.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Title tag SEO using deep learning and TensorFlow\" \/>\n<meta property=\"og:description\" content=\"We explore how to evaluate the correspondence between title tags and keywords using a TensorFlow Hub model for encoding sentences into embedding vectors.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"WordLift Blog\" \/>\n<meta property=\"article:published_time\" content=\"2019-03-11T17:32:10+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-11-02T16:33:13+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/title-tag-seo.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2286\" \/>\n\t<meta property=\"og:image:height\" content=\"1200\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Andrea Volpini\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Andrea Volpini\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/\"},\"author\":{\"name\":\"Andrea Volpini\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#\/schema\/person\/574352082cc71dab8d164410f1cabe0a\"},\"headline\":\"Title tag optimization using deep learning\",\"datePublished\":\"2019-03-11T17:32:10+00:00\",\"dateModified\":\"2021-11-02T16:33:13+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/\"},\"wordCount\":1589,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/title-tag-seo.jpg\",\"articleSection\":[\"AI &amp; Machine 
Learning\",\"seo\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/\",\"url\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/\",\"name\":\"Title tag SEO using deep learning and TensorFlow\",\"isPartOf\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/title-tag-seo.jpg\",\"datePublished\":\"2019-03-11T17:32:10+00:00\",\"dateModified\":\"2021-11-02T16:33:13+00:00\",\"description\":\"We explore how to evaluate the correspondence between title tags and keywords using a TensorFlow Hub model for encoding sentences into embedding vectors.\",\"breadcrumb\":{\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/wordlift.io\/blog\/en\/title-tag-seo-using-ai\/#primaryimage\",\"url\":\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/title-tag-seo.jpg\",\"contentUrl\":\"https:\/\/wordlift.io\/blog\/en\/wp-content\/uploads\/sites\/3\/2019\/03\/title-tag-seo.jpg\",\"width\":2286,\"height\":1200,\"caption\":\"title tag seo using deep 