Title tag optimization using deep learning

In this article, we explore how to evaluate the correspondence between title tags and the keywords that people use on Google to reach the content they need. We will share the results of the analysis (and the code behind it) using a TensorFlow model that encodes sentences into embedding vectors. The result is a list of titles that can be improved on your website.

Jump directly to the code: Semantic Similarity of Keywords and Titles – an SEO task using TF-Hub Universal Encoder

Let’s start with the basics. What is the title tag?

Woorank provides a simple and clear definition:

“A title tag is an HTML element that defines the title of the page. Titles are one of the most important on-page factors for SEO. […]

They are used, combined with meta descriptions, by search engines to create the search snippet displayed in search results.”

Every search engine’s most fundamental goal is to match the intent of the searcher by analyzing the query and finding the best content on the web on that specific topic. In the quest for relevancy, a good title influences search engines only partially (it takes a lot more than matching the title with the keyword to rank on Google), but it does have an impact, especially on the top ranking positions (1st and 2nd, according to a study conducted a few years ago by Cognitive SEO). This is also because a searcher is more inclined to click when they find a good semantic correspondence between the keyword used on Google and the title (along with the meta description) displayed in the search snippet of the SERP.

What is semantic similarity in text mining?

Semantic similarity defines the distance between terms (or documents) by analyzing their semantic meanings as opposed to looking at their syntactic form.

“Apple” and “apple” are the same word, and if I compute the difference syntactically using an algorithm like Levenshtein they will look identical. On the other hand, by analyzing the context of the phrase where the word apple is used, I can “read” the true semantic meaning and find out whether the word references the world-famous tech company headquartered in Cupertino or the sweet forbidden fruit of Adam and Eve.

A search engine like Google uses NLP and machine learning to find the right semantic match between the intent and the content. This means search engines are no longer looking at keywords as strings of text; they are reading the true meaning that each keyword has for the searcher. As SEOs and marketers, we can also now use AI-powered tools to create the most authoritative content for a given query.

There are two main ways to compute the semantic similarity using NLP:

  1. we can compute the distance between two terms using semantic graphs and ontologies by looking at the distance between the nodes (this is how our tool WordLift is capable of discerning whether apple – in a given sentence – is the company founded by Steve Jobs or the sweet fruit). A trivial but interesting example is to build a “semantic tree” (or, better, a directed graph) using the Wikidata P279 property (subclass of) – see the sketch right after this list.

    semantic tree for Apple by Wikidata

    You can run the query on Wikidata and generate a P279 graph for “apple” (the fruit) http://tinyurl.com/y39pqk5p

  2. we can alternatively use a statistical approach and train a deep neural network to build – from a text corpus (a collection of documents) – a vector space model that helps us transform terms into numbers, analyze their semantic similarity and run other NLP tasks (e.g. classification).
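Here is a minimal sketch of the graph-based approach in Python: it fetches the P279 chain for “apple” from the public Wikidata SPARQL endpoint (assuming Q89 is the Wikidata ID of the fruit; only the requests library is needed).

import requests

# Minimal sketch: walk the P279 (subclass of) chain for "apple" (Q89)
# through the public Wikidata SPARQL endpoint.
SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?class ?classLabel WHERE {
  wd:Q89 wdt:P279* ?class .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "semantic-tree-example/0.1"},
)
for row in response.json()["results"]["bindings"]:
    print(row["classLabel"]["value"])  # apple, fruit, food, ... up the tree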

There is a crucial debate behind these two approaches. The essential question: is there a path by which our machines can possess any true understanding? Our best AI efforts, after all, only create an illusion of understanding. Both rule-based ontologies and statistical models are far from producing real thought as it is known in cognitive studies of the human brain. I am not going to expand here but, if you are in the mood, read this blog post on the Noam Chomsky / Peter Norvig debate.

Text embeddings in SEO

Word embeddings (or text embeddings) are a type of algebraic representation of words that allows words with similar meaning to have similar mathematical representation. A vector is an array of numbers of a particular dimension. We calculate how close or distant two words are by measuring the distance between these vectors.

In this article, we’re going to extract embeddings using the TF-Hub Universal Sentence Encoder, a pre-trained deep neural network designed to convert text into high-dimensional vectors for natural language tasks. We want to analyze the semantic similarity between hundreds of combinations of titles and keywords from one of the clients of our SEO management services. We are going to focus our attention on only one keyword per URL, the keyword with the highest ranking (of course, we could also analyze multiple combinations). While a page might attract traffic on hundreds of keywords, we typically expect most of the traffic to come from the keyword with the highest position on Google.

We are going to start from the original code developed by the TensorFlow Hub team and we are going to use Google Colab (a free cloud service with GPU support for working with machine learning). You can copy the code I worked on and run it on your own instance.

Our starting point is a CSV file containing Keyword, Position (the actual ranking on Google) and Title. You can generate this CSV from Google Search Console (GSC) or use any keyword tracking tool like Woorank, MOZ or Semrush. You will need to upload the file to the session storage of Colab (there is an option you can click in the left tray) and update the file name on the line that starts with:

df = pd.read_csv( … )
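For example, assuming the file was uploaded under the hypothetical name keyword_positions.csv:

import pandas as pd

# Hypothetical file name – replace it with the name of your own upload.
df = pd.read_csv("keyword_positions.csv")  # columns: Keyword, Position, Title
df.head()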

Here is the output.

Let’s get into action. The pre-trained model comes in two flavors: one trained with a Transformer encoder and another trained with a Deep Averaging Network (DAN). The first one is more accurate but has higher computational resource requirements. I used the Transformer, considering that I only worked with a few hundred combinations.

In the code below we initiate the module, open the session (it takes some time, so the same session is reused for all the extractions), get the embeddings, compute the semantic distance and store the results. I did some tests in which I removed the site name; this helped me see things differently but, in the end, I preferred to keep whatever a search engine would see.

The semantic similarity – the degree to which the title and the keyword carry the same meaning – is calculated as the inner product of the two embedding vectors.
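Here is a condensed sketch of those steps, assuming TensorFlow 1.x (which the original notebook uses) and the df loaded above with its Title and Keyword columns:

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# The transformer flavor of the Universal Sentence Encoder.
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")

title_op = embed(df["Title"].tolist())
keyword_op = embed(df["Keyword"].tolist())

with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    title_vecs, keyword_vecs = session.run([title_op, keyword_op])

# The embeddings are (approximately) unit-length vectors, so the inner
# product of each title/keyword pair measures their semantic similarity.
df["Corr"] = [np.inner(t, k) for t, k in zip(title_vecs, keyword_vecs)]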

An interesting aspect of using word embeddings from this model is that – for English content – I can easily calculate the semantic similarity of both short and long text. This is particularly helpful when looking at a dataset that might contain very short keywords and very long titles.

The result is a table of the combinations, ranking between positions 1 and 5, that have the least semantic similarity (Corr).

It is interesting to see that, for this specific website, it can help to add the location (i.e. Costa Rica, Anguilla, Barbados, …) to the title.

With well-structured data markup we are already helping the search engine disambiguate these terms by specifying the geographical location, but for the user making the search it might be beneficial to see, at a glance, the name of the location he/she is searching for in the search snippet. We can achieve this by revising the title or by bringing more structure into the search snippets using schema:breadcrumbs to present the hierarchy of the places (e.g. Italy > Lake Como > …).

In this scatter plot we can also see that, for this specific website, higher semantic similarity between titles and keywords is associated with higher rankings.

Semantic Similarity between keywords and titles visualized

Start running your semantic content audit

Crawling your website using natural language processing and machine learning to extract and analyze the main entities greatly helps you improve the findability of your content. Adding semantically rich structured data to your web pages helps search engines match your content with the right audience. Thanks to NLP and deep learning, I could see that to reduce the gap between what people search for and the existing titles, it was important – for this website – to add the Breadcrumbs markup with the geographical location of the villas. Once again AI, while still incapable of true understanding, helps us become more relevant to our audience (and it does so at web scale, on hundreds of web pages).

Solutions like the TF-Hub Universal Encoder put in the hands of SEO professionals and marketers the same AI machinery that modern search engines like Google use to compute the relevancy of content. Unfortunately, this specific model is limited to English only.

Are you ready to run your first semantic content audit?

Get in contact with our SEO management service team now!

Machine Learning

What is Machine Learning?

Machine learning (ML) is a subfield of Artificial Intelligence that studies the algorithms computer systems can use to derive knowledge from data.

Machine learning algorithms are used to build mathematical models of sample data, known as “training data”, in order to make predictions or decisions without being explicitly programmed to perform the task.

There are three different types of machine learning algorithms:

  1. Supervised Learning. The data is labeled with the expected outcome in a “training dataset” that will help the system train itself to predict the outcome on new (previously unseen) data samples.
  2. Unsupervised Learning. Here the machine has no inputs in terms of expected outcomes and labels but it simply gets the features as numerical attributes and will find the hidden structure of the dataset.
  3. Reinforcement Learning. It helps with decision-making tasks. The system gets a reward when it is capable of making measurable progress on a given action, without knowing in advance how to get to the end. A typical example is the game of chess: the system learns by evaluating the results of a single action (e.g. moving the knight one square).

Machine learning is a subset of AI, which is an umbrella term for any computer program that does something smart. In this blog, we focus on the use of machine learning for search engine optimization, natural language processing, knowledge graphs and structured data.




Machine Learning in Action for Search Engine Optimization

In this post, I’ll walk through the analysis of Google Search Console data combined with a machine learning clustering technique to provide an indication of which pages can be optimized to improve the organic traffic of a company website. I will also highlight the lessons I learned while using machine learning for an SEO task.

Interestingly, when I propose to use their data, website owners are usually very relieved that AI can take care of the mundane, repetitive SEO work like analyzing GSC data; this allows the clients of our SEO management service, and our own team, to focus on more complex, value-adding work such as content writing, content enrichment, and monetization.

Machine learning is fun

This experiment is designed for anyone: no specific coding skill is required. A good grip on Google Sheets is more than enough to get you started mining your website’s GSC data.

We will use Orange, an open-source data mining and analysis tool built on top of Python that provides a visual programming front-end (a graphical user interface that lets you do what a developer would do in a Jupyter notebook environment with Python, yay!).

You can install Orange from Anaconda, a popular Python data science platform, or simply download it and install it from their website. We will also use data from a web crawler to extract information about the length of the title and the length of the meta description. This can be done using a WooRank account, Sitebulb or any other web crawler of your choosing.  

Stand on the shoulders of giants

Dealing with machine learning is indeed a paradigm shift. The basic idea is that we provide highly curated data to a machine; the machine learns from this data, programs itself and helps us in the analysis by grouping data points, making predictions or extracting relevant patterns from our dataset. Choosing the data points and curating the dataset is, in machine learning, as strategic as writing the computer program is in traditional computer science. By deciding the type of data you feed the machine, you are transferring the knowledge required to train it. To do so, you need the so-called domain experts. When I started this experiment I came across a tweet from Bill Slawski that pointed me to the importance of comparing search impressions to clicks on a page as the most valuable piece of data from the Google Search Console.

I also spotted another valuable conversation on the topic between Aleyda Solis and Cyrus Shepard.

Reading this, I decided to compile a dataset composed of the following attributes: the first six coming from GSC and the other two from crawling the pages.

The overall idea, as explained by Bill Slawski, is to rewrite the title and the meta description of pages that receive a good number of impressions and a low number of clicks.

“Want to know more about what data is provided by Google Search Console? Read it all here on the WooRank Blog.”

As we learned from Aleyda, another important aspect of winning the game is to focus only on pages that already have a strong position (between 3 and 5, she says). This is extremely important, as it will speed up the process and bring almost immediate results. Of course, the bracket might be different for smaller websites (in some cases, working with pages in positions between 3 and 10 might also be valuable).

How do I get the data from Google Search Console into Google Sheets?

Luckily GSC provides fast and reliable access to your data via its API, and you can use a Google Sheets add-on called searchanalyticsforsheets.com that automatically retrieves the data and stores it in Google Sheets without you writing a line of code. It is free, super simple to use and well documented (kudos to the development team 👏).

If you are more familiar with Python, you can also use this script by Stephan Solomonidis on GitHub that does pretty much the same work in only a few lines of code.
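At its core, any such script boils down to a single Search Analytics query against the GSC API. A minimal sketch (not Stephan’s actual code), assuming OAuth credentials are already available as creds and your property URL replaces the example one:

from googleapiclient.discovery import build

# The Webmasters API v3 exposes the Search Analytics endpoint of GSC.
service = build("webmasters", "v3", credentials=creds)

request = {
    "startDate": "2019-01-01",  # hypothetical date range
    "endDate": "2019-03-31",
    "dimensions": ["page", "query"],  # one row per page/query combination
    "rowLimit": 25000,
}
response = service.searchanalytics().query(
    siteUrl="https://www.example.com/", body=request
).execute()

for row in response.get("rows", []):
    page, query = row["keys"]
    print(page, query, row["clicks"], row["impressions"], row["position"])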

In my dataset, I wanted to have both queries and pages in the same file. A page usually ranks for multiple intents, and it is important to identify the main query we want to optimize for.

How can I merge two datasets in one?

Aggregating data from the crawler with data from GSC can be done directly in Orange using the Merge Data widget, which horizontally combines two datasets using the page as the matching attribute. Instead, I used Google Sheets with a combination of ARRAYFORMULA (it runs the function on an entire column) and VLOOKUP (this does the actual match and brings both title length and meta description length into the same table); a pandas equivalent is sketched right after the parameter list below.

=ARRAYFORMULA(VLOOKUP(A2:A,crawl_html!A6:C501,{1,2},false))
ARRAYFORMULA(VLOOKUP(search_key,range,index,[is_sorted]))
  • search_key (the attribute used in the matching)
  • range (the sheet with the data from the crawler)
  • index (the columns from the crawler dataset that we want to import: the length of the title and of the meta description)
  • is_sorted (typically set to FALSE since the two tables we’re merging don’t follow the same order)
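If you prefer Python over Google Sheets, the same join takes one line of pandas. A sketch with hypothetical file and column names:

import pandas as pd

gsc = pd.read_csv("gsc_data.csv")      # page, query, clicks, impressions, position
crawl = pd.read_csv("crawl_html.csv")  # page, title_length, meta_description_length

# Equivalent of the VLOOKUP: match on the page URL and bring in the
# title and meta description lengths from the crawler.
merged = gsc.merge(
    crawl[["page", "title_length", "meta_description_length"]],
    on="page",
    how="left",  # keep every GSC row even if the crawler missed the page
)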

Prepare data with loving care

Data curation is essential to obtain any valid result with artificial intelligence. Data preparation is also different for each algorithm: each machine learning algorithm requires data to be formatted in a very specific way, and I did several iterations before finding the combination of columns that yielded useful insights. Missing data and wrong formatting (when migrating data into Orange, in our case) were the main issues to deal with. Generally speaking, for missing data there are two options: either remove the data points or fill them in with average values (there are a lot more options to consider, but this is basically what I did in the various iterations). Formatting is quite straightforward: we simply want Orange to properly see each informative feature as a number (and not as a string).

The dataset

The dataset we’re working with is made of 15,784 rows, each one containing a specific combination of page and query. We have 3 informative features in the dataset (clicks, impressions, and position) and 5 labels (page, query, CTR, title length and meta description length). Page and query are categorical labels (we can group the data by the same query or by the same page). CTR is calculated as clicks / impressions × 100 and for this reason it is not an informative feature. Labels and calculated values are not informative: they don’t help the algorithm cluster the data. At the same time, they are extremely useful to help us understand and read the patterns in the data.

Dataset configuration in Orange
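In pandas terms, the split between informative features and descriptive labels looks like this (a sketch that builds on the hypothetical merged dataframe above):

# CTR is derived from two existing features, so it adds no information
# for clustering; we keep it only as a label for reading the results.
merged["ctr"] = merged["clicks"] / merged["impressions"] * 100

features = merged[["clicks", "impressions", "position"]]  # what k-Means sees
labels = merged[["page", "query", "ctr", "title_length",
                 "meta_description_length"]]              # for interpretation only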

Introducing k-Means for clustering search queries

When looking at thousands of combinations of queries across hundreds of web pages, selecting the pages with the highest potential in terms of SEO optimization is an intimidating task. This is particularly true when you have never done such an analysis before or when you are approaching a website that you don’t know (as we do – in most cases – when we start a project with a new client that is using our technology).

We want to be able to group the combinations of pages that can most easily be improved by updating the title and the snippet that describes the article. We also want to learn something new from the data we collected, to improve the overall quality of the content we will produce in the future. Clustering is a good approach as it breaks down the opportunities into a limited number of groups and unveils the underlying patterns in the data.

A cluster refers to a collection of data points aggregated together by a certain degree of similarity.

What is k-Means Clustering?

K-Means clustering is one of the simplest and most popular unsupervised machine learning algorithms. It makes inferences using only input features (data points like the number of impressions or the number of clicks), without requiring any labeled outcome.

K-Means groups the data by identifying a centroid for each group and assigning every record to one of a limited number of clusters. A centroid is the imaginary center of each cluster.

The pipeline in Orange

Here is how the flow looks in Orange. We import the CSV data we have created using the File widget and quickly analyze the data using the Distribution widget. The k-Means widget sits at the center of the workflow: it receives data from the Select Rows widget (a simple filter to work only on records positioned in the SERP between 3 and 10) and sends the output to a Scatter Plot that helps us visualize the clusters and understand the underlying patterns. On the other end, k-Means sends the data to a Data Table widget that produces the final report with the list of pages we need to work on and their respective queries. Here we also use a Select Rows widget to bring only the most relevant cluster into our final report.

The data analysis pipeline in Orange

The distribution of rankings.

Here is how the distribution of rankings looks.

The silhouette score in k-Means helps us understand how similar each combination is to its own cluster (cohesion) compared to other clusters (separation).

The silhouette score ranges from -1 to 1 (a high value indicates that the object is well matched to its own cluster). Using this value, the algorithm can define how many clusters we need (unless we specify otherwise) and the level of cohesion of each group. In our case, 3 clusters represent the best way to organize our data and prioritize our work. From the initial 15,784 samples (the rows in our dataset) we have now selected 1,010 instances (all the combinations with pages in positions 3-10) that have been grouped by k-Means.

k-Means configuration parameters
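What the Orange pipeline computes can be approximated in a few lines of scikit-learn. A sketch, reusing the hypothetical merged dataframe from above (Orange’s exact defaults may differ):

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Select Rows widget equivalent: keep only positions between 3 and 10.
subset = merged[(merged["position"] >= 3) & (merged["position"] <= 10)]
X = StandardScaler().fit_transform(subset[["clicks", "impressions", "position"]])

# Try several values of k and keep the one with the best silhouette score
# (this is how the number of clusters can be chosen automatically).
best_k, best_score = None, -1.0
for k in range(2, 9):
    candidate = KMeans(n_clusters=k, random_state=42, n_init=10).fit_predict(X)
    score = silhouette_score(X, candidate)
    if score > best_score:
        best_k, best_score = k, score

print(best_k, best_score)  # on the article's dataset, 3 clusters won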

What is the data telling us

We will use Orange’s intelligent data visualization to find informative projections and see how the data has been clustered. The projections are a list of attribute pairs, ranked by average classification accuracy, that show us the underlying patterns in our dataset. Here are the top 4 I chose to evaluate.

1. Focus on high impressions and low CTR: here is the list of pages to optimize

Scatter Plot #1 – CTR vs Impressions (the size of the symbols indicates the CTR)

There is no point in working on cluster C1: either there are very few impressions or the CTR is already high. Where it hurts the most is C3, followed by cluster C2.

We now have a total of 56 combinations of pages and queries that really deserve our attention (C2 and C3). Out of this batch, there are 18 instances in C3 (the most relevant group to address), which basically means working on 16 pages (2 pages are getting traffic from 2 queries each).

The final report with the pages to work on

This is the list for our content team to optimize. New titles and improved meta descriptions will yield better results in a few weeks.

2. Positions don’t matter as much as impressions

Scatter Plot #2 – Positions vs Impressions

Our three clusters are well distributed across all selected positions. We might prefer – unless there are strategic reasons to do otherwise – to improve the CTR of a page with a lower position but strong exposure, rather than improving the clicks of a higher-ranking result on a low-volume keyword.

3. Write titles with a length between 40 and 80 characters

Google usually displays the first 50–60 characters of a title tag. MOZ research suggests that you can expect about 90% of your titles to display properly when kept under 60 characters. From the data we gathered we could see that, while the vast majority of titles stay under 60 characters, we can still get a healthy CTR with titles up to 78 characters and no shorter than 38 characters.

Scatter Plot #3 – CTR vs Title Length
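You can eyeball the same window on your own data by bucketing CTR by title length (a sketch, reusing the hypothetical merged dataframe):

import pandas as pd

# Average CTR per title-length bucket; 38-78 characters is what worked here.
bins = [0, 38, 60, 78, 120]
buckets = pd.cut(merged["title_length"], bins=bins)
print(merged.groupby(buckets)["ctr"].mean())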

4. Write Meta Description with a length between 140 and 160 characters

In May of last year, the length of meta descriptions on Google was shortened again, after the December 2017 update had extended it up to 290 characters. In other words, Google is still testing various lengths: if on desktop it displays up to 920 pixels (158 characters), on mobile you will see up to 120 characters in most cases.

Meta description length in 2019 according to blog.spotibo.com

This means that the correct length also depends on the percentage of mobile users currently accessing the website. Once again, we can ask the data what the preferred number of characters should be by looking at clusters C2 and C3. Here we can immediately see that the winning length is between 140 and 160 characters (highest CTR = bigger size of the shapes).

Scatter Plot #4 – CTR vs Meta Description Length


What’s next?

These are really the first steps towards a future where SEOs and marketers have instant access to insights provided by machine learning that can drive a stronger and sustainable growth of web traffic without requiring a massive amount of time in sifting through spreadsheets and web metrics.

While it took a few weeks to set up the initial environment, test the right combination of features and share this blog post with you, anyone can now process hundreds of thousands of combinations in just a few minutes! This is also the beauty of using a tool like Orange: after the initial setup, it requires no coding skills.

We will continue to improve the methodology while working for our VIP clients, validating the results of this type of analysis and eventually improving our product to bring these results to an increasing number of people (all the clients of our semantic technology).

Keep following us and drop me a line to learn more about AI for SEO!

10 Artificial Intelligence Software for SEO

Artificial Intelligence is all around us. From Siri to Alexa, to Google Home, it’s consuming the age we live in. We have found ourselves relying on a voice in a device to help us with the simplest of tasks. Luckily, content marketers can utilize this advanced technology to assist with search engine optimization techniques.

WordLift has mastered the art of Semantic AI, and we are excited to see this process grow beyond just our company. All over the web, companies are utilizing AI to cut down the time and effort needed from SEO specialists, at the click of a button.

We have delved into the top 10 Artificial Intelligence Search Engine Optimization software tools, showing you exactly what makes each unique from the others. Jim Yu, the CEO and founder of Bright Edge, recently released an article in which he divided these SEO tools into three categories:

  • insight,
  • automation and
  • personalization.

We have broken these tools into the corresponding categories to help you understand how you can integrate them into your SEO workflow.


Insight Tools

Bright Edge

Bright Edge is a platform that contains several modules to help content marketers with optimizing their content. The software includes: DataCube, Hyperlocal, Intent Signal, Keyword reporting, Page reporting, Content recommendations, Share of voice, Site reporting and Story Builder.

The most unique feature is their Hyperlocal add-in, which allows users to map out keywords in a specific region, either a country or a city. Bright Edge’s Content Recommendations give you the opportunity to read through precise suggestions for each page, personalizing each page on your site according to what that specific page contains.

The platform provides a unique way to view how various SEO changes impact the brand. Story Builder combines data from several parts of the website to create aesthetic tables and charts, making the data easier to decipher.

MarketBrew

This software is unique in how quickly it distributes information to the consumer. MarketBrew provides each company with step-by-step on-site training, as well as a breezy plan to implement the program. The software prides itself on its search engine modeling, producing information in only an hour and a half.

Their process involves coding a base search engine model and then adjusting it to fit your target search engine (they claim they can accommodate any search engine). Their machine learns the exact algorithms of the search engine you want to target. The tool provides the user with a precise description of what distinguishes the first result from the second, such as the HTML content or even the META description. This cuts down the time a user spends manually analyzing the inner workings of the results.

MarketBrew also conveniently provides the user with exact ways to resolve ranking issues, which can then be tested again within hours. Overall, this software provides a great visual explanation as well as step-by-step ways to swiftly and resourcefully improve your site.

Can I Rank?

Can I Rank gathers information from various Search Engine Optimization websites, then takes the extra step of elaborating with suggestions. Their artificial intelligence method provides the user with data that leads them in the right direction to boost their content, backed by more than 200,000 websites.

Can I Rank offers a keyword difficulty score that lets the user judge which exact keyword will work for their specific website. The analysis is all done by a machine-learning system that focuses heavily on data as opposed to strict opinions. This tool is efficient for those who want data to back up why they should change, and it doesn’t leave you clueless about what to adjust.

Overall, Can I Rank lives up to its name by showing users exactly what sets them apart, and what they can do to improve.

Pave AI

Pave AI is an Artificial Intelligence based tool that turns Google Analytics data into helpful insights to improve your everyday marketing strategy. Its algorithm integrates marketing data from different platforms (such as Adwords, Facebook Ads & Twitter Ads) and analyzes them, making it easy to understand what works and what can be improved.

Pave AI offers personalized reports and data-driven recommendations, crossing your data with 16+ million possible combinations to identify the most relevant insights across all marketing channels. We recommend this tool if you wish to cut the time spent on analytics and you’re in need of a quick tailor-made solution to turn meaningful insights into effective marketing strategies.

Automation tools

WordLift

logo WordLift

WordLift offers Artificial Intelligence for three facets of websites on WordPress: editorial, business, and personal blogging. Receiving 4.7 out of 5 stars on WordPress itself, this plug-in analyzes your content into the categories of who, what, when, and where. WordLift processes your information by creating new entities, allowing you to accept them and select internal links for your content. The program also suggests open-license images, which reduces the time spent Googling for images.

WordLift offers many unique features, such as creating timelines for events, utilizing geomaps for locations, and building chord diagrams to show which topics relate to the others. WordLift, above all the other platforms here, adds the most distinctive effects to your WordPress website.

 

Dialogflow

Dialogflow is the basis of voice search on platforms such as Google Assistant, Alexa, Cortana and even Facebook Messenger. The program is supported by Google and runs on natural language processing.

Dialogflow uses named entity recognition to analyze the phrases spoken by the user and process the requests. The process includes providing the machine with several examples of how a particular question could be phrased. In each case, the user must define an “entity” to mark the most pertinent part of the statement. From there, the response is spoken and relayed back to the consumer.

Dialogflow provides a helpful guide on their website to help users with the beginning process of getting Alexa or Siri to do just what you want them to do!

Curious to see a use case? Meet Sir Jason Link, the first Google Action that integrates Dialogflow and WordLift AI.



 

Alli AI

Alli AI offers several AI-powered SEO features to improve and optimize your website content strategies. The tool provides the user with an easy and powerful way to increase traffic, build quality backlinks and scale business outreach.

Alli AI uses machine learning technology to simplify the SEO process through an all-in-one software package tailored to each client and wrapped in a pretty nice UI. The process includes planning your SEO strategy, finding backlinks, and getting code and content optimizations, in addition to tracking your traffic progress.

Furthermore, Alli AI boasts of having created a human tool: it gives users the feeling of actually dealing with a person and not a machine.

Albert

Albert is an Artificial Intelligence powered software designed to manage your digital marketing campaigns and maintain a constant level of optimization in order to reach your business goals.

The software provides an out-and-out self-learning digital marketing ally designed to take care of every aspect of digital campaigns. Its features include autonomous targeting, media buying, cross-channel execution, analytics and insights.

Albert is the perfect match for those who usually spend a lot of time on digital campaign optimization and are looking for a powerful tool to better allocate budget between channels. Albert advises on the times and places to engage more customers and provides constant growth of campaigns towards the set goal. The software also offers suitable recommendations for improvements that require human action, such as best-practice recommendations, budget shifts, creative performance, etc.

Personalization tools

Acrolinx

Acrolinx is a game changer for those in the content marketing and advertising sector: the thought process drastically changes when it comes to optimizing search results. Developed at the German Research Center for Artificial Intelligence, Acrolinx works with 30 tools across the web, such as Microsoft Word or Google Docs, giving you much flexibility in how you promote your content. However, Acrolinx only supports English, German, French, Swedish, Chinese and Japanese.

The software defines its evaluation technique with a “scorecard.” It makes sure to ask what type of voice you are trying to achieve, in order to make accurate suggestions for you. Acrolinx works alongside Salesforce.com, WordPress, Drupal, Adobe Marketing Cloud, and many more. The company provides an efficient guide to make sure that you are creating good content.

OneSpot

This software is unique from the others in that it focuses mainly on the consumer journey, with its patented “content sequencing” technology. OneSpot generates personalized content after viewing a website user’s history on the internet. The company structures itself into three segments: OneSpot OnSite, OneSpot InBox, and OneSpot ReAct. Each facet of the company focuses specifically on that medium.

Through all of these, OneSpot creates a unique “content interest profile” for each user who visits your site. This profile allows the software user to create a deeper connection with consumers and better target new visitors. OneSpot gives users a great way to expand a relationship with consumers through multiple mediums.

Follow us at WordLift for more insights on SEO, or sign up for a free trial and get the full AI SEO experience.

World Summit AI

11-12 October 2017, Amsterdam

The World Summit AI is the first industry-organized tech summit for the entire applied AI ecosystem. The summit will take place on 11 and 12 October at the Gashouder in Westerpark, Amsterdam.

More than 150 speakers will be there to share with a crowded audience the most interesting findings and challenges of AI. There will be key people from the most interesting companies in this industry, such as IBM Watson, Facebook, Intel, Google, Apple, Netflix, Alibaba, Uber and many others. International organizations such as the UN, NASA and UNICEF, along with top universities and research institutes, will also be there.

All these people are literally shaping the future of AI with their experiments, products, and tests.

How A.I. is disrupting web writing according to FREEYORK’s founder

What if artificial intelligence was the nurturing humus that the publishing industry and blogs need to bloom again? What if the future of blogging was in the virtual hands of an army of machines that can work together with professional writers to build and spread knowledge? This is the story of Samur Isma, founder and publisher of the online design magazine FREEYORK, which publishes 25-30 articles a week employing just two editors. How do they do it? Let’s take a closer look to understand Samur’s visionary model.

A.I. is a mindset.

FREEYORK - Illustration

Eclectic and creative, Samur sits halfway between tech and design, with a strong entrepreneurial mindset. After starting his career as a freelance graphic designer, he studied computer science at the Eastern Mediterranean University in Mersin and then at the Technical University of Wroclaw, where he graduated in computer science and marketing. In 2009 he founded FREEYORK, and nowadays he divides his mind and his time between his daytime job at IBM as a project manager, the management of his editorial project FREEYORK, and the organization of the Startup Weekend in Wroclaw. And he still manages to have some fun!

Samur Isma with part of FREEYORK‘s team: Samur is holding the camera; the girl on the right (covering her face) is an editor; the one on the left contributes to FREEYORK; the guy in sunglasses is FREEYORK‘s business strategy advisor and consultant; and the tall guy on the left helps FREEYORK with graphic design.

FREEYORK: the editorial project

Born as a community-driven platform, FREEYORK aims to spread the works and stories of up-and-coming artists.

Previously, designers and other members of the community used to submit their artworks and stories, reaching a wide audience of design lovers. Until a few weeks ago, all the work was done by two editors plus Samur: just three people covering photography, design, illustration, street art, architecture, fashion, and food. Now, content submission is available again for website members, through a newly rolled-out system.

FREEYORK is an eye wide open on all kinds of contemporary visual art, publishing a huge amount of inspiring content that helps artists and studios get known, and design lovers find new artists and creative ideas from all around the world.

FREEYORK's Homepage

Unleashing the power of A.I.

FREEYORK’s editorial team has a secret weapon to stay ahead of their competition, and that secret weapon is A.I.

Together with Samur and his small editorial team, there is a kind of cyber-team composed of A.I. tools whose activities are now part of the magazine’s editorial workflow. Day by day, A.I. is helping the human team do a better job of content writing, editing, and organization.

“Our current workflow involves the usage of three A.I. tools: an A.I. that writes the content of an article, another A.I. that analyses it for grammar mistakes and replaces words that don’t fit the context, and WordLift,” explains Samur. “The first step is to collect some materials on a topic. After finding a few sources, we give them to the A.I. to rewrite. The second step is to analyze what the first A.I. wrote, fix grammar mistakes, and replace those words that don’t make any sense. Finally, we let WordLift annotate the article and think of a catchy headline. A.I., unfortunately, is not good at this yet!”

So, basically, there is an A.I. working at each stage of the editorial process: writing, editing, and organizing. Oh, wait! What about human editors? What will they do in the future of the web? Here is what Samur has to say about this:

“It scares a lot of people for their future, especially those who’re working with numbers. I think that writers who use a lot of statistics and numerical data have a bigger chance of being replaced by A.I. As a perfect example, we can take The Associated Press, which is using A.I. to write Minor League Baseball articles. That must be an alarming sign for sports writers. As for FREEYORK, I’m hoping to find a perfect solution that will combine writing and editing in one tool and, on top of that, read the text and think of a catchy headline. But nothing will ever substitute a well-written writer’s opinion on a subject. In the future, I’m hoping to form a brilliant team of editors that will write long posts expressing their opinion on an artist’s work, exhibitions, installations, and so on. Who doesn’t appreciate a well-written article?”

Put this way, A.I. is more an opportunity than a threat, both for writers and publishers. State-of-the-art A.I. is ready to free journalists and writers from boring and repetitive tasks in their work day. What do you do when A.I. is quicker and cheaper than humans at writing news and analysis? As a writer, you can focus on critique, opinion, and storytelling: that’s human stuff, and no machine can do it better than professionals.
FREEYORK's entity - New York

See New York from FREEYORK’s point of view. Isn’t it wonderful?

As a small publisher competing with bigger players, Samur is working on a new business model which will rely less on display advertising.

“I wanted to make WordPress more intelligent, and that’s exactly what WordLift does,” states Samur. “At first, when I introduced WordLift to the team, they were skeptical about it and said ‘Why do we need this? Tags are doing the same job with less effort anyway,’ but I kept on pushing because there is a huge potential in this.”

Samur is not ready to share the details of his new – A.I. powered – business model, but I’m sure we’ll come back to it to see where this adventure is going to land. 🤖 Meanwhile, he is seeing WordLift’s effect on his key metrics:

“We approached WordLift while experiencing a decrease in our organic traffic. After a few months using it, our organic traffic reached and exceeded previous figures and it is still growing at a stable rate.”

Our brainy CEO, Andrea Volpini, is analyzing Samur’s data to better understand the impact of our plugin on FREEYORK: I promise we’ll come back soon with more information and insights about it!

Update: the results of Andrea’s analysis have been presented at SEMANTiCS 2017 in Amsterdam. 🎉

 
