The vision of organizing content and creating – out of millions of web pages – a Giant Global Graph was groundbreaking. When Web inventor Sir Tim Berners-Lee blogged about it back in 2007, it was clear that something was happening. Giant Global Graph (GGG) was a term he introduced to clarify how a web of data was emerging from the web of documents.

He writes:

«So the Net and the Web may both be shaped as something mathematicians call a Graph, but they are at different levels. The Net links computers, the Web links documents. 

Now, people are making another mental move. There is realisation now, “It’s not the documents, it is the things they are about which are important“. Obvious, really.»

Later he continues:

«Then, when I book a flight it is the flight that interests me. Not the flight page on the travel site, or the flight page on the airline site, but the URI (issued by the airlines) of the flight itself. That’s what I will bookmark. And whichever device I use to look up the bookmark, phone or office wall, it will access a situation-appropriate view of an integration of everything I know about that flight from different sources. The task of booking and taking the flight will involve many interactions. And all throughout them, that task and the flight will be primary things in my awareness, the websites involved will be secondary things, and the network and the devices tertiary.»

I have been following this path for the last ten years. I have actively played a role in applied research, evaluating the impact of these technologies and understanding how knowledge extraction, NLP and semantic technologies (now also called applied AI) could improve content management systems, publishing workflows, and content findability.

After these intense two days at SEMANTiCS 2017, the 13th European Conference on Semantic Systems in Amsterdam, I can finally see this whole vision becoming a reality. Knowledge graphs are not just crucial for the improvement of various machine learning and cognitive computing tasks; they are at the core of leading-edge organizations like Electronic Arts, where they serve as complex content models to compete in today’s digital world.

Before incorporating WordLift as a startup, we spent the last five years harnessing the complexity of these technologies, and I am proud now to hear C-level managers, top-notch consultants and even academics recognize WordLift as a first mover in digital marketing automation for cleverly using the entire stack of semantic technologies.

There is a broad universe of computing challenges now of interest to the semantic web community, and once again large enterprises and institutions are making significant investments to move from legacy databases to linked data infrastructures: imagine 100+ years of research documents managed and produced by the IET (The Institution of Engineering and Technology) becoming a giant graph, or scientific publishers the size of Springer Nature, with its annual turnover of EUR 1.5 billion, moving to semantic graph databases. Yet Semantic SEO is still in its infancy in this industry, and true five-star linked data publishing for websites (without astronomical budgets) is really only possible with WordLift.

The recent uptake of our product also means that we can finally experiment with these technologies by iterating on all kinds of enhancements and by measuring their immediate impact on a wide range of different websites.

In the Freeyork.org case that I presented at SEMANTiCS 2017, we had the unique opportunity to see how enriched articles performed against non-enriched articles, not only in terms of page views and sessions but also in terms of engagement metrics like average time spent on the page, session duration and number of pages visited per session. The results we measured are impressive, and they are not only important for the happy users of our service: they are paving the way for a completely new generation of AI-driven SEO tools powered by semantic technologies that combine knowledge extraction with high-quality graphs to help editors focus on their stories and let machines find the perfect audiences for them.
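To make "enriched" concrete: in practice, enrichment typically means annotating an article with schema.org structured data (usually as JSON-LD) that links the page to the entities it is about, so search engines see the things rather than just the document. The sketch below is only an illustration of that idea, not WordLift's actual output; the function name, the author name and the entity URI are all hypothetical.

```python
import json

def build_article_jsonld(headline, author_name, entity_uris):
    """Build a minimal schema.org Article JSON-LD snippet that links
    the article to the entities (as URIs) it is about."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author_name},
        # The 'about' property ties the document to the *things* it
        # discusses, echoing the Giant Global Graph idea.
        "about": [{"@type": "Thing", "@id": uri} for uri in entity_uris],
    }

# Hypothetical example values for illustration only.
markup = build_article_jsonld(
    "Knowledge Graphs at SEMANTiCS 2017",
    "Jane Doe",
    ["http://dbpedia.org/resource/Knowledge_graph"],
)
print(json.dumps(markup, indent=2))
```

Embedded in a page inside a `<script type="application/ld+json">` tag, markup of this shape is what lets the "flight itself" (or here, the topic itself) become the bookmarkable, machine-readable thing Berners-Lee described.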

WordLift for SEMANTiCS 2017

The key findings from the freeyork.org use case.

In this sense, @RamiaEl, the editor-in-chief of @Tharawatmag, wrote a few days ago what is probably one of the best reviews of our plugin.


Looking ahead, the challenges we need to face with WordLift, and within the emerging market of automated SEO, are really twofold:

  • building the business infrastructure around the technology to help us scale (Aaron Bradley and Eamonn Glass from Electronic Arts have been very clear in this regard – Simplify, Scale and Standardise)
  • improving the quality of the data that we use to structure content and the quality of the data that we generate and publish. As more people begin to use semantics to create intelligent content, the leading edge is going to be the quality of the data. Machine learning is a key player here, but I still haven’t seen many solutions where it has been effectively applied to data curation, cleaning, and interlinking.

cyberandy at SEMANTiCS 2017

I will probably blog more about the conference in the next few days, and I am sure that all the ideas and the experiments that I have discussed, planned and evaluated in these two days are going to help inform the way AI-powered SEO will evolve in the next few years.

Networking with like-minded people, visionaries and researchers from all over the world (along with cycling at full speed in stormy weather) is absolutely a great way to spend my time and to keep on improving our product. 😉


Are you ready for the new SEO?
Start using WordLift today!