Affiliate marketing continues to boom as an industry. Made possible by the rapid advance of digital technology, the simple practice of distributing customized links has become a reliable source of income for those capable of reaching relevant audiences — people deemed ‘influencers’ with opinions on products and services likely to be mirrored by their followers.
But ever since Twitter first got mainstream attention, we’ve seen a move away from in-depth reviews and roundups and towards brief and formulaic social media posts. Aspiring celebrities litter their accounts with #ad posts offering nothing more than generic endorsements with mandated hold-the-product-and-smile photos.
With all the clutter, an impartial observer might well conclude that there’s way too much affiliate marketing ‘content’ already. They’d be wrong, though. In fact, the affiliate marketing industry needs content more than ever. Why? Let’s get into it.
The demand for content isn’t going down
The digital content landscape is much like an insatiable deep-sea behemoth — no matter how much it consumes on any given day, it’s always in need of fresh, varied material shortly after. And though bland influencer posts get a lot of attention (and presumably do get results), they’re like bite-sized snacks, light and inconsequential. No one browses the glossy photos of a celebrity Instagram account and burns out on high-quality content.
Indeed, the average internet user is going to consume a broad range of content types. They might browse social media channels on their phone while commuting, read long-form articles throughout their working days, then watch YouTube videos after getting home (with newer generations favoring video). Each type of content they consume provides a fresh affiliate marketing opportunity, and consuming too much of one type isn’t going to sour them on others.
And new technology is going to keep bringing new content types. We’ve already seen the world of podcasting become absolutely enormous (largely supported by affiliate marketing, notably), and it’s possible that VR content will be next. Since any kind of content can be supported through affiliate links, there’s a lot of space waiting to be filled.
People respect transparent sponsorships
Consider the ongoing struggles of digital advertising models. Tired of ads that affect their online experience and emboldened by access to ad-blocking tools, internet users have quickly lost their willingness to put up with invasive advertising methods. Even as programmatic technology squeezes ever-higher levels of efficiency from PPC, the industry suffers at the hands of user reluctance and marketing saturation.
Affiliate marketing, though, can be done seamlessly without detracting from the content or engaging in any rhetorical shenanigans. And it doesn’t even require any pretense. It’s entirely possible to have a strong and productive seller/marketer arrangement without hiding anything from the prospective buyers — in fact, being entirely brazen can be very effective because people like being approached with honesty.
When an influencer produces a high-quality video series openly sponsored by a particular brand, it makes both parties look good. The brand earns plaudits for financing good content and the influencer gets to show off improved production values. Provided the content is good enough, followers won’t care about the promotional nature — and they’ll be more likely to want to pointedly click on an affiliate link to support the brand (as opposed to doing so unknowingly).
Social proof is enormously powerful
With every day that passes, the internet gives us more e-commerce opportunities and more product information. No matter what you’re looking to buy, you’ll be able to find countless models, versions, configurations, and prices, with every business you encounter eager to claim that only their product is worth your time — ignore all other contenders.
Since we can’t reach out to touch items through the digital realm, we are required to judge for ourselves whether any given proposition is really worth our time, and it’s hard to do that when we face so many similar options. That’s why we rely so heavily on social proof. We need people whose opinions we trust to give us some guidance and help us figure out which products are worth our money and which brands are worth our time.
While social proof has always been important (we are social animals, of course), it was less so when the internet was newer and people were inclined to give sites the benefit of the doubt.
Following numerous high-profile cases of user data being leaked, and a general push towards higher security standards through things like HTTPS, users are on high alert, and not inclined to take unnecessary chances. If you can establish yourself as an expert in your field, people will absolutely listen to what you have to say.
Discerning buyers are increasingly thorough
We’ve established that internet users are a lot more cautious than ever before when it comes to the companies they trust with their data or their money, but this isn’t purely a result of the aforementioned data leaks — it’s also a generational thing. Younger generations have reached maturity with the internet available to them, and feel perfectly comfortable engaging in large amounts of online research before making big decisions.
Someone from an older generation might go into a large store, ask the assistant which camera they should buy, and then go with that option — someone younger would be far more likely to take an in-depth look at the features and search for a comprehensive breakdown to read. And since tastes vary, they might look at various different pieces of content before finding one coming from their kind of perspective.
Combine the average buyer’s desire for thorough analysis with their eagerness to find an influencer operating on their wavelength and you get an affiliate marketing world that always has room for good content from fresh faces.
The viable marketing pool keeps growing
Affiliate programs are far more common and well-rounded than ever before. The cost-effective nature of the model has been consistently demonstrated, and since detailed analytics make it easy to tell where a page visitor came from, the range of companies supporting affiliate marketing out of the gate continues to expand.
Note that the end result of an affiliate marketing arrangement needn’t be someone buying a product — it could be someone using a service, or downloading a file, or visiting a page. Through call-tracking software and the establishment of sophisticated analytics goals, you can place a monetary value on almost any action, online or offline. And where there’s value, there’s an opportunity for affiliate marketing.
To get the ball rolling, try throwing together some niches on a whim. Here are some quick tips:
Start with terms like “best”, “top” and “roundup” — they’re clear markers for affiliate reviews because they immediately get to the point.
Think of a subject that you can usefully comment on and add a product or service associated with it (e.g. “Best motorbikes” or “Top cycling gloves”).
To find a query with less competition, add on some additional terms that you can optimize for. Try terms related to purpose (“Best scarves for jogging”), location (“Top headphones in Chicago, Illinois”) or pricing (“Budget tax software roundup”).
Once you find something without too much competition for rankings, start looking into affiliate schemes for those products — if you can’t find anything, contact the seller directly to see if you can arrange something manually.
(Note: Be careful to choose something with a fairly static range. For instance, “Bluetooth speakers in New York” should return a product set that changes infrequently, while “Houston businesses for sale” won’t be so useful: listings are one-time deals and could sell while you’re writing the content. Content works well for real estate if you’re close to the deals, but not if you’re just doing affiliate work. If you’re going to create high-quality affiliate content, make sure it can continue to deliver value on an ongoing basis.)
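The tips above can be sketched as a quick brainstorming script. This is a minimal Python example for generating candidate queries; the seed lists (and the `niche_queries` helper name) are illustrative assumptions, so substitute terms from your own niche:

```python
from itertools import product

# Illustrative seed lists -- substitute subjects you can usefully comment on.
markers = ["Best", "Top"]
subjects = ["cycling gloves", "scarves for jogging", "budget tax software"]
qualifiers = ["", "in Chicago, Illinois", "roundup"]

def niche_queries(markers, subjects, qualifiers):
    """Combine affiliate-review markers, subjects, and optional qualifiers."""
    queries = []
    for marker, subject, qualifier in product(markers, subjects, qualifiers):
        # strip() tidies up the trailing space left by an empty qualifier
        queries.append(f"{marker} {subject} {qualifier}".strip())
    return queries

for query in niche_queries(markers, subjects, qualifiers):
    print(query)
```

From the printed list, keep the combinations that read naturally, then check each one for ranking competition before hunting down an affiliate scheme.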
As you can see, there’s a remarkable amount of uncovered ground in the affiliate marketing world. In fact, there are so many different searches carried out every day that the idea of the affiliate marketing world being totally saturated is ludicrous. You may not be able to grab the low-hanging fruit at this point, but if you diversify your affiliate work, you’ll still reap the benefits.
Wrapping up, the affiliate marketing world needs content more than ever before for the following reasons (and possibly more):
No matter how much content is created, people always want more.
Sponsorships are readily accepted today.
Social proof is only getting more powerful.
In-depth research needs new perspectives.
More things can be marketed than ever before.
If you’re just getting started in the affiliate marketing world, or you’ve been trying it for a while, don’t get discouraged by the apparent saturation of basic Instagram influencer posts. That isn’t the only kind of viable content — you can reach your audience elsewhere, and if you make your content good enough, the results will amaze you.
Patrick Foster contributes to Ecommerce Tips — an industry-leading ecommerce blog dedicated to sharing business and entrepreneurial insights from the sector. Check out the latest news on Twitter @myecommercetips.
Did you know that over 3.5 billion searches take place on Google every day? This simply means that to get a piece of that traffic and boost conversions, you need to appear on the first page of Google. And to do so, you need an SEO strategy. Link prospecting can certainly help you identify relevant opportunities for your website.
When implemented properly, SEO can double your site’s visibility in the SERPs, drive more traffic to it, help you address the right customers, boost their conversions and, above all, give you a chance to build a solid brand name.
After all, it’s 2018, and no one trusts businesses that are not online. In other words, SEO has become an obligatory investment for any business that wants to stay relevant.
Now, you don’t have to be a seasoned marketer to know that guest blogging is one of the most significant SEO practices. It’s a powerful way to build links and is basically synonymous with doing off-site SEO.
However, many digital marketing experts claim that this technique is dead. Just remember Google’s Matt Cutts, who claimed in 2014 that “guest blogging has gotten too spammy”.
However, his judgment could have been wrong. Maybe guest blogging is still alive and kicking. You just need to know how to implement it properly.
So, what is the idea behind Link Prospecting?
Finding quality guest blogging opportunities may seem simple at the beginning. You run a couple of Google searches and make a list of content-rich sites in your niche, where you can publish your guest articles.
But this sounds too good to be true. When you take a closer look at your list, you will understand the challenge you’re facing: not all the sites on it are worth connecting with. Whether the problem is a bad content strategy or low PA or DA, once you spot a poor-quality blog, you should run away screaming.
So, you need to do a more complex, advanced analysis and separate the wheat from the chaff. This is what link prospecting is about – finding quality and relevant sites in your niche that will give your SEO efforts an actual boost.
Why is Finding Quality Link Building Opportunities Important?
The idea behind writing awesome content and publishing it on quality sites is earning quality backlinks. Your backlink portfolio is the decisive factor for Google when assessing your site’s value. If it notices that there are numerous highly authoritative and quality links pointing back to your domain, it will consider it relevant and boost its rankings in the SERPs.
Generating exceptional backlinks can also boost your overall domain authority, expand your target audience, prove your expertise, and help you establish a recognizable brand. This is also an opportunity to build relationships with the influencers in your niche and boost your exposure. Namely, once they see that the top players in your industry share or link to your posts organically, your target audience will trust you more.
How to Know which Sites are Valuable Enough?
To make sure you find the right prospects, you first need to set your objectives clearly. For instance, if you want to boost your authority in your niche via guest posting, you need to become a regular contributor on all major sites in that industry – so, just make an actionable list and go!
On the other hand, if you just want to earn some organic and high-quality links, guest posting will be much simpler for you. Of course, you will have a much longer list of prospects to connect with and publish your work on. All you have to do is check the site’s DA, see if they have published guest posts before, and reach out to them.
Once you select the right targets, you need to see who their target audience is and what their niche is. You should also check traffic to see whether people actually visit their site, and pay attention to their backlink portfolio, the quality of their articles, and engagement metrics like the number of shares, likes, and comments. These are all key performance indicators that tell you whether the site is worth your attention.
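To make that vetting repeatable across a long prospect list, those indicators can be folded into a single rough score. The weights, caps, and the `score_prospect` helper below are illustrative assumptions rather than an industry-standard formula:

```python
# A minimal sketch of turning prospect KPIs into one 0-100 score.
# Weights and thresholds are illustrative assumptions, not standards.

def score_prospect(domain_authority, monthly_traffic, engagement_rate, accepts_guest_posts):
    """Combine key indicators into a rough 0-100 score for a link prospect."""
    if not accepts_guest_posts:
        return 0  # no point pitching a site that never publishes outside work
    score = 0.0
    score += min(domain_authority, 100) * 0.5          # DA contributes up to 50 points
    score += min(monthly_traffic / 10_000, 1.0) * 30   # traffic contributes up to 30 points
    score += min(engagement_rate, 1.0) * 20            # shares/likes/comments, up to 20 points
    return round(score, 1)

# Example: a DA-60 blog with 25k monthly visits and healthy engagement.
print(score_prospect(60, 25_000, 0.4, True))  # prints 68.0
```

Under these assumed weights, a DA-60 blog with solid traffic and engagement lands comfortably in pitch-worthy territory, while a site that never accepts guest posts drops straight to zero.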
Finding Quality Link Prospects
Once you have set your goals and understood which metrics to track and what sort of sites you should be looking for, you can start your search. Here are a few of the most effective link prospecting ideas to keep in mind:
Automate your link prospecting efforts using link building tools. These tools will analyze and select only quality link building opportunities for you, give you invaluable data about your prospects, and show their contact emails, helping you find the right sites and connect with them much faster.
Conduct competitor analysis to monitor and replicate their most effective link building strategies.
Take the time to produce original images and include them in your piece – that can become an effective SEO strategy to bring traffic back to your site!
Look for influencers to boost your authority. To do so effectively, you can use Twitter search or its advanced options or simply use a link prospecting tool.
Back to Us
Link prospecting is an immensely important part of building valuable backlinks. It helps you publish your content on quality sites that will really bring value to your SEO. Most importantly, it helps you improve your visibility, expand your target audience, and position yourself as authoritative. And, these are just some of a myriad of practices you may use to find relevant link building opportunities.
Emma Miller is a digital marketer and blogger from Sydney. After getting a marketing degree, she started working with Australian startups on business and marketing development. Emma writes for many relevant, industry-related online publications, works as an Executive Editor at Bizzmark blog, and guest lectures at Melbourne University. She is interested in marketing, startups and the latest business trends.
As search engines move toward voice search, adoption of mobile personal assistants is growing at a fast rate. While that transition is already happening, there is another interesting phenomenon to notice: the SERP has changed substantially in the last couple of years. As Google rolls out new features that appear above the fold (featured snippets, knowledge panels and filter bubbles), those features give us a glimpse of what voice search might look like.
In this article, we’ll focus mainly on the knowledge panel, why it is critical and how you can get it too.
The Knowledge Panel: Google’s above-the-fold real estate worth billions
The knowledge panel is a feature that Google uses to provide quick and reliable information about brands (be they personal or company brands). For instance, in the example above you can see that for the query “who’s Gennaro Cuofano” in the US search results, Google gives both a featured snippet (on the left) and a knowledge panel (on the right).
While the featured snippet’s aim is to provide a practical answer fast, the knowledge panel’s aim is to provide a reliable answer (coming from a more authoritative source) along with additional information about that brand. In many cases, the knowledge panel is also a “commercial feature” that allows brands to monetize their products. For instance, you can see how my knowledge panel used to point toward books on Amazon that could be purchased.
This space on the SERP, which I like to call “above the fold”, has become the most important asset on the web. While Google’s first page remains an objective for most businesses, it is also true that, as we move toward voice search, traffic will increasingly be eaten by those features that appear on the search results pages, even before you get to the first position.
How does Google create knowledge panels? And how do you get one?
Knowledge panel: the key ingredient is Google’s knowledge vault
When people search for a business on Google, they may see information about that business in a box that appears to the right of their search results. The information in that box, called the knowledge panel, can help customers discover and contact your business.
In most cases, you’ll notice two main kinds of knowledge panels:
While brand panels provide general information about a person’s or company’s brand, local panels offer location-specific information instead. In the example above, you can see how the local panel provides the address, hours and phone number of the local business. In short, it is a touch point that Google provides between the user and the local business.
Where does Google get the information from the knowledge panel? Google itself specifies that “Knowledge panels are powered by information in the Knowledge Graph.”
What is a knowledge graph?
Back in 2012, Google started to build a “massive Semantics Index” of the web called the knowledge graph. In short, a knowledge graph is a logical way to organize information on the web. While in the past Google could not rely on the direct meaning of words on a web page, the knowledge graph allows the search engine to collect information on the web and organize it around simple logical statements, called triples (e.g. “I am Gennaro” and “Gennaro knows Jason”).
Those triples are combined according to logical relationships, and those relationships are built on top of a vocabulary called Schema.org. In short, Schema.org defines the possible relationships available among things on the web.
Thus, two people that are defined in Schema as entity type “person” can be associated via a property called “knows.” That is how we might make it clear to Google that the two people know each other.
From those relationships among things (which can be people, organizations, events or any other thing on the web) a knowledge graph is born:
Example of a knowledge graph shaped on a web page from FourWeekMBA that answers the query “Who’s Gennaro Cuofano”
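Relationships like the one above are typically published as Schema.org markup in JSON-LD. Here is a small Python sketch of the “Gennaro knows Jason” triple; the `@id` URLs are hypothetical placeholders:

```python
import json

# Build the "Gennaro knows Jason" triple as Schema.org JSON-LD.
# The @id URLs below are hypothetical placeholders.

def person(name, entity_id, knows=None):
    """Return a schema.org Person node, optionally linked via the `knows` property."""
    node = {"@type": "Person", "@id": entity_id, "name": name}
    if knows:
        node["knows"] = {"@id": knows}
    return node

graph = {
    "@context": "https://schema.org",
    "@graph": [
        person("Gennaro", "https://example.com/#gennaro", knows="https://example.com/#jason"),
        person("Jason", "https://example.com/#jason"),
    ],
}
print(json.dumps(graph, indent=2))
```

Embedding the printed JSON-LD in a page’s `<script type="application/ld+json">` tag is the usual way to make such a triple visible to crawlers.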
Where does Google get the information it compiles into its knowledge graph? As pointed out on Go Fish Digital, the sources are many and varied.
In short, there isn’t a single source from where Google mines the information to include in its knowledge panels.
Is a knowledge panel worth your time and effort?
A knowledge panel isn’t only the avenue toward voice search but also an organic traffic hack. It’s interesting to see how a good chunk of Wikipedia’s traffic comes from Google’s knowledge panels. Of course, Wikipedia is a trusted and authoritative website. One consequence of knowledge panels, though, might be the so-called no-click searches (those that don’t produce a click-through from the search results pages).
Yet, as of now, a knowledge panel is an excellent opportunity to gain qualified traffic from search and get ready for voice search.
As search evolves toward AEO (answer engine optimization), the way you need to look at content structuring also changes. As Google’s SERP adds features such as featured snippets and knowledge panels, those features end up capturing a good part of the traffic. Thus, as a company, person or business, you need to understand how to gain traction via knowledge panels. The key is Google’s knowledge graph, which leverages Google’s knowledge vault.
It is your turn now to start experimenting to get your knowledge panel!
DBpedia has served as a Unified Access Platform for the data in Wikipedia for over a decade. During that time, DBpedia has established many of the best practices for publishing data on the web. In fact, it is the project that hosted a knowledge graph even before Google coined the term. For the past 10 years, its maintainers have been “extracting and refining useful information from Wikipedia”, and they are experts in that field. However, there was always a motivation to extend this with other data and allow users unified access. The community, the board, and the DBpedia Association felt an urge to innovate the project. They have been re-envisioning DBpedia’s strategy in a vital discussion over the past two years, resulting in a new mission statement: “global and unified access to knowledge graphs”.
Last September, during the SEMANTiCS Conference in Vienna, Andrea Volpini and David Riccitelli had a very interesting meeting with Dr. Ing. Sebastian Hellmann from the University of Leipzig, who sits on the board of DBpedia. The main topic of that meeting was the DBpedia Databus since we at WordLift are participating as early adopters. It is a great opportunity to add links from DBpedia to our knowledge graph. On that occasion, Andrea asked Sebastian Hellmann to participate in an interview, and he kindly accepted the call. These are the questions we asked him.
Sebastian Hellmann is head of the “Knowledge Integration and Language Technologies (KILT)” Competence Center at InfAI. He also is the executive director and board member of the non-profit DBpedia Association. Additionally, he is a senior member of the “Agile Knowledge Engineering and Semantic Web” AKSW research center, focusing on semantic technology research – often in combination with other areas such as machine learning, databases, and natural language processing. Sebastian is a contributor to various open-source projects and communities such as DBpedia, NLP2RDF, DL-Learner and OWLG, and has been involved in numerous EU research projects.
How are DBpedia and the Databus planning to transform linked data into a networked data economy?
We have published data regularly and already achieved a high level of connectivity in the data network. Now, we plan a hub where everybody uploads data. In that hub, useful operations like versioning, cleaning, transformation, mapping, linking, merging and hosting are done automatically, and the results are then dispersed again through a decentralized network to consumers and applications. Our mission incorporates two major innovations that will have an impact on the data economy.
Providing global access
That mission follows the agreement of the community to include their data sources into the unified access as well as any other source. DBpedia has always accepted contributions in an ad-hoc manner, and now we have established a clear process for outside contributions.
Incorporating “knowledge graphs” into the unified access
That means we will reach out to create an access platform not only to Wikipedia (DBpedia Core) but also to Wikidata, and then to all other knowledge graphs and databases that are available.
The result will be a network of data sources that focuses on the discovery of data and also tackles the heterogeneity (or, in Big Data terms, the variety) of data.
What is DBpedia Databus?
The DBpedia Databus is part of a larger strategy following the mission to provide “Global and Unified Access to knowledge”. The DBpedia Databus is a decentralized data publication, integration, and subscription platform.
Publication: Free tools enable you to create your own Databus stop on your web space with standards-compliant metadata and clear provenance (private key signature).
Integration: DBpedia will aggregate the metadata and index all entities and connect them to clusters.
Subscription: Metadata about releases are subscribable via RSS and SPARQL. Entities are connected to Global DBpedia Identifiers and are discoverable via HTML, Linked Data, SPARQL, DBpedia releases and services.
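As a rough sketch of the subscription side, here is how a client might compose a SPARQL request for release metadata. The endpoint URL and the `dataid` vocabulary prefix are assumptions for illustration; consult the Databus documentation for the actual endpoint and ontology:

```python
from urllib.parse import urlencode

# Hypothetical SPARQL endpoint -- check the Databus docs for the real one.
ENDPOINT = "https://databus.dbpedia.org/repo/sparql"

# Illustrative query: list datasets and their versions via the DataID vocabulary.
query = """
PREFIX dataid: <http://dataid.dbpedia.org/ns/core#>
SELECT ?dataset ?version WHERE {
  ?dataset a dataid:Dataset ;
           dataid:version ?version .
} LIMIT 10
"""

def sparql_request_url(endpoint, query):
    """Build the GET URL a SPARQL client would fetch for this query."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return endpoint + "?" + urlencode(params)

print(sparql_request_url(ENDPOINT, query))
```

Any HTTP client (or an RSS reader, for the feed-based route) can then poll such a URL to pick up new releases as they are published.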
DBpedia is a giant graph and the result of an amazing community effort – how is the work being organized these days?
DBpedia’s community has two orthogonal, but synergetic motivations:
Build a public information infrastructure for greater societal value and access to knowledge;
Business development around this infrastructure to drive growth and quality of data and services in the network.
The main motivation is to finally be able to discover and use data easily. Therefore, we are switching to the Databus platform. The DBpedia Core releases (extractions from Wikidata and Wikipedia) will be just one of many datasets published via the Databus platform in the future. One of the many innovations here is that DBpedia Core releases will become more frequent and more reliable. Any data provider can benefit from the experience we gained in the last decade by publishing data the way DBpedia does and connecting better to users.
We’re planning to give our WordLift users the option to join the DBpedia Databus. What are the main benefits of doing so?
The new infrastructure allows third parties to publish data in the same way as DBpedia does. As a data provider, you can submit your data to DBpedia and DBpedia will build an entity index over your data. The main benefit of this index is that your data becomes discoverable. DBpedia acts as a transparent middle-layer. Users can query DBpedia and create a collection of entities they are interested in. For these sets, we will provide links to your data, so that users can access them at the source.
For data providers our new system has three clear-cut benefits:
Their data is advertised and receives more attention and traffic redirects;
Once indexed, DBpedia will be able to send linking updates to data providers, therefore aiding in data integration;
The links to the data will disseminate in the data network and generate network-wide integration and backlinks.
Publishing data with us means connecting and comparing your data to the network. In the end, DBpedia is the only database you need to connect to in order to get global and unified access to knowledge graphs.
DBpedia and Wikidata both publish entities based on Wikipedia, and both use RDF and the semantic web stack. They fulfill quite different tasks, though. Can you tell us more about how DBpedia differs from Wikidata and how the two will co-evolve in the near future?
As a knowledge engineer, I have learned a lot by analyzing the data acquisition processes of Wikidata. In the beginning, the DBpedia community was quite enthusiastic about submitting DBpedia’s data back to Wikimedia via Wikidata. After trying for several years, we found out that it is not easy to contribute data in bulk directly to Wikidata, as the processes are volunteer-driven and allow only small-scale edits or bots. Only a small percentage of Freebase’s data was ingested. Wikidata follows a collect-and-copy approach, which ultimately inspired the sync-and-compare approach of the Databus.
Data quality and curation follow the Law of Diminishing Returns in a very unforgiving curve. In my opinion, Wikidata will struggle with this in the future. Doubling the volunteer manpower will improve quantity and quality of data by dwindling, marginal percentages. My fellow DBpedians and I have always been working with other people’s data and we have consulted hundreds of organizations in small and large projects. The main conclusion here is that we are all sitting in the same boat with the same problem. The Databus allows every organization to act as a node in the data network (Wikidata is also one node thereof). By improving the accessibility of data, we open the door to fight the law of diminishing returns. Commercial data providers can sell their data and increase quality with income; public data curators can sync, reuse and compare data and collaborate on the same data across organizations and effectively pool manpower.
In this article, we analyze how we can optimize the content on our website to gain premium real estate in Google SERP, by providing hyper-relevant information for the Google Knowledge Graph.
What is Google Knowledge Graph?
The Knowledge Graph is a vast database launched by Google on May 16, 2012 designed to provide more useful and relevant results to searches using a semantic-search technique. Find out more about the Google Knowledge Graph.
Every day here at WordLift, we spend a great amount of time talking with experts in the digital marketing world and experimenting with new ways to stand out on Google and Bing by getting better at organizing knowledge.
To help websites improve their SEO, our secret weapon is to create a knowledge graph that is openly accessible to crawlers and linked from the content of the website using structured data markup.
What is a Knowledge Graph?
A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge.
(Lisa Ehrlinger and Wolfram Wöß – University of Linz in Austria)
The term knowledge graph has been frequently used in research and business, in close association with Semantic Web technologies, linked data, web-scale data analytics, and cloud computing. At SEMANTiCS, a few years ago, a research paper titled “Towards a Definition of Knowledge Graphs” by the Institute for Application Oriented Knowledge Processing of the University of Linz was presented to propose a definition of the knowledge graph that focuses on data modeling and reasoning.
A knowledge panel is information about a business, a person or a topic shown in a box that appears to the right of Google search results. The information in the box is powered by the Google Knowledge Graph and provides all sorts of relevant facts about an entity.
Knowledge panels are a great way to gain visibility in Google search results and an entry point also for voice search in most cases. There are mainly two types of panels:
Local Panels that display information about a business that has an open Google My Business account
Brand (or personal) Panels that display information about an organization with a certain degree of authority. In our case having an article in Wikipedia was helpful to gain this panel.
While it is much harder to influence Google in creating a branded or personal knowledge panel we have succeeded in several cases with both organizations and persons that did not have a presence in Wikipedia.
Have a look below at the knowledge graph panel of TheNextWeb and all the information that it provides to online users.
Why does creating your own Knowledge Graph improve SEO?
Imagine the knowledge graph behind your website as the scaffolding that gives crawlers and bots access to your content in a smarter and more efficient way. Much like Google uses its graph as the engine to power up its search results, a graph that describes the content of a website helps machines understand what that content is really about.
Whether it is a featured snippet showing on Google’s SERP or an app providing an answer by voice, like Cortana, Alexa or the Google Assistant, behind the scenes everything depends on the data that connects articles and facts in a machine-friendly form.
This is why having your own knowledge graph makes your content easier to find and more accessible. Let’s put this into practice and ask the Google Assistant something like “what is Semantic SEO“. You will get as an answer a snippet of content taken from this same blog.
What is Semantic SEO on the Google Assistant
The more metadata we make available to semantic search engines like the one behind the Google Assistant, the easier it gets for these machines to understand the relevance of our content in relation to a specific intent. Let me give you another example of content findability in the new world of personal assistant search optimization, where a knowledge graph comes into play.
In the example below, the query to trigger is “tell me something about WordLift”. In this specific example, the Google Assistant proposes to the user the invocation of a Google Action called Sir Jason Link that can match this request.
The Google Action – Sir Jason Link – has been created using the graph data behind this website, much like in the previous example.
The Google Assistant has analyzed the content of the Google Action (think of a Google Action as an app for the Google Assistant, or the equivalent of a skill for Amazon Alexa) and has presumably seen that it matches the content on this website. The assistant is therefore suggesting that users who might not know Sir Jason Link invoke it when asking about our product.
Creating a linked graph with the metadata of a website offers SEO value beyond featured snippets, voice search, and personal assistant search optimization.
In today’s digital world, publishers and readers are overwhelmed with information, and it gets increasingly complicated to discover the content we really want. Semantic technologies like WordLift do the magic, helping publishers create better content while guiding readers to the content they want.
In SEO terms, articles enriched with semantic information improve their findability by making information extraction more efficient. Concepts mentioned in an article are annotated and linked with extensive knowledge bases (such as DBpedia, Wikidata, GeoNames, and the Google Knowledge Graph) to provide search engines with key indications of why a specific piece of content is relevant for a given search intent.
More importantly, all the information is structured in a graph. This means that a search engine can process everything it needs to know about an article much like we do when looking at the nutrition facts label on a pack of spaghetti: all the relevant information is condensed in a label that is easy to read and organized in a standard way.
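To make this concrete, here is a minimal sketch in Python that assembles schema.org Article markup as JSON-LD, the machine-readable “label” described above. The headline, publisher name, and `sameAs` link are hypothetical placeholders, not taken from this site's actual markup.

```python
import json

# A minimal schema.org "Article" described as JSON-LD.
# All names and URLs below are illustrative placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What is Semantic SEO?",
    "publisher": {"@type": "Organization", "name": "Example Publisher"},
    "about": [
        # Linking a mentioned concept to an external knowledge base
        # (here DBpedia) tells search engines unambiguously what
        # the article is about.
        {
            "@type": "Thing",
            "name": "Semantic search",
            "sameAs": "https://dbpedia.org/resource/Semantic_search",
        }
    ],
}

# Serialize the markup, ready to embed in a <script type="application/ld+json"> tag.
print(json.dumps(article, indent=2))
```

The `about` property with a `sameAs` link to a shared knowledge base is what turns an isolated page into a node connected to the wider graph.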
How does Google use the Knowledge Graph to answer your questions?
In this webinar organized by Jason Barnard, I had the opportunity to discuss with Bill Slawski and Cindy Krum how Google is using the Knowledge Graph in its AE algorithm and how things really work. If you want to dig into the topic of Knowledge Graphs and SEO, watch it now!
OK, so how is WordLift’s Graph getting smarter?
Just like kids starting to learn a language begin with the names of the things they see around them, the vocabulary that editors could create with WordLift was primarily made of concepts and names.
Just like other major knowledge bases such as DBpedia and Wikidata, WordLift's Knowledge Graph has been built around concepts (or entities) and the relationships between these concepts.
As we progress and the use cases we deal with become more mature, WordLift's graph is getting smarter, supporting new business cases and helping us improve the findability of online content.
Our main goal with WordLift Snowball was really to improve the linked data graph in order to:
Be able to compute and analyze the relationships between entities and the articles being annotated. As a side effect, we will have many more links from the graph to the articles, which facilitates the indexing of those articles,
Improve how smart agents (or crawlers) access information about entities using the semantic technology languages RDF and SPARQL. This means, for instance, that we can handle event-related questions like:
What are the next events in Palo Alto?
What are the upcoming talks with Gennaro Cuofano?
How much does it cost to attend the Meetup on AI & ML for WordPress?
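Questions like these map naturally onto pattern matching over RDF triples. The Python sketch below builds a toy in-memory graph of (subject, predicate, object) triples and answers two of the questions above; in the real system the data lives in an RDF store queried with SPARQL, and every event, name, and price here is made up for illustration.

```python
# A toy in-memory "graph" of (subject, predicate, object) triples.
# In production this data sits in an RDF store and is queried with
# SPARQL; the identifiers and values below are illustrative only.
triples = [
    ("ev:meetup-ai-ml", "rdf:type", "schema:Event"),
    ("ev:meetup-ai-ml", "schema:name", "Meetup on AI & ML for WordPress"),
    ("ev:meetup-ai-ml", "schema:location", "Palo Alto"),
    ("ev:meetup-ai-ml", "schema:performer", "Gennaro Cuofano"),
    ("ev:meetup-ai-ml", "schema:price", "free"),
]

def subjects_where(pred, obj):
    """Return the subjects of all triples matching a predicate and object,
    mimicking a single SPARQL triple pattern like ?s <pred> <obj>."""
    return [s for s, p, o in triples if p == pred and o == obj]

def value_of(subj, pred):
    """Return the first object for a given subject and predicate."""
    return next(o for s, p, o in triples if s == subj and p == pred)

# "What are the upcoming talks with Gennaro Cuofano?"
talks = subjects_where("schema:performer", "Gennaro Cuofano")

# "How much does it cost to attend the Meetup on AI & ML for WordPress?"
event = subjects_where("schema:name", "Meetup on AI & ML for WordPress")[0]
price = value_of(event, "schema:price")
```

Each question becomes a small graph pattern, which is exactly why publishing entities and their relationships as linked data makes this kind of conversational lookup possible.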
Check out below a sample dialogue that Sir Jason Link (the Google Action powered by WordLift) can support thanks to this new update.
Asking Sir Jason Link about Gennaro’s upcoming event.