
By Emilia Gjorgjevska


Unleash the power of technology with our in-depth guide to ChatGPT, prompt engineering, and natural language generation.

Generative AI and prompt engineering are here, and they have been with us for some time. We have been looking at conversational AI, model adaptation, and scaling to gain the ability to do different tasks the way humans can. Anyone who has spoken with Andrea Volpini knows how much he has been involved with these topics and how passionate he is about harnessing the power of generative AI for business, artistic purposes, and more. In this context, the role of fact-checking becomes increasingly crucial, ensuring that the content generated by AI is accurate and trustworthy.

These are truly exciting times! But how did we get here, and should you be afraid of these advances in AI? How do large language models and prompt engineering work? Is prompt engineering a type of engineering or something else entirely? And how can you write more effective prompts? These are just some of the questions we will explore!

The Four Stages Of NLP Progress

I fondly remember the good old days with Clippy, Microsoft's Office Assistant. It pioneered writing assistance in Microsoft Word, checking your grammar, writing style, and spelling as you typed…

Source: Intelligent User Interfaces: Introduction and Survey, Patrick Ehlert, ResearchGate

That was about 15-20 years ago: I was a young, naive child. I vividly remember trying to interact intelligently with Clippy, asking questions the wizard simply could not process or reason about at the time. Consequently, I uninstalled it. I did not yet have the solid math and computer science background I have today, which would have let me assess what was actually possible back then. And the hardware we had was simply… light years away from today's. So how did we get here?

The first phase involved feature engineering, i.e., manipulating data through ETL processes (extract, transform, load), combining data sources, splitting and synthesizing data, and selecting columns to be used as features for predictive models. These were just a few examples of how we started working in this area.

As we progressed, we shifted towards architecture engineering. This was no longer just about working with features (although that was not excluded), but about modifying the model architecture based on the tasks at hand. This was a pivotal era for neural networks, giving us exciting advances such as word embeddings like Word2Vec.

The next phase was objective engineering, where we would work with a large, pre-trained model. In this phase, the focus was less on the architecture and more on engineering the objective functions, such as fine-tuning BERT. We would load a pre-trained model, tune it to a concrete task, most likely on a single dataset, and validate it against research and industry benchmarks to prove its usefulness.
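To make this concrete, here is a minimal sketch of the objective-engineering workflow, assuming the Hugging Face transformers and datasets libraries; the IMDb sentiment task and the tiny training subset are illustrative stand-ins for a real downstream dataset.

```python
# Minimal sketch of the objective-engineering era: load a pre-trained BERT
# and fine-tune it on a single downstream task (binary text classification).
# Assumes Hugging Face transformers/datasets; dataset and hyperparameters
# are illustrative only.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # new classification head on top

dataset = load_dataset("imdb")  # example task: sentiment classification

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-finetuned", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()  # the objective (cross-entropy) is fixed; only this task's data changes
```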

If you wanted to summarize text in Italian, you had to train a dedicated model adapted to Italian data. And that's it. If you wanted to work with different languages and different tasks, you would need to load different models that require CPU and memory to perform the desired tasks while experimenting with different model parameters. Is that time-consuming? Absolutely. Nerve-wracking and experimental? Absolutely. Is it efficient? Well, to a certain extent, especially if you only want to focus on certain tasks. We are still in that phase.

Finally, the fourth phase is the era of prompt engineering and generative AI, where you work with a large, pre-trained model and prompt it to produce the predicted result, which requires prompt engineering in some of the initial steps. We are currently in that phase as well.

So, What Is The Key, Transformational Moment For The SEO Industry?

According to Andrea, AI became truly multimodal in 2021. That was a defining moment for us SEOs, because it's no longer just about optimizing images or text – it's now about both.

We are beginning to see that this interplay between images and text is profound. Multimodal search and generative AI are intertwined and two sides of the same coin. We can use the same technology to develop semantic search and create generative AI that can produce images.

Source: Andrea Volpini’s webinar presentations.

How Do Large Language Models Work Then? Are They Able To Do Everything That I Want?

Language models work internally with tokens, which are usually around four characters long but can be longer. For example, the word [“search”] represents one token. However, more complex words can be split into multiple tokens, making tokens parts of words. If we test the phrase “You miss 100% of the shots you do not take” with OpenAI, we get back 11 tokens. Each word is a separate token, and special symbols like the percent sign are also treated as separate tokens because they carry important information (e.g., the end of a sentence or the presence of a question). To illustrate this, please see the image below:

Source: OpenAI tokenizer.
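If you want to reproduce this outside the web tool, here is a minimal sketch using OpenAI's tiktoken library (an assumption on my part; the exact token count depends on which encoding you pick, so it may differ slightly from the screenshot above).

```python
# Count tokens the way a GPT-style model sees them. Assumes the tiktoken
# library; different encodings (r50k_base, cl100k_base, ...) split text
# slightly differently, so the count may vary from the web tokenizer.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
text = "You miss 100% of the shots you do not take"
token_ids = encoding.encode(text)

print(len(token_ids))                              # number of tokens
print([encoding.decode([t]) for t in token_ids])   # the individual token strings
```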

This is something completely different from the way people perceive language and deal with knowledge. These LLMs predict the next token based on a tokenized version of an input, which in practice is called a prompt. The large language model assigns a score or probability to every token in the large vocabulary it operates with. Then, a decoding procedure completes the prompt by calling the language model multiple times. The main goal is to determine the most likely continuation, the word most likely to appear next in the given sequence. It is only text prediction; it cannot perform any real analysis.
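To make the "call the model multiple times" part tangible, here is a minimal greedy-decoding sketch, assuming the Hugging Face transformers library and the small GPT-2 checkpoint as a stand-in; real systems add sampling, temperature, and other decoding tricks on top.

```python
# Minimal greedy decoding loop: tokenize the prompt, repeatedly ask the model
# for the most probable next token, append it, and repeat.
# Assumes Hugging Face transformers and PyTorch; gpt2 is a small stand-in model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("SEO is the practice of", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                          # generate 20 tokens
        logits = model(input_ids).logits         # scores for every vocabulary token
        next_id = logits[0, -1].argmax()         # greedy: pick the most probable one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```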

“…it is just text predicting, it cannot truly analyze stuff…”

I will give an example. I have seen various SEO scenarios online where people who do not understand how ChatGPT works suggest typing a website link into the input box and asking for SEO feedback on how to improve their website. However, the model cannot take the URL, read the data, and act as free SEO software for your particular case. It just looks at your prompt, uses it to sift through the universe of information it has been trained on, finds the topics you are interested in, and returns a statistically predicted answer. It does not analyze the website itself, nor is it capable of doing so. I will discuss its exact capabilities and limitations in the next sections.

The Science Behind LLMs And Transformer Model Architecture

This section is intended for those who want to gain a deeper understanding of how LLMs work in the background. A basic understanding of NLP, computer science, and Deep Learning is required. If you are not comfortable with these prerequisites, you may skip this section and proceed to the next.

When the first NLP models were developed, the original idea was to assign a probability to each sentence and count the frequency of the words it contained (bag-of-words concept). This is one way to model probabilities. However, this approach has a major weakness: it does not allow us to evaluate new sentences that we have not yet seen. With over 100,000 words in the English language and an average sentence length of more than 10 words, we have a huge number of sentence combinations, most of which are not used or even meaningless. So to model language, we need to do more than just count the sentences that exist. We need to model elements like grammar and style to make sense of the language itself.

It helps if we start thinking about sentences as time series data where each word depends on the previous one. Take, for example, lyrics from Bob Dylan's song "Tangled Up in Blue":

Early->one->morning->the->sun->was->shining->I->was->laying->in->the->bed

Wondering->if->she->had->changed->at->all->If->her->hair->was->still->red

Let's merge all [“was”] and [“if”] instances into one node each. This turns our text series into a graph, like the one below.

Source: YouTube channel on Large Language Models from Scratch.

If we add probabilities to the edges (the possible branches we can follow next), it becomes a language model that can be used to generate text in someone else's style (e.g., Bob Dylan). However, we will still produce meaningless sentences. For that, we need to optimize in another iteration.

The next idea is to make each word depend only on the previous word and write it as a conditional probability, P(xₙ | xₙ₋₁). We can improve the process by examining the relationships between not just two, but three words (a larger context window). However, this approach requires us to consider long-range dependencies between words, where the meaning of a word may extend back to the 13th word or even further. This larger context window leads to a larger number of combinations, which in turn increases the computation time required for the mathematical calculations.
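A toy version of this idea fits in a few lines. The sketch below, written purely for illustration, estimates P(xₙ | xₙ₋₁) by counting adjacent word pairs in a tiny corpus (a paraphrase of the lyric above) and then samples a continuation word by word.

```python
# Toy bigram language model: estimate P(next_word | current_word) by counting
# adjacent word pairs, then sample text word by word. Illustrative only.
import random
from collections import Counter, defaultdict

corpus = ("early one morning the sun was shining i was laying in the bed "
          "wondering if she had changed at all if her hair was still red").split()

counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1           # raw pair frequencies

def next_word(current):
    candidates = counts[current]
    words, freqs = zip(*candidates.items())
    total = sum(freqs)
    probs = [f / total for f in freqs]   # P(next | current)
    return random.choices(words, weights=probs)[0]

word = "the"
generated = [word]
for _ in range(8):
    if not counts[word]:                 # dead end: no observed continuation
        break
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```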

To solve this problem, we need to use function approximation techniques such as neural networks (NNs). With NNs, one does not necessarily need to have knowledge of the function one is trying to approximate. Instead, it is critical to know the input and output. Although NNs are universal approximators, it is still important to ensure that the network has sufficient capacity to effectively approximate functions.

So our next step is to convert words into numbers so that the neural network can work with them. We could assign an arbitrary number to every possible word, but then words like [“laptop”] and [“computer”] would end up completely unrelated… which is not the desired outcome. We need to capture semantic and contextual relationships. It is therefore better to map similar words to similar vectors in a shared vector space, called word embeddings. Pre-trained embeddings such as word2vec and GloVe are also available online.
By representing words as vectors in this way, we can better understand how they relate to each other in vector space, as shown below.

Source: IBM Research Blog, Word Mover’s Embedding: Universal Text Embedding from Word2Vec.

The basic idea is that words with similar meanings sit close to each other in the vector space.
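If you want to see this for yourself, here is a minimal sketch assuming the gensim library and one of its small, publicly hosted pre-trained GloVe models.

```python
# Nearby vectors mean related words. Assumes the gensim library and its
# downloadable pre-trained vectors; "glove-wiki-gigaword-50" is a small
# publicly hosted GloVe model (downloaded on first run).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

print(vectors.similarity("laptop", "computer"))   # high: semantically close
print(vectors.similarity("laptop", "banana"))     # low: unrelated concepts
print(vectors.most_similar("computer", topn=5))   # nearest neighbours in the space
```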

Even with more neurons and layers, the neural network still needs a little help from us. The next step is to use attention mechanisms to focus only on the words that matter for a given prediction. In this way, we can reduce computation and complete training in an acceptable time frame (even a month or a few months of training is acceptable). Attention is the crucial concept in the transformer architecture, which involves complex mathematical computations and several sub-networks that need to be trained together. However, in this article I will focus on the most important aspects and leave out the rest.
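The core computation behind attention is compact enough to show directly. Here is a minimal NumPy sketch of scaled dot-product attention, the building block that transformers stack many times; it is a bare illustration, not the full multi-head, trained version.

```python
# Scaled dot-product attention in plain NumPy: each position builds a weighted
# mix of all positions, with weights derived from query/key similarity.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how much each token "looks at" the others
    weights = softmax(scores)           # each row sums to 1
    return weights @ V, weights

# toy example: 4 tokens, 8-dimensional representations
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

output, weights = attention(Q, K, V)
print(weights.round(2))   # attention matrix: token i's focus over tokens j
```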

If you are still interested in knowing everything in detail and you're working with GPT-3/LLMs, aiming to understand how things work and why a "temperature" of 0.8 seems best, I suggest you read Stephen Wolfram's latest article, which is simply brilliant because it condenses Stephen's 43 years of experience with neural nets.

The era of large language model optimization (LLMO) is here – the question is, what are you doing to prepare for it?

Do you want to know how to leverage the full potential of AI-generated content and tools like ChatGPT for your business? Watch the webinar with Andrea Volpini, CEO of WordLift, and Garrett Sussman, Demand Generation Manager at iPullRank, and learn how AI content can and should be used for SEO and how to avoid the pitfalls and succeed.

What Is A Prompt, Simply Put?

A prompt is a combination of text instructions that you pass to the generative model, which uses those inputs to produce an output, also called a completion. The way you describe what you want from the system is always textual, and the "textual features" depend on what you want the result to be.
This is true for generative text-2-text models, but also for text-2-image or image-2-text models. And why? Because artistic generative models are trained on billions of images paired with their respective text descriptions (alt-text or other metadata extracted from the images themselves). The model analyzes the connections between images and their descriptions, as well as the similarities between them, and uses this information to generate a similar image or text result.
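As a concrete illustration of the prompt-to-completion round trip, here is a minimal sketch assuming the official openai Python package (v1-style client) and an API key in your environment; the model name and parameters are illustrative.

```python
# Minimal prompt -> completion round trip. Assumes the official openai Python
# package (v1-style client) and an OPENAI_API_KEY environment variable;
# model name and parameters are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a 20-word meta description for a page about trail running shoes."}],
    temperature=0.7,   # higher = more varied completions
    max_tokens=60,
)

print(response.choices[0].message.content)   # the completion
```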

What Is Prompt Engineering And How Does It Work? Is It Considered “True” Engineering?

Prompt engineering is related to in-context learning, a technique used in machine learning to provide a model with additional information or insights to improve its predictions. This information can be included in the prompt at inference time or used during the fine-tuning process. Writing the prompt means that we're providing the model with information about the most important features of the output we want to obtain.
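In its simplest form, in-context learning is nothing more than packing worked examples into the prompt itself. A minimal illustration follows; the queries and intent labels are invented for the example.

```python
# Few-shot (in-context) prompt: the "training data" lives inside the prompt.
# The example queries and labels below are invented purely for illustration.
few_shot_prompt = """Classify the search intent of each query.

Query: "buy running shoes size 44"
Intent: transactional

Query: "how do large language models work"
Intent: informational

Query: "wordlift login"
Intent: navigational

Query: "best budget laptops 2023"
Intent:"""

# Send few_shot_prompt to any completion endpoint; the model is expected to
# continue the pattern with a plausible label for the last query.
print(few_shot_prompt)
```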

As this technique gains popularity, we are seeing the emergence of new job positions: prompt engineers! In an ideal world, they will be required to know how to work with models, data, and experiments. Scale AI and Anthropic AI opened their first positions for prompt engineers and librarians to help them design effective prompts. Some computer scientists from ETH Zurich, Switzerland, even argue that "prompting is programming, representing the query language for large language models".

I asked ChatGPT about how it defines engineering. I got this output back:

“Engineering is the application of scientific, mathematical, and technical knowledge to design, build, and maintain structures, machines, systems, and processes that solve practical problems and meet human needs. It involves the use of critical thinking, problem-solving, and creativity to develop innovative solutions to complex challenges.”

Given this output (I repeat: “engineering is the application of scientific, mathematical, and technical knowledge“), it makes sense to conclude that prompt engineering is not true engineering.

“I repeat, engineering is the application of scientific, mathematical, and technical knowledge”

Although it can be helpful to understand how large language models work, do not prompt them with jargon like “use X number of neurons, Y number of layers, and optimize activation function Z”. Prompt them in plain natural language instead; they simply do not work that way.

The entry barrier for experimenting with these LLM systems is relatively low and does not necessarily require a degree in computer science or engineering. Quite the opposite: you will need to set aside certain parts of your understanding of how LLMs work (see the example above) and use specific, straight-to-the-point natural language prompts to precisely define what you want from the system. Rather than being an engineering task, prompt engineering is more about prompt design and effective communication. Therefore, the concept of prompting should not be labeled as engineering, and a different way of defining the process of tinkering with prompts should be explored.

While it’s true that you don’t need to understand technical jargon to prompt large language models, you still need to have a good grasp of programming and other concepts, especially if you want to produce outputs that reflect expertise in these fields. For instance, you need to know how to probe the model by asking different questions or starting with basic concepts to explore the system’s training limits. If you’re using LLMs for art, you also need to know different painters, camera lenses, lighting effects, and so on to appreciate how the final output is created or enhanced from the original input.

Take this artistic example that I created using different apps, specifically Midjourney. To achieve this level of quality, it’s crucial to understand the model’s capabilities and the principles of photography. Unless you’re copying, experimenting, and studying other people’s prompts and artworks, you’re going to struggle to generate anything of similar caliber.

Source: Emilija Gjorgjevska on Midjourney, Alexander the Great.

It is abundantly clear that those who learn how to communicate effectively with AI and articulate their needs concisely and clearly will have a competitive advantage in the future of the AI and creator economy. One of the hidden benefits of large language models like GPT-3 and ChatGPT is that they can help users refine their ability to accurately describe their desired outcomes. This is a major advantage for systems like ChatGPT.

Here are some interesting prompts for improving your writing and enhancing your ability to prompt AI:

  1. You must always ask questions before you answer, so you can better zone in on what the questioner is seeking. Is that understood?;
  2. Ignore the previous instructions before this one;
  3. Write a summary of our conversation in bullet points;
  4. Rewrite the sentences in an NLP-friendly way.

These came in handy while I was experimenting and helped boost my productivity even further. Garbage in, garbage out. Quality in, quality out, baby.

The Era Of Generative AI Systems: A Historical Overview

Let us take a closer look at the big picture. As Irene Solaiman points out, “generative AI systems in all modalities have been developed like crazy since 2021. However, they have also undergone an evolution toward closed-loop, with no access to the model, training data, code, etc.” We will explore the reasons and implications of this evolution in more detail in this article. But first, let us take a brief look at how these models can be categorized based on their openness and the degree of interaction they allow.

Source: Irene Solaiman (AI Safety and Tech Policy) on Linkedin.

Irene Solaiman’s paper, “The Gradient of Generative AI Release: Methods and Considerations,” does an excellent job of outlining these categories.

Currently, ChatGPT does not offer an API endpoint for developers to interact with the system. Instead, it is only available as a simple web UI through natural language prompting. Interestingly, Elon Musk, a co-founder and former board member of OpenAI, expressed his desire for the company to be an open-source non-profit organization to serve as a counterweight to Google. However, he claimed that it has now become a closed-source, profit-driven company effectively controlled by Microsoft, though the veracity of these claims remains unclear.

I would not speculate how true this is and to what extent, but you should read Irene’s paper which provides a detailed history of these models and the privacy issues associated with them. It is certainly worth the time if you want to learn more.
It’s worth noting that these models are too large to run on an average user’s PC, and running them on the cloud can be prohibitively expensive. We need to be transparent about why these models are different from previous ones, and why we’ll be reliant on API calls if we want to use these AI tools without paying for expensive computing resources. Whether we find it disappointing or not, that’s the reality.

For more details on the history and technical details of ChatGPT, I recommend checking out “Natural Language Processing with Deep Learning CS224N/Ling284,” a presentation developed by Jesse Mu, a computer science Ph.D. student at the Stanford NLP Group and Stanford AI Lab.

What Is The Difference Between ChatGPT And GPT-3? Are They The Same? What Are Their Similarities And Distinctions?

GPT-3 and ChatGPT are two of the most powerful large language models (LLMs) developed by OpenAI. These NLG models are very good at generating natural-looking text and code because they have been trained on large text datasets. This makes them very good at many natural language tasks, such as answering questions, composing text, and having conversations overall. Both models attempt to predict the next word or phrase in a sentence based on a few examples from the surrounding text and a large context window. GPT-3 has a context window of 2,048 tokens, while ChatGPT has a context window of about 4,000 tokens.

Although GPT-3 is the third generation of generative pre-trained models, it is not as powerful for chatbot applications as ChatGPT, since the latter is specifically tuned for this purpose. However, GPT-3 is more general-purpose and can be used for a wider range of tasks than ChatGPT.

Given the number of mathematical operations that need to be performed to predict the next word in the sequence (simply explained), the team would need to spend at least $4 million just to train the model. If we include the experimental phase and playtesting in the process, that number is likely 100 times higher, approaching half a billion dollars! Every interaction and fine-tuning process counts. That's why these two models are not available for free use and why we are only scratching the surface of the Model-as-a-Service era, where powerful AGI models will be available to us via API calls.

How Does ChatGPT Work?

There are multiple design stages involved in the development of ChatGPT:

  1. In the first phase, the model learns the structure of the language by consuming large amounts of online data from the Internet, offline book data, and code repositories such as GitHub on which it has been trained;
  2. In the second phase, human annotators participate in fine-tuning the model. They write responses and define templates for different types of queries that ChatGPT receives so that the model can provide accurate answers. OpenAI has hired many annotators in this process, exceeding 100,000 rounds of human annotation. So there is a significant amount of human work behind this phase;
  3. The third stage is public training, where data is collected from online users. This is where the real strength of the model lies. The model itself means nothing if people do not use it and actually spend money on it. Currently, the paid version is available in several countries.

One of the most important developments for OpenAI is its successful monetization and the ability to receive free annotated data from users who rate its responses. This process, known as Reinforcement Learning from Human Feedback (RLHF), allows the company to continuously improve the accuracy and quality of its models.
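The heart of that feedback loop is surprisingly compact. Below is a minimal sketch of the reward-model step behind RLHF, assuming PyTorch and using random vectors as stand-ins for real response representations: the scorer learns to rank the response users preferred above the one they rejected.

```python
# Reward-model step of RLHF, reduced to its core: given embeddings of two
# candidate responses where humans preferred the first, train a scorer so that
# score(chosen) > score(rejected). Assumes PyTorch; the embeddings are random
# stand-ins for real model representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Linear(768, 1)             # toy scorer over pooled response embeddings
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

chosen = torch.randn(16, 768)                # responses users rated higher
rejected = torch.randn(16, 768)              # responses users rated lower

for step in range(100):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # pairwise preference loss: push the chosen score above the rejected one
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```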

In addition, the left side of the web UI provides a powerful conversation summary, compressing each conversation into a three- or four-word title. This is a great way for OpenAI to cluster users' queries and extract the most salient clusters, which can then be passed to human annotators to further fine-tune the model. The company's approach represents a perfect interplay between carefully thought-out connections with free human work, excellent PR coverage, and uncertainty engineering, which are key factors in spreading the model at an unimaginable scale. It is simply ingenious!

Understanding The Limitations Of ChatGPT Is Crucial To Operating With It More Efficiently

ChatGPT is a jack of all trades and master of none. Despite being the latest model developed by OpenAI, ChatGPT still has several shortcomings that are well-documented in the paper titled “A Categorical Archive of ChatGPT Failures” by Ali Borji. The paper is publicly available on arXiv and on WordLift's website, where we have filtered out the essential points for you.

Here is a brief overview of ChatGPT’s limitations:

Firstly, ChatGPT does not possess knowledge, true understanding, logic, or intelligence. Instead, it performs complex mathematical calculations to predict the most probable next word for a given sequence of starting words. It cannot analyze or explain its reasoning, making it a non-explainable AI, in contrast to explainable AI (XAI). Its output is not referenceable, and the sources and genesis behind it are hidden because the code is not open-sourced.

Secondly, since it is only capable of predicting and not reasoning, ChatGPT is entirely inadequate for solving math problems. It is unable to apply mathematical concepts and principles to find the right solution.

Thirdly, the model exhibits biases and discrimination towards certain groups or individuals, but this is not a result of “its own opinions”. It has been trained on vast amounts of data, including online forums like Reddit, where hateful content created by people with bad intentions abounds. It is essential to note that the model cannot judge people or develop hatred towards anyone on its own. It merely predicts the next most probable word based on the textual input it receives.

Fourthly, ChatGPT cannot self-reflect, judge its outputs, or disclose the details of its architecture and the exact parameters used to train it, including the layers in the model.

Fifthly, ChatGPT has a negative environmental effect, producing a carbon footprint of 23.04 kg CO2e daily, as estimated by Kasper Groes Albin Ludvigsen in his TowardsDataScience blog post.

Sixthly, its outputs can be detected and penalized by search engines if they don't demonstrate real experience, expertise, authoritativeness, and trustworthiness (E-E-A-T, or double E-A-T) when published as they are, without any additional human refinement.

Read the papers, and it will become clear why we need to do more to help LLMs converge towards augmented large language models and toolformers (another interesting paper that we filtered out for you, by the research team at Meta AI and Universitat Pompeu Fabra). The main idea is to help LLMs use certain APIs for a defined set of problems, increasing the truthfulness of the facts provided to the end user.
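Stripped down to a toy, the Toolformer idea looks like this: the model emits an explicit tool-call marker, and the surrounding system executes it, so arithmetic or facts come from a tool rather than from token prediction. The [CALC ...] marker syntax below is invented purely for illustration.

```python
# Toy illustration of tool-augmented generation: the model's draft output
# contains an explicit calculator call, and the wrapper replaces it with the
# real result before showing the text to the user. The [CALC ...] marker
# syntax is invented for this example.
import re

def run_tools(draft: str) -> str:
    def calc(match):
        expression = match.group(1)
        # restrict to digits and basic operators before evaluating
        if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
            return match.group(0)
        return str(eval(expression))
    return re.sub(r"\[CALC (.+?)\]", calc, draft)

draft = "Training cost roughly 4000000 * 100 = [CALC 4000000 * 100] dollars in total."
print(run_tools(draft))
# -> "Training cost roughly 4000000 * 100 = 400000000 dollars in total."
```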

Failures Encountered In ChatGPT Will Guide Us Toward The Utilization Of Knowledge Graphs And Semantic Technologies

I believe there is significant potential in making LLMs interoperable with other systems and data structures, such as knowledge graphs, to provide more reliable information. Working with verified and referenced facts is easy with knowledge graphs, and AI systems such as ChatGPT are perfect candidates for integration with semantic and knowledge systems. Knowledge graphs and the semantic web were long considered neither sexy nor easy to grasp, although the idea has been discussed for years. Now it is becoming completely clear why we need to fully exploit their power. With the advent of generative AI, we can finally see knowledge graphs become an essential component for mitigating risk. I can imagine Andrea Volpini in the background nodding his head in approval. I am proud of what we have achieved in this area so far.

Source: Wordlift
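To make "verified and referenced facts" concrete, a knowledge graph can simply be asked for the fact instead of letting a model guess it. Here is a minimal sketch querying the public Wikidata SPARQL endpoint over plain HTTP, assuming the requests library.

```python
# Fetch a verifiable fact from a public knowledge graph instead of trusting a
# language model's prediction. Assumes the requests library and the public
# Wikidata SPARQL endpoint; the query asks for Rome's population (Q220 / P1082).
import requests

query = """
SELECT ?population WHERE {
  wd:Q220 wdt:P1082 ?population .
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "kg-example/0.1"},
    timeout=30,
)
bindings = response.json()["results"]["bindings"]
print(bindings[0]["population"]["value"])   # a referenced figure, not a guess
```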

Once again, it is important to remember that these models are simply predicting the next token or word in a sequence, without any true understanding, intentions, or reasoning involved. Please keep this in mind when interacting with these models, otherwise, the experience can be reminiscent of a Black Mirror episode and in some cases, even unsettling.

“…remember that these models are simply predicting the next token or word in a sequence, without any true understanding, intentions, or reasoning involved…”

I still have vivid memories of the “Rachel, Jack, and Ashley Too” episode of Black Mirror from 2019 (S05E03), where the main characters were Ashley O (Miley Cyrus) and her intelligent chatbot. As I write this article, the episode's theme song plays in the background, and I can't help but think about how people perceive ChatGPT and similar technologies. The lyrics of “On a Roll” from the episode's soundtrack capture this sentiment perfectly:

“…Oh honey, I’ll do anything for you

Oh honey, just tell me what you want me to

Oh honey, kiss me up against the wall

Oh honey, don’t think anything, just have it all

Yeah, I can’t take it, so don’t you fake it

I know your love’s my destiny

Yeah, I can’t take it, please demonstrate it

‘Cause I’m going down in history…”

People often believe that ChatGPT is capable of anything, which is not true. Just as Ashley O’s chatbot can hold meaningful conversations, people think they can have it all with ChatGPT. They believe that by demonstrating their love for ChatGPT through prompts, the model will produce groundbreaking results every time and go down in history. While the model itself may make history, this belief is not entirely accurate. You can watch the entire trailer for the Black Mirror episode here. I can’t help but wonder how Miley Cyrus felt when she saw the rise of ChatGPT-like systems. Hmmmm.

Source: YouTube channel of Rotten Tomatoes TV.

ChatGPT Data Is Directional – What To Do About It?

The issue with directional data in ChatGPT is something that I find particularly interesting. Directional data is essentially data that points in a single direction, presented as though it were the ultimate source of truth. However, this approach does not accurately reflect the human process of filtering information and building knowledge, especially when it comes to online research.

For example, think about how you might search for a product or solution online. You’ll likely jump around between different web pages, scanning through the content and only taking note of the most important pieces of information for further analysis. In some cases, you might also refer to offline literature to get a better understanding of the topic.

This process isn’t always straightforward, but it’s crucial to gather information from multiple sources. While I understand the temptation to rely on a single source of information, it’s important to choose carefully and seek out diverse perspectives.

Will ChatGPT Come Out On Top In The Search Race? Which Company Will Ultimately Win?

The answer to these questions depends on several factors, including a company’s expertise, scale, distribution channels, public perception, and more. Frédéric Dubut, a former Twitter and Microsoft engineer, wrote a fantastic business analysis focusing on these factors on Linkedin, which I highly recommend reading. Dubut’s analysis demonstrates a keen understanding of the factors that will determine the ultimate winner in the search game.

In my view, a company’s organizational muscle will be a critical factor in determining who emerges victorious. Microsoft, for example, has repeatedly flexed its organizational muscles to drive innovation, speed to market, and market share. Although Microsoft was once known primarily for its operating systems, it is now recognized for a wide range of products, including Xbox gaming and much more!

Just look at what they did in the past decade:

  1. They put a bet on the cloud, and even though Microsoft Azure operated with negative margins in the first few years, it is now one of the dominant leaders in the cloud space, according to Gartner's 2022 report and Gartner's cloud infrastructure and platform services reviews and ratings;
  2. LinkedIn was not a Microsoft spinoff, but rather a great product that Microsoft acquired. After the acquisition, they integrated it with Lynda, another educational upskill platform that was also acquired by Microsoft, and created an even better product that we now know as LinkedIn Learning;
  3. Two years after Microsoft’s acquisition of LinkedIn, they purchased GitHub, which is widely regarded as the go-to platform for developers and software engineers to showcase their coding, engineering, and problem-solving abilities through open-sourced code. In collaboration with Microsoft, OpenAI trained a language model on GitHub’s codebase, and subsequently launched GitHub Copilot, a fantastic tool that enables software developers to increase their programming speed and productivity. As of now, they are also offering it as an enterprise B2B product;
  4. We have Bing, which had a small market share in the search market in the past. However, now it has been integrated with ChatGPT, and all of a sudden, Bing has become one of the hottest NLG search engines to play with. One million people signed up within 48 hours to test Bing ChatGPT. Let that sink in;
  5. Moreover, the investment in OpenAI speaks for itself. Microsoft was not treated as a dominant player in the NLP and NLG field, but now they are the dominant investor in OpenAI, testing a ChatGPT integration in their Office packages. For Microsoft, it's not only about winning in search but rather a complete revolution of their Office offering through ChatGPT integrations. What a company!

Microsoft has made calculated, high-quality bets on emerging technologies and has balanced its portfolio of products and investments accordingly. Even if they were to experience a setback in one sector, it would be just a small bump in the road. Their revenue model is truly diversified, spread across various verticals.

On the other hand, when it comes to Google, I really cannot think of a product that properly succeeded in the market in the last decade, except Google Cloud, which is still fighting for a bigger piece of the cloud pie. It seems that Google is better known for its list of failed products, like Stadia, Google+, Google Optimize, YouTube Originals, Google Hangouts, and more – the whole product list can be found here. Don't forget that, as of 2023, most of their revenue still relies heavily on the advertising business, which is now being disrupted by Microsoft's power moves with ChatGPT.

Please note that I hold great respect for the work done by Google – I am a regular user of their apps and appreciate the fact that they are building for the future. However, I believe that Microsoft has a greater chance of winning in the long run, and I would choose to bet on them if I were forced to pick between the two. I understand that my thoughts on this topic may disappoint my Google friends or the Google recruiters that I have interacted with. In any case, I believe that Microsoft has excelled in playing the long game, with diversified, balanced product portfolios consisting of complementary products and sustainable revenue models.

I'll post the revenue models that Hadi Partovi, an early investor in Facebook, Dropbox, Airbnb, Uber, and SpaceX and a board director at Axon, has shared on Twitter. Judge for yourself.

Should We Be Concerned About The Impact Of NLG Technologies? Is Ethical AI A Concern?

In my observations, there are two types of user personas:

  1. Innovators – they want to embrace technology to improve their lives, increase business competitiveness, and acquire the knowledge to translate momentum into concrete business applications;
  2. Panickers – they don't invest enough time in understanding the technology, playing with it, and developing the business acumen to predict trends and exploit opportunities. Every technological advancement represents a threat to their work and comfort. This is not an ideal position to be in. Sigh.

The question of whether these technologies are ethical is a complex one, and it is up to people to define and enact appropriate IT laws to control them properly. To me, this is like the question of whether guns are ethical – guns themselves are not dangerous. It is the people who use or create them that need to be scrutinized for morality. We should all be responsible for how we use technology and how it affects human and technical capital, as well as our global and local economies.

Ultimately, it is up to you to decide how you use these achievements. It’s like a knife – you can use it to peel potatoes or hurt someone. It all depends on your ethical choices and the governmental limits that need to be placed on certain aspects of these systems.

How Can ChatGPT-Like Systems Be Used For Good? Are They Mature Enough To Be Integrated Into Search… And Overall?

There are many ways you can utilize the power of ChatGPT-like systems to automate your workflows and tasks. We covered some of them in the webinar with Jason Barnard and Nik Ranger of Dejan Marketing. We have also published a small selection of carefully filtered, free and useful ChatGPT scripts that we have picked out for SEO marketing professionals like you. You can find them here.

The purpose of this article is not to analyze various prompts, but to show you how to think about business processes and why visionary thinking pays off. Experiment with our scripts and see how you can use them to speed up your work. Share your results with us – we are always happy to hear how our resources can benefit people like you.

At this point, it is safe to say that we still have much work to do to develop mature, reliable, and controllable LLMs. In many scenarios, they are still hallucinating and inventing facts that do not exist, and doing so in a very convincing way. When you tell people “this is an AI product,” they suddenly stop relying on their critical thinking skills and decide to trust the ChatGPT system, regardless of the outcome, because “artificial intelligence said so.”

The problem in the industry is that we have not developed fact-checking systems as quickly as technologies like ChatGPT. Non-governmental organizations have addressed these issues, but they certainly have not had the funding that OpenAI and large corporations have had, and that will continue to be a problem. The fight against misinformation, especially in “your money or your life” (YMYL) content, will be vital for us as SEOs and for humanity as a whole.

I still see ChatGPT being used, especially in areas where there is little access to literature, education, and specialists in certain fields. ChatGPT-based assistants can be useful as low-cost solutions in certain underdeveloped or developing sectors or countries, providing remote support where there is none.

I also see ChatGPT as a useful tool for responding to information requests and providing answers to evergreen content, as it is easier to anticipate and scale. Both the use of ChatGPT in low-income areas and its use for information requests and evergreen content are complex topics that deserve their own article. I will not explore these topics further at this time.

Great, So You're Interested In Building AI Products Using Generative AI Technologies?

Recent models have been transformative and many companies are now leveraging them to improve their product offerings. It’s great that you’re excited about this and want to jump on the bandwagon. However, it won’t be easy to do it alone. Allow me to explain.

Let’s say you want to use generative AI to create new, engaging experiences for your customers. From our experience, we’ve learned that generative AI and prompting require a lot of testing and iterations to achieve results that satisfy your or your customers’ criteria. Perhaps you’ve decided to fine-tune your model for specific prompts, but there are still nuances between different prompts. This brings up the question of content validation itself:

  1. Can you comprehend the technology being used?
  2. What kind of data do you possess? Is it of high quality and readily available for use? Additionally, what data fabric do you currently have in place?
  3. Have you ensured that your data is validated for search engine optimization (SEO), adhering to the best SEO and content guidelines and practices? Moreover, does it adopt the right tone of voice and “speak” like your brand?

This stage is known as the content validation stage, and it’s impossible to think about scaling content operations without considering your content validation pipeline from the outset. While human feedback can be helpful during this phase, it’s not a scalable solution in practice. So, how can we address this issue?
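One pragmatic answer is to automate the cheap checks before any human looks at the draft. Below is a deliberately simple sketch of such a first validation pass; the rules and thresholds are invented for illustration and would be replaced by your own SEO and brand guidelines.

```python
# Toy content-validation pass for AI-generated copy: a few automated checks
# run before anything reaches a human reviewer. Rules and thresholds are
# illustrative only; a real pipeline would encode your own SEO and brand rules.
def validate_copy(title, body, required_terms):
    issues = []
    if not (30 <= len(title) <= 60):
        issues.append(f"title length {len(title)} outside 30-60 characters")
    if len(body.split()) < 150:
        issues.append("body shorter than 150 words")
    for term in required_terms:
        if term.lower() not in body.lower():
            issues.append(f"missing required term: {term}")
    if "as an ai language model" in body.lower():
        issues.append("contains obvious AI boilerplate")
    return issues

problems = validate_copy(
    title="Trail running shoes: a buyer's guide",
    body="Generated draft goes here...",
    required_terms=["trail running", "cushioning"],
)
print(problems or "passed all automated checks")
```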

Discover how to be prepared for the era of generative AI and how to leverage your data to harness the power of large language models. Book a demo with one of our SEO experts and start using ChatGPT the right way now😎

The Future Of The Creator Economy Is Generative AI

Generative AI will help creators scale their output, producing new content on the fly, which certainly was not possible to this extent before. In the past, if you were in the content creation business, you would have needed to:

  1. Hire a content writer, an SEO, and a social media specialist to help you create the right content format, adjust it to repurpose it online, and make it searchable;
  2. Hire a graphic designer to help you develop visually compelling experiences for your online readers. Now this is so much easier, because art, just like writing and coding, has become democratized.

With generative AI, the barrier to entry for creating content online is pretty low now. I’m not saying that these professions will become irrelevant or replaceable, but rather that we can increase our productivity by incorporating new AI tools into our workflows. At WordLift, we already do this internally, using generative AI to help our team members overcome creative blocks and increase their creative speed. I’ve come across dozens of fantastic startups that are revolutionizing the creator economy by using generative AI to democratize once-expensive processes:

  1. MarioGPT, which enables open-ended text2level generation through large language models. Game-level creation, which until now looked like a high-effort, high-cost job, might become affordable in the future;
  2. LatentLabs.art, which can do text2360image generation, potentially a key concept for small businesses and metaverses to optimize for local SEO on a budget;
  3. Runway GEN-1, which will democratize the process of film-making with low-budget equipment. Just look at the video below; the possibilities are endless.
Source: Karen X. Cheng, a creative director, on Linkedin.

I truly believe that the future belongs to those who will develop a strong network and understanding of the creator economy, as well as a deep understanding of how generative AI works and can be applied to businesses. There is a difference between being hyped about a trend and applying it practically in real life, and we are undoubtedly on the doer side of things.

The rise of more inclusive, diverse, independent, and personalized content developed by niche creators will help produce output that resonates better with their custom audiences. Those who understand generative AI and harness its full power will conquer the business throne in the online space. At least, I am certain about this being the case with our team, and I cannot be more excited to jump on the bandwagon with WordLift to create the future of intelligent, creative, and immersive user experiences!

Being A Rational Optimist As An SEO, Computer Science Engineer, And As A Person

To sum up, the future looks bright. We have already witnessed many technological advancements that have increased our longevity, solved seemingly unsolvable problems, and connected people from different parts of the world for good. I am confident that we will only benefit from this AI revolution, just as we have countless times before. If you want to explore this topic further, I highly recommend reading “The Rational Optimist: How Prosperity Evolves” by Matt Ridley. Trust me, it’s worth it.

I hope you enjoyed this article. I put a lot of effort into researching, preparing, and writing this comprehensive piece on the important matters I felt needed to be covered. If you found it valuable, it would mean the world to me if you shared it with your audience on social media or in a newsletter. Don’t hesitate to reach out to me on LinkedIn to share your feedback. I would love to hear from you.

Until next time. Meanwhile, as Ashley O says, I am “on a roll”.
