
By Andrea Volpini

2 years ago

Learn how to enlarge and enhance your images with Super-Resolution, and improve structured data using a state-of-the-art deep learning model.

In this blog post, we’ll talk about Super-Resolution for images, a technique that uses generative AI and deep learning to enlarge and enhance the images on our website and improve structured data markup.

I will introduce a Python script that lets you use TensorFlow to increase the resolution of any image (JPG or PNG format) for free. We will use a deep learning model called Enhanced Super-Resolution Generative Adversarial Network (ESRGAN for short), publicly available on TensorFlow Hub, a repository of pre-trained machine learning models that we can use with just a few lines of code.

👩‍🔬 Want to jump right to the Colab? It’s here!

AI Image Upscaling

Intuitively, to increase the resolution of an image, we would spread the pixels out and fill the holes by copying values from the closest pixels. This technique is called nearest-neighbor interpolation, but it doesn’t work well in most cases. Let’s review a practical example.
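To make the baseline concrete, here is a minimal sketch of nearest-neighbor upscaling using Pillow (the same library used later in the workflow). The tiny synthetic image stands in for a real photo.

```python
from PIL import Image

# A tiny synthetic image stands in for a real photo in this sketch.
low_res = Image.new("RGB", (2, 2), "white")
low_res.putpixel((0, 0), (255, 0, 0))

# Nearest-neighbor upscaling: every new pixel copies the value of the
# closest original pixel, which is fast but produces visible blockiness.
scale = 4
high_res = low_res.resize(
    (low_res.width * scale, low_res.height * scale), Image.NEAREST
)
```

Each original pixel simply becomes a 4×4 block of identical pixels, which is exactly the blockiness visible in the comparison below.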

[Figure: the original image alongside two enlarged details, one upscaled with nearest-neighbor interpolation and one with Super-Resolution]

As seen in the picture above, we start from a single low-resolution image (original) and upscale it to obtain a super-resolution image that preserves the original details.

This is done by leveraging the ability of the ESRGAN neural network to hallucinate details using previously learned information.

In other words, to replace the missing details, the network guesses which pixel value goes in which position. This is possible because the model has been trained on the pixel mapping between low-resolution and high-resolution images. It is worth mentioning that by simply downsizing images we can easily build a training dataset for this purpose.
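The dataset-building trick mentioned above can be sketched in a few lines with Pillow: take a high-resolution patch and bicubically downsample it to get the matching low-resolution input (function and sizes here are illustrative, not the actual training pipeline).

```python
from PIL import Image

def make_training_pair(hr_image, scale=4):
    """Build a (low-res, high-res) pair by bicubic downsampling,
    the same trick used to assemble super-resolution datasets."""
    lr_size = (hr_image.width // scale, hr_image.height // scale)
    lr_image = hr_image.resize(lr_size, Image.BICUBIC)
    return lr_image, hr_image

# A synthetic 128x128 "high-res" patch for illustration.
lr, hr = make_training_pair(Image.new("RGB", (128, 128), "gray"))
```

The network is then trained to reconstruct `hr` given `lr`, so at inference time it can apply the learned mapping to images it has never seen.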

In this tutorial we will run inference on an existing pre-trained model for image enhancing.

Why Do We Need Hi-Resolution Images In SEO?

While it is common to discuss the importance of image compression in SEO, it is less frequent to discuss why we need to boost the resolution of images.

To learn more about SEO image optimization, see our latest web story 🤩

Image Compression in SEO

Images represent, in general, 22.8% (mobile) and 15.5% (desktop) of the total page weight on most websites, according to the HTTP Archive.

Therefore, compressing images greatly improves the user experience and is an important practice to follow.

There are tons of things that you can do to reduce the size of an image. In this workflow we will use Pillow (PIL), a well-known open-source imaging library for Python, which allows us to open, manipulate, and optimize images in bulk. More about PIL and bulk SEO image optimization can be found here 👇.
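As a minimal sketch of what Pillow’s optimization looks like (the format and quality values here are illustrative defaults, not the Colab’s settings), an image can be re-encoded with the `optimize=True` flag:

```python
import io
from PIL import Image

def compress(image, fmt="JPEG", quality=80):
    """Re-encode an image with Pillow's optimize flag and return the bytes.
    The format and quality defaults are illustrative choices."""
    buffer = io.BytesIO()
    image.save(buffer, format=fmt, optimize=True, quality=quality)
    return buffer.getvalue()

# A synthetic image stands in for a real file in this sketch.
jpeg_bytes = compress(Image.new("RGB", (64, 64), "white"))
```

In a bulk workflow you would loop the same call over every file in a folder, writing the compressed bytes back to disk.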

Image Optimization For Structured Data

Images accompanying content, whether we deal with news articles, recipes, or products, are a strategic element in modern SEO, and often an overlooked one.

Google needs large images, in multiple aspect ratios (1:1, 4:3, and 16:9), to present content in carousels and tabs (rich results across multiple devices) and on Google Discover. This is done using structured data and by following a few important recommendations:

  1. Make sure to have at least one image for every content piece.
  2. Ensure that images can be crawled and indexed by Google (it seems obvious but it is not).
  3. Make sure that images are representative of the marked up content (you don’t want to feature an image of roasted pork for a vegetarian recipe 🙈).
  4. Use a supported file format (here is the list of file formats supported by Google Images).
  5. Provide multiple high-resolution images whose total number of pixels (width multiplied by height) is at least:
    • 50,000 pixels for products and recipes
    • 80,000 pixels for news articles
  6. Add the same image to the structured data in the following aspect ratios: 16:9, 4:3, and 1:1.
For every image we want to ensure that additional renditions are generated.
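The total-pixel recommendation above is easy to check programmatically. Here is a small sketch (the function name and default are my own, not part of Google’s tooling):

```python
def meets_rich_result_minimum(width, height, content_type="recipe"):
    """Check Google's minimum total-pixel recommendation for an image:
    50,000 pixels for products and recipes, 80,000 for news articles."""
    minimum = 80_000 if content_type == "news" else 50_000
    return width * height >= minimum

# A 300x200 image has 60,000 pixels: enough for a recipe,
# but below the 80,000-pixel bar for a news article.
ok_recipe = meets_rich_result_minimum(300, 200)
ok_news = meets_rich_result_minimum(300, 200, "news")
```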

To learn more about image optimization for structured data markup, I recommend reading a great article by Barry Adams on optimizing images for news articles and our checklist to rank for Top Stories. Both articles emphasize the importance of hi-res images in great detail.

Generating Multiple Renditions On The Fly

One nice thing that WordLift does automatically is creating, for each and every image in the structured data markup, the required versions in 16:9, 4:3, and 1:1 aspect ratios.

You can have a quick look at the markup that WordLift generates for a Product by clicking here. The only requirement for WordLift to automatically generate multiple hi-resolution images is that the original image has enough pixels. This means that, in the ideal scenario, you want at least 1,200 pixels on the smallest side of the image.
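The idea behind generating those renditions can be sketched with Pillow’s center-crop helper. This is only an illustration of the concept, not WordLift’s actual implementation:

```python
from PIL import Image, ImageOps

# Target aspect ratios required by Google's structured data guidelines.
RATIOS = {"16x9": (16, 9), "4x3": (4, 3), "1x1": (1, 1)}

def make_renditions(image):
    """Center-crop an image into the three required aspect ratios,
    using the largest box of each ratio that fits inside the image."""
    renditions = {}
    for name, (rw, rh) in RATIOS.items():
        width = min(image.width, image.height * rw // rh)
        height = width * rh // rw
        renditions[name] = ImageOps.fit(image, (width, height))
    return renditions

renditions = make_renditions(Image.new("RGB", (1600, 1200)))
```

For a 1600×1200 original this yields 1600×900, 1600×1200, and 1200×1200 crops, which is why a generously sized original is needed in the first place.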

Start Upscaling Your Images For Structured Data

Since it might not always be possible to have at least 1,200 pixels on the shortest side, I came up with this workflow. Here is how it works.

The workflow uses a model that has been trained on the DIV2K Dataset (made of bicubically downsampled images) on image patches of size 128 x 128. The original paper is authored by Xintao Wang et al. and builds on top of another seminal paper, “The Super-Resolution Generative Adversarial Network (SRGAN)”, by removing unpleasant artifacts that might otherwise accompany the hallucinated details.

1. Setting up the environment

The code is fairly straightforward, so I will explain how to use it along with a few important details. You can simply run steps 1 and 2 and start processing your images.

Prior to doing that, you might want to choose whether to compress the produced images and what level of compression to apply. Since we’re working with PNG and JPG formats, we will use the optimize=True argument of PIL to decrease the weight of images after their upscaling. This option is configurable, as your website might already have an extension, a CDN, or a plugin that automatically compresses any uploaded image.

You can disable the compression (the default is set to True) or change the compression level using the form inside the first code block of the Colab (1. Preparing the Environment).

2. Loading the files

You can upload the files that you would like to optimize from either:

1. A folder on your local computer
2. A list of comma-separated image URLs

In both cases you can load multiple files, and the procedure keeps the original file names so that you can simply push them back to the web server via SFTP.

When providing a list of URLs, the script will first download all the images into a previously created folder called input.
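A simplified sketch of that download step, using only the standard library (the function name is mine; the Colab’s own code may differ):

```python
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve

def download_images(urls, folder="input"):
    """Download each URL into the input folder, keeping the original
    file names so the images can be pushed back to the server later."""
    os.makedirs(folder, exist_ok=True)
    for url in urls:
        filename = os.path.basename(urlparse(url).path)
        urlretrieve(url, os.path.join(folder, filename))
```

Keeping `os.path.basename` of the URL path is what preserves the original file names mentioned above.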

Once all images have been downloaded, you can run the run_super_res() function in the following block. The function first downloads the model from TF Hub and then increases the resolution of every image by 4x. The resulting images are stored (and compressed, if the compression option was left set to True) in a folder called output.

Once completed, you can zip all the files in the output folder by executing the following code block. You can also change the name of the zipped file and, if needed, remove the output folder (in case you want to run it again).
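The zipping step needs nothing beyond the standard library; the archive name here mirrors the idea, not necessarily the notebook’s exact default:

```python
import os
import shutil

# Zip the processed images so they can be downloaded from the Colab.
os.makedirs("output", exist_ok=True)
archive = shutil.make_archive("upscaled_images", "zip", root_dir="output")
```

`shutil.make_archive` returns the path of the created zip file, which the notebook then offers for download.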

Running The First Experiment

As usual in SEO, there is no learning until you test things out. We have witnessed in the past a significant impact on news articles with the proper use of hi-resolution images. This is particularly true when it comes to Google Discover (you can read our checklist to optimize content for Google Discover) or AMP. We can find an explicit reference to the 1,200 px width requirement in Google’s “Get on Google Discover Feed” documentation. So, as a first test, we opted for a recipe website of a happy client of our SEO management service. In this specific implementation, compression was already enabled server-side, so we set the option COMPRESSION_BOOLEAN=False and uploaded the files back after running the Colab.

You can crawl images from any website using your favorite crawler and immediately spot the files that would benefit the most from a resolution increase (basically any file that does not meet the 50,000–80,000 total-pixel threshold).

We began our testing with a first batch containing only a few images. It is important to carefully test the impact of any SEO automation before scaling it to the entire website. We always need to make sure that everything is working as expected.

We first checked that we hadn’t changed the page experience by reviewing the Core Web Vitals after applying the treatment.

In only a few days we could immediately spot the treated URLs (recipes where we increased the resolution of the featured image) appearing in the recipe carousels 🎉.

We are expecting a double-digit growth rate in clicks for the treated URLs, and I will share the data in the coming days. In the meantime, we’re experimenting with the same technique on other websites to further evaluate the impact of this automation.

Conclusion And Future Work

The optimization of images is an important tactic when dealing with structured data markup. There are already a few Super-Resolution APIs available, such as the one provided by DeepAI or the Image Upscaler by ICONS8. Both are paid APIs, and they seem to use SRGAN rather than ESRGAN. As we progress with gathering performance data from multiple websites, I also plan to add to the code an option to downsize images to either 50,000 or 80,000 pixels. As of today, you might end up with images that have a larger, unneeded resolution, and this could negatively impact the overall page loading experience.
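That future downsizing option could look something like the sketch below (the function name and resampling choice are assumptions of mine, not code that exists in the Colab yet):

```python
import math
from PIL import Image

def cap_total_pixels(image, max_pixels=80_000):
    """Downscale an image so that width*height does not exceed max_pixels,
    preserving the aspect ratio."""
    total = image.width * image.height
    if total <= max_pixels:
        return image
    factor = math.sqrt(max_pixels / total)
    size = (int(image.width * factor), int(image.height * factor))
    return image.resize(size, Image.LANCZOS)

capped = cap_total_pixels(Image.new("RGB", (1600, 1200)))
```

Scaling both sides by the square root of the pixel ratio keeps the aspect ratio while bringing the total pixel count just under the target.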

Happy upscaling, and don’t forget that you can also use machine learning to automatically describe the content of your images, as shown in this tutorial 👀.

Frequently Asked Questions

What is Super-Resolution?

Image Super-Resolution is a technique for increasing the resolution of an image from low-resolution (LR) to high-resolution (HR). This can be done either with simple interpolation (for example, the nearest-neighbor algorithm) or with learned super-resolution models that reconstruct plausible detail.

Can Super Resolution help in SEO?

Yes, increasing the resolution greatly helps SEO for structured data markup. If images don’t have a high enough resolution, Google might not be able to present the content asset with rich features. This is particularly true for products, news articles, recipes, and content you would like to see featured on Google Discover.

Is It Possible To Increase The True Resolution Of An Image?

Given the data processing inequality, it is impossible to increase the true resolution of an image without adding additional information into the process. Suppose the original image conveys the information “X” and we encode “X” to create an upscaled version of the image. In that case, the resulting information “Y” will always be a subset of “X”. This is why, when we use machine learning, we transfer additional information “Y¹”: what the network gained during its training, which makes it possible to increase the resolution of an image.
