Google is leveraging MUM to improve the customer journey across Search, Lens, and other surfaces. MUM is multimodal: it encompasses visual, auditory, and language understanding simultaneously.

Traditional AI models handle one modality of information at a time. They can take in text, images, or speech, but typically not all three at once, as MUM does. BERT, for example, can analyze the co-occurrences of words within the same page and understand their context.
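As a quick illustration (not part of the session materials), the sketch below uses the Hugging Face transformers library and the public bert-base-uncased checkpoint to show what "understanding context" means in practice: the same word receives a different vector depending on the sentence it appears in. The sentences and the helper function are hypothetical examples chosen for the demo.

```python
# Minimal sketch: BERT produces context-dependent vectors for the same word.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    # Locate the first occurrence of the target word among the tokens.
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    idx = tokens.index(word)
    return hidden[idx]

v_river = word_vector("he sat on the bank of the river", "bank")
v_money = word_vector("she deposited cash at the bank", "bank")

# A cosine similarity well below 1 shows the two occurrences of "bank"
# are encoded differently because their surrounding context differs.
cos = torch.nn.functional.cosine_similarity(v_river, v_money, dim=0)
print(f"cosine similarity between the two 'bank' vectors: {cos.item():.3f}")
```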

MUM takes this to an entirely new level by understanding content in any language and by combining information across modalities, for example visual and textual.

This session will be a deep dive into entity-based content modeling, best practices for improving rankings on Google Lens, and an overview of how the latest research advances are being implemented on Google Search.

Organised by Kalicube in partnership with WordLift.

Are you ready for the next SEO?
Try WordLift today!