Episode 157 contains the notable Digital Marketing News and Updates from the week of Apr 17-21, 2023.

1. Twitter Requires All Advertisers To Pay For Verification First – Twitter has informed all advertisers that they’ll have to sign up to either Twitter Blue or Verification for Organizations in order to keep running ads in the app. In effect, this now means that brands will have to pay Twitter $8 per month for a blue tick, or $1,000 per month for its Verification for Organizations offering – though brands that are already spending ‘in excess of $1,000 per month’ will soon be given gold checkmarks automatically.

The cheapest option would be to buy a Twitter Blue subscription for your brand, which will cost your business an extra $96 per year, and if you’re planning to run Twitter ads, that’s unlikely to have a huge impact on your annual budget.

You’ll also get a verified tick for your brand account, which could lend your brand more legitimacy in the app, even though the checkmark no longer communicates the level of authority or trust it once did. Because the blue checkmark can be bought by anyone, with no checking process involved, there’s no actual verification in Musk’s Twitter Blue process. That means someone else could register your brand name and get a blue tick for it, too.

Hmm. The question is: Should you pay for verification?

2. Instagram Allows You To Add Up To 5 Links In Your Profile Bio – Instagram has finally launched one of its most requested feature updates, giving users the ability to add up to five links in their IG bio and expanding its capacity to drive traffic. In the announcement, Instagram wrote:

Starting today, the update will make it easier for creators and other users to highlight their passions, bring awareness to causes they care about, promote brands they love, showcase their personal business, and more.

This is bad news for Linktree and other third-party linking tools. Instagram’s opposition to external links has long been the key driver of usage for link aggregator tools, but now people will be able to replicate that capacity within the app itself, which will no doubt see many abandon their paid subscriptions to third-party apps.

But then again, some of these tools enable branding options that could still act as an enticement, along with more link display options. It’s also become such a standard behavior now that users don’t find it jarring, so maybe some businesses will stick with third-party link tools, even with this new capacity available.

To add multiple links to your IG profile, head to ‘Edit profile’ > ‘Links’ > ‘Add external link’. From there, you can drag and drop to order your links as you’d like them to appear in the app.

3. Google: Just Because A Site Is Good Now, Doesn’t Mean It Will Be #1 Forever – Sayan Dutta asked Google’s John Mueller: “Recently I am noticing that websites are being removed from Google News. My 3 years old site suddenly showing Not Live on Publisher Center. I saw that with a few of my sites.”

Google’s John Mueller replied that just because a site appears to be doing super well in terms of Google ranking and SEO today doesn’t mean it won’t one day degrade in value. John added, “just because something’s in Google News now doesn’t mean it’ll be there forever.”

Sometimes sites just lose their luster, the topic may no longer be as relevant, or the content quality does not improve as competitors’ content quality improves. Sometimes sites change ownership and the new owners do not put in the work needed. Sometimes sites just can’t keep up with the speed of innovation.

There you go, folks: SEO is not evergreen or perennial.

4. Google Adds New Return Policy Structured Data Support For Merchant Listing – A structured data type communicates to search engines what specific kind of thing the data describes. Structured data types have “properties” that provide information about the data type.

A new returns section has been added to the structured data type definitions within Google’s product structured data document. This is for merchant listings, not yet product snippets, and these new property types apply to merchant listing experiences. This addition came on the same day that Google began showing shipping and return information in its search results.

The new MerchantReturnPolicy type has two required properties: applicableCountry and returnPolicyCategory. Both must be present in the structured data for a page to be eligible for the rich results specific to MerchantReturnPolicy.
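To make that concrete, here is a minimal sketch of what the markup might look like, generated with Python so the JSON-LD stays valid. The product name, price, 30-day return window, and free-returns value are illustrative assumptions; only applicableCountry and returnPolicyCategory are the properties Google’s documentation lists as required, so verify everything else against the product structured data document before using it.

    import json

    # Hypothetical merchant listing markup: a Product whose Offer carries a
    # MerchantReturnPolicy. Values other than the two required properties
    # (applicableCountry, returnPolicyCategory) are illustrative assumptions.
    product = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Example Widget",
        "offers": {
            "@type": "Offer",
            "price": "29.99",
            "priceCurrency": "USD",
            "hasMerchantReturnPolicy": {
                "@type": "MerchantReturnPolicy",
                "applicableCountry": "US",  # required
                "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",  # required
                "merchantReturnDays": 30,  # optional, assumed
                "returnFees": "https://schema.org/FreeReturn"  # optional, assumed
            }
        }
    }

    # Emit the JSON-LD block you would place in the page's HTML.
    print('<script type="application/ld+json">')
    print(json.dumps(product, indent=2))
    print('</script>')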

Google’s new section on returns policy shopping experience eligibility explains that there is an alternate way to become eligible without configuring the associated structured data: Google recommends configuring return policies under the shipping settings in Google Merchant Center (the Merchant Center Help has details on how to configure it).

5. Google Introduces New Crawler & Explains The Use Cases For Its Different Crawler Types – Google has added a new crawler, named GoogleOther, to its list of Google crawlers and user agents. It is described as a “generic crawler that may be used by various product teams for fetching publicly accessible content from sites.” For example, it may be used for one-off crawls for internal research and development, Google explained. The GoogleOther crawler always obeys robots.txt rules for its user agent token and the global user agent (*), and uses the same IP ranges as Googlebot. The user agent token is “GoogleOther” and the full user agent string is “GoogleOther.”
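Because GoogleOther obeys robots.txt rules addressed to its own token, you can scope rules to it without touching Googlebot’s access. Here is a minimal Python sketch using the standard library’s urllib.robotparser to sanity-check that behavior against a hypothetical robots.txt; the /internal-research/ path and example.com URLs are assumptions for illustration only.

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt: block GoogleOther from one path while the
    # catch-all group leaves every other crawler, including Googlebot, unaffected.
    robots_txt = [
        "User-agent: GoogleOther",
        "Disallow: /internal-research/",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    parser = RobotFileParser()
    parser.parse(robots_txt)

    print(parser.can_fetch("GoogleOther", "https://www.example.com/internal-research/page"))  # False
    print(parser.can_fetch("Googlebot", "https://www.example.com/internal-research/page"))    # True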

Here is what Gary Illyes from Google wrote on LinkedIn:
We added a new crawler, GoogleOther to our list of crawlers that ultimately will take some strain off of Googlebot. This is a no-op change for you, but it’s interesting nonetheless I reckon. As we optimize how and what Googlebot crawls, one thing we wanted to ensure is that Googlebot’s crawl jobs are only used internally for building the index that’s used by Search. For this we added a new crawler, GoogleOther, that will replace some of Googlebot’s other jobs like R&D crawls to free up some crawl capacity for Googlebot. The new crawler uses the same infrastructure as Googlebot and so it has the same limitations and features as Googlebot: hostload limitations, robotstxt (though different user agent token), http protocol version, fetch size, you name it. It’s basically Googlebot under a different name.

At the same time, Google updated the Googlebot page and listed the different crawler types, and their uses, that may show up in your server logs. Using this information, you can verify whether a web crawler accessing your server is indeed a Google crawler rather than a spammer or other troublemaker (see the sketch after the list below). Google’s crawlers fall into three categories:

  • Googlebot – The main crawler for Google’s search products. Google says this crawler always respects robots.txt rules. The next time you find crawl-**.googlebot.com or geo-crawl-**.googlebot.com in your server logs, know that a Google Search crawler visited your site.
  • Special-case crawlers – Crawlers that perform specific functions (such as AdsBot), which may or may not respect robots.txt rules. The reverse DNS mask for these visits shows up as rate-limited-proxy-***-***-***-***.google.com
  • User-triggered fetchers – Tools and product functions where the end user triggers a fetch. For example, Google Site Verifier acts on the request of a user, and some Google Search Console tools will send Google to fetch a page based on an action a user takes. The reverse DNS mask for these visits shows up as ***-***-***-***.gae.googleusercontent.com
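If you’d rather script that check than eyeball log entries, the standard approach is a reverse-then-forward DNS lookup: resolve the visiting IP to a hostname, confirm the hostname ends in one of the Google domains above, then resolve that hostname forward and make sure it points back to the same IP. Here is a minimal Python sketch of that idea; the commented-out example IP is illustrative, and in practice you would feed the function addresses pulled from your own server logs.

    import socket

    GOOGLE_SUFFIXES = (".googlebot.com", ".google.com", ".googleusercontent.com")

    def is_google_crawler(ip: str) -> bool:
        """Reverse-then-forward DNS check for a crawler IP found in server logs."""
        try:
            # Reverse (PTR) lookup: does the IP resolve to a Google-owned hostname?
            hostname, _, _ = socket.gethostbyaddr(ip)
            if not hostname.endswith(GOOGLE_SUFFIXES):
                return False
            # Forward lookup: does that hostname resolve back to the original IP?
            _, _, forward_ips = socket.gethostbyname_ex(hostname)
            return ip in forward_ips
        except (socket.herror, socket.gaierror):
            return False

    # Example usage (illustrative IP; use addresses from your own logs):
    # print(is_google_crawler("66.249.66.1"))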

P.S.: Listen/watch the show to hear my perspective on why it is important for any website owner to review server logs and keep the troublemakers away.

6. Google Removed Older Search Ranking Algorithm Updates From Its Ranking Systems Page – Google has updated its documented Google ranking systems page and completely removed the page experience system, the mobile-friendly system, the page speed system, and the secure site system. You can spot the difference if you compare the live page to the archived page.

These removals make me wonder whether any of these algorithm updates mattered at all to the overall Google ranking system.

7. Google To Remove Page Experience Report, Mobile Usability Report & Mobile-Friendly Tests From Search Console – In the coming months, Google will deprecate the page experience report within Google Search Console, the mobile usability report, and the mobile-friendly testing tool. The Core Web Vitals and HTTPS reports will remain in Google Search Console, Danny Sullivan of Google announced.

The original page experience report launched in Search Console in April 2021 and was designed for just mobile pages. Google added a desktop version with the launch of the desktop version of the algorithm in January 2022. Now, in 2023, Google is going to remove that page experience report completely; it “will transform into a new page that links to our general guidance about page experience,” Danny Sullivan wrote.

In December 2023, Google will also drop Google Search Console’s mobile usability report (originally launched in 2016), the mobile-friendly test tool (launched in 2016), and the mobile-friendly test API. This is not because mobile-friendliness and usability are not important. Google said, “it remains critical for users, who are using mobile devices more than ever, and as such, it remains a part of our page experience guidance. But in the nearly ten years since we initially launched this report, many other robust resources for evaluating mobile usability have emerged, including Lighthouse from Chrome.”

8. Google Adds Page Experience To ‘Helpful Content’ Guidance – Google has added a new section on providing a great page experience to its guidance on how to create helpful content. Google also revised its help page about page experience to add more details about helpful content. Here is what Google added to the helpful content guidance:

Provide a great page experience: Google’s core ranking systems look to reward content that provides a good page experience. Site owners seeking to be successful with our systems should not focus on only one or two aspects of page experience. Instead, check if you’re providing an overall great page experience across many aspects. For more advice, see our page, Understanding page experience in Google Search results.

There is a FAQ section at the bottom of the “page experience” documentation that you need to read through if you are maintaining or leading your SEO efforts. Here are some items from the FAQ section:

  • Without the Page Experience report, how do I know if my site provides a great page experience? The page experience report was intended as a general guidepost of some metrics that aligned with good page experience, not as a comprehensive assessment of all the different aspects. Those seeking to provide a good page experience should take a holistic approach, including following some of our self-assessment questions covered on our Understanding page experience in Google Search results page.
  • Is there a single “page experience signal” that Google Search uses for ranking? There is no single signal. Our core ranking systems look at a variety of signals that align with overall page experience.
  • Page experience signals had been listed as Core Web Vitals, mobile-friendly, HTTPS and no intrusive interstitials. Are these signals still used in search rankings? While not all of these may be directly used to inform ranking, we do find that all of these aspects of page experience align with success in search ranking, and are worth attention.
  • Are Core Web Vitals still important? We highly recommend site owners achieve good Core Web Vitals for success with Search and to ensure a great user experience generally. However, great page experience involves more than Core Web Vitals. Good stats within the Core Web Vitals report in Search Console or third-party Core Web Vitals reports don’t guarantee good rankings.
