Googlebot does not crawl Taboola’s recommendations, follow their links, or pass PageRank through them.
Why?
- Taboola’s code is Client-Side Rendered (CSR)
- Taboola blocks access to its recommendations by robots.txt directive
- Taboola uses the rel="nofollow" link attribute
Taboola’s Code is Client-Side Rendered (CSR): Taboola’s recommendation system runs on the publisher’s client side, meaning the browser must fetch and render the resources that serve the content. While modern search engines can render JavaScript to some extent, they must first be able to fetch the resources needed to start the rendering process.
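To make the CSR point concrete, here is a minimal sketch of what a crawler sees before any JavaScript runs. The container `id`, domain, and markup below are hypothetical illustrations, not Taboola’s actual code: the recommendation container in the raw HTML is empty, and the links only exist after the loader script executes in a browser.

```python
from html.parser import HTMLParser

# Hypothetical raw HTML as a crawler receives it, before rendering.
# The container is empty; recommendations are injected later by the
# (hypothetical) loader script.
RAW_HTML = """
<div id="taboola-below-article"></div>
<script src="https://cdn.taboola.example/loader.js" async></script>
"""

class ContainerInspector(HTMLParser):
    """Collects the text found inside the recommendation container."""
    def __init__(self):
        super().__init__()
        self.in_container = False
        self.container_text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("id", "taboola-below-article") in attrs:
            self.in_container = True

    def handle_endtag(self, tag):
        if tag == "div":
            self.in_container = False

    def handle_data(self, data):
        if self.in_container:
            self.container_text += data.strip()

inspector = ContainerInspector()
inspector.feed(RAW_HTML)

# Without executing JavaScript, the container holds no links or text.
print(repr(inspector.container_text))  # -> ''
```

A crawler that cannot fetch and execute the loader script never gets past this empty placeholder, which is why fetchability is the first gate.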
Robots.txt directive: All links (sponsored and organic) redirect through an intermediary Taboola sub-domain. That domain’s robots.txt file blocks all of Taboola’s resources, as Google recommends in its sponsored-content guidelines. See also Taboola’s compliance with sponsored content policies.
Because the recommendations are served from the blocked sub-domain, bots cannot fetch those recommendations, and therefore cannot render them.
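This blocking behavior can be checked offline with Python’s standard `urllib.robotparser`. The robots.txt content and the sub-domain URL below are hypothetical stand-ins for illustration; the real file and domain may differ, but a blanket `Disallow: /` has the effect described above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for the intermediary sub-domain:
# a blanket directive that disallows every path for every user agent.
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Googlebot may not fetch any URL on this (hypothetical) sub-domain,
# so it can never retrieve the recommendation resources to render them.
allowed = parser.can_fetch("Googlebot", "https://trc.taboola.example/redirect?id=123")
print(allowed)  # -> False
```

Because the fetch itself is disallowed, the rendering question never even arises: there is nothing for Googlebot to render.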
The rel="nofollow" attribute: all of Taboola’s recommendations are marked with the rel="nofollow" link attribute, in compliance with Google’s guidelines on paid links.
The rel="nofollow" attribute signals that Google should not follow those links and should not pass PageRank to the linked pages.
“Use the nofollow value when other values don't apply, and you'd rather Google not associate your site with, or crawl the linked page from, your site.” Source.
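Checking for the attribute is straightforward with the standard-library HTML parser. The link markup below is a hypothetical example of a single recommendation link, not Taboola’s actual output; the point is simply that the `rel` value contains `nofollow`:

```python
from html.parser import HTMLParser

# Hypothetical markup for one recommendation link; real attributes
# may differ, but each link carries rel="nofollow".
LINK_HTML = (
    '<a href="https://trc.taboola.example/click?id=123" '
    'rel="nofollow">Recommended story</a>'
)

class RelCollector(HTMLParser):
    """Collects the space-separated rel tokens of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.rels = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            rel_value = dict(attrs).get("rel", "")
            self.rels.extend(rel_value.split())

collector = RelCollector()
collector.feed(LINK_HTML)
print("nofollow" in collector.rels)  # -> True
```

The same token-splitting approach works when a link carries multiple rel values (e.g. `rel="sponsored nofollow"`), since `rel` is defined as a space-separated list.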
How can we test that?
Several tools and methods show that Googlebot is prevented from crawling Taboola’s links:
- Google’s Mobile-Friendly Test (shows resources blocked by robots.txt)
- Chrome DevTools