Does duplicate content pass link juice?

Assume your website gets an editorial link from a big news site, and many slightly smaller but still high-authority news sites publish the exact same article on their websites, including the link to your website.
Would this link still pass value from each page?
 
This is a tough one to ever test properly, because when big media articles linking to us have been syndicated, two things happen:

* A good chunk of them are 'pure spam' scrapers, which I'd imagine Google ignores
* The remainder tend to syndicate forever

Even if you get a case where the syndicated links do drop off, it's hard to measure in isolation: it usually takes a while, by then the article that linked to you has itself been buried, and in the meantime you've done 1,000 other things.

It's rare that you'd have two infrequently updated sites each get a big piece of media coverage, with the syndicated copies dropping off sooner for one of them, so that you could measure the difference. And even then you'd be working off a tiny sample, which is less helpful these days with all the randomness baked into the algorithm to make testing harder.

So we have to rely on Google's answers:

http://www.pagetrafficbuzz.com/matt-cutts-advice-on-content-syndication/4062/ -- this one said 'kinda sometimes' but isn't referring to the exact situation you're discussing

https://searchengineland.com/google-warns-against-syndicated-article-campaigns-275753 -- more recent thoughts from the big G: if it's a great piece of content that's being linked to, it isn't 'over-syndicated' purely for spammy purposes, and it doesn't have unnatural anchor text, it's probably OK.

The fact that they're making that warning would indicate that yes, it does pass value; if it were simply ignored, they wouldn't need to consider which end of the spectrum was good or bad and where penalties should apply.

And rely on 'similar' tests:

A client of ours from a while back was an enterprise site that used to get an unholy amount of their content syndicated (more than usual even for a big site). A lot of their articles had very good internal linking applied. Getting those articles syndicated boosted the pages they linked to internally; articles without those internal links (from before they brought in the better internal linking) didn't see the same benefits.

So while this isn't the exact situation you describe, it does, further to the direct answers from Google above, indicate that links within syndicated content are at least useful some of the time.

Having said all of that, though, it's not really something you can 'control for' much. If you're repeatedly able to gain big media coverage, then whether or not those articles get some additional benefit from links in syndicated copies, each one is still just 'one hit' as part of your strategy. Figuring out whether the more-syndicated ones might be worth more is a purely academic exercise; surely nobody would turn down a Mashable link on the basis that TechCrunch gets syndicated more (not saying that's true, just a made-up example). You'd go get the big publisher link regardless, and it would have value.
 
I'd say it mostly depends on how much unique content the syndicating sites have.

If they're scraper sites like Steve mentions, then they count for nothing. If they're real sites that just syndicate some content from other sites, then all of them pass link juice, regardless of how many times the piece is syndicated. I would think this kind of analysis is very easy for Google now.
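
To give a sense of why that kind of analysis is cheap for a search engine, here's a minimal sketch of near-duplicate detection using word shingles and Jaccard similarity. This is a textbook technique, not Google's actual pipeline, and the 0.8 threshold is just an assumption for illustration.

```python
# Minimal sketch of near-duplicate detection via word shingles + Jaccard
# similarity. A textbook approach, not Google's actual pipeline; the 0.8
# threshold is an arbitrary assumption for illustration.

def shingles(text: str, k: int = 5) -> set:
    """Return the set of k-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_syndicated_copy(original: str, candidate: str, threshold: float = 0.8) -> bool:
    """Flag the candidate as a copy if its shingle overlap is high enough."""
    return jaccard(shingles(original), shingles(candidate)) >= threshold

if __name__ == "__main__":
    article = "Big news site publishes an original story with a link to your website " * 20
    copy = article  # an exact syndicated copy
    print(is_syndicated_copy(article, copy))  # True
```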
 
Google employees have said in the past that they try to apply an invisible canonical to duplicate content, but I doubt that happens real-time in the live algorithm. It's likely a once-in-a-while offline calculation they add back in later.

That would mean you get some juice from the duplicate page itself essentially being "redirected" to yours, and from any links it accumulates.
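
As a toy model of that consolidation (purely illustrative, using a made-up link graph and vanilla PageRank via the networkx package, not anything Google actually runs), folding a duplicate into its canonical means the duplicate's inbound links effectively count toward the canonical, which in turn links to you:

```python
# Toy model of duplicate consolidation: a made-up link graph and vanilla
# PageRank, not anything Google actually runs.
import networkx as nx

def pagerank_with_canonical(edges, canonical_map):
    """Rewrite edges so links into a duplicate count toward its canonical,
    then run PageRank on the rewritten graph."""
    g = nx.DiGraph()
    for src, dst in edges:
        g.add_edge(canonical_map.get(src, src), canonical_map.get(dst, dst))
    return nx.pagerank(g, alpha=0.85)

edges = [
    ("news_a/article", "your_site"),   # original article links to you
    ("news_b/copy", "your_site"),      # syndicated copy also links to you
    ("blog_1", "news_b/copy"),         # links earned by the syndicated copy
    ("blog_2", "news_b/copy"),
]

# Treat the syndicated copy as a duplicate of the original article.
scores = pagerank_with_canonical(edges, {"news_b/copy": "news_a/article"})
print(scores["your_site"], scores["news_a/article"])
```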

The problem is, you can find plenty of instances of sites copying and pasting content and ranking for some traffic. So there's definitely a period of time where Google doesn't catch the fact that it's stolen content.

I wouldn't worry about it doing too much harm. If it's an absolutely horrendous or spammed site, I'd just disavow it if the copy carries links pointing to your site. If it has a Source: link and it's not a spammed site, I'd let it ride.
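
If you do go the disavow route, the file Google's disavow tool accepts is just plain text: one `domain:` or URL entry per line, with `#` comments. A minimal sketch for generating one (the domain names are made up for illustration):

```python
# Minimal sketch for building a disavow file. The format (one "domain:" or
# URL entry per line, "#" for comments) is what Google's disavow tool accepts;
# the domains below are made up for illustration.
spam_domains = ["horrendous-scraper.example", "spammed-syndicator.example"]

lines = ["# Disavow file for scraped/syndicated spam"]
lines += [f"domain:{d}" for d in sorted(set(spam_domains))]

with open("disavow.txt", "w") as f:
    f.write("\n".join(lines) + "\n")
```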
 
The real problem is when the algorithm fails to recognize the original source as the one that should receive the canonical. When people scrape content and upload it to more authoritative sites, it's not unheard of for the original site to be designated as the duplicate. Use rel=canonical everywhere, file DMCAs with Google, and block scraper IPs where needed.
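
A quick way to make sure "rel=canonical everywhere" actually holds is to crawl your own URLs and confirm each page declares a self-referencing canonical. A minimal sketch, assuming the requests and beautifulsoup4 packages and a made-up URL list:

```python
# Minimal sketch: verify each of your pages declares a self-referencing
# rel=canonical tag. Assumes the requests and beautifulsoup4 packages are
# installed; the URL list is made up for illustration.
import requests
from bs4 import BeautifulSoup

urls = ["https://www.example.com/", "https://www.example.com/blog/some-post/"]

for url in urls:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    canonical = None
    for link in soup.find_all("link"):
        if "canonical" in (link.get("rel") or []):
            canonical = link.get("href")
            break
    if canonical is None:
        print(f"MISSING canonical: {url}")
    elif canonical.rstrip("/") != url.rstrip("/"):
        print(f"NON-SELF canonical: {url} -> {canonical}")
    else:
        print(f"OK: {url}")
```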
 