List of Yandex Ranking Factors

Let's start with this: Yandex isn't Google, but it was built as a Google clone. (It's great for image search, though, since Google has gutted its image search when looking for people.)

There is a lot of confirmation that search engines use certain ranking factors to determine the quality of a website.

1. Direct Traffic As A Ranking Factor

Here is the Google clone using "direct traffic" as a ranking factor:


Meaning that the Yandex/Google clone measures whether a website gets traffic from sources OTHER than SEO. Hmmmm... Who's been shouting that the longest?

Consider that Yandex doesn't own a browser with Chrome's reach, nor does it run a global public DNS resolver (8.8.8.8) like Google does.

Those two systems alone, Chrome and the DNS resolver, tell Google exactly how much traffic a website, new or old, is getting AND the source of that traffic.

We haven't even added in Google Analytics, which is installed all over the net.

Google has Chrome, DNS, and Google Analytics to tell it how much traffic a website is getting, direct, referral, and SEO. Yet Yandex would use that as a ranking factor before GOOGLE?

Conclusion: It should be crystal clear that Google knows how much traffic websites are getting, and therefore having ONLY SEO traffic is a detriment to your long-term goals.

2. Returning Users As A Ranking Factor

Google, Yandex, Bing, and everyone else can easily figure out if a user quickly goes back to the search results looking for more information, so it shouldn't be a surprise that Returning Users is a ranking factor for search engines:


Conclusion: Get more people to come back to your brand through as many avenues as possible.
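As a back-of-the-envelope illustration only (the leaked factor's exact formula isn't public, and the visit-log format here is hypothetical), a returning-user share could be estimated from whatever clickstream data a search engine has:

```python
from collections import Counter

def returning_user_share(visits):
    """Fraction of distinct users who visited more than once.

    `visits` is a list of (user_id, timestamp) pairs -- a stand-in for
    whatever signal a search engine actually has (browser, toolbar, or
    clickstream data). This is a sketch, not Yandex's real computation.
    """
    counts = Counter(user_id for user_id, _ in visits)
    if not counts:
        return 0.0
    returning = sum(1 for c in counts.values() if c > 1)
    return returning / len(counts)

# User "a" came back, user "b" didn't -> half the users are returning.
print(returning_user_share([("a", 1), ("a", 2), ("b", 3)]))  # 0.5
```

The point of the sketch: the metric rewards repeat visitors, not raw traffic, which is why building a brand people come back to moves this kind of factor.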

3. Host Reliability As A Ranking Factor


Conclusion: Shitty hosting services create problems for your long-term SEO. That convoluted $1 AWS-through-Heroku setup you've got going to save $1-10 is costing you millions of dollars.
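A rough sketch of what "% of URLs with error responses" could look like in practice (the two-field log format below is hypothetical, and this only loosely mirrors whatever Yandex actually computes):

```python
def error_url_share(log_lines):
    """Share of distinct URLs whose *latest* response was an error.

    Each log line is assumed to be 'STATUS URL' -- a simplified,
    made-up access-log format for illustration. A URL that errored
    once but now serves 200 (like /c below) is not counted.
    """
    latest = {}
    for line in log_lines:
        status, url = line.split(maxsplit=1)
        latest[url] = int(status)  # later lines overwrite earlier ones
    if not latest:
        return 0.0
    errors = sum(1 for s in latest.values() if s >= 400)
    return errors / len(latest)

lines = ["200 /a", "500 /b", "404 /c", "200 /c"]
print(error_url_share(lines))  # 1/3: only /b is still erroring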

4. Backlinks from Top 100 Sites As A Ranking Factor


Conclusion: Wikipedia links and backlinks from powerful, heavyweight sites count more. Even though Wikipedia is nofollow, it still counts.

5. Wikipedia As A Ranking Factor


Conclusion: Should be obvious.

--

There is more going on; Alex Buraks broke more of it down in a Twitter thread:

Part 1: https://twitter.com/alex_buraks/status/1618988134850785280

Part 2: https://twitter.com/alex_buraks/status/1619370810959093760
 

I can't tell you how many times I've had to explain that to potential customers, and I STILL DO to this day.
 
My favorite was the decrease in link value with age. That ten-year-old Forbes link on an aged domain you're paying $1,000 for isn't as helpful as you think; I have suspected this for years with Google. PBNs have proven many times that if you didn't rotate in or add new links, the site would eventually go off a cliff, but it's nice to see it confirmed.

Most of these people already suspected this was happening; Yandex confirming it is nice, but how much of this Google uses, or how it weighs it, could be completely different, especially with how many updates we've had. Still, it's nice as a list of semi-confirmed ranking factors.
 
3. Host Reliability As A Ranking Factor. (% of urls with error responses)
I wonder how they treat 404s and 410s?

I deleted 80% of my site a few months ago, and the crawlers had been getting non-stop 404s. I realized this probably signals poor quality, so a few weeks ago I 410'd all the deleted stuff. I can see the "Pages > Not indexed > 404s" report rapidly decreasing in GSC. I also noticed a pattern in the logs: Google crawls the 410'd pages very frequently (hourly) and then stops. I think they do this to verify each page is really gone, and then drop it for good.

According to the Host Reliability factor, the % of URLs with errors matters, and it looks like they really do treat 410s differently than 404s.
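The 410-vs-404 strategy above boils down to a small routing decision. A minimal sketch (the paths and sets here are made up for illustration):

```python
# Hypothetical routing shim: answer 410 Gone for pages we deleted on
# purpose, and 404 only for genuinely unknown URLs. The 410 tells
# crawlers the removal is permanent, so they can stop rechecking.
LIVE = {"/", "/about"}                      # pages that still exist
DELETED = {"/old-post", "/2019-archive"}    # pages removed intentionally

def status_for(path):
    if path in LIVE:
        return 200
    if path in DELETED:
        return 410  # permanent removal -> crawler verifies, then drops it
    return 404      # never existed (typo, bad link, probing bot)

print(status_for("/old-post"))  # 410
print(status_for("/nope"))      # 404
```

In a real deployment the same mapping would live in your web server or framework config rather than a function like this, but the distinction between the two status codes is the whole trick.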
 
I wonder how they treat 404s and 410s?
404s and 410s have nothing to do with host reliability in terms of the server and data-transfer infrastructure. Those are problems created by users or the site owner. Those HTTP response codes are actually signals that the server is responding correctly. If it returns them quickly, that's an even better positive signal, I would think.
 