Understanding Google RankBrain

CCarter

The biggest takeaway from RankBrain discussions should be that RankBrain works out, at the individual keyword level, which signals produce the best organic results. In other words, RankBrain segments the importance of ranking factors on a per-keyword basis. It might find that the best organic results for the keyword "SEO" rely heavily on title tags, so it will prioritize the title tag above PageRank and other factors. It may then find that on-page content is more relevant and has yielded better organic results for the keyword "auto insurance" - so it will prioritize that factor above the others.
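That per-keyword weighting idea can be sketched in a few lines of Python. Everything here is illustrative: the signal names, weights, and scores are invented to show the concept, not Google's actual factors or values.

```python
# Illustrative sketch: per-keyword weighting of ranking factors.
# The signals and weight mixes below are invented for demonstration.

FACTOR_WEIGHTS = {
    "seo":            {"title_tag": 0.6, "pagerank": 0.2, "content": 0.2},
    "auto insurance": {"title_tag": 0.2, "pagerank": 0.2, "content": 0.6},
}

def score(page_signals, keyword):
    """Score a page using the weight mix learned for this keyword."""
    weights = FACTOR_WEIGHTS[keyword]
    return sum(weights[f] * page_signals.get(f, 0.0) for f in weights)

page = {"title_tag": 0.9, "pagerank": 0.4, "content": 0.3}
print(score(page, "seo"))             # title tag dominates -> 0.68
print(score(page, "auto insurance"))  # content dominates -> 0.44
```

The same page scores differently under each keyword's mix, which is the whole point: one global "ranking factor study" can't capture it.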

Google finally realized that different segments/niches/industries should surface different styles of website with different data structures.

If you are searching for 'food recipes' you are more inclined to find websites built around instructions, versus searching for 'yachts for sale'. A yachts-for-sale website will have a different structure and different data, and should therefore have different priorities, because one type of site might naturally have more content, images, or videos than the other.

Take videos for example: videos have hardly any content except the description and the user-generated comments, so gauging what's best might come down more to title tags and anchor text than to in-content ranking factors.

So far, RankBrain is living up to its AI hype. Google search engineers, who spend their days crafting the algorithms that underpin the search software, were asked to eyeball some pages and guess which they thought Google’s search engine technology would rank on top. While the humans guessed correctly 70 percent of the time, RankBrain had an 80 percent success rate.

Typical Google users agree. In experiments, the company found that turning off this feature "would be as damaging to users as forgetting to serve half the pages on Wikipedia," Corrado said.

Source: http://news.surgogroup.com/google-turning-lucrative-web-search-ai-machines/

What's important to take away is that RankBrain is always refining itself and being monitored.

All learning that RankBrain does is offline, Google told us. It’s given batches of historical searches and learns to make predictions from these.

Those predictions are tested, and if proven good, then the latest version of RankBrain goes live. Then the learn-offline-and-test cycle is repeated.
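That learn-offline-and-test cycle can be sketched roughly as follows. The "model", the training function, and the quality metric are all hypothetical stand-ins for whatever Google actually uses; only the cycle itself comes from the quote above.

```python
# Sketch of the learn-offline-and-test cycle: train on batches of
# historical data, and only promote a candidate model that proves
# better on held-out data. All functions here are toy stand-ins.
import random

random.seed(0)  # reproducible toy run

def train(historical_queries):
    """Pretend training: learn one parameter from a batch of past searches."""
    return sum(historical_queries) / len(historical_queries)

def evaluate(model, holdout):
    """Toy quality metric: lower error against held-out data is better."""
    return sum(abs(model - q) for q in holdout) / len(holdout)

live_model = 0.0
for cycle in range(3):                       # repeated offline cycles
    batch = [random.random() for _ in range(100)]
    candidate = train(batch[:80])            # learn offline on a batch
    if evaluate(candidate, batch[80:]) < evaluate(live_model, batch[80:]):
        live_model = candidate               # only promote if proven good
print("deployed model:", live_model)
```

The key property is that nothing learns in real time: the live model only changes after a candidate beats it on a held-out test.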

Source: http://searchengineland.com/faq-all-about-the-new-google-rankbrain-algorithm-234440

One thing that has always baffled me when talking to SEOs is they seem to think they are smarter than the PhDs who work at Google. They also assume that Google, of all companies, does not visit the same forums and blogs and read the same content to keep up with the latest techniques for 'manipulating' SEO. Never forget:

It's literally the reason why so many SEO forums have died off and why conversations are moving to private Skype and invite-only groups to keep things "secretz". What you need to realize is that SEO is going to become more "technical" and harder. In the extremely near future, what might work in one niche DEFINITELY will not work in another niche; even within the same niche, one set of keywords might respond to increased authority backlinks while that tactic does nothing for another set of keywords.

It's not just about content or links, it's about understanding what the best results are, on a per keyword/query basis, for searchers.

This is probably the most comprehensive article regarding this so far on what the future looks like: Artificial intelligence is changing SEO faster than you think

Google's RankBrain is in the camp of the Connectionists. Connectionists believe that all our knowledge is encoded in the connections between neurons in our brain. And RankBrain’s particular strategy is what experts in the field call a back propagation technique, rebranded as "deep learning."

Connectionists claim this strategy is capable of learning anything from raw data, and therefore is also capable of ultimately automating all knowledge discovery. Google apparently believes this, too. On January 26th, 2014, Google announced it had agreed to acquire DeepMind Technologies, which was, essentially, a back propagation shop.

So when we talk about RankBrain, we now can tell people it is comprised of one particular technique (back propagation or "deep learning") on ANI (Artificial Narrow Intelligence).
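Back propagation itself is simple to demonstrate at toy scale. Below, a single sigmoid neuron learns an OR-style mapping by gradient descent; this is a bare-bones illustration of the technique the article names, not anything resembling RankBrain's actual scale or architecture, and the data and learning rate are made up.

```python
# Minimal back propagation on a single sigmoid neuron: forward pass,
# error gradient, weight update, repeat. Toy data, toy learning rate.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Learn an OR-style mapping: (inputs) -> target
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 1.0

for _ in range(2000):
    for (x1, x2), t in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)   # forward pass
        grad = (y - t) * y * (1 - y)             # backpropagated error
        w[0] -= lr * grad * x1                   # gradient descent updates
        w[1] -= lr * grad * x2
        b -= lr * grad

print(sigmoid(b), sigmoid(w[0] + w[1] + b))  # (0,0) stays low, (1,1) goes high
```

Real deep learning stacks many layers of this and propagates the error gradients backward through all of them; the mechanics per neuron are the same.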

[..]

Today’s regression analysis is seriously flawed

This is the biggest current fallacy of our industry. There have been many prognosticators every time Google’s rankings shift in a big way. Usually, without fail, a few data scientists and CTOs from well-known companies in our industry will claim they "have a reason!" for the latest Google Dance. The typical analysis consists of perusing through months of ranking data leading up to the event, then seeing how the rankings shifted across all websites of different types.

With today’s approach to regression analysis, these data scientists point to a specific type of website that has been affected (positively or negatively) and conclude with high certainty that Google’s latest algorithmic shift was attributed to a specific type of algorithm (content or backlink, et al.) that these websites shared.

However, that isn’t how Google works anymore. Google’s RankBrain, a machine learning or deep learning approach, works very differently.

Within Google, there are a number of core algorithms that exist. It is RankBrain’s job to learn what mixture of these core algorithms is best applied to each type of search results. For instance, in certain search results, RankBrain might learn that the most important signal is the META Title.

Adding more significance to the META Title matching algorithm might lead to a better searcher experience. But in another search result, this very same signal might have a horrible correlation with a good searcher experience. So in that other vertical, another algorithm, maybe PageRank, might be promoted more.

This means that, in each search result, Google has a completely different mix of algorithms. You can now see why doing regression analysis over every site, without having the context of the search result that it is in, is supremely flawed.

For these reasons, today’s regression analysis must be done by each specific search result. Stouffer recently wrote about a search modeling approach where the Google algorithmic shifts can be measured. First, you can take a snapshot of what the search engine model was calibrated to in the past for a specific keyword search. Then, re-calibrate it after a shift in rankings has been detected, revealing the delta between the two search engine model settings. Using this approach, during certain ranking shifts, you can see which particular algorithm is being promoted or demoted in its weighting.
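Stouffer's snapshot-and-delta approach reduces to a simple comparison. The signal names and weight values below are hypothetical; the point is only the mechanics of diffing two calibrations of a per-keyword model.

```python
# Sketch of the snapshot-and-delta idea: compare a keyword's modeled
# algorithm weights before and after a detected ranking shift.
# Signal names and values are hypothetical.

before = {"title_tag": 0.50, "pagerank": 0.30, "content": 0.20}
after_ = {"title_tag": 0.35, "pagerank": 0.30, "content": 0.35}

delta = {signal: round(after_[signal] - before[signal], 2) for signal in before}
print(delta)  # shows which signals were promoted or demoted for this keyword
```

Here the delta would suggest the title tag signal was demoted and content promoted - for this one search result only, which is exactly why the analysis has to be repeated per keyword.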

Having this knowledge, we can then focus on improving that particular part of SEO for sites for those unique search results. But that same approach will not (and cannot) hold for other search results. This is because RankBrain is operating on the search result (or keyword) level. It is literally customizing the algorithms for each search result.

[..]

What Google also realized is that they could teach their new deep learning system, RankBrain, what "good" sites look like, and what "bad" sites look like. Similar to how they weight algorithms differently for each search result, they also realized that each vertical had different examples of "good" and "bad" sites. This is undoubtedly because different verticals have different CMSs, different templates and different structures of data altogether.

When RankBrain operates, it is essentially learning what the correct "settings" are for each environment. As you might have guessed by now, these settings are completely dependent on the vertical on which it is operating. So, for instance, in the health industry, Google knows that a site like WebMD.com is a reputable site that they would like to have near the top of their searchable index. Anything that looks like the structure of WebMD’s site will be associated with the "good" camp. Similarly, any site that looks like the structure of a known spammy site in the health vertical will be associated with the "bad" camp.
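The "good camp"/"bad camp" grouping can be illustrated with a nearest-example classifier. The structural features and example sites below are invented for illustration; this is a conceptual sketch of grouping-by-similarity, not Google's actual method.

```python
# Toy sketch of per-vertical "good"/"bad" grouping: label a site by
# which known example its structural features most resemble.
# Feature axes (content depth, citation density, ad density) and the
# example vectors are hypothetical.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

health_examples = {
    "webmd-like (good)": (0.9, 0.8, 0.1),
    "spam-site (bad)":   (0.2, 0.0, 0.9),
}

def classify(site_features, examples):
    """Return the label of the nearest known example."""
    return min(examples, key=lambda name: distance(site_features, examples[name]))

print(classify((0.8, 0.7, 0.2), health_examples))  # -> webmd-like (good)
```

A site whose structure sits close to the vertical's reputable template lands in the "good" camp; one that sits close to a known spam template lands in the "bad" camp, without any explicit rule saying why.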

As RankBrain works to group "good" and "bad" sites together, using its deep learning capabilities, what happens if you have a site that has many different industries all rolled up into one?

First, we have to discuss a bit more detail on how exactly this deep learning works. Before grouping together sites into a "good" and "bad" bucket, RankBrain must first determine what each site’s classification is. Sites like Nike.com and WebMD.com are pretty easy. While there are many different sub-categories on each site, the general category is very straightforward. These types of sites are easily classifiable.

But what about sites that have many different categories? A good example is the How-To sites - sites that typically cover many broad categories of information. In these instances, the deep learning process breaks down. Which training data does Google use on these sites? The answer is: it can be seemingly random. It may choose one category or another. For well-known sites, like Wikipedia, Google can opt out of this classification process altogether, to ensure that the deep learning process doesn't undercut their existing search experience (aka "too big to fail").

The field of SEO will continue to become extremely technical.

But for lesser-known entities, what will happen? The answer is, "Who knows?" Presumably, this machine learning process has an automated way of classifying each site before attempting to compare it to other sites. Let’s say a How-To site looks just like WebMD’s site. Great, right?

Well, if the classification process thinks this site is about shoes, then it is going to be comparing the site to Nike’s site structure, not WebMD’s. It just might turn out that their site structure looks a lot like a spammy shoe site, as opposed to a reputable WebMD site, in which case the overly generalized site could easily be flagged as SPAM. If the How-To site had separate domains, then it would be easy to make each genre look like the best of that industry. Stay niche.

These backlinks smell fishy

Let’s take a look at how this affects backlinks. Based on the classification procedure above, it is more important than ever to stick within your "linking neighborhood," as RankBrain will know if something is different from similar backlink profiles in your vertical.

Let’s take the same example as above. Say a company has a site about shoes. We know that RankBrain’s deep learning process will attempt to compare each aspect of this site with the best and worst sites of the shoe industry. So, naturally, the backlink profile of this site will be compared to the backlink profiles of these best and worst sites.

Let’s also say that a typical reputable shoe site has backlinks from the following neighborhoods:

  • Sports
  • Health
  • Fashion
Now let’s say that the company’s SEO team decides to start pursuing backlinks from all these neighborhoods, plus a new neighborhood — from one of the CEO’s previous connections to the auto industry. They are "smart" about it as well: They construct a cross-marketing "free shoe offer for all new leases" page that is created on the auto site, which then links to their new type of shoe. Totally relevant, right?

Well, RankBrain is going to see this and notice that this backlink profile looks a lot different than the typical reputable shoe site. Worse yet, it finds that a bunch of spammy shoe sites also have a backlink profile from auto sites. Uh oh.

And just like that, without even knowing what is the "correct" backlink profile, RankBrain has sniffed out what is "good" and what is "bad" for its search engine results. The new shoe site is flagged, and their organic traffic takes a nosedive.
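The fishy-backlink comparison boils down to measuring how far a site's backlink mix sits from the vertical's typical profile. A cosine similarity over made-up neighborhood proportions illustrates the idea; the categories and numbers are invented.

```python
# Sketch: compare a site's backlink "neighborhood" mix against the
# typical reputable profile for its vertical. Proportions are made up.
import math

def cosine(a, b):
    """Cosine similarity between two category->proportion dicts."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

reputable_shoe = {"sports": 0.4, "health": 0.3, "fashion": 0.3}
our_site = {"sports": 0.3, "health": 0.2, "fashion": 0.2, "auto": 0.3}

similarity = cosine(our_site, reputable_shoe)
print(round(similarity, 3))  # lower similarity -> profile looks out of place
```

The auto-industry links drag the similarity down without any human ever ruling that "auto links are bad for shoe sites" - the profile simply no longer looks like the reputable ones in the vertical.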

Some things are certain, though:

  • Each competitive keyword environment will need to be examined on its own;
  • Most sites will need to stay niche to avoid misclassification; and
  • Each site should mimic the structure and composition of their respective top sites in that niche.
To stay within the good graces of RankBrain, your site, its structure, its backlink profile, and any other SEO signal associated with the top websites in your niche should also be reflected on your own website. Meaning if 90% of the top 10 or top 20 websites have SSL enabled by default, then your website better have SSL enabled by default. If the top sites have video channels associated with their brand OR videos embedded within their content, then your website better have the same elements.
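A crude version of that parity check is easy to automate: list the features shared by most top results and flag the ones your site lacks. The feature data below is hypothetical.

```python
# Toy parity check: which features do most top results share that my
# site lacks? The feature flags here are invented for illustration.

top_sites = [
    {"ssl": True, "video": True},
    {"ssl": True, "video": True},
    {"ssl": True, "video": False},
    {"ssl": True, "video": True},
]
my_site = {"ssl": False, "video": True}

def gaps(my_site, top_sites, threshold=0.7):
    """Return features most top sites have that my_site lacks."""
    n = len(top_sites)
    return [f for f in my_site
            if sum(s.get(f, False) for s in top_sites) / n >= threshold
            and not my_site[f]]

print(gaps(my_site, top_sites))  # -> ['ssl']
```

In this toy run every top site has SSL and my site doesn't, so SSL surfaces as the gap to close first.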

If you're an outlier and want to rank your WordPress Twenty Twelve-themed site with no images, no logo, and half-assed social media icons - you're not going to be around for long in these SERPs.

As @Broland said - "survival of the fittest!"

As I continue my research and findings on RankBrain, I'll keep updating this thread with the latest.
 
Not to forget, they are hiring millions of low-wage workers (who are graduates, for what it's worth) in India, and maybe in some other countries I'm not aware of. They are not hiring them directly; they are giving out contracts for a lot of low-tech manual work that is being used as input to their knowledge graphs. Damn, they are relentless. I am not sure they will always be the winner in this war for search in the long run, but they are definitely not complacent.

You can race against the algorithm, but I hate these manual workers. And their process is so fucking stringent for even the smallest of tasks.
 
Good read...I'll keep on milking the foreign markets until they catch up.
 
You know, what's funny is, the more complex and sophisticated these things become, the simpler in concept they sort of become as well. For example, what RankBrain is doing, in terms of a general concept, is actually nothing new in the machine learning world. As evidenced by the mention that it is not a "real time" algorithm, it sounds like it is pretty much a supervised learning model in some manner or another. In effect: prime the training data, run the model, assess the results, modify variables as necessary, rerun... and eventually push your optimized variables live when you get close to the desired output. This is nothing new, even if the implementation is highly complex.

I'm reminded of a quote (albeit taken out of its original context) from Einstein:

A new type of thinking is essential if mankind is to survive and move toward higher levels.
-Einstein

In other words, in an ever-evolving age of machine learning, digital marketers and particularly SEOs will need to increasingly learn to take advantage of machine learning and big data tools to step up their game. For the average niche, this may not be necessary, and that level of analysis might actually be detrimental to forward progress.

For highly competitive niches, I can see it becoming increasingly more necessary (still keep an eye towards efficient analysis, don't get mired in the details).

In world class market segments ("niche" implies small-time), it has already been necessary for years. For example, almost all major websites that are largely search-oriented (think travel sites, major retailers/ecommerce sites, publicly traded companies) utilize search technologies whose ranking variables can be boosted or suppressed. Flipping those "switches" at scale is usually backed by engineering teams running one or more ML training models across large data sets to verify the outcomes beforehand, because those outcomes represent serious revenue (like GDP-of-small-countries serious).
 
I don't necessarily think it's about SEOs believing they are "smarter than Google's PhDs" - at least not on an individual level. However you look at it, the job complexity facing the PhD vastly exceeds ranking a website or two in a medium-competition niche. It's much harder to create a catch-all net than to slip through the cracks with your "best practices" and semi-quality links. Obviously, if it were one-on-one, 99.9% of all SEOs would lose, but it's not.

RE RankBrain - while it's interesting and all, it just sounds like business as usual to me. Isn't this only really going to affect spammy sites that didn't deserve to be there in the first place?

If you're competing in a niche where title tags are the dog's bollocks, does that mean that strong links from authorities won't benefit you, or even worse? I think not. And your title tag game should always be on point regardless -- it's not like it should ever be one or the other.
 
If you're competing in a niche where title tags are the dog's bollocks, does that mean that strong links from authorities won't benefit you, or even worse? I think not. And your title tag game should always be on point regardless -- it's not like it should ever be one or the other.

You'd be surprised at what I've seen SEOs try to pass off as "websites". People come to me looking for help and guidance then show me websites where they didn't even bother to update the social links in their theme's footer from the web developer's links to their own. They create twitter accounts with no image header, and FB pages with no headers as well - all 100% lazy style and can't figure out why no one takes them seriously.

I've seen people half-ass the contact us page, and I've seen links in navigation menus that lead to blank pages. I've seen people with stock images and the default "Lorem Ipsum" text still on their homepage who then try to do "outreach" to partner up with people in their industry and can't figure out why their "outreach" campaign sucks.

There is an attention to detail that the vast majority of people jumping into the "SEO" industry don't seem to comprehend. It's like a scenario where, if they were visiting their own website, they wouldn't trust it for a second and definitely would not hand over their credit card information - because it looks half-assed. But when it comes to their own website, they half-ass everything and can't figure out why no one takes them seriously.

Someone should start a BST just to have people submit their website and have someone take a look and point out ALL the problems that need to be fixed to look "trustworthy". Because it's very, very obvious when you look at most of these sites from an outside perspective, but there seem to be blind spots when you look with your own eyes.
 
You'd be surprised at what I've seen SEOs try to pass off as "websites"...

LOL to paraphrase a famous quote, but with a different spin:

In times of lackluster work ethic, performing due diligence is a revolutionary thing.
-George Orwell

Pretty sad when the bar for becoming a "winner" can be as low as just filling out one's damn profiles and basic website elements consistently. :wink: Oh well, just makes it easier for those that care and those that try.
 