Google Says Author Bylines Don't Help You Rank & Google Doesn't Check Author Credentials

Ryuzaki

Source: https://www.seroundtable.com/google-author-bylines-ranking-36684.html

Danny Sullivan is Google's current Search Liaison and is responding to another article by The Verge, whose first piece had a lot of success and rustled many jimmies. The quickest summary, in my paraphrasing: "Author Bylines and Author Bios don't help you rank better, and Google does NOT check into author credentials."

I would venture to say that only half of that statement is actually true.

Much in the past has implied that authorship matters (in the sense of at least having an author instead of no author), and now the story is shifting. We all thought it was reasonable that Google might be constantly building out an AuthorRank signal, since they tried exactly that with Google+ authorship and even put author photos in the SERPs for a while. And now the story is "it's good for users so you should do it, but we don't use any of it".

You can read the tweets compiled in the link above, but I've compressed it all here.

I would argue, and this is real-world, albeit anecdotal, evidence, that things like Bylines and Bios absolutely make a visible, though not gigantic, difference in your search visibility. Either they're lying outright, or that "fact" changed when AI content arrived and they're now telling the "truth" with disregard to the recent past.

I was never convinced Google was building out author profiles on everyone in the sense that they were confirming with universities through some API that degrees were legitimate or whatever. But they're absolutely building out a profile on an author in the sense of that author being a typical "entity". In the same way they'd build out a profile on a brand, they do the same for individuals. That's how we end up with Knowledge Panels and "More About This Page From Around the Web" EEAT information.
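
For anyone wanting to see what that entity data looks like from the crawler's side, here's a minimal sketch of the schema.org author markup most sites already expose in their bylines and bios. Every name, URL, and credential below is a made-up example; the point is just that the byline is machine-readable entity data, not a claim about how Google weights it.

```python
import json

# Hypothetical article + author markup. All values are invented examples.
author_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example article headline",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",  # the byline itself
        "url": "https://example.com/authors/jane-doe",
        "sameAs": [  # profile links that tie the name to one entity
            "https://twitter.com/janedoe",
            "https://www.linkedin.com/in/janedoe",
        ],
        "jobTitle": "Registered Dietitian",  # a stated, unverified credential
    },
}

# Emit the JSON-LD block you'd drop into the page <head>.
print('<script type="application/ld+json">')
print(json.dumps(author_markup, indent=2))
print("</script>")
```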

It's up to each reader to decide what they're implying. If they're implying there are no authorship "profiles", that's completely disingenuous. If you take it at face value that they aren't CONFIRMING credentials, then sure. But I bet you they're eating up stated credentials, hoping people aren't lying, and applying some kind of AuthorTrust-type metric as a threshold for how far to trust those claims.
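
To make that threshold idea concrete, and this is entirely my own sketch of the speculation, nothing Google has ever described, it could be as simple as counting independent corroboration for a stated credential:

```python
# Hypothetical "AuthorTrust" threshold. The sources, weights, and
# threshold are invented for illustration; Google has described no
# such mechanism.

def author_trust(stated_credential: str, corroborating_sources: list[str]) -> float:
    """Trust a stated credential more the more independent places repeat it."""
    # Note: the credential text itself is never verified -- that's the point.
    return min(1.0, 0.2 + 0.2 * len(set(corroborating_sources)))

TRUST_THRESHOLD = 0.6

sources = ["linkedin.com/in/janedoe", "example.edu/staff/jane-doe"]
score = author_trust("Registered Dietitian", sources)
print(score, "trusted" if score >= TRUST_THRESHOLD else "unverified")
```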

What say you?
 
Weasel speak.
They don't directly check. They just use models that incorporate topical relationships in a way that authorship bylines on a topic will 100% affect. It's basically impossible to build an index that doesn't do this, and downright stupid to even try.
The same thing that gets a brand name into a topical category will, by default, do that at scale with author identities. Especially for more uncommon names with fewer word relationships.
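
A toy illustration of the point: any index built on word co-occurrence ends up tying an author name to the topics it appears alongside, with no credential-checking step anywhere. The corpus below is made up.

```python
from itertools import combinations
from collections import Counter

# Made-up page snippets with bylines, just to show the mechanism.
docs = [
    "diabetes insulin blood sugar by jane_doe",
    "insulin resistance diet tips by jane_doe",
    "blood sugar monitoring guide by jane_doe",
    "torque horsepower engine review by john_smith",
    "engine oil change guide by john_smith",
]

# Count how often each pair of tokens appears in the same document.
cooc = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc.split())), 2):
        cooc[(a, b)] += 1

# The index "knows" jane_doe belongs with the diabetes topic:
print(cooc[("insulin", "jane_doe")])  # 2 -> strong author/topic link
print(cooc[("engine", "jane_doe")])   # 0 -> no link to the car topic
```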
 
Either they're lying outright, or that "fact" changed when AI content arrived and they're now telling the "truth" with disregard to the recent past.

I think the answer is simpler: much of their algo is a machine-learning black box. Just as ChatGPT doesn't have a bunch of variables you can tweak by hand, neither does the Google algo. It's a black box trained on many different website factors, and the result isn't something Google itself completely understands.

What I speculate, and what I would do, is take the Google Quality Rater Guidelines and have a bunch of people apply them to thousands and thousands of SERPs. Then have the bot download HTML copies of all those pages like it already does, pull in all the off-page factors, feed everything to your basic big-brain machine-learning algo, and let it work.
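
If you wanted to toy with the idea yourself, that pipeline is just a supervised-learning setup: page features in, rater quality labels out. Here's a minimal scikit-learn sketch; the features are plausible stand-ins and the data is random noise, deliberately rigged so the byline/entity column is predictive.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Invented per-page features standing in for on/off-page signals:
# [word_count, referring_domains, has_byline,
#  byline_matches_known_entity, is_https]
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(200, 5000, 1000),
    rng.integers(0, 500, 1000),
    rng.integers(0, 2, 1000),
    rng.integers(0, 2, 1000),
    rng.integers(0, 2, 1000),
])
# Fake "quality rater" labels, secretly driven by column 3 (the
# entity-matched byline) to mimic the correlation discussed below.
y = (X[:, 3] + rng.random(1000) > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_.round(3))
# The model leans on column 3 without anyone ever telling it
# "bylines matter" -- a learned correlation, not a hand-coded factor.
```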

We know Google used to say they had hundreds of ranking factors, but more recently they say they have far fewer. Why? Because they use AI and RankBrain.

The end result might be that the machine-learning algo applies a positive correlation to a "byline that matches a known entity", so it's not the byline in itself that Google grades, but its black-box correlation with quality.

In addition we have RankBrain, a machine-learning algo that tracks how pages perform on the first page. Have a poor user experience and eventually you're moved down. This is an effective semi-automatic way to deal with spam and hacks.
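
Purely to illustrate that loop, and hedged heavily since Google has never confirmed RankBrain works this way, here's a toy re-ranker where pages that underperform their slot's expected engagement slide down. Every signal, baseline, and scoring rule is invented.

```python
from dataclasses import dataclass

@dataclass
class PageStats:
    url: str
    position: int     # current slot on page one
    ctr: float        # observed click-through rate
    pogo_rate: float  # share of clicks that bounce straight back to the SERP

EXPECTED_CTR = {1: 0.30, 2: 0.15, 3: 0.10}  # made-up baseline per slot

def rerank(pages: list[PageStats]) -> list[PageStats]:
    """Re-order pages by engagement relative to their slot's baseline."""
    def score(p: PageStats) -> float:
        # Over/under-performance vs. the slot baseline, minus a
        # pogo-sticking penalty. Higher is better.
        return p.ctr / EXPECTED_CTR[p.position] - p.pogo_rate
    return sorted(pages, key=score, reverse=True)

pages = [
    PageStats("https://example.com/a", 1, ctr=0.02, pogo_rate=0.7),  # poor UX
    PageStats("https://example.com/b", 2, ctr=0.20, pogo_rate=0.1),  # good UX
]
for new_pos, p in enumerate(rerank(pages), start=1):
    print(new_pos, p.url)  # the poor-UX page slides below the good one
```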

These two combined also suggest why most updates can no longer be pinpointed: no one can tell exactly what they're targeting. What could happen is that the machine-learning algo had time to crunch the numbers, or rather, Google devs told it to focus more or less on certain things and it made new correlations. That's what people need to understand: it's correlations, not simple causality.

However, all of this doesn't really explain what The Verge is complaining about. I've written about this before, but I'm happy to do so again.

The poor quality of Google and the homogeneous results are a direct result of Sundar Pichai deciding to manipulate the SERPs with the stated goal of never allowing Donald Trump to be president again.

Since I'm up there in age with most of you, I remember the internet of the early 2010s, which was a fucking wild west that primarily benefited bloggers, the people who contributed actual independent research and content creation. These niche bloggers ranked very, very well in Google.

They've all disappeared since 2016. Google wanted to hit types like Alex Jones, but in the process destroyed all the niche blogs and forums that were very active, very alive.

Nowadays the SERPs are sterile and useless, just a list of 50 pre-approved websites for most searches. The chance of anyone stumbling upon radical content from search is very low, but on the other hand we now have Andrew Tate. Google made the web dumber while making it safer.
 
I would like to apply this to Google's spaghetti-code algo as well:

In a recent TED Talk about AI, Zeynep Tufekci points out that not even the people behind Facebook's algorithms truly understand them:

We no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands anymore how exactly it's operating any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore, we're growing intelligence that we don't truly understand.
 
Marie Haynes, known for her focus on EEAT, talks about bylines in her most recent podcast:

https://community.mariehaynes.com/posts/48458568

She goes into a bunch of stuff, and I think it's an informative episode overall. She mostly agrees with us here: bylines increase trust and other user signals, which in turn might feed into RankBrain (or work in other mysterious ways) as trust.

There's some info in the podcast that goes deeper into the algo and RankBrain; worth a listen.
 