VIEW: We're a decade from authority signals being deprecated in favour of 'quality content'

Steve Brownlie

Building Links
BuSo Pro
Digital Strategist
Joined
Nov 16, 2015
Messages
344
Likes
643
Degree
2
Back in the golden era of SEO, before anyone considered penalties or imagined that anyone might actually work at Google, this was 'quality content':

<h1>Best Injury Lawyer Wisconsin</h1>

If you're looking for best injury lawyer wisconsin, then you have come to the right place because accident lawyer wisconsin is ready to handle your case efficiently and quickly. Whether you've been hit by a bus wisconsin, or need workplace accident lawyer wisconsin we're ready to help on 555-1234-567.
....
Then you'd better have pages for:

* every variant on that query, e.g. 'accident attorney wisconsin', etc.

* every town, county, and residential area in the state

and so on...



---
Some of you are lucky and don't even remember those times. But back then it worked, and it worked well.

Now we're in an era where the same shenanigans are going on. Sure, we no longer have anyone pushing people to bold, italicise, and underline 57 keywords per 500-word filler article written for 1c/word.

But what we do have is 'AI tools' that tell us the TF-IDF of phrases on competing pages, or which 'semantically related keywords' need to be stuffed in to make a 'rounded' article on the topic, etc.
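For anyone who hasn't looked under the hood of those tools: the core computation is dead simple. Here's a minimal TF-IDF sketch in Python; the 'competing pages' corpus and the scored term are invented for illustration, and real tools use fancier smoothing and phrase extraction on top of the same idea:

```python
import math
from collections import Counter

def tf_idf(term, doc, corpus):
    """Score `term` in `doc`: term frequency times inverse document frequency."""
    words = doc.lower().split()
    tf = Counter(words)[term] / len(words)
    docs_with_term = sum(1 for d in corpus if term in d.lower().split())
    idf = math.log(len(corpus) / (1 + docs_with_term))
    return tf * idf

# Toy 'competing pages', invented for illustration.
corpus = [
    "best injury lawyer wisconsin handles your case",
    "our wisconsin lawyer wins injury cases fast",
    "wisconsin weather forecast for the weekend",
]

score = tf_idf("case", corpus[0], corpus)  # positive: 'case' is rare in the corpus
```

The tool then tells you which high-scoring terms your page is 'missing' relative to the pages already ranking, and the writer dutifully stuffs them in.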

So thousands of SEOs and internet marketers are duly creating documents which meet these criteria. Sure, they hire better writers than we used to hire back in the day, and sure, it's probably (vaguely) more interesting to the readers. Heck, sometimes it's even written by a 'subject matter expert', e.g. doctors writing for drug treatment centers.

It's just the next evolution in the game - stuffing a different type of keyword (semantically related phrases and so on).
---
"Fair enough but what the hell is the point of this?"

Ok I've rambled long enough - here's the point.

How does Google, or any current machine-learning tool, tell the difference between these levels of content if they all contain exactly the right words and are written by a competent writer?

* A review by a writer who's never seen the product, let alone used it, but has written about all the correct features of such a product and added some apparently real opinions about its shortcomings/benefits.

* A review by a writer who isn't much of an expert in the topic but is an expert at writing about the topic, and has actually tried the product out, so has both the right words and some slightly more accurate (though not always insightful or deep) recommendations and thoughts.

* A review by a genuine expert whose opinion on the product touches on tiny details the others miss and really adds value for the end reader, but doesn't involve significantly different words.

The truth is they can't. Imagine the computing power required even to fact-check a basic article, let alone decide whether those facts were pertinent or useful; it's beyond ludicrous.

So right now you can probably get away with the second one pretty damned well...

---

Let's look at an example in an area where I'm an expert - Travel and reward miles:

https://marginallycoherent.com/one-way-ticket-to-hell-how-points-and-miles-blogs-misrepresent-the-value-of-miles-to-sell-credit-de6de45d995e

I wrote the above last week. The tl;dr is that points-and-miles blogs use ridiculous examples to make credit card miles seem more valuable, for example by picking obscure one-way redemptions that price out high in cash, then saying 'so I got x¢/mile, and my site only values them at y¢/mile, so I got 8x value'.
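To make that shell game concrete, here's the arithmetic with invented numbers (nothing below comes from a real redemption):

```python
# The trick: pick an obscure one-way that prices out high in cash.
cash_price = 1200.0   # cash fare for the cherry-picked one-way, in dollars
miles_spent = 60000   # miles redeemed for that same seat

redeemed_cpm = cash_price / miles_spent * 100   # cents per mile 'achieved'
site_valuation_cpm = 0.25                        # the blog's own stated value per mile

multiplier = redeemed_cpm / site_valuation_cpm   # the headline claim: '8x value!'
```

The headline number is only as honest as the cash fare chosen for the comparison; nobody was ever going to pay it, so the 8x is theatre.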

Imagine an AI that has to fact-check all of these posts and somehow sort them. First, nobody has the same needs or takes the same redemptions, so it has to work out the average behaviour of miles-and-points users and establish a value based on that in order to fact-check the y¢/mile comparison number.

Then it has to go to Google Flights and fact-check the prices on the flights.

Then it has to understand the content sufficiently to analyse the point being made, gather the various articles discussing a particular type of point or a particular type of redemption, and sort them by accuracy.

And it still has the wrong order at this point...

I think we're a million miles from anything any search engine currently does, and we still have further to go. The typical use of points is a terrible benchmark, because the typical person is an idiot. As Churchill pointed out, the average voter isn't too smart...



So how do we establish the y¢/mile comparison? The base value of the points (the immediate cash equivalent, e.g. cashing in Membership Rewards for cash on the Amex portal)? The ideal value? The typical value achieved by a savvy customer? Let's say somewhere in the middle; so work that out, keep it updated, and constantly reassess pages based on their accuracy against this calculation..?
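Even 'somewhere in the middle' smuggles in a pile of judgment calls. A naive sketch of how a fact-checker might blend those three figures into one baseline (every value and weight here is invented for illustration):

```python
# Three candidate cents-per-mile figures a fact-checker would have to maintain.
# All numbers invented for illustration.
cash_out_cpm = 0.6   # guaranteed floor, e.g. converting points to cash in the issuer's portal
typical_cpm = 1.0    # what the average redemption actually yields
savvy_cpm = 1.8      # what a well-informed customer routinely achieves

# Arbitrary blend: who decides these weights, and how often do they get refreshed?
weights = {"cash_out": 0.2, "typical": 0.5, "savvy": 0.3}

baseline_cpm = (weights["cash_out"] * cash_out_cpm
                + weights["typical"] * typical_cpm
                + weights["savvy"] * savvy_cpm)
```

The point isn't the formula; it's that every input needs continuous, programme-by-programme research just to score one number in one blog post.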

How do we assess the real 'deal' being achieved, or the 'value of the tip' being shared? Look at the likelihood anyone actually wants to book that trip? Maybe actual travellers always book a return, not a one way, when they go from LAX to ICN?

In that case we need to understand that return trips need to be looked at for points and points + 1 way cash and so on...

Then what about booking timescales? Articles that compare last-minute pricing to 'realistic' pricing where people plan their trip in advance, etc...

It's an absolute minefield assessing who has a good tip/valuable advice even for such a simple question.

It's not going to happen soon. If anything, Google is moving further towards authority-as-trust and away from the words on the page. You can tell whenever you try to find anything obscure these days.

https://twitter.com/tehseowner/status/1121661950708989952
https://twitter.com/dr_pete/status/1112767864442900487



So ‘finding things’ is becoming something Google is bad at, because short-tail authority articles on sites with tons of links are crowding out page one completely, even for some pretty long-tail searches.

Authority of your site is still the only reliable thing they have to go on because right now everything else is too damned hard.

There are contributing factors to that authority, and endless algorithmic points tables - like having real doctors with real links to their site (that conveniently also links to your article on their ‘my contributions’ page) - but right now the only way Google can sort ‘maybe this person is reviewing a meal delivery service that delivers dog food curry’ vs ‘meh, at least it’s a real review of services that deliver human food’ is by putting PCMag first.

Unless AI has a huge leap forward we’re in this space for 10 years where authority is king.

Go build some authority.

 
Where does Intent factor into this?
It does in fact seem like Google is quite good at intent. Too good if you ask me. Why does Google know what I want to type before I do it?
I know why, of course: because they track the sh*t out of everyone. Sometimes I watch a video, and when I go to google something related, Google already knows, even if nothing about that search was semantically present on YouTube.
Authority and Intent, right? I find it increasingly easy to rank for "cheap" keywords just by including a feed aggregator search engine. That has to be intent, right? Google knows that "cheap" leads to multiple pages being opened.
 

Steve Brownlie

> Where does Intent factor into this?
I think that's a great point, but probably a separate discussion. Sometimes Google does intent like witchcraft, as you point out. Other times it has no fkn idea what I want... and perhaps it has no idea what anyone wants in those situations. I think the deeper you go into the 'hard question' space, the more it doesn't know, which I guess is half my point in the OP.

Thanks for jumping in with some thoughts - I'd definitely like to hear more about intent and how you're exploiting it in the wild. If you fire up a thread, let me know and I'll check it out.
 
You make some good points. It'd be a big feat if AI managed to reach that stage of analysing content in the near future. Especially when you take breaking news/trends into account: how would it accurately return those if the dataset the AI was using wasn't mature and detailed enough to give the results users expect?

> Sometimes Google does intent like witchcraft as you point out. Other times it has no fkn idea what I want... and perhaps it has no idea what anyone wants in those situations.
I am seeing the latter more and more. I think some of it is down to changes Google is making to try and keep searchers in the Google ecosystem.

Like, some eCommerce-intent searches have more and more PAA (People Also Ask) boxes. I don't want to see questions like 'what is the best xyz' when I'm searching a brand model. I don't even want to see it when I'm typing in a keyword like '32 inch PC monitor'; if I did, I would type 'what is the best 32 inch PC monitor'. It almost makes me feel like it's easier just to go on eBay or Amazon. Guess I'm looking at that through my internet marketing glasses, though.
 

Ryuzaki

女性以上のお金
Staff member
BuSo Pro
Digital Strategist
In addition, Google couldn't possibly ever become an algorithm that looks at content only. That's not even to mention the issue of canonicals, syndication, and all the other ways of knowing where content originated.

I'm talking about scenarios where I could build 99 pages of the foulest, nastiest, trashiest content ever, but then produce the perfect piece of content for a high-volume phrase that innocent people look at. Is Google going to rank me for that, on the same site as the nastiness, just because it's the best content at the page level?

They'll have to take into account domain-level relevance, they'll have to take into account brand (authority) metrics and signals, all to avoid issues like I described above. A lot of that relevance comes from external sources, so they'll have to look at links and the content on other sites to determine if you're a relevant and trusted source.

Even with an omnipotent, instantaneous AI that can comprehend the entire internet at the level Steve is talking about (understanding who's faking the funk or not), it'll still want to make sure it's not sending you to a garbage web page. Domain considerations will always be at play, and that means authority metrics will too, in my current opinion that I formed as I typed this, live on the spot.
 
You know what another word for authority is? Trust.

Do you know what Youtubers, instagrammers, influencers sell? Nothing but Trust.

Have you wondered why YouTube spends millions of dollars automatically translating YouTube videos? Syntax, to find context.

Have you ever wondered why Logan Paul's Suicide Forest video was trending on page one of YouTube? Because trusted authorities covered it.

Are natural language processing and current machine learning technologies able to identify meaning from text and find relevant context from another source of text? Easily.

Everything is already experimentally available. Google has a known history of giving trusted authorities higher weight in what they say (trust), and Google is also known for experimenting heavily with new algorithms to rank or derank content.

Just look at the progress of OpenAI's GPT-2 for text generation. Checking facts is much easier, since facts have binary sides; it just needs enough similar context from trusted sources to base it on. Does it mean diverse/unique opinion gets pushed aside? Probably - but not surprisingly, given the history of everything from what we eat to what we read and what we listen to.
 

Steve Brownlie

> Does it mean diverse/unique opinion gets pushed aside, probably- but not surprisingly given the history of everything from what we eat to what we read and what we listen to.
Very good point. I think that 'effect' is what I'm seeing, and what SEO Twitter is seeing, when we search long tails that Google was traditionally good at parsing and digging stuff up on: now we just see 'authority articles' that deserve to rank for only two of the words we typed, as if the rest were ignored. Imagine the same situation for a question-type query about something contentious. The articles attempting to answer it wouldn't even be on page one, because 'general information' on 'authority/trusted sites' would have it locked down based on 2-3 words of the query.
 

Wills

BuSo Pro
> Does it mean diverse/unique opinion gets pushed aside, probably- but not surprisingly given the history of everything from what we eat to what we read and what we listen to.
This is a scary thought, when looking at the medium to long-term implications. I've noticed it with health searches since the medic update. I always look for alternatives to the mainstream U.S. medical recommendations and they've gotten harder to find. I can usually find them with more specific searches, but will the average person try past the first or second search?

Google is the gateway to such a huge percentage of the world's information. What does it say for the future of information availability if it is bent on homogenizing/dumbing down/unifying the search results to present versions of a very similar "widely agreed-upon" perspective on a given query?
 

Steve Brownlie

@GarrettGraff just shared this with me to check out - https://www.mytrafficresearch.com/learn/june-2019-google-core-update - the tl;dr is similar to what I said in the OP, but it's based on tons of data from the core update rather than just my logical deductions:

TL;DR: The core update may have increased the impact of domain authority and high-quality links, and reduced the impact of over-optimised content with perfect titles, keyword usage, etc. (all the things that are getting easier to game thanks to on-page tools). He goes on to conclude:

> How else can you evaluate what is credible and what is not when everyone is producing similar content?
>
> What makes the medical researcher's website more credible than the mommy blogger when it comes to giving medical advice?
>
> Until artificial intelligence becomes so powerful that it can discern lies, false propaganda and inconsistencies within content, links remain one of the best methods to convey trust.
I'm not saying we're both definitely right; it just makes logical sense that, with some parts of the algorithm becoming easier to game (thanks to tools) and others becoming harder (thanks to Google killing the effectiveness of easy automated link-building relative to years ago), we'd see a slight pendulum swing in that direction.