Google Algorithm Updates - 2023 Ongoing Discussion

I read that post yesterday. It's all massaging the data. They create thresholds for everything, starting with "how much of a decrease" counts as a decrease, and then there's the most important part:

From there, we filtered out sites making under $100 per week in gross revenue, not because these sites aren’t important, but because comparing a 50% decrease in revenue on a site making $100 is vastly different from a 50% decrease on a site making $10,000.
That is how they pulled off this massive lie. They cut out nearly every decreased site they could, claimed that relatively few decreased, and then the best part is at the end:

Now is not the time to adjust ad density settings because ad spend is ramping up. And when you change something like the number of ads you’re serving — especially if you’re lowering your density because someone without data claims too many ads are the cause of your troubles — you’re hurting your earning potential from all the other traffic sources out there.
Now, I'm in 100% agreement with them. Their ad densities are fully in line with Google's rules even at "max" levels allowed. But clearly they've taken a massive beating and are asking the relatively few remaining winners not to follow the rumor mill and reduce Mediavine's revenues even further.

It's pretty obviously a PR spin and an attempt to stop the bleeding.
 
From there, we filtered out sites making under $100 per week in gross revenue, not because these sites aren’t important, but because comparing a 50% decrease in revenue on a site making $100 is vastly different from a 50% decrease on a site making $10,000.
And for anyone that doesn't know why this is up for scrutiny, they don't follow up with how many sites were excluded from the original 10,302.

They say
  • 607 were losers (5.8%)
  • 1,170 were winners (11.3%)
But who knows how many sites they excluded? It could have been 8,525, which changes those numbers to
  • 607 were losers (34%)
  • 1,170 were winners (66%)
Excluding that one piece of data was intentional. Filter out the undesirables, then apply that number to the whole.
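
To sanity-check that rebalancing, here's a quick Python sketch. It assumes, like the post above, that every site not counted as a winner or loser was filtered out; the 10,302 / 607 / 1,170 figures come straight from their post.

```python
# Rebalance the reported counts over only the sites that survived the
# $100/week revenue filter. Assumption (from this thread, not from
# Mediavine): everything not counted as a winner or loser was filtered out.
total_sites = 10_302            # sites they run ads on, per their own post
losers, winners = 607, 1_170    # their reported counts

remaining = losers + winners                 # 1,777 sites left after the filter
excluded = total_sites - remaining           # 8,525 sites filtered out

print(f"excluded: {excluded:,}")
print(f"losers:  {losers / remaining:.0%}")  # ~34%
print(f"winners: {winners / remaining:.0%}") # ~66%
```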

I'm not saying they're wrong for doing it; I'd do the same. I'm just saying we're in the business of manipulating the internet. It's good to recognize when someone else is doing so.

Women lie, men lie, numbers don't lie but you can manipulate numbers to tell the story you want to tell.
 
They didn't cut back the SERPs.
They removed a bunch of the answer box shit.
I would think that means there is more traffic to go around.

buttttttttt
The bitching from the mass-content, "I write tons of articles" brigade is insane.
Is it possible they just rerouted traffic to bigger keywords and videos, and away from the "MOAR words" type of website, in some sneaky Google way?
 
607 were losers (34%)
1,170 were winners (66%)
This is a generous rendition of the numbers, too. To copy and paste from Boy's post, they claimed:
  • 607 were losers (5.8%)
  • 1,170 were winners (11.3%)
This is after they conveniently ignored 8,525 sites. We know this because the total number of sites they run ads on is 10,302, by their own admission. That's 10,302 minus 607 minus 1,170.

I say Boy's numbers above are generous because it simply rebalances the percentages while only looking at the remaining sites they didn't filter (those making $100 a week or more).

I think it's very safe to assume that 50% of the sites they ignored got hit. I think it's safe to say 75% or more got hit, which is why they chose to ignore them. Let's run the numbers both ways:

Assuming 50% of the ignored sites were negatively impacted:
  • 607 + 4,263 sites were losers (47.3%)
  • 1,170 + 4,262 sites were winners (52.7%)
So even giving them the most gracious benefit of the doubt, the numbers shift drastically. Let's be more realistic and look at the "75% of ignored sites took a hit" angle:
  • 607 + 6,394 sites were losers (68.0%)
  • 1,170 + 2,131 sites were winners (32.0%)
Now we're coming more into alignment with reality. Since we know Google tweaked branding signals, all the authority metrics like links, and pushed EEAT things like (presumably) having an authenticated GMB profile... the percentage of sites hit in their low-authority, ignored bundle is likely higher than 75%.

Going off this mostly-conjecture exercise, one could imagine a scenario where 75% of their 10,302 sites took a hit and 25% did better.
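
Here's the same back-of-the-envelope math as a quick script, so you can plug in your own guess for how many of the ignored sites got hit. The 10,302 / 607 / 1,170 figures are from their post; the 50% and 75% hit rates are pure conjecture on this thread's part.

```python
import math

# Re-run the loser/winner split over all 10,302 sites, assuming some share
# of the 8,525 filtered-out sites were actually hit by the update.
total = 10_302
reported_losers, reported_winners = 607, 1_170
ignored = total - reported_losers - reported_winners   # 8,525

for hit_share in (0.50, 0.75):
    extra_losers = math.ceil(ignored * hit_share)       # round up, as above
    losers = reported_losers + extra_losers
    winners = total - losers
    print(f"{hit_share:.0%} of ignored sites hit -> "
          f"losers {losers / total:.1%}, winners {winners / total:.1%}")

# 50% of ignored sites hit -> losers 47.3%, winners 52.7%
# 75% of ignored sites hit -> losers 68.0%, winners 32.0%
```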

This is all without considering the idea that they bunched sites that lost "only 5% to 20%" of traffic into the winners' circle. It'd be real scummy to do this and I doubt they did, but if you read the article, they made a point of creating five classifications and then never mention them again. Maybe they created those classifications just for their own data and insight, but then why bring them up in the article at all if they weren't used in any further calculations?

Whatever! We can't be certain of anything more than that they bald-faced lied to some degree and expected everyone to just not say anything about it, which is pretty much par for the course these days in all sectors of life.
 
Now, I'm in 100% agreement with them. Their ad densities are fully in line with Google's rules even at "max" levels allowed
Google's SE and ad sales teams are completely separate. Sales will push for more ads, SE team will nuke you for bad experience.
 
Google's SE and ad sales teams are completely separate. Sales will push for more ads, SE team will nuke you for bad experience.
These are guidelines the search engine team follows, set by the Interactive Advertising Bureau and the Coalition for Better Ads, which Google is a part of. It's an agreement on what constitutes an acceptable experience for internet users, not for advertisers or publishers.
 
These are guidelines the search engine team follows, set by the Interactive Advertising Bureau and the Coalition for Better Ads, which Google is a part of. It's an agreement on what constitutes an acceptable experience for internet users, not for advertisers or publishers.
This. Here's the direct reply I got from Ezoic when I asked what they thought of ad density effects. Decent reply:

"As for SEO and how ads play a role, I can assure you that running ads on a site does not impact SEO for the most part. When it does negatively impact a site, ad density (the proportion of space ads take up compared to content) exceeds 30%. For example, on xxxxx.com the average ad density for the month of September was 6.85%, well below what would be flagged and impacted by Google. At the end of the day if sites were penalized for running ads that would run counter to Google’s business plan. Sites without ads would be promoted to the top of SERPs and users would go predominantly to those sites, thus cutting deeply into Google’s ad revenue generation. Moderation is key."

The site of mine they say has a density of 7% looks like Times Square on desktop: ads absolutely everywhere, including interstitials/popups. In my opinion the ads themselves may not be the factor, but the negative user signals they generate can be. Also long-term link signals: most people don't want to naturally link to a site that looks like that.
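
For anyone wondering how that density figure is even measured: it's roughly the share of vertical space ads take up relative to the main content. A minimal sketch below, with made-up element heights; the 30% line is my reading of the Better Ads Standards' mobile ad-density threshold.

```python
# Ad density as the share of vertical space ads occupy relative to the
# main content. All heights below are made up purely for illustration.
ad_heights_px = [250, 300, 250, 600]   # hypothetical ad slot heights
content_height_px = 9_000              # hypothetical main-content height

ad_density = sum(ad_heights_px) / content_height_px
print(f"ad density: {ad_density:.1%}")           # ~15.6% in this example
print("over the 30% line:", ad_density > 0.30)   # False here
```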
 
Our friends over at Forbes use Google Ads, and yesterday they had one of those ads that drops down plus a popup about some BS at the bottom. There was like 20% of the screen, if that, left to read.

It's not just Mediavine.

What it comes down to is ad placement. If I see a site that has massive ad blocks above the title or several ad blocks early on in the content, then I stop reading. It's too much of a hassle.

I know people do it like this because it makes money, but it is objectively a bad user experience.
 
I know people do it like this because it makes money, but it is objectively a bad user experience.
It is, but it's also the most effective way to monetize large content sites that cover a fairly wide range of topics.

I don't think the sites getting dinged (including mine) in the recent updates were hit because of the ads, but because the majority of content on those sites was made specifically for search engine traffic.

It just happens that those are the sites that are heavy on ads.
 
but because the majority of content on those sites was made specifically for search engine traffic.
I'd bet a non-negligible portion of my net worth that "content made for humans" simply means "we don't need to depend on on-page SEO signals nearly as much anymore and are going to actively attack sites that do it past the title tag".

Meaning, once again, that Google actively taught and encouraged something and then turned around and punished you for it later instead of making it not count (like meta keyword tags or even spam links).

They still need it, of course, but (in this mental exercise) the threshold for over-optimization on-page has been lowered and anything over it is punished, so that they can still use it without risking the SEOs actually winning. This is what we'd call a rug pull.
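
To make the mental exercise concrete, here's a toy scoring function for the rug-pull idea. This is pure conjecture on my part, not anything Google has published: on-page optimization keeps helping up to some threshold, and everything past it flips into a penalty.

```python
# Toy model only: a made-up threshold and a made-up penalty shape,
# just to illustrate the "helps until it suddenly hurts" idea.
def onpage_score(optimization_level: float, threshold: float = 0.4) -> float:
    """optimization_level: 0.0 (none) to 1.0 (maximally stuffed)."""
    if optimization_level <= threshold:
        return optimization_level                          # still counts
    return threshold - (optimization_level - threshold)    # punished past it

for level in (0.1, 0.3, 0.4, 0.6, 0.9):
    print(level, round(onpage_score(level), 2))
```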
 
the threshold for over-optimization on-page has been lowered
I've been thinking about this a lot recently. It's got me questioning quite a few current ideas on on-page optimization.

I'm de-optimizing all my alt tags across sites, for one.

actively attack sites that do it past the title tag
I'd like to think that it'd be anything past the title, URL and H1 - keywords will naturally appear in these places imo - but that might be an unreasonable assumption on my part.
 
I'd bet a non-negligible portion of my net worth that "content made for humans" simply means "we don't need to depend on on-page SEO signals nearly as much anymore and are going to actively attack sites that do it past the title tag".

Meaning, once again, that Google actively taught and encouraged something and then turned around and punished you for it later instead of making it not count (like meta keyword tags or even spam links).

They still need it, of course, but (in this mental exercise) the threshold for over-optimization on-page has been lowered and anything over it is punished, so that they can still use it without risking the SEOs actually winning. This is what we'd call a rug pull.
Rumblestiltskin says ur almost there.
 
I have a new theory based on our discussion about the Neon Music site and further research.

As we noticed, they are propped up by backlinks from Wikipedia & more. I'm also thinking they are being propped up on social proof, and that G is using social proof, or "stamps of approval" as a ranking signal.

This may be old news, but here's what I think:

For example, say you write a review of a Diplo concert, or interview Diplo, and Diplo shares that article on his Facebook page. After that, every single article you write about Diplo will be at the top of the SERPs, because Google says "Diplo approves this site, so they must be legit."

This can be applied to any niche you may be in. I have some anecdotal evidence of this being true on my own blog. The topics that are still performing well have had those kinds of big-name social shares in the past, in addition to the topics that have backlinks.

Your mileage may vary, but this is the kind of theory that somebody could turn into an entire content strategy.
 
For example, say you write a review of a Diplo concert, or interview Diplo, and Diplo shares that article on his Facebook page. After that, every single article you write about Diplo will be at the top of the SERPs, because Google says "Diplo approves this site, so they must be legit."
That's the basis of Google's PageRank.
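
For anyone who hasn't seen it spelled out: PageRank is basically an endorsement score passed along links, so an authoritative node vouching for you lifts everything it touches. A minimal power-iteration sketch of the classic formula is below; the tiny link graph and domain names are made up for illustration, and a social share obviously isn't literally a hyperlink, it's just the same endorsement idea.

```python
# Minimal power-iteration sketch of the classic PageRank formula:
#   PR(p) = (1 - d)/N + d * sum(PR(q) / outdegree(q) for every q linking to p)
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1 / n for p in pages}
    for _ in range(iters):
        pr = {
            p: (1 - d) / n
            + d * sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return pr

# Hypothetical graph: a high-authority node pointing at "yoursite.com"
graph = {
    "diplo.com": ["yoursite.com"],
    "yoursite.com": ["diplo.com", "othersite.com"],
    "othersite.com": ["diplo.com"],
}
print(pagerank(graph))
```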
 
I firmly believe, after analyzing most, if not all, of the sites in my niche post-HCU, that sites with diverse content (pages) have an advantage while SEO-only ones were wiped out.

It doesn’t matter what type of content. If the site has 100 ‘news’ articles and 50 SEO articles, all good. The same goes for eComm sites.

This suggests to me that the HCU is leaning more towards content variety than quality, which is understandable, given the size of the internet.

Can anyone pinpoint a flaw or offer a counter-argument to this perspective?
 
I firmly believe, after analyzing most, if not all, of the sites in my niche post-HCU, that sites with diverse content (pages) have an advantage while SEO-only ones were wiped out.

It doesn’t matter what type of content. If the site has 100 ‘news’ articles and 50 SEO articles, all good. The same goes for eComm sites.

This suggests to me that the HCU is leaning more towards content variety than quality, which is understandable, given the size of the internet.

Can anyone pinpoint a flaw or offer a counter-argument to this perspective?
What do you mean by SEO articles vs. 100 news articles? Are the news articles not SEO?

If I have, say, 10 calculators and the rest SEO articles, aren't the calculators also considered SEO?
 
@CCarter is it more beneficial to create unique pieces of content for each of the social media platforms or is it fine to just repost/redistribute the same one across all of the platforms?

Or what is it that you guys do?
 
@CCarter is it more beneficial to create unique pieces of content for each of the social media platforms or is it fine to just repost/redistribute the same one across all of the platforms?
Unique posts for each platform that fit the audience. You have to put in the effort if you are serious. Eventually you'll figure out which platforms get the best results, and you should put more effort and content into those to minimize wasted effort.

Like, I wouldn't go to Pinterest and expect a SaaS to make waves. I would still try and test because you never know, but if I find Pinterest isn't creating results that turn into money, then I would reduce it and eventually eliminate it.

Test, test, and test. Again, everything I advise is for people genuinely trying to get results from social, instead of just pretending to build brand/social signals to check some black box in Google.

All the brands we analyzed were genuinely on those social platforms, engaging with the audience, and creating content to build brand awareness and sales. It was genuine.
 