Recovering Failing SEO Sites with the Kitchen Sink Method

The best move is to keep improving things under the hood, "audit style," while publishing and attracting links. It's basically business-as-usual while improving all the kitchen sink items.
This is good to hear. I've been updating content and using Screaming Frog to periodically check for new problems to fix. Two examples:
  • I added social profile links to the footer. After a few weeks, I discovered they were nofollow links. Oops!
  • One page was reported to no longer be in the index. I realized it was a mega post and pretty crappy. So I rewrote it and requested indexing. Note: SF is able to detect this because I have it linked up to GSC and GA.
This really shows the importance of having a process for checking for problems and using a good tool.
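For anyone who wants a scripted spot check between full crawls, here's a minimal sketch of that kind of nofollow audit. It assumes Node 18+ for the built-in fetch, the URL is a placeholder, and a real crawler like Screaming Frog will catch far more than this naive regex scan:

Code:
// Hypothetical spot check: fetch a page and list any links carrying rel="nofollow",
// e.g. social profile links in the footer. Placeholder URL; assumes Node 18+ fetch.
const PAGE_URL = "https://example.com/";

async function findNofollowLinks(url: string): Promise<string[]> {
  const html = await (await fetch(url)).text();
  const flagged: string[] = [];
  for (const tag of html.match(/<a\b[^>]*>/gi) ?? []) {
    const rel = /rel=["']([^"']*)["']/i.exec(tag)?.[1] ?? "";
    const href = /href=["']([^"']*)["']/i.exec(tag)?.[1] ?? "";
    if (/\bnofollow\b/i.test(rel)) flagged.push(href);
  }
  return flagged;
}

findNofollowLinks(PAGE_URL).then((links) =>
  console.log(`nofollow links found: ${links.length}`, links)
);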

I also didn't realize these latest Dec updates weren't core updates. Geez, 3 years in and I'm still a noob :smile:
 
Question about one of my sites which tanked. Desktop page speed is good. Problem is mobile!

Now I'm not sure what to do: do I rebuild the site from the ground up, or hire someone to fix it? I'm more keen on the first option.

  • First Contentful Paint: 1.9 s
  • Time to Interactive: 5.3 s
  • Largest Contentful Paint: 9.2 s
  • Total Blocking Time: 360 ms
  • Speed Index: 2.7 s

When I check the top score ("Core Web Vitals Assessment: Passed"), Largest Contentful Paint (LCP) is 1.5 s, First Input Delay (FID) 28 ms, and Cumulative Layout Shift (CLS) 0.03; only Interaction to Next Paint (INP) is in the orange.
 

First step: use the performance profiler in your browser's Dev Tools to analyze the page. Set it to act like a mobile device on a slow network. Look for the FCP event and check what's loading before it.

Then optimize.

You probably have lots of crap loading that should be deferred, chopped out, or minimized. Do you have any above the fold ads?
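If you'd rather script this than click through DevTools, here's a rough sketch using Puppeteer (my assumption; any headless-Chrome tooling works) that emulates a mobile device with throttled network and CPU, then prints the paint timings so you can see what lands before FCP. The device, throttling numbers, and URL are all illustrative:

Code:
// Rough sketch: emulate a slow mobile device in headless Chrome and dump paint timings.
// Assumes `npm install puppeteer`; device, throttling values, and URL are placeholders.
import puppeteer, { KnownDevices } from "puppeteer";

async function profileMobile(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.emulate(KnownDevices["Pixel 2"]); // phone viewport + user agent
  await page.emulateCPUThrottling(4);          // 4x CPU slowdown, like DevTools
  await page.emulateNetworkConditions({
    download: (1.6 * 1024 * 1024) / 8,         // ~1.6 Mbps down, roughly "slow 4G"
    upload: (750 * 1024) / 8,
    latency: 150,                              // added round-trip latency in ms
  });

  await page.goto(url, { waitUntil: "networkidle0" });

  // Pull the paint entries (first-paint, first-contentful-paint) from the page.
  const paints = await page.evaluate(() =>
    performance.getEntriesByType("paint").map((e) => ({ name: e.name, start: e.startTime }))
  );
  console.log(paints);

  await browser.close();
}

profileMobile("https://example.com/").catch(console.error);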
 
Thank you. I hired a programmer because of one thing: the website was built so-so and then we patched the holes (not the best solution). The design will stay the same, or get a bit better, but the site will be rebuilt from the ground up. This should fix the bad coding that was most likely used, and I bet the overall page quality and speed will be much better.

Above the fold: yes, a sticky Google AdSense ad, and on desktop a Google sidebar ad, which is quite large.
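Not something stated in the thread, but one common mitigation for heavy above-the-fold ad scripts is to delay the ad loader until the user interacts (or a timeout fires) so it stops competing with initial rendering. A minimal sketch; the publisher ID is a placeholder, and delaying ads is a trade-off against impressions:

Code:
// Sketch only: delay the AdSense loader until first interaction or a 4 s fallback.
// The ca-pub ID is a placeholder; weigh this against lost ad impressions.
const ADSENSE_SRC =
  "https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-XXXXXXXXXXXXXXXX";

let adsLoaded = false;

function loadAds(): void {
  if (adsLoaded) return;
  adsLoaded = true;
  const s = document.createElement("script");
  s.src = ADSENSE_SRC;
  s.async = true;
  s.crossOrigin = "anonymous";
  document.head.appendChild(s);
}

// Load on first interaction, or after 4 seconds as a fallback.
["scroll", "keydown", "pointerdown", "touchstart"].forEach((evt) =>
  window.addEventListener(evt, loadAds, { once: true, passive: true })
);
setTimeout(loadAds, 4000);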
 
I implemented some of @Ryuzaki's recommendations stated here, but more so the E-E-A-T recommendations in another thread that I was missing.

Going through more than 600 posts and adding outbound links, and reworking my most popular 50–100 posts that used to pull in the bulk of the traffic sure took a long time.

Like many, I lost 50-60% of my traffic. In my case, many articles that were previously ranking 1-3 dropped to 3-5 and those that were ranking 3-5 dropped to 7-8.

If my traffic doesn't come back after the next few updates, I'll be at a loss.

The end goal is to sell the site 2–3 years down the line, so I'm a bit concerned about the fake team I created when it gets to that point.

I didn't state any credentials, accolades or anything along those lines.

If/when traffic recovers, I'll update here.
 
but more so the E-E-A-T recommendations in another thread that I was missing.
Thanks for reminding me. I added them to the opening post. These include:

Numbers 10, 11, and 12, added to the Branding Signals section, which now encapsulates EEAT: Prove You're Real, Prove You're Worthy, and Use Schema & Microdata.

I also added number 13 to the On-Page SEO section for Outbound Linking.
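For the "Prove You're Real" and schema items, a minimal sketch of Organization markup with sameAs links to the brand's profiles might look like this (all names and URLs are placeholders; most SEO plugins can output the equivalent JSON-LD for you):

Code:
// Illustration only: Organization schema with sameAs links to the brand's profiles.
// Placeholder names/URLs; runs in the browser to inject a JSON-LD script tag.
const orgSchema = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Brand",
  url: "https://example.com/",
  logo: "https://example.com/logo.png",
  sameAs: [
    "https://www.facebook.com/examplebrand",
    "https://twitter.com/examplebrand",
    "https://www.youtube.com/@examplebrand",
  ],
};

const tag = document.createElement("script");
tag.type = "application/ld+json";
tag.textContent = JSON.stringify(orgSchema);
document.head.appendChild(tag);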
 
I got hit yet again in the latest update.

I'm now sure that the issue is that I'm affected by the Helpful Content classifier.

As what I did above didn't work, I've been working on a few things since the update was released.

1. Reorganised the hierarchy of my site.

This won't make any difference but I felt it improved relevance.

2. I deleted the manually curated list of posts at the end of each article.

Not sure if this is an issue, but using exact match anchor text to link to posts may not have been the best idea.

Perhaps I should have restructured things so I only listed a couple of very closely related posts at the end of each article, but it was easier to delete at this point and then reassess.

Articles still have contextual links, but with this change, no longer is every post interlinked at least once.

3. Reworked all my H2s.

I mentioned somewhere before how this Q&A format may be an easy way to spot spam or unhelpful content, considering the influx of AI generated PAA sites, but no-one replied.

In my opinion, using a Q&A format is the most helpful way of formatting posts for readers, but I can see how this would be considered writing content for the search engines.

Before the Helpful Content update, this did allow my traffic to rapidly grow, though.

I stand behind my content, but I might have been caught in the crossfire.

I have 500 more posts to apply this change to.

If I regain my traffic, I believe that it would be because the (un)helpful content classifier would no longer apply.

I'm not exactly sure how these updates work, but I'm looking to fix everything by the end of the week and maybe it will give Google time to reassess things before the update finishes rolling out.

I think I read that the helpful content classifier is constantly running, though.

I'll update if anything changes.
 
2. I deleted the manually curated list of posts at the end of each article.

Not sure if this is an issue, but using exact match anchor text to link to posts may not have been the best idea.
I don't think a manually curated list of related posts at the end of an article will make or break anything. It sounds like you may have linked to them using exact match anchor text there, though I doubt you did that. That doesn't make sense to me. When I do this, I link out using the full title of the post, which will contain the keyword as well as a bunch of other words. That isn't an exact match anchor, though.

For future reference, I just posted this in another thread that I think is worth a share here:

-----

My opinion is that internal linking is not something Google is willing to interfere with. Throughout the years they've been unanimous in that stance, too. For instance...

Gary Illyes said on /r/TechSEO on Reddit in 2019, when asked "is there an internal linking over-optimization penalty?"... his response: "No, you can abuse your internal links as much as you want AFAIK."

Matt Cutts said in a Google Search Central video in 2014, when asked the same question, "Typically, internal website links will not cause you any sort of trouble. Now, the reason why I say ‘typically not’ rather than a hard ‘no’ is just because as soon as I say a hard ‘no’ there will be someone who has like five thousand links – all with the exact same anchor text on one page. But if you have a normal site, you know…a catalog site or whatever…. you’ve got breadcrumbs…you’ve got a normal template there…that’s just the way that people find their way around the site, and navigate, you should be totally fine."

John Mueller said on Twitter in 2017, when asked "Are there any differences between internal or external score/juice/strong calculation formulas?", that there "definitely" is a difference. He also has said that anchor texts used for internal links do provide context to what the link is about, which supports Google always telling us to use descriptive anchor texts internally.

I've always stood by this same conclusion that's simply borne out of experience. I've used extremely optimized anchor texts internally (which I still do to this day, but not all exact match, but very close to it), and I've also tried to have them not be optimized much at all. I've never gotten in trouble over it as far as I can tell, and I've never recovered from any hit by de-optimizing them. I can't say that I've seen a benefit from doing it one way over the other because I never created a scenario where I could do a single-variate straight comparison. At the end of the day, I simply don't think it's an issue, at all.

-----

Anyways, what you're doing is definitely within the philosophy of the Kitchen Sink approach, but I wouldn't grasp at straws either. Unless there's a very apparent problem with something, you don't need to alter it.

An example would be a list of related posts at the end of the content. Countless sites do this and have done this throughout the ages and have been hit and not been hit. I think it's pretty clear that this isn't an issue or even contributing some small amount of negative weight (given that you aren't doing anything really far out there and manipulative).
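Just to make the related-posts pattern above concrete, here's a tiny sketch where the anchors use the full post title (keyword plus other words) rather than a bare exact-match keyword. The Post shape and example title are made up:

Code:
// Small illustration of a related-posts block whose anchor text is the full post title.
// The Post interface and example data are hypothetical.
interface Post {
  title: string; // e.g. "How to Recover a Site Hit by a Core Update (Step by Step)"
  slug: string;
}

function relatedPostsHtml(posts: Post[]): string {
  const items = posts
    .map((p) => `  <li><a href="/${p.slug}/">${p.title}</a></li>`)
    .join("\n");
  return `<h2>Related Posts</h2>\n<ul>\n${items}\n</ul>`;
}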

I mentioned somewhere before how this Q&A format may be an easy way to spot spam or unhelpful content, considering the influx of AI generated PAA sites, but no-one replied.
Yeah, PAA sites took a beating at the time because it was clearly manipulation of a loophole. The weird thing, though, was that Google was actively promoting questions to webmasters that didn't have enough high-quality answers. This is a good example of where being an early adopter can backfire.

I get articles back from writers sometimes where every heading and the title is a question. It's pretty annoying, but you bring up a good point. The juxtaposition, and kind of hypocrisy, of the whole thing is that typing a question a human would ask has become twisted into "writing for search engines." That's why the entire Helpful Content idea of "Is this for humans or search engines?" is ridiculous and more of a philosophical question. It's obviously always for search engines in some amount and for humans in some amount too. Anybody with a brain is going to take search engines into account if they work online.

I'm not exactly sure how these updates work, but I'm looking to fix everything by the end of the week and maybe it will give Google time to reassess things before the update finishes rolling out.
smithy, I don't know how "early" in the cycle you got all the other work done, but it definitely has to be done well before the next update launches so that the improvements are in place during Google's data collection and then data crunching phases before launch. Doing it during is a guarantee none of the new work will be considered. You're at the earliest point possible for consideration for the next update, though.

I think I read that the helpful content classifier is constantly running, though.
That's what I've read too, but I've also seen where Google says that these changes must be made and consistency must be proven, and that it can take months and months for the negative weighting to be lifted, as they're forcing "time" to play a role in them learning to trust you again (lmao, like getting brow-beaten by a spouse). But yeah, everything is time delayed so we can't get any obvious feedback from the algorithm. Otherwise we could manipulate it too easily.
 
I don't think a manually curated list of related posts at the end of an article will make or break anything. It sounds like you may have linked to them using exact match anchor text there, though I doubt you did that. That doesn't make sense to me. When I do this, I link out using the full title of the post, which will contain the keyword as well as a bunch of other words. That isn't an exact match anchor, though.

For future reference, I just posted this in another thread that I think is worth a share here:

My post titles are usually the same as the keyword, so I was using exact match anchors.

smithy, I don't know how "early" in the cycle you got all the other work done, but it definitely has to be done well before the next update launches so that the improvements are in place during Google's data collection and then data crunching phases before launch. Doing it during is a guarantee none of the new work will be considered. You're at the earliest point possible for consideration for the next update, though.

That's what I've read too, but I've also seen where Google says that these changes must be made and consistency must be proven, and that it can take months and months for the negative weighting to be lifted, as they're forcing "time" to play a role in them learning to trust you again (lmao, like getting brow-beaten by a spouse). But yeah, everything is time delayed so we can't get any obvious feedback from the algorithm. Otherwise we could manipulate it too easily.

It sounds like it will take some time, for sure.

It really sucks that I went from a website making 5 figures a month to just a couple of thousand. You have to be mentally strong in this game.

Now I'm not really sure what to do after I've made all my changes. It probably isn't worth focusing my time on this site anymore when it's uncertain what will happen. Perhaps it's best to post intermittently instead and move onto a new site.

@Ryuzaki, I have a question for you.

Previously, one of the main ways I improved EEAT signals was to attach an author to each article. I gave them all a meaty bio, marked up schema etc.

However, this kind of makes me uneasy.

Not only because, if this website recovers, I will want to sell it at some point, and a made-up team could be a hard sell for a buyer dropping hundreds of thousands of dollars on the site. But also because, for the algorithm, the EEAT isn't backed up by an online reputation: no social profiles, no articles published elsewhere, etc. There's no knowledge graph presence for the authors.

I'm therefore thinking of hiring fact-checkers who have an online presence that clearly demonstrates they have a background in the subject areas.

But if I do this, I would want to remove the authors and only have the fact-checker schema and byline - i.e. each article would just say "Fact Checked by ..." with no author attached to the article.

Any thoughts on this?
 
But if I do this, I would want to remove the authors and only have the fact-checker schema and byline - i.e. each article would just say "Fact Checked by ..." with no author attached to the article.
Yeah, my thoughts are that you must have authors, period. Google wants to know who's responsible for the content directly, and if they have expertise and experience, etc.

This is digging a much deeper hole than I'm really interested in (and I'm in the exact same boat as you in terms of 5 figures to 4 figures and made up authors), but we can create profiles for them around the web, use HARO to get their quotes in other publications, do guest posts, etc. It can be done.

Not only because, if this website recovers, I will want to sell it at some point, and a made-up team could be a hard sell for a buyer dropping hundreds of thousands of dollars on the site.
Another thing to simply point out: the last site I sold, for a quarter milly, I checked up on recently. After all the hullabaloo about the NDA and me keeping the sale a secret (so the fans of the author wouldn't know it changed hands), they ended up stripping ALL authorship from the site. They even removed the sitewide links to the /about/ and /privacy/ pages and all that. In my experience it's almost guaranteed that buyers will tank the site, like with EEAT becoming a huge thing and they decide to go backwards on it. Turned out they didn't care about the authors at all.
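To make the author-corroboration idea concrete, here's an illustrative Article snippet where the author is a Person whose sameAs URLs point to the profiles built around the web, which is the kind of backing discussed above. Every name and URL is a placeholder, and none of this guarantees anything with the classifier:

Code:
// Illustration only: Article markup whose author Person links out to real-world profiles.
// All names and URLs are placeholders.
const articleSchema = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "Example Post Title",
  author: {
    "@type": "Person",
    name: "Jane Doe",
    url: "https://example.com/author/jane-doe/",
    jobTitle: "Registered Dietitian",
    sameAs: [
      "https://www.linkedin.com/in/janedoe",
      "https://twitter.com/janedoe",
      "https://muckrack.com/jane-doe",
    ],
  },
};

// Paste the output into a <script type="application/ld+json"> tag on the article.
console.log(JSON.stringify(articleSchema, null, 2));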
 
I made one final change.

It seems like somewhere along the way Google reclassified my site as I branched out into other topics. So what my site started off covering is no longer the focus or applicable.

I therefore ended up deleting 70 posts (or around 12% of the content) along with their categories/subcategories that Google doesn't think I'm relevant for anymore. I didn't delete everything, as I moved some posts into categories where they would still make sense to be in.

Will this make a difference? I don't know. But I've thrown everything but the kitchen sink at the site, and now I'm out of ideas.
 
4) Indexation Bloat & Quality
Do a site:yourdomain.com search on Google and note the total indexed pages there. Look in Search Console's index reports and check the total there. Do they match your number of published posts and pages? If they're low, you have a quality problem and Google doesn't want to waste resources on your crap. If it's high, then you have a bigger problem which is also dragging down your Panda quality score. Find it and fix it. I go as far as to noindex paginated archive pages, like /category/page-2/, but always allow the first page to be indexed. Always keep them all dofollow, because the page rank will still flow.

When I type site:domain.com into Google, I see 1,500 results. But when I check GSC, 670 pages are indexed, which is also the number according to my sitemap.

Code:
site:https://domain.com = 1,500
site:http://domain.com = 1,500
site:http://www.domain.com = 0
site:https://www.domain.com = 0

I asked my host, who said that HTTPS has been forced on my site, so it should be loading securely. I can't see archive or search results pages indexed either, which is usually why these numbers are inflated.

Interestingly, I get very different results when I search site:domain.com/ and site:domain.com

The trailing slash seems to be making a big difference. 782 results with the trailing slash, and 1,500 results without the trailing slash.

I can't figure out why Google says more than double the pages have been indexed.

Even when I scroll through all the pages on Google, there definitely aren't 1,500 results.

No non-https pages are indexed either.

Does anyone have any idea what is going on here, and if it is a major issue?
 
@smithy, Google themselves get asked about this frequently enough that I'm aware of their kind of canned response, which is that the site: operator isn't accurate. That's pretty much the answer that's always given, with not many more details. I think there's even been talk about doing away with it.

I just checked one of my sites in Search Console with 922 pages indexed when it should be 923. So all is well there. If I do a site: search, it shows around 1,250. There's definitely some inflation there. I don't know if it's a discrepancy worth tracking down (if even possible since they won't let us scrape all the results), when Google has basically told us it's only generally correct, if that.

Search Console's data seems to be extremely accurate to me. It is by far the best source of data we have for our sites on the topic.
 

What had me worried is that you mentioned how "If it's high, then you have a bigger problem which is also dragging down your Panda quality score."

But if I should just rely on GSC instead, and the discrepancy in results with the search operator is no longer an issue, then I guess I should just ignore it.

It's weird how it shows results for both the non-https and https versions. When I scroll through the results, they also seem to max out at 300, though I get traffic to pages that aren't shown in this 300.

The site search operator seems useless now.
 
What had me worried is that you mentioned how "If it's high, then you have a bigger problem which is also dragging down your Panda quality score."
Yeah, this holds true, but applied to Search Console instead. I would recommend comparing Search Console's indexation data to how many pages you submitted in your sitemaps. So the concept would be, "I've submitted a total of 300 pages for indexation but I have 5,000 indexed..." In that case you might have been hit by the Japanese auto-generated pages hack, or you have a zillion single-use tags, or something. Same concept, just a different way of obtaining the data. The Coverage report, as they used to call it, is definitely the most precise place to get this data by far.
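If anyone wants to script that comparison, here's a rough sketch that counts the URLs in a sitemap (or sitemap index) so you can line that number up against Search Console's indexed total. It assumes Node 18+ fetch and a conventional XML sitemap; the URL is a placeholder:

Code:
// Rough sketch: count <loc> URLs across a sitemap or sitemap index.
// Assumes Node 18+ (global fetch); the sitemap URL is a placeholder.
async function countSitemapUrls(sitemapUrl: string): Promise<number> {
  const xml = await (await fetch(sitemapUrl)).text();

  // Sitemap index? Recurse into each child sitemap.
  const children = [...xml.matchAll(/<sitemap>[\s\S]*?<loc>(.*?)<\/loc>/g)].map((m) => m[1]);
  if (children.length > 0) {
    const counts = await Promise.all(children.map((c) => countSitemapUrls(c)));
    return counts.reduce((a, b) => a + b, 0);
  }

  // Otherwise count the <url> entries directly.
  return [...xml.matchAll(/<url>[\s\S]*?<loc>/g)].length;
}

countSitemapUrls("https://example.com/sitemap_index.xml").then((n) =>
  console.log(`URLs submitted via sitemaps: ${n}`)
);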
 