How to handle low word count pages

illmasterj

I'm in the process of adding some pages to my site that have low word counts. We're talking 100-200 words total in the body, plus an embed with some more content. Writing longer content for these pages would not help the user whatsoever.

By the time all of the pages are published, they will represent around 50% of URLs on the site. The rest are long blog posts in the 1000-4000 word range.

From a Google point of view, should I be worried about publishing a lot of low word count pages to my site in a short period of time?
 
For Google:
You can make low word count pages rank just as well or better, as long as they meet the query's criteria and users engage with them predictably.

My observation is that lower-tech web crawler sites will treat you like shit and lower the second-order and later network values you get from them.
 
If these low word count pages satisfy the intent of the search queries they're optimized for, you shouldn't have much of a problem (I wouldn't think). You can imagine dictionary and thesaurus websites as similar examples, where a low word count doesn't equate to low quality.

I would personally be hesitant to do this because Google isn't perfect and can misunderstand things quite easily. And who knows how much manual classification goes into things like dictionary and thesaurus sites. I care more about my "indexation quality score" than just about anything else when it comes to SEO.
 
Do you have an example of a "lower tech web crawler site", @secretagentdad? Also, what do you mean by "treat you like shit"? Spin a bunch of spam links or something?
Alexa. And its many federators are the best examples.

Now it's going away... that's gonna impact so many SERPs in my niche.
 
An alternative is to let Google keep crawling them, but noindex all of the shorter word count pages.
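Something like this, if you can touch response headers. Flask and the THIN_PAGES list below are purely for illustration; on WordPress or any CMS, an SEO plugin's noindex toggle or a <meta name="robots" content="noindex, follow"> tag in the <head> does the same job:

[CODE]
# Minimal sketch: serve thin pages normally, but add a noindex directive so
# Google can still crawl them while dropping them from the index.
# Flask and the THIN_PAGES set are assumptions for illustration only.
from flask import Flask, make_response

app = Flask(__name__)

# Hypothetical slugs of the short, embed-heavy pages.
THIN_PAGES = {"size-chart", "widget-demo"}

@app.route("/<slug>")
def page(slug):
    # Stand-in for however you render the 100-200 word body plus the embed.
    html = f"<html><head><title>{slug}</title></head><body>short body + embed</body></html>"
    resp = make_response(html)
    if slug in THIN_PAGES:
        # Header equivalent of <meta name="robots" content="noindex, follow">.
        resp.headers["X-Robots-Tag"] = "noindex, follow"
    return resp

if __name__ == "__main__":
    app.run()
[/CODE]
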
To quote @Ryuzaki:

No, quality != traffic.

You do NOT want to tell them not to crawl it. That would involve restricting access to the pages in the robots.txt file. But if you do that, then how will they crawl it to know that the page is throwing a 404 (not found) or 410 (gone, permanently and purposefully) HTTP status code? That's what you want to tell them: that the page is gone, and thus they should deindex it. Or give them a noindex directive, which also requires crawling.

It's not about not crawling, it's about deindexing. All pages in the index are evaluated to form your indexation quality score. To deindex a page requires crawling. There's nothing wrong with having 404's and 410's on your site.
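
For illustration, a "gone" page is just a status code. In a Flask-style setup (an assumption here, with a made-up route), it's one line:

[CODE]
# Purely illustrative: a hypothetical route for a page you've removed on purpose.
# Returning 410 tells the crawler the page is permanently, deliberately gone.
from flask import Flask, abort

app = Flask(__name__)

@app.route("/old-thin-page")
def old_thin_page():
    abort(410)  # "Gone" -- deliberate removal, please deindex

if __name__ == "__main__":
    app.run()
[/CODE]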

But if you restrict crawling, you'll end up in a worse situation than having low quality content in the index. You'll have blank pages in the index which will absolutely tank your quality score.

It's like Google is walking down a hallway with 3 doors. They open the first 2, write down that one is a bedroom and one is a bathroom, mention what furniture is in the room, what it's used for, how clean it is, etc.

Then they reach the 3rd door but you've locked the door with robots.txt. Their job is to tell users how many rooms (pages) there are on your site, but not necessarily what's in them. So they mark down that another room exists and index it as blank (unknown contents).

Maybe your jokester buddy snuck in the house and hung a sign on the door that says "dlangpap's barbie collection". That's an anchor text. Google will trust that and give your blank page a title of "dlangpap's barbie collection" and say "no meta description available". And on their side there's no content behind the door.

That's a zero quality webpage and what you get when you block crawling. Blocking crawling will make the problem infinitely worse.
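
To make the "locked door" concrete, here's a quick check with the standard library's robots.txt parser. The domain and rules are made up; the point is that once a path is disallowed, a well-behaved crawler won't fetch it at all, so it can never see the noindex tag or the 404/410 you put on that page:

[CODE]
# Illustration of the "locked door": once a path is disallowed in robots.txt,
# a well-behaved crawler won't fetch it, so any noindex / 404 / 410 on that
# URL is invisible to it. example.com and the rules below are placeholders.
import urllib.robotparser

rules = [
    "User-agent: *",
    "Disallow: /thin-pages/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot", "https://example.com/thin-pages/size-chart"))
# False -> the crawler never sees that page's own directives

print(rp.can_fetch("Googlebot", "https://example.com/blog/long-post"))
# True -> normal pages stay fetchable, so their signals stay readable
[/CODE]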


https://www.buildersociety.com/thre...-seo-sites-with-the-kitchen-sink-method.5607/
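
If you do go the noindex/410 route, it's worth spot-checking that the signal is actually reachable. A rough audit sketch (placeholder URLs, needs the third-party requests package):

[CODE]
# Rough audit: for each page you've decided to noindex or retire, confirm that
# a crawler which CAN fetch it sees the intended signal (a noindex header/tag,
# or a 404/410 status). The URLs below are placeholders.
import requests

URLS = [
    "https://example.com/thin-pages/size-chart",
    "https://example.com/thin-pages/widget-demo",
]

for url in URLS:
    resp = requests.get(url, timeout=10, allow_redirects=False)
    header = resp.headers.get("X-Robots-Tag", "")
    meta_noindex = "noindex" in resp.text.lower()  # crude check for a meta robots tag
    print(f"{url} -> HTTP {resp.status_code}, X-Robots-Tag: {header!r}, "
          f"noindex in HTML: {meta_noindex}")
[/CODE]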
 