Detecting AI-Generated Content in Outsourced Content

Here's an image I ran across recently. AI detectors across the top, and AI writers down the left-hand side. It shows how the detectors managed against each writer:

[Image: table of AI detectors vs. AI writers, showing each detector's results against each writer]
 
What are the implications of the spun content performing well against these AI detectors? Does it just mean that these detectors have been designed with a single focus in mind - detecting AI content? (Maybe I answered my own question.)
 
I think the tests will only really help you catch really dumb writers who don't make any effort to use the AI tools correctly. The tests aren't very good, and getting stuff to pass isn't very hard or time-consuming. I'm glad they exist - if an editor notices a writer's style has suddenly changed and their content has gone from 100% passing to something like 70%-90%, the combination of the two might be a good indication we need to have a word. But I don't know what you'd do if a writer turned up passing all the tests from day one and had been using AI writing all along...
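If you want to make that "was passing, now isn't" check a bit more systematic, here's a rough sketch of the bookkeeping in Python - the scores, the 0.5 pass threshold, and the 20-point drop margin are all made-up examples, and the scores themselves would come from whatever detector you actually use:

from statistics import mean

def pass_rate(scores, threshold=0.5):
    # fraction of articles whose "AI probability" score stays under the threshold
    return sum(1 for s in scores if s < threshold) / len(scores)

# writer: (scores for older articles, scores for recent articles) - example numbers only
writer_history = {
    "writer_a": ([0.05, 0.10, 0.08], [0.04, 0.12, 0.07]),
    "writer_b": ([0.06, 0.09, 0.11], [0.55, 0.70, 0.40]),  # sudden style change?
}

for writer, (old, recent) in writer_history.items():
    old_rate, new_rate = pass_rate(old), pass_rate(recent)
    if old_rate - new_rate > 0.2:  # arbitrary "time to have a word" margin
        print(f"{writer}: pass rate fell from {old_rate:.0%} to {new_rate:.0%} (avg recent score {mean(recent):.2f})")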
 

Based on that screenshot, which is from this post, Quetext seems to be the one that caught most of the AI content (or failed it), so naturally I tested it against some content. I also looked around their documentation but wasn't able to find anything relating to AI content detection. It just detects plagiarism and gives a percentage score.

Looking at the articles they used for the case study (which they share in that article), they are all about "what is SEO" or around the keyword "SEO", which is probably why most of the detectors flagged the content.

I suspect that if you are writing about other, more specific things that aren't covered as heavily as the topic of SEO, most of them will pass the tests, but I need to test to back up my claim.
 
The main issue is the false positive rate.
Is human-written factual content more likely to show up as AI-written (a false positive) with these tools?
 
The drier and more factual your content is, the more likely it is to get a false positive, for sure. I wrote some guides years ago, before these things even existed, and several of them are failing.
And I do know my writing can be a bit boring at times, but one of them in particular did really well and even got some big, really natural (i.e. unexpected, just from ranking) links years after we stopped promoting it, so I wasn't just churning out AI garbage before AI garbage was a thing or anything like that!

The flip side of that is that making AI content pass all the current detectors with a near-perfect score isn't very hard...
 
I'd love to hear from some of the content agency people here about the steps they're taking at scale to ensure their writers aren't handing in AI content.

If you're a webmaster putting up an article or two a day, that's one thing, but what about an agency delivering 10, 50, 100+++ posts to clients every day from a huge staff of writers? What are you doing to stop AI from slipping through the cracks?

It's something that spooked me a bit when all of this started to drop and I was putting together a team of writers for my own sites, and potentially for clients down the road.
 

I think most are overthinking it/stressing too much about it. The best you can do is run the content through plagiarism checkers (Copyscape, Quetext, etc.) and make sure nothing is plagiarized. In my opinion, as long as the content is readable, accurate, and provides the information the user is looking for, it should be good to go.
 
If you are paying good $$ for purely human-written content, I think it is fair to expect AI tools to be avoided (unless that is part of your strategy).

Going forward (when the web is full of AI content), Google might consider non-AI original content a signal of expertise / a primary source.
 

One way the AI revolution might have a positive effect is that people might not be willing to pay for fluffy, high noise-to-signal content anymore.

AI writing without proper guidance is fluffy and very repetitive, like bad SEO writing.

If you ask the AI how to write, what to write, with what data and research, then you can get good results.
 
My experience is that all the tools are useless except in detecting content where someone has literally gone into ChatGPT and told it to "write an article about widgets".

And the output from a prompt like that is going to be recognizable as AI written without a tool anyway.

Any writer using AI while still producing serviceable content is going to be generating paragraph by paragraph with a good outline, feeding it information/facts/ideas, using tools like Quillbot, then editing by hand. Most of the time doing this will give you something that no tool will detect.

At that point it comes down to assessing whether the content is actually good quality.

I'm still running outsourced content through AI detection tools, but just because I get a "no AI detected" result doesn't mean it's not AI written.

From there I will just read it myself to check its quality. If it's good quality, I will edit and publish. If it's bad quality, I won't use the writer again. If it's AI written it's probably bad quality and pretty obvious.

I honestly think the best way to tell if your outsourced content is AI written is to spend a good amount of time playing with AI tools yourself and understanding how they write. Once you do that it's pretty easy to tell.

If you scale and don't have time to read all your content, get an editor and train them to do the same.

My two cents.
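For what it's worth, here's a minimal sketch of the section-by-section workflow described above, using the openai Python package (the pre-1.0 openai.ChatCompletion interface). The outline, facts, and model name are just placeholder examples - the fact-checking, paraphrasing, and hand-editing still happen outside the script:

import openai

openai.api_key = "YOUR_API_KEY"

# placeholder outline: section heading -> facts/ideas to feed in
outline = {
    "What widgets are": ["invented in the 1920s (example fact)", "three common types"],
    "Choosing a widget": ["budget vs. premium", "warranty considerations"],
}

sections = []
for heading, facts in outline.items():
    prompt = (
        f"Write one short, plain-spoken paragraph for the section '{heading}'. "
        f"Work in these points: {'; '.join(facts)}. No filler."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    sections.append(response["choices"][0]["message"]["content"].strip())

draft = "\n\n".join(sections)  # this draft still needs editing and fact-checking by hand
print(draft)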
 
After spending a lot of time with Davinci-003 specifically, I'm at the point where I truly cannot tell the difference between content I have it produce and content from human writers. ChatGPT is even less structured, so I'm sure it's possible to get it to be undetectable to the human eye fairly quickly. Of course, it may still be possible to pick up patterns in the text that indicate whether it's AI generated. I worry about this less today than I did a few months ago.
 
And the good news is the ChatGPT model is now available via the API - https://openai.com/blog/introducing-chatgpt-and-whisper-apis - so you can really test your ideas at scale!
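For anyone who wants to try that, a minimal sketch of hitting the chat endpoint in a loop with the openai Python package (pre-1.0 interface; the model name and test prompts are just examples):

import openai

openai.api_key = "YOUR_API_KEY"

test_prompts = [
    "Write a 100-word intro about widget maintenance.",
    "Write a 100-word intro about widget maintenance in a casual, first-person voice.",
]

for prompt in test_prompts:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply["choices"][0]["message"]["content"])
    print("---")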
 
If you are paying good $$ for purely human-written content, I think it is fair to expect AI tools to be avoided (unless that is part of your strategy).

Going forward (when the web is full of AI content), Google might consider non-AI original content a signal of expertise / a primary source.
100%. When clients pay for human writers, why would they accept AI content? As a content agency, we won't be okay with our writers delivering AI-written articles, no matter how amazing they read. When we hired em and paid em for the writing, we paid for them to write the articles themselves. Plain and simple.

About the tools, yes, you will get faulty and inaccurate results. They are tools after all. What I personally do to check whether the content is original is run it through multiple checkers. If I get similar results for a specific piece of content, then I seriously consider some changes.
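A rough sketch of that "only act when several checkers agree" idea - check_a/check_b/check_c are stand-ins for whatever tools you use (they don't exist as real functions here), each assumed to return an AI-likelihood between 0 and 1:

def flag_if_checkers_agree(text, checkers, threshold=0.7, min_agreeing=2):
    # run the same text through every checker and only flag it when enough of them agree
    scores = [check(text) for check in checkers]
    agreeing = sum(1 for s in scores if s >= threshold)
    return agreeing >= min_agreeing, scores

# usage (with hypothetical checker functions):
# flagged, scores = flag_if_checkers_agree(article_text, [check_a, check_b, check_c])
# if flagged, send the article back to the writer for changes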
 
When we hired em and paid em for the writing, we paid for them to write the articles themselves. Plain and simple.
I couldn't agree more. I'm all for using AI when it's appropriate but I do agree with the 'ethical AI' principle where you disclose it to your clients. We shouldn't be shipping AI work and pretending a human did it etc.
 
I wasn't really sure where to put this, as it doesn't warrant its own thread. This thread works because if you see these words in your content, then you know you got cheated.

Otherwise it's just worth a good chuckle. So on ChatGPT you'll see this after generating an article (the image is small, I know):

[Image: ChatGPT output area with the "regenerate response" button]

There's a little button that lets you regenerate a new response if you don't like the one you got. They also provide, as you can see, a "copy to clipboard" button that a lot of people aren't using. So what they inevitably do is end up also copying the "regenerate response" text.

Which means we can use a google dork like this:

"regenerate response" -chatgpt dog training

So you have the phrase in quotation marks to get an exact match, then we remove any instance or mention of ChatGPT using the minus sign. Then type in your topic like normal. And what you'll find is something like this:

[Image: Google search results returned by the dork]

You can see all the goofballs being lazy and not checking the content before they publish it, on their sites, on social media, on youtube, etc.

Now if you want to dig in for the heck of it, you can search something like:

"regenerate response" -chatgpt site:domain.com

Where you replace domain.com with whatever specific competitor sites you're looking at. It won't be foolproof if they're good copy-and-pasters, but you can find out which competitors may be using ChatGPT content.

You can even run that on your own site to see if you've published something that a content agency delivered and it slipped by the both of you.
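If you want to check a whole list of competitor domains (or your own sites) without typing the dork out each time, a tiny helper like this will generate the queries for you - the domains below are placeholders:

def build_dorks(domains, phrase='"regenerate response"'):
    # builds one Google query per domain, e.g. "regenerate response" -chatgpt site:example.com
    return [f"{phrase} -chatgpt site:{domain}" for domain in domains]

for query in build_dorks(["competitor-one.com", "competitor-two.com", "your-own-site.com"]):
    print(query)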
 
goofballs
Jesus fucking christ, these people are stupid.

But I am more concerned with the real A.I. threat:

[embedded media]

And

[embedded media]

And then there is the PURE A.I. Instagram thot, Ailice:

[embedded media]
She doesn't exist. She is going to destroy ALL instagram thotting and give tiktokers a run for their money. Now we'll be competing against A.I. in the workplace AND in the Metaverse AND social media. Jebus.
 
I don't know. I had a good throwback laugh at this.

Meaning, as a man....

I've said for a couple of decades to other men how women look like the AI filter at 11pm in the club, but look like the unfiltered version at 7am the next morning, and how it's such a lie (the lie being makeup and Spanx), lol
 
I think the tests will only really help you catch really dumb writers who don't make any effort to use the AI tools correctly. The tests aren't very good, and getting stuff to pass isn't very hard or time-consuming. I'm glad they exist - if an editor notices a writer's style has suddenly changed and their content has gone from 100% passing to something like 70%-90%, the combination of the two might be a good indication we need to have a word. But I don't know what you'd do if a writer turned up passing all the tests from day one and had been using AI writing all along...
Agree with you. Smart ones will add a personal touch & remove common AI patterns. Then AI detectors will show random fail/pass results.

I think a better way is to read random sections of the article and see if they sound unnatural, plus randomly check whether some of the "factual data" mentioned in the article is correct.

This way, the article will be considered genuinely helpful for readers. And Google doesn't seem to have a problem with (well-written) AI articles yet. They are even ranking pure garbage (not good from a conversion rate point of view for webmasters, though).

Sadly, I can't find any shortcut method as of now. If someone needs bulk content, s/he must have trustworthy editors trained to tackle this.
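If it helps, here's a quick sketch for pulling a few random sections out of a delivered article for that kind of spot-check - paragraphs are split on blank lines and the sample size of 3 is arbitrary:

import random

def sample_paragraphs(article_text, k=3):
    paragraphs = [p.strip() for p in article_text.split("\n\n") if p.strip()]
    return random.sample(paragraphs, min(k, len(paragraphs)))

# usage:
# with open("delivered_article.txt") as f:
#     for p in sample_paragraphs(f.read()):
#         print(p)
#         print("---")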
 
The number of these detectors is increasing every day. The funny thing is that these detectors are useless.
 
A few months ago I ordered an article from a cheap Fiverr writer. Ten seconds into the article I could tell it was AI. So I acted accordingly and never heard from the writer again. But that's not important. Those are the stupid folks who prompt GPT to write an article about the topic.

After a few months of in-depth messing with GPT, I can confidently tell you there's no way to detect AI content if done right.

Now, the problem with the "done right" approach is that it takes time, practice, and a deep understanding of the topic you're writing about. As I already wrote in one AI thread, any AI can write about broad topics such as "why the grass is green." If you prompt it right, the output is amazing.

Here's what chat GPT says about this topic:

My prompt: Explain how to detect AI-generated content from outsourced writers in style of Albert Einstein

GPT answer:
Ah, detecting AI-generated content from outsourced writers is quite an interesting problem to solve. You see, my dear friend, the first step is to familiarize yourself with the characteristics of each. AI-generated content tends to lack a certain je ne sais quoi, a certain human touch that is difficult to replicate artificially. It may be consistent in style and structure, but it often lacks the subtle nuances and variations that make human writing so unique.

On the other hand, outsourced writing can vary greatly in quality, depending on the skill of the writer. However, it often carries a certain personal touch that reflects the individuality of the writer. This may manifest as distinct writing styles, varying word choices, or unique perspectives.

Now, to detect AI-generated content from outsourced writers, one must be vigilant and look for telltale signs of each. With AI-generated content, look for patterns of repetition or awkward phrasing that may indicate a lack of natural language processing. With outsourced writing, pay attention to inconsistencies in style or tone, as well as errors in grammar or punctuation that may indicate a lack of proficiency.

In short, to detect AI-generated content from outsourced writers, one must be observant and attentive to the subtleties of language and style. By understanding the unique characteristics of each, one can develop a keen sense for distinguishing between the two.

My prompt: Can you detect the text above as AI-generated? Please elaborate

As an AI language model, I cannot detect whether the text above was generated by AI or written by a human without additional context or information. However, based solely on the language and structure of the text, it appears to be written by a human.

The text above contains grammatically correct sentences, coherent paragraphs, and appropriate vocabulary usage. There is no evidence of repetition or awkward phrasing that may indicate the use of AI-generated content. Furthermore, the text does not exhibit any patterns or behaviors commonly associated with AI-generated content, such as the use of generic phrases or a lack of natural language processing.

In conclusion, based on the language and structure of the text, it appears to have been written by a human and not generated by an AI language model.

#####End of AI####

In conclusion (this time mine), if the writer does not understand the topic well enough to explain it simply to anyone, there's no AI tool that can help him.

If, on the other hand, the writer knows the topic, I see no problem in using AI to produce better content.

I rest my case.
 
I can confidently tell you there's no way to detect AI content if done right.

My prompt: Explain how to detect AI-generated content from outsourced writers in style of Albert Einstein

GPT answer:
Ah, detecting AI-generated content from outsourced writers is quite an interesting problem to solve. You see, my dear friend, the first step is to familiarize yourself with the characteristics of each. AI-generated content tends to lack a certain je ne sais quoi, a certain human touch that is difficult to replicate artificially. It may be consistent in style and structure, but it often lacks the subtle nuances and variations that make human writing so unique.

On the other hand, outsourced writing can vary greatly in quality, depending on the skill of the writer. However, it often carries a certain personal touch that reflects the individuality of the writer. This may manifest as distinct writing styles, varying word choices, or unique perspectives.

Now, to detect AI-generated content from outsourced writers, one must be vigilant and look for telltale signs of each. With AI-generated content, look for patterns of repetition or awkward phrasing that may indicate a lack of natural language processing. With outsourced writing, pay attention to inconsistencies in style or tone, as well as errors in grammar or punctuation that may indicate a lack of proficiency.

In short, to detect AI-generated content from outsourced writers, one must be observant and attentive to the subtleties of language and style. By understanding the unique characteristics of each, one can develop a keen sense for distinguishing between the two.

I rest my case.
Originality.AI says they're 89% sure this is AI.
 