Content Filtering: Human-Crafted, AI-Generated or Collaborative Efforts
“EVOLUTION OF SEO Series”
As AI language models like ChatGPT become increasingly sophisticated, the online world is being flooded with AI-generated content across websites, social media, and more. This raises important questions about transparency, authority, and users’ ability to make informed choices about the content they consume.
The Challenge
Currently, search engines like Google do not distinguish between human-written and AI-generated content in their results. A user searching for information on a topic could encounter pages and posts created entirely by AI with no clear indication of authorship. As AI language models improve, it will only become harder to tell the two sources apart.
The lack of transparency surrounding AI-generated content creates significant challenges. We are already seeing issues arise, with AI now producing a vast array of content beyond text: videos, images, reports, datasets, and even seemingly factual figures that can be misleading. As language models grow more sophisticated at mimicking human communication styles, users gain a wider variety of content formats and voices to explore, but it also becomes increasingly difficult to distinguish human-created from AI-generated content.
Another issue is that, whatever the creator’s intent, the publishing platform may not currently offer a way to label a piece of content with specific AI-related criteria. I ran into this exact issue when publishing the first few entries of the “Ultimate Guide to the Zombie Apocalypse of the Future.” My intent, as spelled out in the series, was to create a gamified series co-authored “with AI.” However, I could not select “AI” as a collaborator, which led to several entries in the series being rejected for publication. It ultimately became more time-intensive than expected, so the project went on pause until we work some things out. Eventually I removed the collaborative approach, and the Technical Twin did find its way to the Amazon Kindle shelves.
Let Us Have Options
There are already plenty of examples where AI-authored content has spread misinformation due to hallucinations or made-up “facts.” As engaging as recent language models are, we are not yet at a point where they can be “trusted” as authoritative sources on many topics. That said, AI could certainly play productive roles in content creation workflows by assisting human writers, which is where these options come into play.
The future of search will need “strong” filtering options that let users choose the type of content they prefer. Some potential filtering approaches include the following (a rough sketch of how such a filter might work follows the list):
- Human-Crafted Content: This option would surface only pages, articles, and posts known to have been written by human beings with no AI involvement. Users could have higher confidence in the authenticity and accountability of the sources.
- AI-Generated Content: Conversely, for users wanting to explore the cutting-edge capabilities of language models, results could be limited to content generated entirely by AI with no human involvement.
- Hybrid or Collaborative Human + AI Content: This filter could favor content combining human authorship with AI assistance, for example an article written by a person but with an AI handling tasks like research, ideation, outlining, or drafting.
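To make these options concrete, here is a minimal sketch of what such a filter could look like on the engine side, assuming publishers (or the engine itself) attach an authorship label to each result. The Authorship labels, SearchResult shape, and filter_results helper are all hypothetical; no existing search API exposes anything like this today.

```python
from dataclasses import dataclass
from enum import Enum


class Authorship(str, Enum):
    HUMAN = "human"    # written with no AI involvement
    AI = "ai"          # generated entirely by AI
    HYBRID = "hybrid"  # human author with AI assistance


@dataclass
class SearchResult:
    url: str
    title: str
    authorship: Authorship  # assumes publishers declare (or engines infer) this label


def filter_results(results: list[SearchResult],
                   allowed: set[Authorship]) -> list[SearchResult]:
    """Keep only the results whose declared authorship the user opted into."""
    return [r for r in results if r.authorship in allowed]


# A user who wants human-crafted or collaborative content, but not purely AI pages:
results = [
    SearchResult("https://example.com/a", "Post A", Authorship.HUMAN),
    SearchResult("https://example.com/b", "Post B", Authorship.AI),
    SearchResult("https://example.com/c", "Post C", Authorship.HYBRID),
]
for r in filter_results(results, {Authorship.HUMAN, Authorship.HYBRID}):
    print(r.title, r.authorship.value)
```

The code itself is the easy part; the hard part, as the next section argues, is deciding who assigns these labels, how honestly, and where the line between “hybrid” and “human” actually sits.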
Ethical Considerations Abound Here Too
I could easily write a whole book on “Ethical Content Curation in the AI Era: The Case for Search Filters by Authorship Type.” The point is that providing transparency into whether content was human-authored, machine-generated, or a hybrid allows people to make informed decisions about what they consume online. It gets at issues of accountability, authority, and potential bias or hallucinations in AI outputs versus human judgement. Not going to lie, there are some days when I don’t want to wrestle with this. For example, how much assistance makes something “hybrid or collaborative”? What counts as help? Is it a single use? Five? What about forum posts? Our emails already come with built-in AI assistance; do we need to declare it there too?
We suspect the approach to implementing this new form of SEO filtering will be an ongoing strategy and solution, much like SEO is today. As formal content-labeling standards emerge that specify the level of human and AI involvement in a piece, we need to ensure the majority are not the only ones cared for. Voices must not be stifled because they needed the assistance of AI. For example: those with early-onset memory issues might use AI to prompt for gaps in their historical memory (no one would know if they used a search engine today, so why should future articles have to indicate they used AI assistance?). Those recovering from or impaired by trauma such as TBI (Traumatic Brain Injury) or a stroke could use AI to help them “talk” in a YouTube video; should they be penalized by SEO for that? A dyslexic writer might see a big reduction in stress by using AI to read and edit their work. Future search engine algorithms that work in tandem with the (possibly) newly created filters cannot, under any circumstances, prevent voices like these from being found and thereby heard.
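For illustration, here is one hypothetical shape such a labeling record could take, along with a toy ranking rule that treats assistive uses of AI the same as human-crafted work. The field names, categories, and weights are invented for this sketch and do not come from any existing labeling standard.

```python
from dataclasses import dataclass, field


@dataclass
class ContentLabel:
    # Hypothetical provenance declaration a publisher could attach to a piece.
    authorship: str                                      # "human", "ai", or "hybrid"
    ai_tasks: list[str] = field(default_factory=list)    # e.g. ["research", "outline", "drafting"]
    assistive_use: bool = False                          # AI used as an accessibility aid
                                                         # (dictation, memory prompts, dyslexia support)


def ranking_weight(label: ContentLabel) -> float:
    """Toy ranking rule: never downrank assistive uses of AI."""
    if label.assistive_use:
        return 1.0   # accessibility assistance is treated the same as human-crafted work
    if label.authorship == "ai":
        return 0.8   # purely AI-generated content surfaces mainly when users ask for it
    return 1.0


print(ranking_weight(ContentLabel("hybrid", ["drafting"], assistive_use=True)))  # 1.0
```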
Another approach may use advanced Natural Language Processing (NLP) to attempt to detect human versus AI text on the fly. The challenges here would again include accuracy (detectors misfire much as models hallucinate), ethical considerations (AI is not perfect), and respect for creative intent. Given the rapid evolution of language models, this could be an incredibly complex challenge requiring frequent retraining and updates.
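As a toy illustration of the “detect it on the fly” idea, the sketch below trains a tiny text classifier on a handful of hand-labeled placeholder sentences. A real detector would need vastly more data and constant retraining, and would still misclassify; this only shows the shape of the approach, not a workable solution.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: in practice this would be large and refreshed constantly
# as language models evolve, and the labels themselves would be contested.
texts = [
    "Honestly, I rewrote this paragraph three times before it felt right.",    # human-ish
    "In conclusion, it is important to note that there are many factors.",     # AI-ish
    "My editor hated the first draft, so we argued about it over coffee.",     # human-ish
    "Overall, this comprehensive overview highlights the key considerations.", # AI-ish
]
labels = ["human", "ai", "human", "ai"]

# TF-IDF features feeding a logistic regression classifier.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["This article explores the multifaceted implications of the topic."]))
```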
Whatever path gets taken, empowering users with granular search filtering choices for AI-generated and human-crafted content may become essential well before we are ready for it, such as during the upcoming US elections. Allowing people to make informed decisions about what they want to consume will be imperative, but we might not have any way to do so other than the human eye (and the honor system), which right now is widely untrained.
While recognizing that generative AI content models are, and will remain, incredibly powerful aids in many scenarios, there will always be an important role for authentic human communication, authorship, and creative expression online. We must always remember that the future is an intelligent collaboration between people and systems, and should never be a full-scale replacement of one by the other.